Researchers put millions of people at risk to convince USB manufacturers that their security problems are serious
Researchers have shared an exploit that can be used to turn essentially any USB drive into a delivery system for cyberattacks. By publishing details of this exploit, they have put hundreds of millions of people at risk in an effort to convince USB manufacturers that they need to update the technology to fix the exploited issue.
The vulnerability was first revealed during a July security conference by Karsten Nohl, who declined to share the code used in the attack because he didn't want to put unwitting consumers at risk. Now, just a few months later, two other researchers say they've reverse-engineered the exploit and have published the underlying code to GitHub, where anyone can grab it.
One of the researchers, Adam Caudill, explained the decision to make the code public to Wired:
'If this is going to get fixed, it needs to be more than just a talk at Black Hat,' Caudill told WIRED in a follow-up interview. He argues that the USB trick was likely already available to highly resourced government intelligence agencies like the NSA, which may already be using it in secret. 'If the only people who can do this are those with significant budgets, the manufacturers will never do anything about it,' he says. 'You have to prove to the world that it's practical, that anyone can do it…That puts pressure on the manufacturers to fix the real issue.'

That thinking highlights a fundamental problem with widespread security issues: researchers can't make manufacturers or consumers care about them until they demonstrate just how scary they really are, and that demonstration often requires putting a few hundred million people at risk of attack.
Companies can't be convinced to fix something they don't perceive as broken until their customers start voicing concerns, and getting people to care about security issues in their tech products is like trying to make them care about climate change: most people have already decided whether they're scared or not, and it's hard to get either side to change its mind.
So you reveal the problem, make it easy for others to reproduce, and hope that companies are scared enough to fix the vulnerability before too many people are harmed by your mostly benevolent decision to make the exploit public. (Sometimes this works: Apple added a basic security feature to its iCloud website after code exploiting a vulnerability in the site was published to GitHub in early September.)
All of which means that today many people are vulnerable to an attack that few even knew about just a few months ago. Worse, Nohl, who first identified the problem, doesn't think it can be patched quickly enough to protect most consumers from the exploit. Everyone who lets a USB device leave their sight for even a moment is now at risk of being attacked with all manner of malware.
It's clear that this turn of events is for the greater good, as others have likely been taking advantage of this exploit already, even before the details hit GitHub. But try convincing the general public of that, especially when most won't take the time to learn why the code was published in the first place.