In 2003, security researchers Katie Moussouris and a colleague at enterprise security firm @stake, which would later be acquired by Symantec, found a serious flaw in an encrypted flash drive from Lexar. It was trivial to uncover the password that decrypted the drive’s data. But when they tried to let Lexar know? “Things went wrong,” says Chris Wysopal, who was also working at @stake at the time.

The @stake team had the same two options that anyone does when they discover a vulnerability: either publish the findings openly or go to the developer directly, giving them time to fix the flaw before going public. In theory it seems like the latter would be a win-win, since it reduces the risk that hackers could exploit the bug maliciously. But the reality, in this case and so many others, can quickly get much more complicated and contentious.

Moussouris and her coworkers attempted to contact Lexar through every channel they could find, to no avail. The encryption itself was sound, but an attacker could easily exploit an implementation flaw to recover the plaintext password. After two months without a response, @stake decided to go public so people would know that data on the purportedly secure drives could in fact be exposed.

“The point was to warn people that the protection was absolutely broken,” Moussouris says. “We recommended treating it like something that has no encryption on it, because that’s what was going on from our perspective.”

That, at least, got Lexar’s attention. The company contacted @stake, saying the disclosure hadn’t been responsible. Wysopal says that when he asked Lexar employees why they hadn’t responded to @stake’s emails and calls, they said they had assumed the communications were spam. Lexar eventually fixed the issue in its next-generation secure flash drive, but it had no way to fix the flaw in the model the @stake researchers had examined.

Moussouris, now CEO of the disclosure and bug bounty consulting firm Luta Security, and Wysopal, chief technology officer of the application security firm Veracode and former member of the L0pht hacking collective, shared the tale of fraught disclosure as part of a talk Friday at the RSA cybersecurity conference. Too little has changed, they say, since 2003.

Then as now, Moussouris says, researchers may face intimidation or legal threats, especially if they don’t work at a firm that can provide institutional protection. “From my career perspective over the last 20 years or so it’s definitely not been a no-brainer kind of a journey for most vendors accepting disclosure,” Moussouris says. “I call it the five stages of vulnerability response grief that they go through. We’re still hearing the same sad disclosure stories from a lot of researchers. It’s not a solved problem.”

Through years of concerted effort, disclosure is now more codified and legitimized than ever. It’s even increasingly common for tech companies to offer so-called bug bounty programs that encourage researchers to submit vulnerability findings in exchange for cash prizes. But even these conduits, which Moussouris has worked hard to champion and normalize, can be abused. Some companies wrongly hold up their bug bounty programs as a magic solution to all security woes. And bug bounties can be restrictive in a counterproductive way, limiting the scope of what researchers can actually examine or even requiring researchers to sign nondisclosure agreements if they want to be eligible for rewards.

A survey conducted by Veracode and 451 Research last fall on coordinated disclosure reflects this mixed progress. Of 1,000 respondents in the United States, Germany, France, Italy, and the United Kingdom, 26 percent said they were disappointed with the efficacy of bug bounties, and 7 percent said the programs are mainly just a marketing push. Similarly, the survey found that 47 percent of the organizations represented have bug bounty programs, yet only 19 percent of their vulnerability reports actually come through those programs in practice.

“It’s almost like every single software company has to go through this journey of making mistakes and having a problem and having a researcher teach them,” Wysopal says. “In the security industry we’re constantly learning the same lessons over and over again.”

