April 20, 2024

You found a security hole. Now what?

The recent conviction of Andrew “Weev” Auernheimer for identity fraud and conspiracy has renewed interest in the question of what researchers should do when they find security vulnerabilities in popular products. See, for example, Matt Blaze’s op-ed on how the research community views these matters, and Weev’s own response.

Weev and associates discovered a flaw in AT&T’s handling of consumer information, which allowed anyone to download personal information about users of AT&T’s iPad wireless data service. Weev wrote code that systematically downloaded information on more than 100,000 of those users. Was that enough to get him convicted? Reading between the lines of press accounts, it’s clear that that behavior, combined with Weev’s long history of unsavory (though lawful) online speech and his personal eccentricities, was enough.

This will only make researchers more cautious about public discussion of vulnerabilities–which is a shame, because the research community is one of the main sources of public pressure on companies to follow better security practices. Though some companies seem to ignore or downplay security problems in their products–see Jeremy’s recent post for one example–the flow of information about the presence of vulnerabilities plays an important role in helping the market reward good security and punish laxity.

Responsible companies have learned how to work constructively with researchers, with an eye to improving their products, protecting their customers, and learning from their engineering mistakes. There is a long history of responsible researchers and companies cooperating in the public interest. As a researcher I have always felt that when a company is willing to engage constructively, the ethical course is to cooperate with them for the benefit of the public.

That approach becomes harder to sustain when the perceived risk of legal action, whether due to an overzealous lawyer or a research error, gets larger.

At the same time, an alternative outlet for vulnerability information is emerging–selling the information. In principle it could be sold to the maker of the flawed product, but they probably won’t be the high bidder. More likely, the information will be sold to a government, to a company, or to a broker who will presumably re-sell it. (If you’re not familiar with this market, Chris Soghoian’s CITP lecture is a good introduction.)

Some of the uses of purchased vulnerability information will be benign. It might be used to carry out a search of a criminal suspect’s computer, under the authority of a proper search warrant. It might be used to carry out an action like the Stuxnet worm, which exploited several security vulnerabilities to handicap the Iranian nuclear program. Other uses will not be benign: the information might be sold through a broker to a repressive government or an organized crime group. If the supply of vulnerabilities is large enough (and all indications are that it is), then all of these buyers will be able to buy enough information for their purposes.

Crucially, vulnerability information has a higher market value if it is withheld from the maker of the vulnerable product. If the maker finds out, they might close the hole and render the information worthless. So the market in vulnerabilities rewards researchers for making sure that the problems they discover are not fixed–exactly the opposite of the traditional view in the field.

Policymakers should be taking a serious look at this market and thinking about its implications. Do we want to foster an atmosphere where researchers turn away from disclosure, and vulnerability information is withheld from those who can fix problems? Do we want to increase incentives for finding vulnerabilities that won’t be fixed? Do we think we can keep this market from connecting bad guys with the information they want to exploit?

Or do we want to look for ways to protect an alternative approach, where vulnerability information flows to those who are affected by problems and those who can fix them?

Comments

  1. Ed, you probably have more political clout than I have. I have about as much clout as a single grain of salt in the ocean (so take my comments “with a grain of salt”).

    You say “Policymakers” should be looking at the market and thinking about its implications. Which policymakers are you referring to? If you mean product companies’ policies, then I agree: they SHOULD be looking at the market and be willing to pay the dough, or at the very least respond to and fix security flaws when they are found.

    But if you are referring to government entities who write laws or set rules and regulations for commercial products and services, then I think you are barking up the wrong tree. As you mention, governments are among the biggest players in this market. They (at least in my experience with various government agencies at all levels in the US) don’t want security flaws in products fixed; they want the ability to exploit those bugs at their convenience. Not to mention that they know they won’t be held responsible for breaking any laws themselves, especially laws regulating digital security.

    (See Utah’s violation of HIPAA privacy laws, when its unsecured Medicaid server allowed someone to download 250,000 unencrypted data files; not even a threat of legal action has followed.)

    I always attempt to disclose any flaws I find to the companies first… and I am by no means an expert hacker, not even an amateur; I find them by accident, much like Jeremy Epstein’s recent example… but I get ZERO response from anybody. Like I said, I have about as much clout as a grain of salt in the ocean. And when it is the government (U.S. government at any level), if I get any response at all, it is to find out I was put onto the “naughty list” (i.e., labeled a criminal) just by calling to let them know there is a security problem.

    I am starting to think it may be better if I actually look into this market you are talking about; perhaps if I were to sell the information, I would actually get a response, because disclosing it privately gets me nowhere. I haven’t gotten even as far as Jeremy Epstein got with FCPS and Blackboard. It sounds as if he was frustrated by the lack of response, but at least they said they would look into it and forwarded him up a chain. No one I am able to contact will even look into it, nor will they let me go up the chain at all.