Archives for December 2012

You found a security hole. Now what?

The recent conviction of Andrew “Weev” Auernheimer for identity theft and conspiracy has renewed interest in the question of what researchers should do when they find security vulnerabilities in popular products. See, for example, Matt Blaze’s op-ed on how the research community views these matters, and Weev’s own response.

Weev and associates discovered a flaw in AT&T’s handling of consumer information, which allowed anyone to download personal information about users of AT&T’s iPad wireless data service. Weev wrote code that systematically downloaded information on more than 100,000 of those users. Was that enough to get him convicted? Reading between the lines in press accounts, it’s clear that this behavior, together with Weev’s long history of unsavory (though lawful) online speech and his personal eccentricities, was enough to get him convicted.
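The flaw was a classic identifier-enumeration problem: an endpoint, keyed by a predictable device identifier and requiring no other authentication, reportedly returned the matching account email. Here is a minimal sketch of how such an attack works, with the vulnerable endpoint simulated in memory; all names, IDs, and addresses below are hypothetical stand-ins, not AT&T’s actual interface:

```python
# Sketch of an ID-enumeration attack against an unauthenticated lookup
# endpoint. The "endpoint" here is a simulated in-memory function; a real
# attack would issue HTTP requests instead.

def lookup(device_id):
    """Simulated unauthenticated endpoint: maps a device ID to an email."""
    fake_db = {
        89014100000000001: "alice@example.com",
        89014100000000002: "bob@example.com",
        89014100000000005: "carol@example.com",
    }
    return fake_db.get(device_id)  # None if the ID is unassigned

def enumerate_ids(start, count):
    """Walk a block of sequential IDs and harvest every record that exists."""
    harvested = []
    for device_id in range(start, start + count):
        email = lookup(device_id)
        if email is not None:
            harvested.append((device_id, email))
    return harvested

# Because the IDs are sequential and the endpoint demands no credentials,
# a simple loop recovers every assigned record in the block.
results = enumerate_ids(89014100000000000, 10)
print(results)
```

The point of the sketch is that no “hacking” in the intrusion sense is required: the server answers every query it is asked, and the only attacker ingenuity is noticing that the identifiers are guessable.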

This will only make researchers more cautious about public discussion of vulnerabilities, which is a shame, because the research community is one of the main sources of public pressure on companies to follow better security practices. Though some companies seem to ignore or downplay security problems in their products (see Jeremy’s recent post for one example), the flow of information about the presence of vulnerabilities plays an important role in helping the market reward good security and punish laxity.

What happens when responsible disclosure fails?

The topic of how to handle security vulnerabilities has been discussed for years. Wikipedia defines responsible disclosure as:

Responsible disclosure is a computer security term describing a vulnerability disclosure model. It is like full disclosure, with the addition that all stakeholders agree to allow a period of time for the vulnerability to be patched before publishing the details. Developers of hardware and software often require time and resources to repair their mistakes. Hackers and computer security scientists have the opinion that it is their social responsibility to make the public aware of vulnerabilities with a high impact. Hiding these problems could cause a feeling of false security. To avoid this, the involved parties join forces and agree on a period of time for repairing the vulnerability and preventing any future damage. Depending on the potential impact of the vulnerability, this period may vary between a few weeks and several months.


When Technology Sanctions Backfire: The Syria Blackout

American policymakers face an increasingly complex set of choices about whether to permit commerce with “repressive regimes” for core internet technologies. The more straightforward cases involve prohibitions on US import of critical network technology from states that we suspect may include surveillance backdoors. For example, fears of “cyber espionage” have fueled a push for import bans on routers and other equipment from China.

Things get more complicated when the United States chooses to place sanctions on technologies that it exports to “repressive regimes.” In October of last year, the Electronic Frontier Foundation revealed that devices made by US-based Blue Coat Systems had been used in Syria to filter dissent. EFF noted that this appeared to violate export controls established by the US Government, and chastised Blue Coat. At the time, this seemed like an odd stance for the EFF. On the one hand, there were clear harms to citizens on the ground. On the other hand, EFF has helped to lead the charge against the ill-fated attempt to criminalize exportation of digital tools. I am somewhat skeptical about the ability to draw a bright line between speech-enhancing tools and tools of oppression — especially when general purpose computers can easily be used for both.