Ed posted yesterday about Google’s bombshell announcement that it is considering pulling out of China in the wake of a sophisticated attack on its infrastructure. People far more knowledgeable about China than I am have weighed in on the announcement’s implications for the future of US-Sino relations and the evolution of the Chinese Internet. Rebecca MacKinnon, a China expert who will be a CITP visiting scholar beginning next month, says that “Google has taken a bold step onto the right side of history.” She has a roundup of Chinese reactions here.
One aspect of Google’s post that hasn’t received much attention is its statement that “only two Gmail accounts appear to have been accessed, and that activity was limited to account information (such as the date the account was created) and subject line, rather than the content of emails themselves.” A plausible explanation is offered by this PC World article (via James Grimmelmann):
Drummond said that the hackers never got into Gmail accounts via the Google hack, but they did manage to get some “account information (such as the date the account was created) and subject line.”
That’s because they apparently were able to access a system used to help Google comply with search warrants by providing data on Google users, said a source familiar with the situation, who spoke on condition of anonymity because he was not authorized to speak with the press.
Obviously, this report should be taken with a grain of salt, since it’s based on a single anonymous source. But it fits a pattern identified by our own Jen Rexford and her co-authors in an excellent 2007 paper: changing a communications system to make it easier for US authorities to conduct surveillance necessarily makes that system more vulnerable to attack by other parties, including foreign governments.
Rexford and her co-authors point to an incident, disclosed in 2006, in which unknown parties exploited vulnerabilities in Vodafone’s Greek network to tap the phones of dozens of senior Greek government officials. According to news reports, the attack was possible because the carrier had deployed equipment with built-in surveillance capabilities but had never paid the equipment vendor, Ericsson, to activate this “feature,” leaving the equipment in a vulnerable, half-configured state. The attackers surreptitiously switched the dormant surveillance capability on and used it to intercept the officials’ communications.
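To see why dormant surveillance code is so dangerous, consider a deliberately simplified Python sketch. It is not based on the actual Ericsson software, whose internals have never been made public; every name below is invented. It just illustrates the structure of the failure: the wiretap capability ships in the product, and the only things standing between an attacker and a working tap are the activation check and the audit trail around it.

```python
# Toy illustration only: the real Vodafone Greece switches and their
# lawful-intercept module are proprietary, so every name here is invented.

class InterceptModule:
    """A wiretap capability that ships with the switch but sits disabled."""

    def __init__(self):
        self.enabled = False    # the carrier never paid to activate it
        self.targets = set()
        self.audit_log = []     # activation is supposed to leave a trace

    def activate(self, license_key):
        # The intended control: only a vendor-issued key turns it on.
        if license_key != "VENDOR-ISSUED-KEY":
            raise PermissionError("intercept feature not licensed")
        self.enabled = True
        self.audit_log.append("intercept activated")

    def add_tap(self, phone_number, forward_to):
        if not self.enabled:
            raise RuntimeError("intercept disabled")
        self.targets.add((phone_number, forward_to))
        self.audit_log.append("tap added: " + phone_number)


def rogue_patch(switch):
    # What the attackers effectively did: reach around the guards.
    # The dangerous code path already exists in the deployed product;
    # it just has to be enabled without tripping the licensing check
    # or the audit log.
    switch.enabled = True
    switch.targets.add(("+30-210-000-0000", "anonymous prepaid handset"))
    switch.audit_log.clear()   # leave no trace for the operator to find


switch = InterceptModule()
rogue_patch(switch)            # no license key, no log entry, live taps
```

The attackers didn’t have to build a wiretap; they only had to flip on the one the vendor had already shipped.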
It shouldn’t surprise us that systems built to give law enforcement access to private communications can become vectors for malicious attacks. First, these interfaces are often backwaters in the system design. A consumer product succeeds or fails on its popularity with customers, so a vendor or network provider will put its most talented engineers on the public-facing parts of the product and assign a smaller, less talented team to the law-enforcement interface, which is likely to be both less technically interesting and less crucial to the company’s bottom line.
Second, the security model of a law enforcement interface is likely to be more complex and less well-specified than the user-facing parts of the service. For the mainstream product, the security goal is simple: the customer should be able to access his or her own data and no one else’s. In contrast, determining which law enforcement officials are entitled to which information, and how those officials are to be authenticated, can become quite complex. Greater complexity means a higher likelihood of mistakes.
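To make that contrast concrete, here is a rough sketch in Python. The objects and method names are all invented for illustration; no one outside a given provider knows what its internal checks actually look like.

```python
# Hypothetical sketch contrasting the two security models described above;
# none of these names come from any real provider's code.

APPROVED_AGENCIES = {"FBI", "DEA"}   # invented allow-list for illustration

def can_read_mailbox(user, mailbox):
    """Consumer-facing rule: one comparison, easy to audit, easy to test."""
    return user.id == mailbox.owner_id

def can_read_mailbox_le(official, mailbox, warrant):
    """Law-enforcement rule: each clause is a separate chance to get it wrong."""
    return (
        official.agency in APPROVED_AGENCIES         # who may ask at all?
        and official.credential_is_valid()           # how are they authenticated?
        and warrant.names_target(mailbox.owner_id)   # right account?
        and warrant.jurisdiction == mailbox.region   # right court?
        and not warrant.is_expired()                 # still in force?
        and warrant.scope_includes("subject_lines")  # metadata or full content?
    )
```

The first function has essentially one way to fail. The second has half a dozen, and a bug or a specification gap in any clause becomes an unauthorized disclosure.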
Finally, the public-facing portions of a consumer product benefit from free security audits by “white hat” security experts like our own Bill Zeller. If a public-facing website, cell phone network, or other consumer product has a security vulnerability, the company is likely to hear about the problem first from a non-malicious source. This means that at least the most obvious security problems will be noticed and fixed quickly, before the bad guys have a chance to exploit them. In contrast, if an interface is shrouded in secrecy and accessible only to law enforcement officials, then even obvious security vulnerabilities are likely to go unnoticed and unfixed. Such an interface will be a target-rich environment if a malicious hacker ever does get the opportunity to attack it.
This is an added reason to insist on rigorous public and judicial oversight of our domestic surveillance capabilities in the United States. There has been a recent trend, cemented by the 2008 FISA Amendments, toward law enforcement and intelligence agencies conducting eavesdropping without meaningful judicial (to say nothing of public) scrutiny. Last month, Chris Soghoian uncovered new evidence suggesting that government agencies are collecting much more private information than has been publicly disclosed. Many people, myself included, oppose this expansion of domestic surveillance on civil liberties grounds. But even if you’re unmoved by those arguments, you should still be concerned about these developments on national security grounds.
As long as these eavesdropping systems are shrouded in secrecy, there’s no way for “white hat” security experts to even begin evaluating them for potential security risks. That, in turn, means that voters and policymakers will be operating in the dark: programs that risk exposing our communications systems to the bad guys won’t be identified and shut down. The culture of secrecy that increasingly surrounds our government’s domestic spying programs thus not only undermines the rule of law; it’s a danger to national security as well.
Update: Props to my colleague Julian Sanchez, who made the same observation 24 hours ahead of me.