
Google Attacks Highlight the Importance of Surveillance Transparency

Ed posted yesterday about Google’s bombshell announcement that it is considering pulling out of China in the wake of a sophisticated attack on its infrastructure. People more knowledgeable about China than I am have weighed in on the announcement’s implications for the future of US-China relations and the evolution of the Chinese Internet. Rebecca MacKinnon, a China expert who will be a CITP visiting scholar beginning next month, says that “Google has taken a bold step onto the right side of history.” She has a roundup of Chinese reactions here.

One aspect of Google’s post that hasn’t received a lot of attention is Google’s statement that “only two Gmail accounts appear to have been accessed, and that activity was limited to account information (such as the date the account was created) and subject line, rather than the content of emails themselves.” A plausible explanation for this is provided by this article (via James Grimmelmann) at PC World:

Drummond said that the hackers never got into Gmail accounts via the Google hack, but they did manage to get some “account information (such as the date the account was created) and subject line.”

That’s because they apparently were able to access a system used to help Google comply with search warrants by providing data on Google users, said a source familiar with the situation, who spoke on condition of anonymity because he was not authorized to speak with the press.

Obviously, this report should be taken with a grain of salt since it’s based on a single anonymous source. But it fits a pattern identified by our own Jen Rexford and her co-authors in an excellent 2007 paper: when communications systems are redesigned to make surveillance easier for US authorities, they necessarily become more vulnerable to attack by other parties, including foreign governments.

Rexford and her co-authors point to a 2006 incident in which unknown parties exploited vulnerabilities in Vodafone’s network to tap the phones of dozens of senior Greek government officials. According to news reports, these attacks were made possible because Greek telecommunications carriers had deployed equipment with built-in surveillance capabilities, but had not paid the equipment vendor, Ericsson, to activate this “feature.” This left the equipment in a vulnerable state. The attackers surreptitiously switched on the surveillance capabilities and used them to intercept the communications of senior government officials.

It shouldn’t surprise us that systems built to give law enforcement access to private communications could become vectors for malicious attacks. First, these interfaces are often backwaters in the system design. The success of any consumer product is going to depend on its popularity with customers. Therefore, a vendor or network provider is going to deploy its talented engineers to work on the public-facing parts of the product. It is likely to assign a smaller team of less-talented engineers to work on the law-enforcement interface, which is likely to be both less technically interesting and less crucial to the company’s bottom line.

Second, the security model of a law enforcement interface is likely to be more complex and less well-specified than the user-facing parts of the service. For the mainstream product, the security goal is simple: the customer should be able to access his or her own data and no one else’s. In contrast, determining which law enforcement officials are entitled to which information, and how those officials are to be authenticated, can become quite complex. Greater complexity means a higher likelihood of mistakes.
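
To make the contrast concrete, here is a minimal sketch of the two rules in Python. Every name below is hypothetical; this illustrates the complexity gap, not anyone's actual implementation:

    from dataclasses import dataclass
    from datetime import datetime

    def can_access_consumer(requesting_user: str, mailbox_owner: str) -> bool:
        # Consumer-facing rule: you may read your own data and no one else's.
        return requesting_user == mailbox_owner

    @dataclass
    class Warrant:
        authorized_agency: str
        named_targets: set
        issued_by_court: bool
        start: datetime
        end: datetime
        covers_content: bool

    def can_access_intercept(agency: str, official_verified: bool,
                             warrant: Warrant, target: str,
                             now: datetime) -> bool:
        # Lawful-intercept rule: every added clause is a place where the
        # specification can be ambiguous or the implementation wrong.
        return (official_verified
                and agency == warrant.authorized_agency
                and target in warrant.named_targets
                and warrant.issued_by_court
                and warrant.start <= now <= warrant.end
                and warrant.covers_content)

The consumer check is one line. The lawful-intercept check has half a dozen clauses covering authentication of the official, the issuing court, the named targets, the time window, and the scope of the order, and each clause is an opportunity for a specification gap or a bug.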

Finally, the public-facing portions of a consumer product benefit from free security audits from “white hat” security experts like our own Bill Zeller. If a public-facing website, cell phone network, or other consumer product has a security vulnerability, the company is likely to hear about the problem first from a non-malicious source. This means that at least the most obvious security problems will be noticed and fixed quickly, before the bad guys have a chance to exploit them. In contrast, if an interface is shrouded in secrecy, and only accessible to law enforcement officials, then even obvious security vulnerabilities are likely to go unnoticed and unfixed. Such an interface will be a target-rich environment if a malicious hacker ever does get the opportunity to attack it.

This is an added reason to insist on rigorous public and judicial oversight of our domestic surveillance capabilities in the United States. There has been a recent trend, cemented by the 2008 FISA Amendments, toward law enforcement and intelligence agencies conducting eavesdropping without meaningful judicial (to say nothing of public) scrutiny. Last month, Chris Soghoian uncovered new evidence suggesting that government agencies are collecting much more private information than has been publicly disclosed. Many people, myself included, oppose this expansion of domestic surveillance on civil liberties grounds. But even if you’re unmoved by those arguments, you should still be concerned about these developments on national security grounds.

As long as these eavesdropping systems are shrouded in secrecy, there’s no way for “white hat” security experts to even begin evaluating them for potential security risks. And that, in turn, means that voters and policymakers will be operating in the dark. Programs that risk exposing our communications systems to the bad guys won’t be identified and shut down. Which means the culture of secrecy that increasingly surrounds our government’s domestic spying programs not only undermines the rule of law, it’s a danger to national security as well.

Update: Props to my colleague Julian Sanchez, who made the same observation 24 hours ahead of me.

Another Privacy Misstep from Facebook

Facebook is once again clashing with its users over privacy. As a user myself, I was pretty unhappy about the recent changes to the privacy controls. I felt that Facebook was trying to trick me into loosening controls on my information. Though the initial letter from Facebook founder Mark Zuckerberg painted the changes as pro-privacy — which led more than 48,000 users to click the “I like this” button — the actual effect of the company’s suggested new policy was to allow more public access to information. Though the company has backtracked on some of the changes, problems remain.

Some of you may be wondering why Facebook users are complaining about privacy, given that the site’s main use is to publish private information about yourself. But Facebook is not really about making your life an open book. It’s about telling the story of your life. And like any autobiography, your Facebook-story will include a certain amount of spin. It will leave out some facts and will likely offer more and different levels of detail depending on the audience. Some people might not get to hear your story at all. For Facebook users, privacy means not the prevention of all information flow, but control over the content of their story and who gets to read it.
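
To put the idea in concrete terms, you can think of each item in your story as carrying its own audience list. The toy model below is purely illustrative; it is not how Facebook actually represents permissions:

    class StoryItem:
        # Toy model: each piece of your story carries its own audience list.
        def __init__(self, author, content, audience):
            self.author = author            # whose story this is
            self.content = content          # one fact or event in the story
            self.audience = set(audience)   # who is allowed to read it

        def visible_to(self, reader):
            # The author always sees his or her own story.
            return reader == self.author or reader in self.audience

    item = StoryItem("alice", "ran a marathon", audience={"bob", "carol"})
    print(item.visible_to("bob"))      # True: Bob gets this part of the story
    print(item.visible_to("mallory"))  # False: Mallory doesn't get the story

On this model, trouble comes when the service enlarges an item's audience, or inserts items into the story, without asking; that is exactly the pattern in the Beacon episode described next.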

So when Facebook tries to monetize users’ information by passing that information along to third parties, such as advertisers, users get angry. That’s what happened two years ago with Facebook’s ill-considered Beacon initiative: Facebook started telling advertisers what you had done — telling your story to strangers. But perhaps even worse, Facebook sometimes added items to your wall about what you had purchased — editing your story, without your permission. Users revolted, and Facebook shuttered Beacon.

Viewed through this lens, Facebook’s business dilemma is clear. The company is sitting on an ever-growing treasure trove of information about users. Methods for monetizing this information are many and obvious, but virtually all of them require either telling users’ stories to third parties, or modifying users’ stories — steps that would break users’ mental model of Facebook, triggering more outrage.

What Facebook has, in other words, is a governance problem. Users see Facebook as a community in which they are members. Though Facebook (presumably) has no legal obligation to get users’ permission before instituting changes, it makes business sense to consult the user community before making significant changes in the privacy model. Announcing a new initiative, only to backpedal in the face of user outrage, can’t be the best way to maximize long-term profits.

The challenge is finding a structure that allows the company to explore new business opportunities, while at the same time securing truly informed consent from the user community. Some kind of customer advisory board seems like an obvious approach. But how would the members be chosen? And how much information and power would they get? This isn’t easy to do. But the current approach isn’t working either. If your business is based on user buy-in to an online community, then you have to give that community some kind of voice — you have to make it a community that users want to inhabit.

The Role of Worst Practices in Insecurity

These days, security advisors talk a lot about Best Practices: established procedures that are generally held to yield good results. Deploy Best Practices in your organization, the advisors say, and your security will improve. That’s true, as far as it goes, but often we can make more progress by working to eliminate Worst Practices.

A Worst Practice is something that most of us do, even though we know it’s a bad idea. One current Worst Practice is the way we use passwords to authenticate ourselves to web sites. Sites’ practices drive users to re-use the same password across many sites, and to expose themselves to phishing and keylogging attacks. We know we shouldn’t be doing this, but we keep doing it anyway.
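
One well-studied way out of the reuse trap, in the spirit of research tools like Stanford's PwdHash, is to derive a different password for each site from a single master secret. Here is a minimal sketch; the parameter choices are illustrative, not a vetted design:

    import base64
    import hashlib

    def site_password(master_secret: str, site: str) -> str:
        # Derive a distinct password for each site from one master secret,
        # so a breach at one site exposes nothing usable anywhere else.
        digest = hashlib.pbkdf2_hmac(
            "sha256",
            master_secret.encode("utf-8"),
            site.encode("utf-8"),   # the site's domain acts as the salt
            100_000,                # iterations, to slow down brute force
        )
        # Encode to printable characters; truncate for usability.
        return base64.b64encode(digest).decode("ascii")[:16]

    print(site_password("correct horse battery staple", "example.com"))
    print(site_password("correct horse battery staple", "bank.example"))  # differs

Because the domain acts as the salt, a database breach at one site reveals nothing usable at any other; and if the derivation is keyed on the domain actually visited, as PwdHash does in the browser, a phishing site at a lookalike domain receives a password that is useless at the real one.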

The key to addressing Worst Practices is to recognize that they persist for a reason. If ignorance is the cause, it’s not a Worst Practice — remember that Worst Practices, by definition, are widely known to be bad. There’s typically some kind of collective action problem that sustains a Worst Practice, some kind of Gordian Knot that must be cut before we can eliminate the practice.

This is clearly true for passwords. If you’re building a new web service, and you’re deciding how to authenticate your users, passwords are the easy and obvious choice. Users understand them; they don’t require coordination with any other company; and there’s probably a password-handling module that plugs right into your development environment. Better authentication will be a “maybe someday” feature. Developers make this perfectly rational choice every day — and so we’re stuck with a Worst Practice.
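
To see just how low the barrier is, here is roughly what that easy path looks like: a salted, iterated password verifier using nothing but Python's standard library. This is a sketch of the default choice, not a recommendation; a real deployment should use a maintained password library:

    import hashlib
    import hmac
    import os

    _users = {}  # username -> (salt, hash); a stand-in for a database table

    def register(username: str, password: str) -> None:
        salt = os.urandom(16)
        pw_hash = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        _users[username] = (salt, pw_hash)

    def login(username: str, password: str) -> bool:
        if username not in _users:
            return False
        salt, stored = _users[username]
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return hmac.compare_digest(candidate, stored)  # constant-time compare

    register("alice", "hunter2")
    print(login("alice", "hunter2"))  # True
    print(login("alice", "wrong"))    # False

An afternoon of work, no coordination with anyone else, and it behaves exactly the way users expect. That is what every better-but-harder alternative is competing against.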

Solutions to this and other Worst Practices will require leadership by big companies. Google, Microsoft, Facebook and others will have to step up and work together to put better practices in place. In the user authentication space we’re seeing some movement with new technologies such as OpenID, which reduce the number of places where users must log in, thereby easing the move to better practices. But on this and other Worst Practices, we have a long way to go.
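
At a very high level, schemes like OpenID replace per-site passwords with a redirect dance: the site hands the user off to an identity provider the user already trusts, and accepts a signed assertion in return. The schematic below is a loose sketch of that shape only; it is not the actual OpenID protocol, and every name in it is a hypothetical placeholder:

    def federated_login(site, user, identity_provider):
        # 1. The user tells the site who vouches for them, not a password.
        request = site.build_auth_request(return_url=site.callback_url)

        # 2. The site redirects the user's browser to the identity provider,
        #    where the user authenticates once, with credentials only the
        #    provider ever sees.
        assertion = identity_provider.authenticate(user, request)

        # 3. The provider sends the browser back with a signed assertion;
        #    the site verifies the assertion instead of storing and checking
        #    a password of its own.
        if site.verify_assertion(assertion):
            return site.create_session(assertion.claimed_identity)
        raise PermissionError("identity assertion failed verification")

The payoff is that each participating site becomes one less place where a password is stored, guessed, or phished.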

Which Worst Practices annoy you? And what can be done to address them? Speak up in the comments.

Election Day; More Unguarded Voting Machines

It’s Election Day in New Jersey. As usual, I visited several polling places in Princeton over the last few days, looking for unguarded voting machines. It’s been well demonstrated that a bad actor who can get physical access to a New Jersey voting machine can modify its behavior to steal votes, so an unguarded voting machine is a vulnerable voting machine.

This time I visited six polling places. What did I find?

The good news — and there was a little — is that in one of the six polling places, the machines were properly secured. I’m not sure where the machines were, but I know that they were not visible anywhere in the accessible areas of the building. Maybe the machines were locked in a storage room, or maybe they hadn’t been delivered yet, but anyway they were probably safe. This is the first time I have ever found a local polling place, the night before the election, with properly secured voting machines.

At the other five polling places, things weren’t so good. At three places, the machines were unguarded in an area open to the public. I walked right up to them and had private time with them. In two other places, the machines were visible from outside the building and protected only by an outside door with an easily defeated lock. I didn’t defeat the locks myself — I wasn’t going to cross that line — but I’ll bet you could have opened them quickly with tools you probably have in your car.

The final scorecard: ten machines totally unprotected, eight machines poorly protected, two machines well-protected. That’s an improvement, but then again any protection at all would have been an improvement. We still have a long way to go.

Sequoia Announces Voting System with Published Code

Sequoia Voting Systems, one of the major e-voting companies, announced Tuesday that it will publish all of the source code for its forthcoming Frontier product. This is great news: an important step toward the kind of transparency that is necessary to make today’s voting systems trustworthy.

To be clear, this will not be a fully open source system, because it won’t give users the right to modify and redistribute the software. But it will be open in a very important sense, because everyone will be free to inspect, analyze, and discuss the code.

Significantly, the promise to publish code covers all of the systems involved in running the election and reporting results, “including precinct and central count digital optical scan tabulators, a robust election management and ballot preparation system, and tally, tabulation, and reporting applications”. I’m sure the research community will be eager to study this code.

The trend toward publishing election system source code has been building over the last few years. Security experts have long argued that public scrutiny tends to increase security, and is one of the best ways to justify public trust in a system. Independent studies of major voting vendors’ source code have found code quality to be disappointing at best, and vendors’ all-out resistance to any disclosure has eroded confidence further. Add to this an increasing number of independent open-source voting systems, and secret voting technologies start to look less and less viable, as the public starts insisting that longstanding principles of election transparency be extended to election technology. In short, the time had come for this step.

Still, Sequoia deserves a lot of credit for being the first major vendor to open its technology. How long until the other major vendors follow suit?