November 24, 2024

Subpoenas and Search Warrants as Security Threats

When I teach computer security, one of the first lessons is on the need to have a clear threat model, that is, a clearly defined statement of which harms you are trying to prevent, and what assumptions you are making about the capabilities and motivation of the adversaries who are trying to cause those harms. Many security failures stem from threat model confusion. Conversely, a good threat model often shapes the solution.
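
To make this concrete, here is a minimal sketch, in Python, of what writing a threat model down might look like. The structure and the example entries are my own illustration, not any standard notation.

    from dataclasses import dataclass, field

    @dataclass
    class Adversary:
        name: str            # who might attack
        capabilities: list   # what they are able to do
        motivation: str      # why they would bother

    @dataclass
    class ThreatModel:
        assets: list         # what we are protecting
        harms: list          # outcomes we are trying to prevent
        adversaries: list = field(default_factory=list)

    # Illustrative entries only; every real system needs its own list.
    email_model = ThreatModel(
        assets=["archived email"],
        harms=["disclosure of message contents"],
        adversaries=[
            Adversary("malicious insider", ["read server storage"], "profit"),
            Adversary("external intruder", ["exploit server bugs"], "espionage"),
        ],
    )

The point is not the code but the discipline: once the harms and adversaries are written down explicitly, it becomes much easier to check whether a proposed defense actually addresses them.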

The same is true for security research: the solutions you develop will depend strongly on what threat you are trying to address.

Lately I’ve noticed more and more papers in the computer security research literature that include subpoenas and/or search warrants as part of their threat model. For example, the Vanish paper, which won Best Student Paper (the de facto best paper award) at the recent Usenix Security Symposium, uses the word “subpoena” 13 times, in passages like this:

Attackers. Our motivation is to protect against retroactive data disclosures, e.g., in response to a subpoena, court order, malicious compromise of archived data, or accidental data leakage. For some of these cases, such as the subpoena, the party initiating the subpoena is the obvious “attacker.” The final attacker could be a user’s ex-husband’s lawyer, an insurance company, or a prosecutor. But executing a subpoena is a complex process involving many other actors …. For our purposes we define all the involved actors as the “adversary.”

(I don’t mean to single out this particular paper. This is just the paper I had at hand — others make the same move.)

Certainly, subpoenas are no fun for any of the parties involved. They’re costly to deal with, not to mention the ick factor inherent in compelled disclosure to a stranger, even if you’re totally blameless. And certainly, subpoenas are sometimes used to harass, rather than to gather legitimately relevant evidence. But are subpoenas really the biggest threat to email confidentiality? Are they anywhere close to the biggest threat? Almost certainly not.

Usually, when a threat model mentions subpoenas, the bigger real-world threats come from malicious intruders or insiders. The biggest risk in storing my documents on CloudCorp’s servers is probably that somebody working at CloudCorp, or a contractor hired by them, will mess up or misbehave.

So why talk about subpoenas rather than intruders or insiders? Perhaps this kind of talk is more diplomatic than the alternative. If I’m talking about the risks of Gmail, I might prefer not to point out that my friends at Google could hire someone who is less than diligent, or less than honest. If I talk about subpoenas as the threat, nobody in the room is offended, and the security measures I recommend might still be useful against intruders and insiders. It’s more polite to talk about data losses that are compelled by a mysterious, powerful Other — in this case an Anonymous Lawyer.

Politeness aside, overemphasizing subpoena threats can be harmful in at least two ways. First, we can easily forget that enforcement of subpoenas is often, though not always, in society’s interest. Our legal system works better when fact-finders have access to a broader range of truthful evidence. That’s why we have subpoenas in the first place. Not all subpoenas are good — and in some places with corrupt or evil legal systems, subpoenas deserve no legitimacy at all — but we mustn’t lose sight of society’s desire to balance the very real cost imposed on the subpoena’s target and affected third parties, against the usefulness of the resulting evidence in administering justice.

The second harm is to security. To the extent that we focus on the subpoena threat, rather than the larger threats of intruders and insiders, we risk finding “solutions” that fail to solve our biggest problems. We might get lucky and end up with a solution that happens to address the bigger threats too. We might even design a solution for the bigger threats, and simply use subpoenas as a rhetorical device in explaining our solution — though it seems risky to mislead our audience about our motivations. If our solution flows from our threat model, as it should, then we need to be very careful to get our threat model right.

Twittering for the Marines

The Marines recently issued an order banning social network sites (Facebook, MySpace, Twitter, etc.). The Pentagon is reviewing this sort of thing across all services. This follows on the heels of a restrictive NFL policy along the same lines. Slashdot has a nice thread where, among other things, we learn that some military personnel will contract with off-base ISPs for private Internet connections.

There are really two separate security issues to be discussed here. First, there’s the issue that military personnel might inadvertently leak information that could be used by their adversaries. This is what the NFL is worried about. The Marines’ order makes no mention of such leaks, which would already be covered by existing rules and regulations, never mind continuing education (see, e.g., “loose lips sink ships”). Instead, our discussion will focus on the issue explicitly raised in the order: social networks as a vector for attackers to get at our military personnel.

For starters, there are other tools and techniques that can be used to protect people from visiting malicious web sites. There are blacklist services, such as Google’s Safe Browsing, built into any recent version of Firefox. There are also better browser architectures, like Google’s Chrome, that isolate one part of the browser from another. The military could easily require the use of a specific web browser. The military could go one step further and provide sacrificial virtual machines, perhaps running on remote hosts and accessed via something like VNC, to allow personnel to surf the public Internet. A solution like this seems infinitely preferable to forcing personnel to use third-party ISPs on personal computers, where vulnerable machines may well be compromised yet go unnoticed by military sysadmins. (Or worse, the ISP could itself be compromised, giving a huge amount of intel to the enemy; contrast this with the military’s own networks and its own crypto, which are presumably designed to leak far less intel to a local eavesdropper.)
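
For a sense of how blacklist lookups work under the hood, here is a minimal sketch, in Python, of the hash-prefix idea behind services like Safe Browsing: the client keeps a local cache of short prefixes of known-bad URL hashes and flags any URL whose hash matches one. The prefix length, example URLs, and function names are my own illustrative assumptions, not Google’s actual protocol.

    import hashlib

    PREFIX_LEN = 4  # bytes; illustrative, not the real protocol's parameter

    def url_hash(url: str) -> bytes:
        # Real implementations canonicalize the URL first; omitted here.
        return hashlib.sha256(url.encode("utf-8")).digest()

    # Locally cached prefixes of known-bad URL hashes (hypothetical entries).
    bad_prefixes = {url_hash(u)[:PREFIX_LEN] for u in [
        "http://malware.example.com/payload",
        "http://phish.example.net/login",
    ]}

    def check_url(url: str) -> str:
        # A prefix hit would normally trigger a lookup against the full
        # list before blocking; this sketch blocks on the prefix alone.
        if url_hash(url)[:PREFIX_LEN] in bad_prefixes:
            return "blocked"
        return "ok"

    print(check_url("http://phish.example.net/login"))  # blocked
    print(check_url("http://www.example.org/"))         # ok

The prefix trick matters because it lets the client check URLs without downloading the entire list, and without revealing to the server every full URL it visits.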

Even better, the virtual machine / remote display technique allows the military sysadmin to keep all kinds of forensic data. The record of users’ external network activity amounts to a fantastic honeynet for capturing malicious payloads. If your personnel are being attacked, you want the evidence in hand to sort out who the attacker is and why you’re being attacked. That helps you block future attacks and formulate any countermeasures you might take. You could do this just as well for email programs as for web browsing. It might not work so well for games, but otherwise it’s a pretty powerful technique. (And, oh by the way, we’re talking about the military here, so personnel privacy isn’t as big a concern as it might be in other settings.)
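
To illustrate the forensic side, here is a minimal sketch, again in Python, in which every outbound fetch from the sacrificial machine leaves behind a timestamped record and a hash of the payload. The log format and function names are invented for illustration.

    import hashlib, json, time, urllib.request

    LOG_PATH = "forensics.log"  # hypothetical append-only evidence log

    def recorded_fetch(url: str) -> bytes:
        # Fetch a URL and append a forensic record (time, URL, payload
        # hash, size) so suspicious content can be re-examined later.
        body = urllib.request.urlopen(url, timeout=10).read()
        record = {
            "time": time.time(),
            "url": url,
            "sha256": hashlib.sha256(body).hexdigest(),
            "size": len(body),
        }
        with open(LOG_PATH, "a") as log:
            log.write(json.dumps(record) + "\n")
        return body

    # Example: each page a user visits leaves an evidence trail.
    # recorded_fetch("http://www.example.org/")

A real deployment would also archive the raw payloads themselves (that is what makes it useful as a honeynet), but the principle is the same.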

It’s also important to consider the benefits of social networking. Military personnel are not machines. They’re people with spouses, children, and friends back home. Facebook is a remarkably efficient way to keep in touch with large numbers of friends without investing large amounts of time — ideal for the Marine, back from patrol, to get a nice chuckle when winding down before heading off to sleep.

In short, banning social networking on “official” machines is problematic: it only pushes personnel to use these services on “unofficial” machines with “unofficial” ISPs, where you’re less likely to detect attacks and it’s harder to respond to them. Better to bring them in-house, in a controlled way, where you can better manage the security issues and have happier personnel.

Lessons from Amazon's 1984 Moment

Amazon got some well-deserved criticism for yanking copies of Orwell’s 1984 from customers’ Kindles last week. Let me spare you the copycat criticism of Amazon — and the obvious 1984-themed jokes — and jump right to the most interesting question: What does this incident teach us?

Human error was clearly part of the problem. Somebody at Amazon decided that repossessing purchased copies of 1984 would be a good idea. They were wrong about this, as both the public reaction and the company’s later backtracking confirm. But the fault lies not just with the decision-maker, but also with the factors that made the decision more likely, including some aspects of the technology itself.

Some put the blame on DRM, but that’s not the problem here. Even if the Kindle used open formats and let you export and back up your books, Amazon could still have made 1984 disappear from your Kindle. Yes, some users might have had backups of 1984 stored elsewhere, but most users would have lost their only copy.

Some blame cloud computing, but that’s not precisely right either. The Kindle isn’t really a cloud device — the primary storage, computing and user interface for your purchased books are provided by your own local Kindle device, not by some server at Amazon. You can disconnect your Kindle from the network forever (by flipping off the wireless network switch on the back), and it will work just fine.

Some blame the fact that Amazon controls everything about the Kindle’s software, which is a better argument but still not quite right. Most PCs are controlled by a single company, in the sense that that company (Microsoft or Apple) can make arbitrary changes to the software on the PC, including (in principle) deleting files or forcibly removing software programs.

The problem, more than anything else, is a lack of transparency. If customers had known that this sort of thing was possible, they would have spoken up against it — but Amazon had not disclosed it, and generally does not offer clear descriptions of how the product works or what kinds of control the company retains over users’ devices.

Why has Amazon been less transparent than other vendors? I’m not sure, but let me offer two conjectures. It might be because Amazon controls the whole system. Systems that can run third-party software have to be more open, in the sense that they have to tell the third-party developers how the system works, and they face some pressure to avoid gratuitous changes that might conflict with third-party applications. Alternatively, the lack of transparency might be because the Kindle offers less functionality than (say) a PC. Less functionality means fewer security risks, so customers don’t need as much information to protect themselves.

Going forward, Amazon will face more pressure to be transparent about the Kindle technology and the company’s relationship with Kindle buyers. It seems that e-books really are more complicated than dead-tree books.

U.S. Objects to China's Mandatory Green Dam Censorware

Yesterday, the U.S. Commerce Secretary and Trade Representative sent a letter to China’s government, objecting to China’s order, effective July 1, requiring that all new PCs sold in China come with the Green Dam Youth Escort censorware program preinstalled.

Here’s today’s New York Times:

Chinese officials have said that the filtering software, known as Green Dam-Youth Escort, is meant to block pornography and other “unhealthy information.”

In part, the American officials’ complaint framed this as a trade issue, objecting to the burden put on computer makers to install the software with little notice. But it also raised broader questions about whether the software would lead to more censorship of the Internet in China and restrict freedom of expression.

The Green Dam requirement puts U.S.-based PC companies, such as HP and Dell, in a tough spot: if they don’t comply they won’t be able to sell PCs in China; but if they do comply they will be censoring their customers’ Internet use and exposing customers to serious security risks.

There are at least two interesting new angles here. The first is the U.S. claim that China’s action violates free trade agreements. The U.S. has generally refrained from treating China’s Internet censorship as a trade issue, even though U.S. companies have often found themselves censored at times when competing Chinese companies were not. This unequal treatment, coupled with the Chinese government’s reported failure to define clearly which actions trigger censorship, looks like a trade barrier — but the U.S. hasn’t said much about it up to now.

The other interesting angle is the direct U.S. objection to censorship of political speech. For some time, the U.S. has tolerated China’s government blocking certain political speech in the network, via the “Great Firewall”. It’s not clear exactly how this objection is framed — the U.S. letter is not public — but news reports imply that political censorship itself, or possibly the requirement that U.S. companies participate in it, is being treated as a kind of improper trade barrier.

We’re heading toward an interesting showdown as the July 1 date approaches. Will U.S. companies ship machines with Green Dam? According to the New York Times, HP hasn’t decided, and Dell is dodging the question. The companies don’t want to lose access to the China market — but if U.S. companies participate so directly in political censorship, they would be setting a very bad precedent.

China's New Mandatory Censorware Creates Big Security Flaws

Today Scott Wolchok, Randy Yao, and Alex Halderman at the University of Michigan released a report analyzing Green Dam, the censorware program that the Chinese government just ordered installed on all new computers in China. The researchers found that Green Dam creates very serious security vulnerabilities on users’ computers.

The report starts with a summary of its findings:

The Chinese government has mandated that all PCs sold in the country must soon include a censorship program called Green Dam. This software monitors web sites visited and other activity on the computer and blocks adult content as well as politically sensitive material. We examined the Green Dam software and found that it contains serious security vulnerabilities due to programming errors. Once Green Dam is installed, any web site the user visits can exploit these problems to take control of the computer. This could allow malicious sites to steal private data, send spam, or enlist the computer in a botnet. In addition, we found vulnerabilities in the way Green Dam processes blacklist updates that could allow the software makers or others to install malicious code during the update process. We found these problems with less than 12 hours of testing, and we believe they may be only the tip of the iceberg. Green Dam makes frequent use of unsafe and outdated programming practices that likely introduce numerous other vulnerabilities. Correcting these problems will require extensive changes to the software and careful retesting. In the meantime, we recommend that users protect themselves by uninstalling Green Dam immediately.

The researchers have released a demonstration attack which will crash the browser of any Green Dam user. Another attack, for which they have not released a demonstration, allows any web page to seize control of any Green Dam user’s computer.
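
The update-channel weakness the report describes belongs to a well-understood class of failure: applying blacklist updates without verifying who produced them. Here is a minimal sketch, in Python, of the missing check, using an HMAC with a shared vendor key; a real deployment would more likely use public-key signatures, and all the names here are hypothetical.

    import hmac, hashlib

    VENDOR_KEY = b"shared-secret-key"  # hypothetical; real systems ship a public key

    def install(payload: bytes) -> None:
        # Placeholder for the real installation step.
        print("installing %d-byte update" % len(payload))

    def sign_update(payload: bytes, key: bytes = VENDOR_KEY) -> bytes:
        # Run by the vendor when publishing a blacklist update.
        return hmac.new(key, payload, hashlib.sha256).digest()

    def apply_update(payload: bytes, tag: bytes, key: bytes = VENDOR_KEY) -> bool:
        # Run by the client before installing. Without this check, anyone
        # who can tamper with the download can feed the client arbitrary data.
        if not hmac.compare_digest(sign_update(payload, key), tag):
            return False  # reject: unauthenticated update
        install(payload)
        return True

    update = b"blacklist-v2"
    assert apply_update(update, sign_update(update))  # genuine: accepted
    assert not apply_update(update, b"\x00" * 32)     # forged: rejected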

This is a serious blow to the Chinese government’s mandatory censorware plan. Green Dam’s insecurity is a show-stopper — no responsible PC maker will want to preinstall such dangerous software. The software can be fixed, but it will take a while to test the fix, and there is no guarantee that the next version won’t have other flaws, especially in light of the blatant errors in the current version.