November 24, 2024

Interesting Email from Sequoia

A copy of an email I received has been passed around on various mailing lists. Several people, including reporters, have asked me to confirm its authenticity. Since everyone seems to have read it already, I might as well publish it here. Yes, it is genuine.

====

Sender: Smith, Ed [address redacted]@sequoiavote.com
To: [addresses redacted]
Subject: Sequoia Advantage voting machines from New Jersey
Date: Fri, Mar 14, 2008 at 6:16 PM

Dear Professors Felten and Appel:

As you have likely read in the news media, certain New Jersey election officials have stated that they plan to send to you one or more Sequoia Advantage voting machines for analysis. I want to make you aware that if the County does so, it violates their established Sequoia licensing Agreement for use of the voting system. Sequoia has also retained counsel to stop any infringement of our intellectual properties, including any non-compliant analysis. We will also take appropriate steps to protect against any publication of Sequoia software, its behavior, reports regarding same or any other infringement of our intellectual property.

Very truly yours,
Edwin Smith
VP, Compliance/Quality/Certification
Sequoia Voting Systems

[contact information and boilerplate redacted]

Privacy: Beating the Commitment Problem

I wrote yesterday about a market failure relating to privacy, in which a startup company can’t convincingly commit to honoring its customers’ privacy later, after the company is successful. If companies can’t commit to honoring privacy, then customers won’t be willing to pay for privacy promises – and the market will undersupply privacy.

Today I want to consider how to attack this problem. What can be done to enable stronger privacy commitments?

I was skeptical of legal commitments because, even though a company might make a contractual promise to honor some privacy rules, customers won’t have the time or training to verify that the promise is enforceable and free of loopholes.

One way to attack this problem is to use standardized contracts. A trusted public organization might design a privacy contract that companies could sign. Then if a customer knew that a company had signed the standard contract, and if the customer trusted the organization that wrote the contract, the customer could be confident that the contract was strong.

But even if the contract is legally bulletproof, the company might still violate it. This risk is especially acute with a cash-strapped startup, and even more so if the startup might be located offshore. Many startups will have shallow pockets and little presence in the user’s locality, so they won’t be deterred much by potential breach-of-contract lawsuits. If the startup succeeds, it will eventually have enough at stake that it will have to keep the promises that its early self made. But if it fails or is on the ropes, it will be strongly tempted to try cheating.

How can we keep a startup from cheating? One approach is to raise the stakes by asking the startup to escrow money against the possibility of a violation – this requirement could be built into the contract.

Another approach is to have the actual data held by a third party with deeper pockets – the startup would provide the code that implements its service, but the code would run on equipment managed by the third party. Outsourcing of technical infrastructure is increasingly common already, so the only difference from existing practice would be to build a stronger wall between the data stored on the server and the company providing the code that implements the service.
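To make the "wall" concrete, here is a minimal sketch in Python – all names are hypothetical, and this is not a description of any real outsourcing arrangement. The third party runs a custodian that holds the raw records; the startup's service code can only ask the custodian to apply approved, per-user functions, and there is deliberately no call that hands back the whole data set.

    # Hypothetical sketch of the "wall": the custodian (run by the third
    # party) holds raw records; the startup's code gets answers, not data.
    from typing import Callable, Dict


    class DataCustodian:
        """Runs on the third party's infrastructure, not the startup's."""

        def __init__(self) -> None:
            self._records: Dict[str, dict] = {}  # user_id -> private record

        def store(self, user_id: str, record: dict) -> None:
            self._records[user_id] = record

        def query(self, user_id: str, approved_fn: Callable[[dict], object]) -> object:
            # The only way service code touches data: an approved, per-user
            # function is applied inside the wall and only its result leaves.
            return approved_fn(self._records[user_id])

        # Deliberately absent: no bulk export, no "give me every record" call.


    def recommend(record: dict) -> list:
        """An example of an approved function the startup might supply."""
        return ["crypto-news"] if record.get("likes_crypto") else ["general-news"]


    if __name__ == "__main__":
        custodian = DataCustodian()
        custodian.store("alice", {"likes_crypto": True, "email": "alice@example.com"})
        print(custodian.query("alice", recommend))  # ['crypto-news']

Of course, a toy interface like this only works for services with very simple data needs, which is exactly the difficulty discussed next.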

From a technical standpoint, this wall might be very difficult to build, depending on what exactly the service is supposed to do. For some services the wall might turn out to be impossible to build – there are some gnarly technical issues here.

There’s no easy way out of the privacy commitment problem. But we can probably do more to attack it than we do today. Many people seem to have given up on privacy online, which is a real shame.

Cold Boot Attacks: Vulnerable While Sleeping

Our research on cold boot attacks on disk encryption has generated lots of interesting discussion. A few misconceptions seem to be floating around, though. I want to address one of them today.

As we explain in our paper, laptops are vulnerable when they are “sleeping” or (usually) “hibernating”. Frequently used laptops are almost always in these states when they’re not in active use – when you just close the lid on your laptop and it quiets down, it’s probably sleeping.

When a laptop goes to sleep, all of the data that was in memory stays there, but the rest of the system is shut down. When you re-open the lid of the laptop, the rest of the system is activated, and the system goes on running, using the same memory contents as before. (Hibernating is similar, but the contents of memory are copied off to the hard drive instead, then brought back from the hard drive when you re-awaken the machine.) People put their laptops to sleep, rather than shutting them down entirely, because a sleeping machine can wake up in seconds with all of the programs still running, while a fully shut-down machine will take minutes to reboot.

Now suppose an attacker gets hold of your laptop while it is sleeping, and suppose the laptop is using disk encryption. The attacker can take the laptop back to his lair, and then open the lid. The machine will reawaken, with the same information in memory that was there when you put the machine to sleep – and that information includes the secret key that is used to encrypt the files on your hard disk. The machine may be screen-locked – that is, it may require entry of your password before you can interact with the desktop – but the attacker won’t care. All he cares about is that the encryption key is in memory.

The attacker will then insert a special thumb drive into the laptop, yank out the laptop’s battery, quickly replace the battery, and push the power button to reboot the laptop. The encryption key will still be in memory – the memory will not have lost its contents because the laptop was without power only momentarily while the battery was out. It doesn’t matter how long the laptop takes to reboot, because once power is restored the memory is refreshed and its contents are maintained. When the laptop boots, software from the thumb drive will read the contents of memory, find the secret encryption key, and proceed to unlock the encrypted files on your hard drive.
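To be concrete about that last step: once the machine has rebooted into the attacker’s own environment, “reading the contents of memory” can be as simple as copying physical memory out to a file. The following sketch is purely illustrative – it assumes a minimal Linux environment on the thumb drive whose kernel exposes physical memory through /dev/mem (stock kernels restrict this), and it is not the imaging tool used in our research.

    # Illustrative only: copy physical memory to a file from a minimal boot
    # environment that exposes it via /dev/mem. Real imaging tools respect
    # the machine's memory map instead of blindly reading a fixed range.

    CHUNK = 1024 * 1024  # 1 MiB at a time


    def dump_physical_memory(out_path: str, limit: int = 2 * 1024 ** 3) -> None:
        with open("/dev/mem", "rb") as mem, open(out_path, "wb") as out:
            offset = 0
            while offset < limit:
                mem.seek(offset)
                try:
                    chunk = mem.read(CHUNK)
                except OSError:
                    chunk = b""  # reserved or unreadable region
                # Pad so offsets in the dump line up with physical addresses.
                out.write(chunk.ljust(CHUNK, b"\x00"))
                offset += CHUNK


    if __name__ == "__main__":
        dump_physical_memory("/mnt/usb/memory.img")  # hypothetical output path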

In short, the adversary doesn’t need to capture your laptop while the laptop is open and in active use. All he needs is to get your laptop while it is sleeping – which it is probably doing most of the time.

New Research Result: Cold Boot Attacks on Disk Encryption

Today eight colleagues and I are releasing a significant new research result. We show that disk encryption, the standard approach to protecting sensitive data on laptops, can be defeated by relatively simple methods. We demonstrate our methods by using them to defeat three popular disk encryption products: BitLocker, which comes with Windows Vista; FileVault, which comes with MacOS X; and dm-crypt, which is used with Linux. The research team includes J. Alex Halderman, Seth D. Schoen, Nadia Heninger, William Clarkson, William Paul, Joseph A. Calandrino, Ariel J. Feldman, Jacob Appelbaum, and Edward W. Felten.

Our site has links to the paper, an explanatory video, and other materials.

The root of the problem lies in an unexpected property of today’s DRAM memories. DRAMs are the main memory chips used to store data while the system is running. Virtually everybody, including experts, will tell you that DRAM contents are lost when you turn off the power. But this isn’t so. Our research shows that data in DRAM actually fades out gradually over a period of seconds to minutes, enabling an attacker to read the full contents of memory by cutting power and then rebooting into a malicious operating system.

Interestingly, if you cool the DRAM chips, for example by spraying inverted cans of “canned air” dusting spray on them, the chips will retain their contents for much longer. At these temperatures (around -50 °C) you can remove the chips from the computer and let them sit on the table for ten minutes or more, without appreciable loss of data. Cool the chips in liquid nitrogen (-196 °C) and they hold their state for hours at least, without any power. Just put the chips back into a machine and you can read out their contents.
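To put a number on “appreciable loss of data”: given a reference image of memory and an image captured after a power cycle, the decay can be summarized as the fraction of bits that flipped. The sketch below is a trivial illustration with hypothetical file names, not the measurement tooling used for the paper.

    # Toy illustration: report what fraction of bits differ between two
    # equal-length memory images (e.g., before and after a power cycle).

    def bit_error_rate(reference: bytes, recovered: bytes) -> float:
        if len(reference) != len(recovered):
            raise ValueError("images must be the same length")
        flipped = sum(bin(a ^ b).count("1") for a, b in zip(reference, recovered))
        return flipped / (8 * len(reference))


    if __name__ == "__main__":
        with open("before.img", "rb") as f:
            before = f.read()
        with open("after.img", "rb") as f:
            after = f.read()
        print(f"bit error rate: {bit_error_rate(before, after):.4%}")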

This is deadly for disk encryption products because they rely on keeping master decryption keys in DRAM. This was thought to be safe because the operating system would keep any malicious programs from accessing the keys in memory, and there was no way to get rid of the operating system without cutting power to the machine, which “everybody knew” would cause the keys to be erased.

Our results show that an attacker can cut power to the computer, then power it back up and boot a malicious operating system (from, say, a thumb drive) that copies the contents of memory. Having done that, the attacker can search through the captured memory contents, find any crypto keys that might be there, and use them to start decrypting hard disk contents. We show very effective methods for finding and extracting keys from memory, even if the contents of memory have faded somewhat (i.e., even if some bits of memory were flipped during the power-off interval). If the attacker is worried that memory will fade too quickly, he can chill the DRAM chips before cutting power.
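For a flavor of what “searching memory for keys” looks like, here is a deliberately crude sketch: it scans a memory image for 16-byte aligned windows whose bytes look random enough to be key material. The methods in the paper are much stronger – they search for the structure of expanded AES key schedules and tolerate flipped bits – so treat this only as an illustration, with a hypothetical input file name.

    # Crude illustration: flag 16-byte aligned windows whose bytes look
    # "key-like" (mostly distinct values, not mostly zero). The paper's
    # keyfind method instead looks for AES key schedule structure and
    # tolerates bit errors; this heuristic will produce many false hits.

    def candidate_keys(image: bytes, window: int = 16, min_distinct: int = 14):
        for offset in range(0, len(image) - window + 1, window):
            chunk = image[offset:offset + window]
            if chunk.count(0) > 2:               # skip mostly-zero memory
                continue
            if len(set(chunk)) >= min_distinct:  # many distinct byte values
                yield offset, chunk.hex()


    if __name__ == "__main__":
        with open("memory.img", "rb") as f:      # hypothetical memory dump
            image = f.read()
        for offset, key_hex in candidate_keys(image):
            print(f"possible key material at 0x{offset:08x}: {key_hex}")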

There seems to be no easy fix for these problems. Fundamentally, disk encryption programs now have nowhere safe to store their keys. Today’s Trusted Computing hardware does not seem to help; for example, we can defeat BitLocker despite its use of a Trusted Platform Module.

For more details, see the paper site.

Unattended Voting Machines, As Usual

It’s election day, so tradition dictates that I publish some photos of myself with unattended voting machines.

To recap: It’s well known that paperless electronic voting machines are vulnerable to tampering, if an attacker can get physical access to a machine before the election. Most of the vendors, and a few election officials, claim that this isn’t a problem because the machines are well guarded so that no would-be attacker can get to them. Which would be mildly reassuring – if it were true.

Here’s me with two unattended voting machines, taken on Sunday evening in a Princeton polling place:

Here are four more unattended voting machines, taken on Monday evening in another Princeton polling place.

I stood conspicuously next to this second set of machines for fifteen minutes, and saw nobody.

In both cases I had ample opportunity to tamper with the machines – but of course I did not.