November 24, 2024

An Inconvenient Truth About Privacy

One of the lessons we’ve learned from Al Gore is that it’s possible to have too much of a good thing. We all like to tool around in our SUVs, but too much driving leads to global warming. We must all take responsibility for our own carbon emissions.

The same goes for online privacy, except that there the problem is storage rather than carbon emissions. We all want more and bigger hard drives, but what is going to be stored on those drives? Information, probably relating to other people. The equation is simple: more storage equals more privacy invasion.

That’s why I have pledged to maintain a storage-neutral lifestyle. From now on, whenever I buy a new hard drive, I’ll either delete the same amount of old information, or I’ll purchase a storage offset from someone else who has extra data to delete. By bidding up the cost of storage offsets, I’ll help create a market for storage conservation, without the inconvenience of changing my storage-intensive lifestyle.

Government can do its part, too. If the U.S. government adopted a storage-neutral policy, then for every email the NSA recorded, the government would have to delete another email elsewhere – say, at the White House. It’s truly a win-win outcome. And storage conservation technology can help drive the green economy of the twenty-first century.

For private industry, a cap-and-trade system is the best policy. Companies will receive data storage permits, which can be bought and sold freely. When JuicyCampus conserves storage by eliminating its access logs, it can sell the unused storage capacity to ChoicePoint, perhaps for storing information about the same JuicyCampus posters. The free market will allocate the limited storage capacity efficiently, as those who profit by storing less can sell permits to those who profit by storing more.

Debating these policy niceties is all well and good, but the important thing is for all of us to recognize the storage problem and make changes in our own lives. If you and I don’t reduce our storage footprint, who will?

Please join me today in adopting a storage-neutral lifestyle. You can start by not leaving comments on this post.

Privacy: Beating the Commitment Problem

I wrote yesterday about a market failure relating to privacy, in which a startup company can’t convincingly commit to honoring its customers’ privacy later, after the company is successful. If companies can’t commit to honoring privacy, then customers won’t be willing to pay for privacy promises – and the market will undersupply privacy.

Today I want to consider how to attack this problem. What can be done to enable stronger privacy commitments?

I was skeptical of legal commitments because, even though a company might make a contractual promise to honor some privacy rules, customers won’t have the time or training to verify that the promise is enforceable and free of loopholes.

One way to attack this problem is to use standardized contracts. A trusted public organization might design a privacy contract that companies could sign. Then if a customer knew that a company had signed the standard contract, and if the customer trusted the organization that wrote the contract, the customer could be confident that the contract was strong.

But even if the contract is legally bulletproof, the company might still violate it. This risk is especially acute with a cash-strapped startup, and even more so if the startup might be located offshore. Many startups will have shallow pockets and little presence in the user’s locality, so they won’t be deterred much by potential breach-of-contract lawsuits. If the startup succeeds, it will eventually have enough at stake that it will have to keep the promises that its early self made. But if it fails or is on the ropes, it will be strongly tempted to try cheating.

How can we keep a startup from cheating? One approach is to raise the stakes by asking the startup to escrow money against the possibility of a violation – this requirement could be built into the contract.

Another approach is to have the actual data held by a third party with deeper pockets – the startup would provide the code that implements its service, but the code would run on equipment managed by the third party. Outsourcing of technical infrastructure is increasingly common already, so the only difference from existing practice would be to build a stronger wall between the data stored on the server and the company providing the code that implements the service.

From a technical standpoint, this wall might be very difficult to build, depending on what exactly the service is supposed to do. For some services the wall might turn out to be impossible to build – there are some gnarly technical issues here.

There’s no easy way out of the privacy commitment problem. But we can probably do more to attack it than we do today. Many people seem to have given up on privacy online, which is a real shame.

Privacy and the Commitment Problem

One of the challenges in understanding privacy is how to square what people say about privacy with what they actually do. People say they care deeply about privacy and resent unexpected commercial use of information about them; but they happily give that same information to companies likely to use and sell it. If people value their privacy so highly, why do they sell it for next to nothing?

To put it another way, people say they want more privacy than the market is producing. Why is this? One explanation is that actions speak louder than words: people don’t really want privacy very much (despite what they say), and the market is producing an efficient level of privacy. But there’s another possibility: perhaps a market failure is causing underproduction of privacy.

Why might this be? A recent Slate essay by Reihan Salam gives a clue. Salam talks about the quandary faced by companies like the financial-management site Wesabe. A new company building up its business wants to reassure customers that their information will be treated with the utmost care. But later, when the company is big, it will want to monetize the same customer information. Salam argues that these forces are in tension and few if any companies will be able to stick with their early promises to not be evil.

What customers want, of course, is not good intentions but a solid commitment from a company that it will stay privacy-friendly as it grows. The problem is that there’s no good way for a company to make such a commitment. In principle, a company could make an ironclad legal commitment, written into a contract with customers. But in practice customers will have a hard time deciphering such a contract and figuring out how much it actually protects them. Is the contract enforceable? Are there loopholes? The average customer won’t have a clue. He’ll do what he usually does with a long website contract: glance briefly at it, then shrug and click “Accept”.

An alternative to contracts is signaling. A company will say, repeatedly, that its intentions are pure. It will appoint the right people to its advisory board and send its executives to say the right things at the right conferences. It will take conspicuous, almost extravagant steps to be privacy-friendly. This is all fine as far as it goes, but these signals are a poor substitute for a real commitment. They aren’t too difficult to fake. And even if the signals are backed by the best of intentions, everything could change in an instant if the company is acquired – a new management team might not share the original team’s commitment to privacy. Indeed, if management’s passion for privacy is holding down revenue, such an acquisition will be especially likely.

There’s an obvious market failure here. If we postulate that at least some customers want to use web services that come with strong privacy commitments (and are willing to pay the appropriate premium for them), it’s hard to see how the market can provide what they want. Companies can signal a commitment to privacy, but those signals will be unreliable so customers won’t be willing to pay much for them – which will leave the companies with little incentive to actually protect privacy. The market will underproduce privacy.

How big a problem is this? It depends on how many customers would be willing to pay a premium for privacy – a premium big enough to replace the revenue from monetizing customer information. How many customers would be willing to pay this much? I don’t know. But I do know that people might care a lot about privacy, even if they’re not paying for privacy today.

Cold Boot Attacks: Vulnerable While Sleeping

Our research on cold boot attacks on disk encryption has generated lots of interesting discussion. A few misconceptions seem to be floating around, though. I want to address one of them today.

As we explain in our paper, laptops are vulnerable when they are “sleeping” or (usually) “hibernating”. Frequently used laptops are almost always in these states when they’re not in active use – when you just close the lid on your laptop and it quiets down, it’s probably sleeping.

When a laptop goes to sleep, all of the data that was in memory stays there, but the rest of the system is shut down. When you re-open the lid of the laptop, the rest of the system is activated, and the system goes on running, using the same memory contents as before. (Hibernating is similar, but the contents of memory are copied off to the hard drive instead, then brought back from the hard drive when you re-awaken the machine.) People put their laptops to sleep, rather than shutting them down entirely, because a sleeping machine can wake up in seconds with all of the programs still running, while a fully shut-down machine will take minutes to reboot.

Now suppose an attacker gets hold of your laptop while it is sleeping, and suppose the laptop is using disk encryption. The attacker can take the laptop back to his lair, and then open the lid. The machine will reawaken, with the same information in memory that was there when you put the machine to sleep – and that information includes the secret key that is used to encrypt the files on your hard disk. The machine may be screen-locked – that is, it may require entry of your password before you can interact with the desktop – but the attacker won’t care. All he cares about is that the encryption key is in memory.

The attacker will then insert a special thumb drive into the laptop, yank out the laptop’s battery, quickly replace the battery, and push the power button to reboot the laptop. The encryption key will still be in memory – the memory will not have lost its contents because the laptop was without power only momentarily while the battery was out. It doesn’t matter how long the laptop takes to reboot: the memory is powered during the reboot, so its contents can fade only during that brief battery-swap interval. When the laptop boots, software from the thumb drive will read the contents of memory, find the secret encryption key, and proceed to unlock the encrypted files on your hard drive.

In short, the adversary doesn’t need to capture your laptop while the laptop is open and in active use. All he needs is to get your laptop while it is sleeping – which it is probably doing most of the time.

New Research Result: Cold Boot Attacks on Disk Encryption

Today eight colleagues and I are releasing a significant new research result. We show that disk encryption, the standard approach to protecting sensitive data on laptops, can be defeated by relatively simple methods. We demonstrate our methods by using them to defeat three popular disk encryption products: BitLocker, which comes with Windows Vista; FileVault, which comes with Mac OS X; and dm-crypt, which is used with Linux. The research team includes J. Alex Halderman, Seth D. Schoen, Nadia Heninger, William Clarkson, William Paul, Joseph A. Calandrino, Ariel J. Feldman, Jacob Appelbaum, and Edward W. Felten.

Our site has links to the paper, an explanatory video, and other materials.

The root of the problem lies in an unexpected property of today’s DRAM memories. DRAMs are the main memory chips used to store data while the system is running. Virtually everybody, including experts, will tell you that DRAM contents are lost when you turn off the power. But this isn’t so. Our research shows that data in DRAM actually fades out gradually over a period of seconds to minutes, enabling an attacker to read the full contents of memory by cutting power and then rebooting into a malicious operating system.

Interestingly, if you cool the DRAM chips, for example by spraying inverted cans of “canned air” dusting spray on them, the chips will retain their contents for much longer. At these temperatures (around -50 °C) you can remove the chips from the computer and let them sit on the table for ten minutes or more, without appreciable loss of data. Cool the chips in liquid nitrogen (-196 °C) and they hold their state for hours at least, without any power. Just put the chips back into a machine and you can read out their contents.
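If you want a feel for what this gradual fading looks like in practice, the bookkeeping is simple: fill memory with a known pattern, cut power for some interval, dump memory after rebooting, and count how many bits flipped. Here is a minimal Python sketch of that comparison step – an illustration rather than our measurement code, and the file names are just placeholders:

```python
# Rough illustration (not the paper's measurement code): given the known
# pattern that filled a region of memory before power was cut, and a dump
# of the same region taken after rebooting, report how much has decayed.

def decay_fraction(before: bytes, after: bytes) -> float:
    """Fraction of bits that differ between the two images."""
    n = min(len(before), len(after))
    if n == 0:
        return 0.0
    flipped = sum(bin(a ^ b).count("1") for a, b in zip(before[:n], after[:n]))
    return flipped / (8 * n)

if __name__ == "__main__":
    import sys
    # Usage: python decay.py pattern.bin dump.bin  (placeholder file names)
    with open(sys.argv[1], "rb") as f:
        pattern = f.read()
    with open(sys.argv[2], "rb") as f:
        dump = f.read()
    print(f"{100 * decay_fraction(pattern, dump):.2f}% of bits flipped")
```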

This is deadly for disk encryption products because they rely on keeping master decryption keys in DRAM. This was thought to be safe because the operating system would keep any malicious programs from accessing the keys in memory, and there was no way to get rid of the operating system without cutting power to the machine, which “everybody knew” would cause the keys to be erased.

Our results show that an attacker can cut power to the computer, then power it back up and boot a malicious operating system (from, say, a thumb drive) that copies the contents of memory. Having done that, the attacker can search through the captured memory contents, find any crypto keys that might be there, and use them to start decrypting hard disk contents. We show very effective methods for finding and extracting keys from memory, even if the contents of memory have faded somewhat (i.e., even if some bits of memory were flipped during the power-off interval). If the attacker is worried that memory will fade too quickly, he can chill the DRAM chips before cutting power.
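To make the key-finding step concrete, here is a simplified sketch of one such method in Python: searching the captured image for the expanded key schedule that AES software keeps in memory alongside the key. This is an illustration of the general idea, not the tool from the paper – it assumes the 16 candidate key bytes themselves survived intact, it makes no attempt to correct errors, and the error threshold and command-line usage are arbitrary choices for the example.

```python
# Sketch: scan a raw memory image for likely AES-128 keys by looking for
# the 176-byte expanded key schedule that AES implementations keep next
# to the key. Illustrative only -- not the keyfind tool from the paper.

RCON = [0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x1B, 0x36]

def build_sbox():
    """Derive the AES S-box from GF(2^8) inverses plus the affine transform."""
    def gf_mul(a, b):
        p = 0
        for _ in range(8):
            if b & 1:
                p ^= a
            carry = a & 0x80
            a = (a << 1) & 0xFF
            if carry:
                a ^= 0x1B  # reduce by the AES polynomial x^8 + x^4 + x^3 + x + 1
            b >>= 1
        return p

    inverse = [0] * 256
    for x in range(1, 256):
        for y in range(1, 256):
            if gf_mul(x, y) == 1:
                inverse[x] = y
                break

    sbox = []
    for x in range(256):
        b, s = inverse[x], inverse[x]
        for r in (1, 2, 3, 4):          # affine transform: b ^ rotl(b,1..4) ^ 0x63
            s ^= ((b << r) | (b >> (8 - r))) & 0xFF
        sbox.append(s ^ 0x63)
    return sbox

def expand_key(key, sbox):
    """AES-128 key expansion: 16-byte key -> 176-byte key schedule."""
    w = [list(key[4 * i:4 * i + 4]) for i in range(4)]
    for i in range(4, 44):
        t = list(w[i - 1])
        if i % 4 == 0:
            t = [sbox[b] for b in t[1:] + t[:1]]   # RotWord then SubWord
            t[0] ^= RCON[i // 4 - 1]
        w.append([w[i - 4][j] ^ t[j] for j in range(4)])
    return bytes(b for word in w for b in word)

def bit_errors(a, b):
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def find_keys(image, max_bit_errors=8):
    """Flag offsets whose trailing 160 bytes nearly match the key schedule
    implied by the leading 16 bytes (allowing a few decayed bits)."""
    sbox = build_sbox()
    hits = []
    for off in range(len(image) - 176 + 1):
        candidate = image[off:off + 16]
        schedule = expand_key(candidate, sbox)
        errs = bit_errors(schedule[16:], image[off + 16:off + 176])
        if errs <= max_bit_errors:
            hits.append((off, candidate.hex(), errs))
    return hits

if __name__ == "__main__":
    import sys
    with open(sys.argv[1], "rb") as f:   # path to a raw memory dump
        dump = f.read()
    for off, key, errs in find_keys(dump):
        print(f"offset 0x{off:08x}  key {key}  ({errs} bit errors)")
```

A naive scan like this expands and compares a candidate at every byte offset, so it is far too slow for a full memory image; the point is only to show why a key schedule is such a recognizable fingerprint, even after some bits have decayed.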

There seems to be no easy fix for these problems. Fundamentally, disk encryption programs now have nowhere safe to store their keys. Today’s Trusted Computing hardware does not seem to help; for example, we can defeat BitLocker despite its use of a Trusted Platform Module.

For more details, see the paper site.