
Does the Great Firewall Violate U.S. Law?

Clayton, Murdoch, and Watson have an interesting new paper describing technical mechanisms that the Great Firewall of China uses to block online access to content the Chinese government doesn’t like.

The Great Firewall works in two parts. One part inspects data packets that cross the border between China and the rest of the world, looking for “bad” content. The other part tries to shut down cross-border connections that have contained “bad” content. I’ll focus here on the shutdown part.

The shutdown part attacks the TCP protocol, which is used (among many other things) to transfer Web pages and email. TCP allows two computers on the Net to establish a virtual “connection” and then send data over that connection. The technical specification for TCP says that either of the two computers can send a so-called Reset packet, which informs the computer on the other end that some unspecified error has occurred so the connection should be shut down immediately.

The Great Firewall tries to sever TCP connections by forging Reset packets. Each endpoint machine is sent a series of Reset packets purporting to come from the other machine (but really coming from the Great Firewall). The endpoints usually respond by shutting down the connection. If they try to connect again, they’ll get more forged Reset packets, and so on.
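
To make the mechanism concrete, here is a minimal sketch in Python using the scapy library (my choice for illustration; the paper doesn't prescribe any tooling) that passively logs incoming TCP Reset packets. An endpoint on the receiving end of the Great Firewall's injection would see a burst of Resets that claim to come from the remote peer.

```python
# Sketch: passively log incoming TCP RST packets with scapy.
# Illustrative only; running it needs packet-capture privileges.
from scapy.all import sniff, IP, TCP

def log_reset(pkt):
    if IP in pkt and TCP in pkt and (pkt[TCP].flags & 0x04):  # RST bit set
        print(f"RST from {pkt[IP].src}:{pkt[TCP].sport} -> "
              f"{pkt[IP].dst}:{pkt[TCP].dport} seq={pkt[TCP].seq}")

# BPF filter: TCP packets with the RST flag (bit 0x04 of TCP header byte 13).
sniff(filter="tcp[13] & 0x04 != 0", prn=log_reset, store=False)
```

The point is only that the forged Resets are ordinary, observable packets arriving at both endpoints, which is how researchers were able to study the firewall's behavior in the first place.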

This trick of forging Reset packets has been used by denial-of-service attackers in the past, and there are well-known defenses against it that have been built into popular networking software. However, these defenses generally don’t work against an attacker who can see legitimate traffic between the target machines, as the Great Firewall can.

What the Great Firewall is doing, really, is launching a targeted denial of service attack on both ends of the connection. If I visit a Chinese website and access certain content, the Great Firewall will send denial of service packets to a machine in China, which probably doesn’t violate Chinese law. But it will also send denial of service packets to my machine, here in the United States. Which would seem to implicate U.S. law.

The relevant U.S. statute is the Computer Fraud and Abuse Act (18 U.S.C. 1030), which makes it an offense to “knowingly cause[] the transmission of a program, information, code, or command, and as a result of such conduct, intentionally cause[] damage without authorization, to a protected computer”, as long as certain other conditions are met (about which more below). Unpacking this, and noting that any computer that can communicate with China meets the definition of a “protected computer”, the only element that needs any discussion is “damage”. The statute defines “damage” as “any impairment to the integrity or availability of data, a program, a system, or information”, so the unavailability to me of the information on the Chinese website I tried to visit would count as damage.

But the offense has another requirement, which is intended to ensure that it is serious enough to merit legal attention. The offense must also cause, or attempt to cause, one of the following types of harm:

(i) loss to 1 or more persons during any 1-year period (and, for purposes of an investigation, prosecution, or other proceeding brought by the United States only, loss resulting from a related course of conduct affecting 1 or more other protected computers) aggregating at least $5,000 in value;

(ii) the modification or impairment, or potential modification or impairment, of the medical examination, diagnosis, treatment, or care of 1 or more individuals;

(iii) physical injury to any person;

(iv) a threat to public health or safety; or

(v) damage affecting a computer system used by or for a government entity in furtherance of the administration of justice, national defense, or national security;

This probably wouldn’t apply to an attack on my computer, but attacks on certain U.S. government entities would trigger part (v), and there is a decent argument that the aggregate effect of such attacks on U.S. persons could add up to more than $5,000 in damage, which would trigger part (i). I don’t know whether this argument would succeed. And I’m not a lawyer, so I’m relying on real lawyers to correct me in the comments if I’m missing something here.

But even if the Great Firewall doesn’t violate U.S. law now, the law could be changed so that it did. A law banning the sending of forged packets to the U.S. with intent to deny availability of content lawful in the U.S. would put the Great Firewall on the wrong side of U.S. law. And it would do so without reaching across the border to regulate how the Chinese government interacts with its citizens. If we can’t stop the Chinese government from censoring its own citizens’ access to the Net, maybe we can stop it from launching denial of service attacks against us.

(link via Bruce Schneier)


Long-Tail Innovation

Recently I saw a great little talk by Cory Ondrejka on the long tail of innovation. (He followed up with a blog entry.)

For those not in the know, “long tail” is one of the current buzzphrases of tech punditry. The term was coined by Chris Anderson in a famous Wired article. The idea is that in markets for creative works, niche works account for a surprisingly large fraction of consumer demand. For example, Anderson writes that about one-fourth of Amazon’s book sales come from titles not among the 135,000 most popular. These books may sell in ones and twos, but there are so many of them that collectively they make up a big part of the market.

Traditional businesses generally did poorly at meeting this demand. A bricks-and-mortar book superstore stocks at most 135,000 titles, leaving at least one-fourth of the demand unmet. But online stores like Amazon can offer a much larger catalog, opening up the market to these long tail works.

Second Life, the virtual world run by Cory’s company, Linden Lab, lets users define the behavior of virtual items by writing software code in a special scripting language. Surprisingly many users do this, and the demand for scripted objects looks like a long tail distribution. If this is true for software innovation in general, Cory asked, what are the implications for business and for public policy?

The implications for public policy are interesting. Much of the innovation in the long tail is not motivated mainly by profit – the authors know that their work will not be popular. Policymakers should remember that not all valuable creativity is financially motivated.

But innovation can be deterred by imposing costs on it. The key issue is transaction costs. If you have to pay $200 to somebody before you can innovate, or if you have to involve lawyers, the innovation won’t happen. Or, just as likely, the innovation will happen anyway, and policymakers will wonder why so many people are ignoring the law. That’s what has happened with music remixes; and it could happen again for code.


21st Century Wiretapping: Risk of Abuse

Today I’m returning, probably for the last time, to the public policy questions surrounding today’s wiretapping technology. Thus far in the series (1, 2, 3, 4, 5, 6, 7, 8) I have described how technology enables wiretapping based on automated recognition of certain features of a message (rather than individualized suspicion of a person), I have laid out the argument in favor of allowing such content-triggered wiretaps given a suitable warrant, and I have addressed some arguments against allowing them. These counterarguments, I think, show that content-triggered wiretaps must be used carefully and with suitable oversight, but they do not justify forgoing such wiretaps entirely.

The best argument against content-triggered wiretaps is the risk of abuse. By “abuse” I mean the use of wiretaps, or information gleaned from wiretaps, illegally or for the wrong reasons. Any wiretapping regime is subject to some kind of abuse – even if we ban all wiretapping by the authorities, they could still wiretap illegally. So the risk of abuse is not a new problem in the high-tech world.

But it is a worse problem than it was before. The reason is that to carry out content-triggered wiretaps, we have to build an infrastructure that makes all communications available to devices managed by the authorities. This infrastructure enables new kinds of abuse, for example the use of content-based triggers to detect political dissent or, given enough storage space, the recording of every communication for later (mis)use.

Such serious abuses are not likely, but given the harm they could do, even a tiny chance that they could occur must be taken seriously. The infrastructure of content-triggered wiretaps is the infrastructure of a police state. We don’t live in a police state, but we should worry about building police state infrastructure. To make matters worse, I don’t see any technological way to limit such a system to justified uses. Our only real protections would be oversight and the threat of legal sanctions against abusers.

To sum up, the problem with content-triggered wiretaps is not that they are bad policy by themselves. The problem is that doing them requires some very dangerous infrastructure.

Given this, I think the burden should be on the advocates of content-triggered wiretaps to demonstrate that they are worth the risk. I won’t be convinced by hypotheticals, even vaguely plausible ones. I won’t be convinced, either, by vague hindsight claims that such wiretaps coulda-woulda-shoulda captured some specific badguy. I’m willing to be convinced, but you’ll have to show me some evidence.


Freeing the Xbox

When Microsoft shipped its Xbox game console, Linux programmers salivated. The Xbox was a pretty nice computer, priced at $149. The Xbox had all the hardware needed to run Linux and its applications. Problem was, Microsoft had tried to lock down the Xbox hardware to prevent unauthorized programs – such as the Linux kernel – from running on it. An article at xbox-linux.org explains how this lockdown plan failed. The technical details are quite interesting, but nontechies can learn from this story too.

Microsoft had two reasons for locking down the hardware. It wanted to stop people from running Xbox games that had been illegally copied. And it wanted to stop people from running other (noninfringing) software such as Linux. The latter goal is the more interesting one. Microsoft did this because it wanted to sell the Xbox hardware at a loss, and make up the difference by charging a premium for games. To do this, it needed to stop unauthorized software – otherwise people might buy the Xbox, install another operating system on it, and never buy an Xbox game.

A group of clever engineers, calling themselves the Xbox Linux Project, set out to discover how Microsoft had tried to lock down the Xbox hardware, and how they could overcome Microsoft’s lockdown and install Linux. We would expect them to succeed – in computer security, physical control of a device almost always can be leveraged to control the device’s behavior – and indeed they did. The bulk of the Xbox-Linux article describes the technical details of how Microsoft’s lockdown worked, how they reverse engineered it, and the tricks they discovered for capturing effective control of the Xbox and installing Linux on it.

Opponents of this kind of tinkering often argue that it is really just a front for copyright infringement – that the tinkerers really just want to run illegally copied games. But the article describes a group of people who just want to run Linux on their Xboxes, and are willing to take steps to stop their work being misappropriated by game copiers. For example, the article says that once they had figured out a trick, which the article calls a “hack”, for installing new software on the Xbox, they tried to use it responsibly:

But the Xbox Linux Project did not blindly release this hack. The first … proof of concept exploit had been finished in January 2003. After that, a lot of energy was invested in finding out a way to free the Xbox for homebrew development and Linux, but not allowing game copies. Microsoft was contacted, but without any success. They just ignored the problem.

Finally in July, the hack was released, with heavy obfuscation, and lockout code for non-Linux use. It was obvious that this would only slow down the “hacking of the hack”, so eventually, people would be able to use this vulnerability for copied games, but since Microsoft showed no interest in finding a solution, there was no other option than full disclosure. The suggestion of the Xbox Linux Project would have been to work together with Microsoft to silently close the security holes and, in return, work on a method to let homebrew and Linux run on the Xbox.

What should public policy have to say about this? Given that the Xbox Linux folks apparently weren’t trying to copy games but simply wanted to run noninfringing software on lawfully purchased hardware, and given that they took steps to hinder the use of their work for infringing purposes, it’s hard to object to their work on copyright grounds. The real action here is in Microsoft’s strategy of selling the Xbox hardware as a loss leader, and the tendency of the Xbox Linux work to frustrate this strategy. Xbox Linux creates value for its users. Should public policy be willing to destroy this value in order to enable Microsoft’s pricing strategy? My instinct is that it should not, though there is a plausible argument on the other side.

What is clear, though, is that this is not really a copyright issue. At bottom, it’s not about the right of Microsoft to be paid for the Xboxes it builds, or about the right of game authors to be paid for the copies of their games that users get. Instead, it’s about whether Microsoft can control how people use its products. In general, the law does not give the maker of a product the right to control its use. Why should the Xbox be any different?


How I Spent My Summer Vacation

Ah, summer, when a man’s thoughts turn to … ski jumping?

On Sunday I had the chance to try ski jumping, at the Swiss national team’s training center at Einsiedeln. My companions and I – or at least the ones foolish enough to try, which of course included me – donned thick neoprene bodysuits, gloves, helmets, and ski boots. We clumped up a short path then strapped on our skis and set off down the jumping track. Having skied only twice before – and that twenty-five years ago – I found this a real adventure, about which more below.

Ski jumping, it turns out, is a popular summer sport, probably more popular in the summer than the winter. I can understand why the spectators would prefer summer weather, but the jumpers’ gear felt better suited for winter than for the 25-degree Centigrade (80-degree Fahrenheit) day we had. The bodysuits are so hot, in fact, that every male jumper, whether expert or rookie, unzipped his suit and peeled it back to the waist when not jumping, exposing a tanned muscular torso, in some cases.

Skiing in summer requires a special surface. On the jumping ramp, the skis ride in two parallel tracks made of a ceramic material speckled with pea-sized bumps sticking upward so that the skis touch only the bumps. The landing area is thatched with countless thin, green plastic sticks about 20 cm (eight inches) long, anchored on one end with the other end pointing downhill. These are hosed down with water to reduce friction. At the bottom of the hill is a stopping area covered with wood chips.

After our own attempts at jumping (be patient; I’ll tell all below), we watched some expert Swiss jumpers practicing, from several vantage points. (YouTube video) From the top where the jumpers start, the ramp seems impossibly steep and stretches far below. It must take considerable courage to look down the ramp and then let oneself go. As the coach told us, once you start, there’s no turning back – you’re going to end up at the bottom of the hill one way or another. You have to be fearless. Unsurprisingly, the jumpers we saw all looked like seventeen-year-old young men.

Though the jumpers go fast, they never get very high above the ground. At the jump point, the ramp is 1.5 meters (five feet) above the ground. Below the jump point, the hillside curves down parabolically, parallel to the flight path that a jumper would follow in the absence of air resistance. So a jumper who doesn’t spring upward is never more than 1.5 meters off the ground. A skilled jumper might reach three meters (ten feet) above the ground. But then again he’ll be going very fast and will fly more than 100 meters (330 feet) down the hill.

The best view is from the takeoff point. You see the jumper start, far above. As he accelerates down the ramp toward you, his body locked in a compact crouch, you hear a ferocious clattering as his skis drum over the bumps in the ceramic tracks. Suddenly the ramp vanishes beneath him as he springs upward. At that moment he flashes past you, and the clattering is replaced all at once with an insistent whooshing, hissing sound as the jumper floats down the hill, his ski tips spread, his body leaning forward like a more aerodynamic Superman. In a second he vanishes behind the downward curve of the hill, and you wait for the faint but firm slapping noise of a safe landing. He returns to view at the base of the hill as he coasts to a stop on a flat grassy area. Nonchalantly, he removes his helmet, gloves, and skis, and half-unzips his body suit, as the next jumper begins.

The jump used by our group of rookies was maybe one-tenth the size of the big jump. Still, it looked distressingly long and steep from the top. On our first trip down the hill, we started just below the jump point, to practice skiing down the landing ramp. The trick is to maintain your balance on the strange green-stick surface as the hill curves sharply downward and then levels out again. Without ski poles, and with your ankles held rigid by the ski boots, you have little leverage to shift your weight forward or backward. Side to side balance is easy, but that’s not the problem.

As I awaited my first run, one of my companions jokingly (in German) asked the onsite medical expert how far it was to the hospital. Not to worry, she answered, the hospital was practically at the bottom of the hill. With that reassurance, I levered my skis out onto the top of the landing ramp and faced down the hill, as the coach held me motionless.

And then I was moving jerkily downhill as my skis fought the initial friction of the green sticks. I recovered my balance and picked up speed as the hill reached what seemed like a 45-degree angle. For a moment I tried to remember whether skiing on snow felt like this, but the thought was thrust aside by a rush of adrenaline and the sense that I was starting to lose my balance. As I reached the bottom of the hill any illusion of balance was gone, and I tumbled to a stop on the wood chips. I lay on my side, my skis angled awkwardly behind me, still attached to my feet. But I was unhurt and decided to try again.

On my second run, things went awry almost immediately. As I started I sensed that I was leaning ever so slightly backward. The downward curve of the hill started to tilt me backward even more. I struggled to get my center of gravity over my feet but it was hopeless. I ended up laying myself down on one hip and sliding comfortably down the last part of the hill. I was none the worse for wear, but my bodysuit was drenched from sliding over recently-watered green sticks.

I felt fine but decided not to try again. Two falls is enough for a guy of my age. I cheered on my companions as some of them managed to make short jumps and reach the bottom of the hill gracefully. Then we headed off for a delightful lunch and a tour of the Einsiedeln monastery and a stunningly restored church crowded, for some reason, with thousands of visiting Sri Lankans.

Having learned firsthand how hard ski jumping is, I was even more amazed to see the skilled jumpers casually launch themselves off the big jump. Next Winter Olympics, ski jumping will be a must-watch.

(My companions are invited to identify themselves and/or tell their own stories, in the comments.)


Syndromic Surveillance: 21st Century Data Harvesting

[This article was written by a pseudonymous reader who calls him/herself Enigma Foundry. I'm publishing it here because I think other readers would find it interesting. – Ed Felten]

The recent posts about 21st Century Wiretapping described a government program which captured, stored, filtered, and analyzed large quantities of information, information which the government had not previously had access to without special court permission. On reading these posts, it struck me that there are other government programs now being implemented that will also capture, store, filter, and analyze large quantities of information that had not previously been available to governmental authorities.

In contrast to the NSA wiretap program described in previous posts, the program I am going to describe has not yet generated any significant amount of public controversy, although its development has taken place in nearly full public view for the past decade. Also, unlike the NSA program, this program is still hypothetical, although a pilot project is underway.

The systems that have been used to detect disease outbreaks to date rely primarily on the recognition and reporting of health statistics that fit recognized disease patterns. (See, e.g., the summary for the CDC’s Morbidity and Mortality Weekly Report.) These disease surveillance systems work well enough for outbreaks of recognized and ‘reportable’ diseases, which, by virtue of having a long clinically described history, have distinct and well-known symptoms and, in almost all cases, definitive diagnostic tests. But what if an emerging infectious disease or a bio-terrorist attack used an agent that did not fit a recognized pattern, so that there was no well-defined set of symptoms, let alone a clinically meaningful test for identifying it?

If the initial symptoms are severe enough, as in the case of S.A.R.S., the disease will quickly come to light. (It is important to note that this did not happen in China, where the press was tightly controlled.) If the initial symptoms are not severe, however, the recognition that an attack has even occurred may be delayed many months (or, with certain types of agents, conceivably even years) after the event. To give Health Authorities the ability to see events outside the set of diseases that are required to be reported, a large database could be created that collates information such as workplace and school absenteeism, prescription and OTC (over-the-counter) medicine sales, symptoms reported at schools, numbers of doctor and Emergency Department visits, even weather patterns and reported veterinary conditions. Such a database could serve a very useful function in identifying a disease outbreak and bringing it to the attention of Public Health Authorities. This kind of data monitoring system has been given the name ‘Syndromic Surveillance,’ to distinguish it from traditional ‘Disease Surveillance’ programs.

You don’t need to invoke the specter of bioterrorism to make a strong case for the value of such a system. The example frequently cited is a 1993 outbreak in Milwaukee of cryptosporidium (an intestinal parasite) which eventually affected over 400,000 people. In that case, sales of anti-diarrhea medicines spiked some three weeks before officials became aware of the outbreak. If the sales of OTC medications had been monitored, perhaps officials could have been alerted to the outbreak earlier.
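
To make the idea concrete, here is a minimal sketch of the kind of anomaly check a syndromic surveillance system might run on a single data stream, say daily OTC anti-diarrheal sales for one census tract. The 28-day baseline window and the three-standard-deviation threshold are my own illustrative assumptions, not features of any deployed system.

```python
# Sketch: flag days when a count (e.g., OTC medicine sales in one census
# tract) spikes well above its recent baseline. Window and threshold are
# illustrative assumptions.
from statistics import mean, stdev

def flag_spikes(daily_counts, window=28, sigmas=3.0):
    alerts = []
    for day in range(window, len(daily_counts)):
        baseline = daily_counts[day - window:day]
        mu, sd = mean(baseline), stdev(baseline)
        if sd > 0 and daily_counts[day] > mu + sigmas * sd:
            alerts.append((day, daily_counts[day], mu))
    return alerts

# Example: a quiet month of sales followed by a sudden jump.
sales = [40, 42, 38, 41, 39, 44, 40] * 4 + [43, 45, 90, 130, 160]
for day, count, baseline in flag_spikes(sales):
    print(f"day {day}: {count} sales vs. baseline ~{baseline:.0f}")
```

A real system would of course use better baselines (day-of-week effects, seasonality) and would fuse many such streams, but the basic shape, comparing today’s counts against recent history, is the same.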

Note that this system, as currently proposed, does not necessarily create or require records that can be tied to particular individuals, although certain data about each individual, such as place of work and residence, occupation, and recent travel, are all of interest. The data would probably tie individual reports to a census tract, or perhaps a census block. So the concerns about individual privacy being violated seem to be less than in the case of the NSA data mining of telephone records, since the information is not tied to an individual and the type of information is very different from that harvested by the NSA program.

There are three interesting problems created by the database used by a Syndromic Surveillance system: (1) the problem of false positives, (2) issues relating to access to and control of the database, and (3) what to do if the Syndromic Surveillance system actually works.

First, with regard to false positives: even a very minor error rate can lead to many false alarms, and the consequences of a false alarm are much greater than in the case of the NSA data filtering program:

For instance, thousands of syndromic surveillance systems soon will be running simultaneously in cities and counties throughout the United States. Each might analyze data from 10 or more data series—symptom categories, separate hospitals, OTC sales, and so on. Imagine if every county in the United States had in place a single syndromic surveillance system with a 0.1 percent false-positive rate; that is, the alarm goes off inappropriately only once in a thousand days. Because there are about 3,000 counties in the United States, on average three counties a day would have a false-positive alarm. The costs of excessive false alarms are both monetary, in terms of resources needed to respond to phantom events, and operational, because too many false events desensitize responders to real events….
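
The arithmetic behind that passage is easy to check. Here is the computation in a couple of lines; the 3,000 counties and 0.1 percent figures come straight from the quoted passage, and the assumption of one system per county is the quote's, not mine.

```python
# Expected nationwide false alarms per day, using the quoted passage's numbers.
counties = 3000               # approximate number of U.S. counties (from the quote)
false_positive_rate = 0.001   # one inappropriate alarm per thousand days per system
print(counties * false_positive_rate)  # ~3 false alarms per day on average
```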

There are obviously many public policy issues relating to access to and dissemination of the information generated by such a public health database, but there are two particular items, providing somewhat contradictory information, that I’d like to present and hear your reactions and thoughts on:

Livingston, NJ - When news of former President Bill Clinton’s experience with chest pains and his impending cardiac bypass surgery hit the streets, hospital emergency departments and urgent care centers in the Northeast reportedly had an increase in cardiac patients. Referred to as “the Bill Clinton Effect,” the talked-about increase in cardiac patients seeking care has now been substantiated by Emergency Medical Associates’ (EMA) bio-surveillance system.

Reports of Clinton’s health woes were first reported on September 3rd, with newspaper accounts appearing nationally in September 4th editions. On September 6th, EMA’s bio-surveillance noted an 11% increase in emergency department visits with patients complaining of chest pain (over the historical average for that date), followed by a 76% increase in chest pain visits on September 7th, and a 53% increase in chest pain visits on September 8th.

The second story has to do with my own personal experience and observation of the Public Health authorities’ actions in Warsaw immediately following the Chernobyl accident. In Warsaw, the authorities had prepared for the event, and children were immediately given iodine to prevent the uptake of radioactive iodine. This has been widely credited with preventing many deaths due to cancer. In Warsaw, the Public Health Authorities also very promptly informed the public about the level of ambient radiation. Certainly, there was great concern among the populace but panic was largely averted. My empirical evidence is of course limited, but my gut feeling is that much dislocation was averted by (1) the obvious signs of organized preparation for such an event, and (2) the transparency with which data concerning public health were disseminated.

Links:
article summarizing ‘Syndromic Surveillance’
CDC article
epi-x, CDC’s epidemic monitoring program


The Last Mile Bottleneck and Net Neutrality

When thinking about the performance of any computer system or network, the first question to ask is “Where is the bottleneck?” As demand grows, one part of the system reaches its capacity first, and limits performance. That’s the bottleneck. If you want to improve performance, often the only real options are to use the bottleneck more efficiently or to increase the bottleneck’s capacity. Fiddling around with the rest of the system won’t make much difference.

For a typical home broadband user, the bottleneck for Internet access today is the “last mile” wire or fiber connecting their home to their Internet Service Provider’s (ISP’s) network. This is true today, and I’m going to assume from here on that it will continue to be true in the future. I should admit up front that this assumption could turn out to be wrong – but if it’s right, it has interesting implications for the network neutrality debate.

Two of the arguments against net neutrality regulation are that (a) ISPs need to manage their networks to optimize performance, and (b) ISPs need to monetize their networks in every way possible so they can get enough revenue to upgrade the last mile connections. Let’s consider how the last mile bottleneck affects each of these arguments.

The first argument says that customers can get better performance if ISPs (and not just customers) have more freedom to manage their networks. If the last mile is the bottleneck, then the most important management question is which packets get to use the last mile link. But this is something that each customer can feasibly manage. What the customer sends is, of course, under the customer’s control – and software on the customer’s computer or in the customer’s router can prioritize outgoing traffic in whatever way best serves that customer. Although it’s less obvious to nonexperts, the customer’s equipment can also control how the link is allocated among incoming data flows. (For network geeks: the customer’s equipment can control the TCP window size on connections that have incoming data.) And of course the customer knows better than the ISP which packets can best serve the customer’s needs.
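
As a rough illustration of the point that the customer’s own equipment can throttle incoming flows, here is a sketch in Python: shrinking a socket’s receive buffer lowers the TCP window the customer advertises on that connection, which limits how fast the remote sender can push data over the last-mile link. The buffer size is an arbitrary assumption, and real home routers do this kind of prioritization with queueing disciplines rather than per-socket settings; this is just to show the mechanism exists on the customer's side.

```python
# Sketch: cap the receive rate of a low-priority download by shrinking the
# socket's receive buffer (and hence the advertised TCP window).
# Buffer size is an illustrative assumption.
import socket

LOW_PRIORITY_RCVBUF = 16 * 1024  # small buffer -> small advertised window

def open_throttled_connection(host, port):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Set before connect() so the window is negotiated with the small buffer.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, LOW_PRIORITY_RCVBUF)
    s.connect((host, port))
    return s

# A bulk download opened this way backs off whenever the buffer fills,
# leaving more of the last-mile link for interactive traffic.
```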

Another way to look at this is that every customer has their own last mile link, and if that link is not shared then different customers’ links can be optimized separately. The kind of global optimization that only an ISP can do – and that might be required to ensure fairness among customers – just won’t matter much if the last mile is the bottleneck. No matter which way you look at it, there isn’t much ISPs can do to optimize performance, so we should be skeptical of ISPs’ claims that their network management will make a big difference for users. (All of this assumes, remember, that the last mile will continue to be the bottleneck.)

The second argument against net neutrality regulation is that ISPs need to be able to charge everybody fees for everything, so there is maximum incentive for ISPs to build their next-generation networks. If the last mile is the bottleneck, then building new last-mile infrastructure is one of the most important steps that can be taken to improve the Net, and so paying off the ISPs to build that infrastructure might seem like a good deal. Giving them monopoly rents could be good policy, if that’s what it takes to get a faster Net built – or so the argument goes.

It seems to me, though, that if we accept this last argument then we have decided that the residential ISP business is naturally not very competitive. (Otherwise competition will erode those monopoly rents.) And if the market is not going to be competitive, then our policy discussion will have to go beyond the simple “let the market decide” arguments that we hear from some quarters. Naturally noncompetitive communications markets have long posed difficult policy questions, and this one looks like no exception. We can only hope that we have learned from the regulatory mistakes of the past.

Let’s hope that the residential ISP business turns out instead to be competitive. If technologies like WiMax or powerline networking turn out to be practical, this could happen. A competitive market is the best outcome for everybody, letting the government safely keep its hands off the Internet, if it can.


The Exxon Valdez of Privacy

Recently I moderated a panel discussion, at Princeton Reunions, about “Privacy and Security in the Digital Age”. When the discussion turned to public awareness of privacy and data leaks, one of the panelists said that the public knows about this issue but isn’t really mobilized, because we haven’t yet seen “the Exxon Valdez of privacy” – the singular, dramatic event that turns a known area of concern into a national priority.

Scott Craver has an interesting response:

An audience member asked what could possibly comprise such a monumental disaster. One panelist said, “Have you ever been a victim of credit card fraud? Well, multiply that by 500,000 people.”

This is very corporate thinking: take a loss and multiply it by a huge number. Sure that’s a nightmare scenario for a bank, but is that really a national crisis that will enrage the public? Especially since cardholders are somewhat sheltered from fraud. Also consider how many people are already victims of identity theft, and how much money it already costs. I don’t see any torches and pitchforks yet.

Here’s what I think: the “Exxon Valdez” of privacy won’t be $100 of credit card fraud multiplied by a half million people. It will instead be the worst possible privacy disruption that can befall a single individual, and it doesn’t have to happen to a half million people, or even ten thousand. The number doesn’t matter, as long as it’s big enough to be reported on CNN …

[...]

So back to the question: what is the worst, the most sensational privacy disaster that can befall an individual – that in a batch of, oh say 500-5,000 people, will terrify the general public? I’m not thinking of a disaster that is tangentially aided by a privacy loss, like a killer reading my credit card statement to find out what cafe I hang out at. I’m talking about a direct abuse of the private information being the disaster itself.

What would be the Exxon Valdez of privacy? I’m not sure. I don’t think it will just be a loss of money – Scott explained why it won’t be many small losses, and it’s hard to imagine a large loss where the privacy harm doesn’t seem incidental. So it will have to be a leak of information so sensitive as to be life-shattering. I’m not sure exactly what that is.

What do you think?


Twenty-First Century Wiretapping: False Positives

Lately I’ve been writing about the policy issues surrounding government wiretapping programs that algorithmically analyze large amounts of communication data to identify messages to be shown to human analysts. (Past posts in the series: 1; 2; 3; 4; 5; 6; 7.) One of the most frequent arguments against such programs is that there will be too many false positives – too many innocent conversations misidentified as suspicious.

Suppose we have an algorithm that looks at a set of intercepted messages and classifies each message as either suspicious or innocuous. Let’s assume that every message has a true state that is either criminal (i.e., actually part of a criminal or terrorist conspiracy) or innocent. The problem is that the true state is not known. A perfect, but unattainable, classifier would label a message as suspicious if and only if it was criminal. In practice a classifier will make false positive errors (mistakenly classifying an innocent message as suspicious) and false negative errors (mistakenly classifying a criminal message as innocuous).

To illustrate the false positive problem, let’s do an example. Suppose we intercept a million messages, of which ten are criminal. And suppose that the classifier correctly labels 99.9% of the innocent messages. This means that 1000 innocent messages (0.1% of one million) will be misclassified as suspicious. All told, there will be 1010 suspicious messages, of which only ten – about 1% – will actually be criminal. The vast majority of messages labeled as suspicious will actually be innocent. And if the classifier is less accurate on innocent messages, the imbalance will be even more extreme.
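
Here is the same calculation in a few lines of code, so it’s easy to try other assumptions about message volume, the number of criminal messages, and the classifier’s accuracy. The numbers below are just the ones from the example above, plus the optimistic assumption that every criminal message is caught.

```python
# Base-rate arithmetic from the example above.
total_messages = 1_000_000
criminal       = 10
innocent       = total_messages - criminal
specificity    = 0.999  # fraction of innocent messages correctly labeled innocuous
sensitivity    = 1.0    # assume, optimistically, every criminal message is flagged

false_positives = innocent * (1 - specificity)      # ~1000 innocent messages flagged
true_positives  = criminal * sensitivity            # 10 criminal messages flagged
flagged         = false_positives + true_positives  # ~1010 flagged in total

print(f"{flagged:.0f} flagged, of which {true_positives / flagged:.1%} are criminal")
```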

This argument has some power, but I don’t think it’s fatal to the idea of algorithmically classifying intercepts. I say this for three reasons.

First, even if the majority of labeled-as-suspicious messages are innocent, this doesn’t necessarily mean that listening to those messages is unjustified. Letting the police listen to, say, ten innocent conversations is a good tradeoff if the eleventh conversation is a criminal one whose interception can stop a serious crime. (I’m assuming that the ten innocent conversations are chosen by some known, well-intentioned algorithmic process, rather than being chosen by potentially corrupt government agents.) This only goes so far, of course – if there are too many innocent conversations or the crime is not very serious, then this type of wiretapping will not be justified. My point is merely that it’s not enough to argue that most of the labeled-as-suspicious messages will be innocent.

Second, we can learn by experience what the false positive rate is. By monitoring the operation of the system, we can learn how many messages are labeled as suspicious and how many of those are actually innocent. If there is a warrant for the wiretapping (as I have argued there should be), the warrant can require this sort of monitoring, and can require the wiretapping to be stopped or narrowed if the false positive rate is too high.

Third, classification algorithms have (or can be made to have) an adjustable sensitivity setting. Think of it as a control knob that can be moved continuously between two extremes, where one extreme is labeled “avoid false positives” and the other is labeled “avoid false negatives”. Adjusting the knob trades off one kind of error for the other.

We can always make the false positive rate as low as we like, by turning the knob far enough toward “avoid false positives”. Doing this has a price, because turning the knob in that direction also increases the number of false negatives, that is, it causes some criminal messages to be missed. If we turn the knob all the way to the “avoid false positives” end, then there will be no false positives at all, but there might be many false negatives. Indeed, we might find that when the knob is turned to that end, all messages, whether criminal or not, are classified as innocuous.
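
One way to picture the knob: suppose the classifier assigns each message a numeric suspicion score, and everything above a threshold is labeled suspicious. Sliding the threshold is the knob. The scores below are made-up toy data, purely to show the tradeoff.

```python
# Sketch: the sensitivity "knob" as a score threshold. Scores are toy data.
def count_errors(innocent_scores, criminal_scores, threshold):
    false_pos = sum(s >= threshold for s in innocent_scores)
    false_neg = sum(s < threshold for s in criminal_scores)
    return false_pos, false_neg

innocent_scores = [0.05, 0.10, 0.20, 0.30, 0.45, 0.60]   # toy data
criminal_scores = [0.40, 0.55, 0.70, 0.90]               # toy data

for threshold in (0.3, 0.5, 0.7, 0.95):
    fp, fn = count_errors(innocent_scores, criminal_scores, threshold)
    print(f"threshold {threshold}: {fp} false positives, {fn} false negatives")
```

At the highest threshold in the toy data, nothing innocent is flagged but nothing criminal is caught either, which is exactly the “all messages classified as innocuous” extreme described above.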

So the question is not whether we can reduce false positives – we know we can do that – but whether there is anywhere we can set the knob that gives us an acceptably low false positive rate yet still manages to flag some messages that are criminal.

Whether there is an acceptable setting depends on the details of the classification algorithm. If you forced me to guess, I’d say that for algorithms based on today’s voice recognition or speech transcription technology, there probably isn’t an acceptable setting – to catch any appreciable number of criminal conversations, we’d have to accept huge numbers of false positives. But I’m not certain of that result, and it could change as the algorithms get better.

The most important thing to say about this is that it’s an empirical question, which means that it’s possible to gather evidence to learn whether a particular algorithm offers an acceptable tradeoff. For example, if we had a candidate classification algorithm, we could run it on a large number of real-world messages and, without recording any of those messages, simply count how many messages the algorithm would have labeled as suspicious. If that number were huge, we would know we had a false positive problem. We could do this for different settings of the knob, to see where the knob had to be set to get an acceptable false positive rate. Then we could apply the algorithm with that knob setting to a predetermined set of known-to-be-criminal messages, to see how many it flagged.

If governments are using algorithmic classifiers – and the U.S. government may be doing so – then they can do these types of experiments. Perhaps they have. It doesn’t seem too much to ask for them to report on their false positive rates.


Twenty-First Century Wiretapping: Reconciling with the Law

When the NSA’s wiretapping program first came to light, the White House said, mysteriously, that they didn’t get warrants for all of their wiretaps because doing so would have been impractical. Some people dismissed that as empty rhetoric. But for the rest of us, it was a useful hint about how the program worked, implying that the wiretapping was triggered by the characteristics of a call (or its contents) rather than following individuals who were specifically suspected of being terrorists.

As I wrote previously, content-based triggering is a relatively recent phenomenon, having become practical only with the arrival of the digital revolution. Our laws about search, seizure, and wiretapping mostly assume the pre-digital world, so they don’t do much to address the possibility of content-based triggering. The Fourth Amendment, for example, says that search warrants must “particularly describ[e] the place to be searched, and the persons or things to be seized.” Wiretapping statutes similarly assume wiretaps are aimed at identified individuals.

So when the NSA and the White House wanted to do searches with content-based triggering, there was no way to get a warrant that would allow them to do so. That left them with two choices: kill the program, or proceed without warrants. They chose the latter, and they now argue that warrants aren’t legally necessary. I don’t know whether their legal arguments hold water (legal experts are mostly skeptical) but I know it would be better if there were a statute that specifically addressed this situation.

The model, procedurally at least, would follow the Foreign Intelligence Surveillance Act (FISA). In FISA, Congress established criteria under which U.S. intelligence agencies could wiretap suspected spies and terrorists. FISA requires agencies to get warrants for such wiretaps, by applying to a special secret court, in a process designed to balance national security against personal privacy. There are also limited exceptions; for example, there is more leeway to wiretap in the first days of a war. Whether or not you like the balance point Congress chose in FISA, you’ll agree, I hope, that it’s good for the legislature to debate these tradeoffs, to establish a general policy, rather than leaving everything at the discretion of the executive branch.

If it took up this issue, Congress might decide to declare that content-based triggering is never acceptable. More likely, it would establish a set of rules and principles to govern wiretaps that use content-based triggering. Presumably, the new statute would establish a new kind of warrant, perhaps granted by the existing FISA court, and would say what justification needed to be submitted to the court, and what reporting needed to be done after a warrant was granted. Making these choices wisely would mitigate some of the difficulties with content-based triggering.

Just as important, it would create a constructive replacement for the arguments over the legality of the current NSA program. Today, those arguments are often shouting matches between those who say the program is far outside the law, and those who say that the law is outdated and is blocking necessary and reasonable intelligence-gathering. A debate in Congress, and among citizens, can help to break this rhetorical stalemate, and can re-establish the checks and balances that keep government’s power vital but limited.