Archives for May 2004

More on End-User Liability

My post yesterday on end-user liability for security breaches elicited some interesting responses.

Several people debated the legal question of whether end-users are already liable under current law. I don’t know the answer to that question, and my post yesterday was more in the nature of a hypothetical than a statement about current law. Rob Heverly, who appears to be a lawyer, says that because there is, in general, no duty to protect strangers from harm, end-users are not liable under current law for harm to others caused by intruders. Others say an unprotected machine may be an attractive nuisance. I’ll leave it to the lawyers to duke that one out.

Others objected that it would be unfair to hold an end-user liable if that user took all reasonable protective steps, or if he merely failed to take some extra step. To see why this objection might be wrong, consider a hypothetical where an attacker breaks into Alice’s machine, and uses it to cause harm to Bob. It seems unfair to make Alice pay for this harm. But the alternative is to leave Bob to pay for it, which may be even more unfair, depending on circumstances. From a theoretical standpoint, it makes sense to send the bill to the party who was best situated to prevent the harm. If that turns out to be Alice, then one can argue that she should be liable for the harm. And this argument is plausible even if Alice has very little power to address the harm – as long as Bob has even less power to address it.

Others objected that novice users would be unable to protect themselves. That’s true, but by itself it’s not a good argument against liability. Imposing liability would cause many novice users to get help, by hiring competent people to manage their systems. If an end-user can spend $N to reduce the expected harm to others by more than $N, then we want them to do so.

Others objected that liability for breaches would be a kind of reverse lottery, with a few unlucky users being hit with large bills, because their systems happened to be used to cause serious harm, while other similarly situated users got off scot-free. The solution to this problem is insurance, which is an effective mechanism for spreading this kind of risk. (Eventually, this might be a standard rider on homeowner’s or renter’s insurance policies.) Insurance companies would also have the resources to study whether particular products or practices increase or reduce expected liability. They might impose a surcharge on people who use a risky operating system, or provide a discount for the use of effective defensive tools. This, in turn, would give end-users economic incentives to make socially beneficial choices.
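To see why insurance dissolves the reverse-lottery objection, here is a back-of-the-envelope sketch. All of the numbers are made up for illustration; the point is only that a rare, ruinous bill and a small, predictable premium can have the same expected cost:

```python
# Hypothetical figures, chosen only to illustrate risk spreading.
num_users = 1_000_000       # insured end-users
p_liable = 1e-4             # chance a given user's machine causes a compensable harm
liability = 50_000.0        # size of the bill if it happens

expected_loss = p_liable * liability      # expected uninsured loss per user
load = 0.25                               # insurer's overhead and profit margin
premium = expected_loss * (1 + load)      # what each user pays instead

print(f"Expected uninsured loss per user: ${expected_loss:.2f}")
print(f"Annual premium per user:          ${premium:.2f}")
print(f"Worst case without insurance:     ${liability:,.2f}")
```

Each user trades a tiny chance of a $50,000 bill for a certain payment of a few dollars – and the insurer, pricing that premium, now has a direct financial reason to study which products and practices make the breach more or less likely.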

Finally, some people responded to my statement that liability might work poorly where harm is diffuse. Seth Finkelstein suggested class-action suits as a remedy. Class actions would make sense where the aggregate harm is large and the victims easy to identify. Rob Heverly suggests that large institutions like companies or universities would be likely lawsuit targets, because their many computers might cause enough harm to make a suit worthwhile. Both are good points, but I still believe that a great deal of harm – perhaps the majority – would be effectively shielded from recovery because of the costs of investigation and enforcement.

Should End-Users Be Liable for Security Breaches?

Eric Rescorla reports that, in a talk at WEIS, Dan Geer predicted (or possibly advocated) that end-users will be held liable for security breaches in their machines that cause harm to others.

As Eric notes, there is a good theoretical argument for this:

There are two kinds of costs to not securing your computer:

  • Internal costs: the costs to you of having your own machine broken into.
  • External costs: the costs to others of having your machine being broken into, primarily your machine being used as a platform for other attacks.

Currently, the only incentive you currently have is the internal costs. That incentive clearly isn’t that strong, as lots of people don’t upgrade their systems. The point of liability is to get you to also bear the external costs, which helps give you the right incentive to secure your systems.

Eric continues, astutely, by wondering whether it’s actually worthwhile, economically, for users to spend lots of money and effort trying to secure their systems. If the cost of securing your computer exceeds the cost (internal and external) of not doing so, then the optimal choice is simply to accept the cost of breaches; and that’s what you’ll do, even if you’re liable.
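Eric’s comparison can be made concrete with a toy calculation. The figures below are invented; what matters is which costs enter the comparison with and without liability:

```python
# Hypothetical annual figures; the point is the comparison, not the numbers.
cost_to_secure = 200.0     # patching, defensive tools, hiring help
p_breach = 0.10            # chance of compromise if you don't secure
internal_harm = 500.0      # harm to you if your machine is compromised
external_harm = 3000.0     # harm to others if your machine is compromised

expected_internal = p_breach * internal_harm                    # what you bear today
expected_total = p_breach * (internal_harm + external_harm)     # what you'd bear if liable

# Without liability, you weigh securing only against your own expected losses:
secure_without_liability = cost_to_secure < expected_internal
# With liability, the external costs enter your calculation too:
secure_with_liability = cost_to_secure < expected_total

print(secure_without_liability, secure_with_liability)
```

With these numbers the user rationally stays unsecured today ($200 > $50) but would secure under liability ($200 < $350) – and, as Eric notes, if `cost_to_secure` exceeded even the total, the rational move would be to accept breaches and pay the bills.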

There’s at least one more serious difficulty with end-user liability. Today, many intrusions into end-user machines lead to the installation of “bots” that the intruder uses later to send spam, launch denial of service attacks, or make other mischief. The harm caused by these bots is often diffuse.

For example, suppose Alice’s machine is compromised and the intruder uses it to send 100,000 spam emails, each of which costs its recipient five cents to delete. Alice’s insecurity has led to $5,000 of total harm. But who is going to sue Alice? No individual has suffered more than a few cents’ worth of harm. Even if all of the affected parties can somehow put together an action against Alice, the administrative and legal costs of the action (not to mention the cost of identifying Alice in the first place) will be much more than $5,000. In aggregate, all of the world’s Alices may be causing plenty of harm, but the costs of holding each particular Alice responsible may be excessive.
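The arithmetic of the Alice example is worth spelling out, because it shows why the suit never happens. The enforcement-cost figure is a guess on my part; everything else is from the example above:

```python
recipients = 100_000
harm_per_recipient = 0.05      # five cents to delete each spam message

total_harm = recipients * harm_per_recipient   # $5,000 in aggregate
max_individual_claim = harm_per_recipient      # no one recipient loses more than this

# Hypothetical: tracing Alice, organizing plaintiffs, and litigating.
enforcement_cost = 20_000.0

worth_suing = total_harm > enforcement_cost
print(total_harm, worth_suing)
```

The aggregate harm is real, but it is sliced into 100,000 pieces of a few cents each, and reassembling those pieces costs more than the whole.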

So, to the extent that the external costs of end-user insecurity are diffuse, end-user liability may do very little good. Maybe there is another way to internalize the external costs of end-user insecurity; but I’m not sure what it might be.

Florida Voting Machines Mis-recorded Votes

In Miami-Dade County, Florida, an internal county memo has come to light, documenting misrecording of votes by ES&S e-voting machines in a May 2003 election, according to a Matthew Haggman story in the Miami Daily Business Review.

The memo, written by Orlando Suarez, head of the county’s Enterprise Technology Services Department, describes Mr. Suarez’s examination of the electronic record of the May 2003 election in one precinct. The ES&S machines in question provide two reports at the end of an election. One report, the “vote image report”, gives the vote tabulation (i.e., number of votes cast for each candidate) for each voting machine, and the other gives an audit log of significant events, such as initialization of the machine and the casting of a vote (but not who the vote was cast for), for each machine.

Mr. Suarez’s examination found that the two records were inconsistent with each other, and that both were inconsistent with reality.

In his memo, Suarez analyzed a precinct where just nine electronic voting machines were used. He first examined the audit logs for all nine machines, which were compiled into one combined audit log. He found that the audit log made no mention of two of the machines used in the precinct.

In addition, he found that the audit log reported the serial number of a machine that was not used in that precinct. This phantom machine showed a count of ballots cast equal to the combined count from the two missing machines.

Then he looked at the vote image report, which aggregated all nine voting machines. He discovered that three of the machines did not appear in the vote image report, while a serial number for a machine not used in the precinct did appear. That phantom machine showed a vote count equal to the combined vote count of two of the missing machines; the third missing machine showed no activity.

Further examination revealed 38 votes that appeared in the vote image report but not in the audit log.
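The consistency check Suarez performed is, at bottom, a set comparison: the serial numbers that should appear in a report versus the serial numbers that do. Here is a minimal sketch of the audit-log side of that check, using made-up serial numbers (the memo’s actual serials are not reproduced here), following the pattern he found – two machines absent, one phantom present:

```python
# Hypothetical serial numbers, used only to illustrate the check.
precinct_machines = {"1001", "1002", "1003", "1004", "1005",
                     "1006", "1007", "1008", "1009"}

# Serials actually appearing in the combined audit log:
# two precinct machines absent, one unknown serial present.
audit_log_machines = {"1001", "1002", "1003", "1004", "1005",
                      "1006", "1007", "9999"}

missing = precinct_machines - audit_log_machines    # used but never logged
phantoms = audit_log_machines - precinct_machines   # logged but never used

print("Missing from audit log:", sorted(missing))
print("Phantom serials:       ", sorted(phantoms))
```

A sane election system would run exactly this kind of cross-check automatically, on both reports, and refuse to certify results when either set is non-empty.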

There is some evidence that the software used in this election was uncertified.

County officials don’t see much of a problem here:

Nevertheless, [county elections supervisor Constance] Kaplan insisted that Suarez’s analysis did not demonstrate any basic problems with the accuracy of the vote counts produced by the county’s iVotronic system. “The Suarez memo has nothing to do with the tabulation process,” she said. “It is very annoying that the coalition keeps equating the tabulation function with the audit function.”

Maybe I’m being overly picky here, but isn’t the vote tabulation supposed to match the audit trail? And isn’t the vote tabulation report supposed to match reality?

Very annoying, indeed.

Microsoft: No Security Updates for Infringers

Microsoft, reversing a previous decision, says it will not provide security updates to unlicensed users of Windows XP. Microsoft is obviously entitled to do this if it wants, since it has no obligation to provide product support to people who didn’t buy the product in the first place. A more interesting question is whether this was the best decision from the standpoint of Microsoft and its existing customers. The answer is far from obvious.

Before I go further, let me make two assumptions clear. First, I’m assuming Microsoft has a reliable way to tell which copies of Windows are legitimate, so that they never deny updates mistakenly to legitimate customers. Second, I’m assuming Microsoft doesn’t care about the welfare of infringers and feels no obligation at all to help them.

Helping infringers could easily hurt Microsoft’s business, if doing so makes infringement a more attractive option. If patches are one of the benefits of buying the product, then people are more likely to buy; but if they can get patches even without buying, some will choose to infringe, thereby costing Microsoft sales.

On the other hand, if there is a sizable population of unpatched infringing copies out there, this hurts Microsoft’s legitimate customers, because an infringing customer might infect a legitimate customer. A large reservoir of unpatched (infringing) machines will aggravate an already serious malware problem, by making Windows an even more attractive target to malware authors, and by speeding the spread of new malware.

But wait, it gets even more complicated. If infringing copies are susceptible to existing malware, then some of the bad guys will be satisfied to reuse old malware, since there is still a population of (infringing) machines it can attack. But if infringing copies are patched, then the bad guys may create more new malware which is not stopped by patches; and this new malware will affect legitimate and infringing copies alike. So refusing to update infringing copies may leave the infringers as decoys who draw fire away from legitimate customers.

There are even more factors in play, but I’ve probably written too much about this already. The effect of all this on Microsoft’s reputation is particularly interesting. Ultimately, I have no idea whether Microsoft made the right choice. And I doubt that Microsoft knows either.

Valenti Quotes Me

In his testimony at the House DMCA-reform hearing today, Jack Valenti quoted me, in support of a point he wanted to make. The quote comes from last year’s Berkeley DRM Conference, from my response to a question asked by Prof. Pam Samuelson. Here’s the relevant section from Mr. Valenti’s testimony (emphasis in original):

Keep in mind that, once copy protection is circumvented, there is no known technology that can limit the number of copies that can be produced from the original. In a recent symposium on the DMCA, Professor Samuelson of UC Berkeley posed the question: “whether it was possible to develop technologies that would allow…circumvention for fair uses without opening up the Pandora’s Box so that allowing these technologies means that you’re essentially repealing the anti-circumvention laws.”

The question was answered by the prominent computer scientist and outspoken opponent of the DMCA, Professor Ed Felton [sic] of Princeton: “I think this is one of the most important technical questions surrounding DRM – whether we know, whether we can figure out how to accommodate fair use and other lawful use without opening up a big loophole. The answer, I think, right now, is that we don’t know how to do that. Not effectively.

Moreover, there is no known device that can distinguish between a “fair use” circumvention and an infringing one. Allowing copy protection measures to be circumvented will inevitably result in allowing anyone to make hundreds of copies – thousands – thereby devastating the home video market for movies. Some 40 percent of all revenues to the movie studios come from home video. If this marketplace decays, it will cripple the ability of copyright owners to retrieve their investment, and result in fewer and less interesting choices at the movie theater.

Here’s the full excerpt from the DRM Conference transcript:

Question from Prof. Pam Samuelson:

So yesterday when I was doing the tutorial, Alex Alben asked me a question which, because I’m not a technologist, I was not in a very good position to try to answer, but since there are several technologists on this panel who are interested in information flows. The question that was put to me was a question about whether it was possible to develop technologies that would allow circumvention for fair use or other non-infringing purposes. Is it possible to sort of think creatively about anti-circumvention laws that might allow some room for circumvention for fair uses without opening up the Pandora’s box so that allowing these technology means that you’ve essentially repealed the anti-circumvention laws.

[Other panelists’ answers omitted.]

Answer by Ed Felten:

I think this is one of the most important technical questions around DRM, whether we know, whether we can figure out how to accommodate fair use and other lawful use without opening up a big loophole. And the answer is, I think, right now, is that we don’t know how to do that. Not effectively. A lot of people would like to know whether we can do that or how we go about doing it, but it’s a big open question right now.

Let’s leave aside for now the flaws in Mr. Valenti’s argument, and focus just on his use of the quote. Note that he artfully excerpts segments from Prof. Samuelson’s question, to make it appear that she asked a different question than she really did. Also note that he removes an important part of my answer: the last sentence, where I talk about the technological relation between DRM and fair use as being a “big open question”.

Which brings us back to the bill being discussed today. If we want to answer the “big open question” I mentioned, we need to do more research. But the DMCA severely limits some of the key research that we would need to do. The Boucher-Doolittle bill would open the door to this research, by creating a research exemption to the DMCA. But that issue is apparently not up for discussion today.

[Note: This post is based on Mr. Valenti’s written testimony, of which I have a copy. I did not hear his live testimony. Seth Finkelstein reports that Mr. Valenti did use the quote in his oral testimony.]