
Archives for 2004

Still More About End-User Liability

At the risk of alienating readers, here is one more post about the advisability of imposing liability on end-users for harm to third parties that results from break-ins to the end-users’ computers. I promise this is the last post on this topic, at least for this week.

Rob Heverly, in a very interesting reply to my last post, focuses on the critical question regarding liability policy: who is in the best position to avert harm. Assuming a scenario where an adversary breaks into Alice’s computer, and uses it as a launching pad for attacks that harm Bob, the critical question is whether Alice or Bob is better positioned to prevent the harm to Bob.

Mr. Heverly (I won’t call him Rob because that’s too close to my hypothetical Bob’s name; and it’s an iron rule in security discussions that the second party in any example must be named Bob) says that it will always be easier for Bob to protect himself from the attack than for Alice to block the attack by preventing the compromise of her machine. I disagree. It’s not that his general rule is always wrong; but I think it will prove to be wrong often enough that one will have to look at individual cases. To analyze a specific case, we’ll have to look at a narrow class of attacks, evaluate the effectiveness and cost of Bob’s countermeasures against that attack, and compare that evaluation to what we know about Alice’s measures to protect herself. The result of such an evaluation is far from clear, even for straightforward attack classes such as spamming and simple denial of service attacks. Given our limited understanding of security technology, I don’t think experts will agree on the answer.

So the underlying policy question – whether to hold Alice liable for harm to Bob – depends on technical considerations that we don’t yet understand. Ultimately, the right answer may be different for different types of attacks; but drawing complicated distinctions between attack classes, and using different liability rules for different classes, would probably make the law too complicated. At this point, we just don’t know enough to mess with liability rules for end-users.

More on End-User Liability

My post yesterday on end-user liability for security breaches elicited some interesting responses.

Several people debated the legal question of whether end-users are already liable under current law. I don’t know the answer to that question, and my post yesterday was more in the nature of a hypothetical than a statement about current law. Rob Heverly, who appears to be a lawyer, says that because there is, in general, no duty to protect strangers from harm, end-users are not liable under current law for harm to others caused by intruders. Others say an unprotected machine may be an attractive nuisance. I’ll leave it to the lawyers to duke that one out.

Others objected that it would be unfair to hold an end-user liable if that user took all reasonable protective steps, or if he merely failed to take some extra step. To see why this objection might be wrong, consider a hypothetical where an attacker breaks into Alice’s machine, and uses it to cause harm to Bob. It seems unfair to make Alice pay for this harm. But the alternative is to leave Bob to pay for it, which may be even more unfair, depending on circumstances. From a theoretical standpoint, it makes sense to send the bill to the party who was best situated to prevent the harm. If that turns out to be Alice, then one can argue that she should be liable for the harm. And this argument is plausible even if Alice has very little power to address the harm – as long as Bob has even less power to address it.

Others objected that novice users would be unable to protect themselves. That’s true, but by itself it’s not a good argument against liability. Imposing liability would cause many novice users to get help, by hiring competent people to manage their systems. If an end-user can spend $N to reduce the expected harm to others by more than $N, then we want them to do so.
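The spend-$N condition can be made concrete with a minimal sketch. The function name and all dollar figures below are invented for illustration; they are not from the post:

```python
# Illustrative only: does a security expenditure pass the social
# cost-benefit test described above? Spending is worthwhile when it
# reduces the expected harm to others by more than it costs.

def socially_worthwhile(cost, expected_harm_before, expected_harm_after):
    """True if the reduction in expected external harm exceeds the cost."""
    return (expected_harm_before - expected_harm_after) > cost

# Hiring competent help for $200 cuts expected harm from $1,000 to $300:
print(socially_worthwhile(200, 1000, 300))  # True: a $700 reduction beats a $200 cost
```

The point of liability, on this view, is precisely to make novice users face this calculation rather than ignore it.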

Others objected that liability for breaches would be a kind of reverse lottery, with a few unlucky users being hit with large bills, because their systems happened to be used to cause serious harm, while other similarly situated users got off scot-free. The solution to this problem is insurance, which is an effective mechanism for spreading this kind of risk. (Eventually, this might be a standard rider on homeowner’s or renter’s insurance policies.) Insurance companies would also have the resources to study whether particular products or practices increase or reduce expected liability. They might impose a surcharge on people who use a risky operating system, or provide a discount for the use of effective defensive tools. This, in turn, would give end-users economic incentives to make socially beneficial choices.

Finally, some people responded to my statement that liability might work poorly where harm is diffuse. Seth Finkelstein suggested class-action suits as a remedy. Class actions would make sense where the aggregate harm is large and the victims are easy to identify. Rob Heverly suggests that large institutions like companies or universities would be likely lawsuit targets, because their many computers might cause enough harm to make a suit worthwhile. Both are good points, but I still believe that a great deal of harm – perhaps the majority – would be effectively shielded from recovery by the costs of investigation and enforcement.

Should End-Users Be Liable for Security Breaches?

Eric Rescorla reports that, in a talk at WEIS, Dan Geer predicted (or possibly advocated) that end-users will be held liable for security breaches in their machines that cause harm to others.

As Eric notes, there is a good theoretical argument for this:

There are two kinds of costs to not securing your computer:

  • Internal costs: the costs to you of having your own machine broken into.
  • External costs: the costs to others of having your machine broken into, primarily from your machine being used as a platform for other attacks.

Currently, the only incentive you have is the internal costs. That incentive clearly isn’t that strong, as lots of people don’t upgrade their systems. The point of liability is to get you to also bear the external costs, which helps give you the right incentive to secure your systems.

Eric continues, astutely, by wondering whether it’s actually worthwhile, economically, for users to spend lots of money and effort trying to secure their systems. If the cost of securing your computer exceeds the cost (internal and external) of not doing so, then the optimal choice is simply to accept the cost of breaches; and that’s what you’ll do, even if you’re liable.
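Eric’s observation reduces to a toy decision rule (all dollar figures below are invented for illustration): liability adds the external costs to the user’s exposure, but if securing still costs more than the total expected breach costs, accepting breaches remains the rational choice.

```python
# Toy model of the securing decision, with made-up numbers. Under
# liability the user's exposure is internal + external expected costs;
# without liability, only the internal costs matter.

def best_choice(secure_cost, internal_cost, external_cost, liable):
    exposure = internal_cost + (external_cost if liable else 0)
    return "secure" if secure_cost < exposure else "accept breaches"

print(best_choice(500, 100, 300, liable=True))   # accept breaches: $500 > $400
print(best_choice(350, 100, 300, liable=True))   # secure: $350 < $400
print(best_choice(350, 100, 300, liable=False))  # accept breaches: $350 > $100
```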

There’s at least one more serious difficulty with end-user liability. Today, many intrusions into end-user machines lead to the installation of “bots” that the intruder uses later to send spam, launch denial of service attacks, or make other mischief. The harm caused by these bots is often diffuse.

For example, suppose Alice’s machine is compromised and the intruder uses it to send 100,000 spam emails, each of which costs its recipient five cents to delete. Alice’s insecurity has led to $5,000 of total harm. But who is going to sue Alice? No individual has suffered more than a few cents’ worth of harm. Even if all of the affected parties can somehow put together an action against Alice, the administrative and legal costs of the action (not to mention the cost of identifying Alice in the first place) will be much more than $5,000. In aggregate, all of the world’s Alices may be causing plenty of harm, but the costs of holding each particular Alice responsible may be excessive.
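In rough numbers (the recipient count and five-cent figure come from the example above; the litigation cost is an assumed placeholder, not a researched figure):

```python
# The spam example, worked out. Per-victim harm is tiny, aggregate harm
# is real, and an assumed enforcement cost swamps both.

recipients = 100_000
harm_each = 0.05                     # five cents per spam deleted
total_harm = recipients * harm_each  # $5,000 in aggregate

litigation_cost = 20_000             # assumed: identifying and suing Alice

print(f"total harm: ${total_harm:,.0f}")
print("worth suing for any one victim?", harm_each > litigation_cost)
print("worth suing even in aggregate?", total_harm > litigation_cost)
```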

So, to the extent that the external costs of end-user insecurity are diffuse, end-user liability may do very little good. Maybe there is another way to internalize the external costs of end-user insecurity; but I’m not sure what it might be.

Florida Voting Machines Mis-recorded Votes

In Miami-Dade County, Florida, an internal county memo has come to light, documenting misrecording of votes by ES&S e-voting machines in a May 2003 election, according to a Matthew Haggman story in the Miami Daily Business Review.

The memo, written by Orlando Suarez, head of the county’s Enterprise Technology Services Department, describes Mr. Suarez’s examination of the electronic record of the May 2003 election in one precinct. The ES&S machines in question provide two reports at the end of an election. One report, the “vote image report”, gives the vote tabulation (i.e., number of votes cast for each candidate) for each voting machine, and the other gives an audit log of significant events, such as initialization of the machine and the casting of a vote (but not who the vote was cast for), for each machine.

Mr. Suarez’s examination found that the two records were inconsistent with each other, and that both were inconsistent with reality.

In his memo, Suarez analyzed a precinct where just nine electronic voting machines were used. He first examined the audit logs for all nine machines, which were compiled into one combined audit log. He found that the audit log made no mention of two of the machines used in the precinct.

In addition, he found that the audit log reported the serial number of a machine that was not used in that precinct. This phantom machine showed a count of ballots cast equal to the combined count of the two missing machines.

Then he looked at the vote image report, which aggregated all nine voting machines. He discovered that three of the machines were not reported in the vote image report, while the serial number of a machine not used in the precinct did appear on it. That phantom machine showed a vote count equal to the combined vote count of two of the missing machines; the third missing machine showed no activity.

Further examination revealed 38 votes that appeared in the vote image report but not in the audit log.
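The cross-check Suarez performed amounts to reconciling two machine-by-machine reports against the list of machines actually used. Here is a minimal sketch with hypothetical serial numbers and ballot counts (not the actual Miami-Dade data):

```python
# Hypothetical data: each report maps a machine serial number to its
# ballot count. A phantom machine appears in place of missing ones.
audit_log = {"M1": 40, "M2": 35, "M3": 50, "M4": 28, "M5": 33,
             "M6": 41, "M7": 39, "PHANTOM": 75}          # M8, M9 absent
vote_image = {"M1": 40, "M2": 35, "M3": 50, "M4": 28, "M5": 33,
              "M6": 41, "PHANTOM": 75}                   # M7, M8, M9 absent

precinct = {f"M{i}" for i in range(1, 10)}  # the nine machines actually used

def reconcile(report, expected):
    """Return (machines missing from the report, phantom machines in it)."""
    missing = sorted(expected - report.keys())
    phantoms = sorted(report.keys() - expected)
    return missing, phantoms

print(reconcile(audit_log, precinct))   # (['M8', 'M9'], ['PHANTOM'])
print(reconcile(vote_image, precinct))  # (['M7', 'M8', 'M9'], ['PHANTOM'])
```

Any nonempty result from either check means the electronic record cannot be squared with the machines that were actually deployed.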

There is some evidence that the software used in this election was uncertified.

County officials don’t see much of a problem here:

Nevertheless, [county elections supervisor Constance] Kaplan insisted that Suarez’s analysis did not demonstrate any basic problems with the accuracy of the vote counts produced by the county’s iVotronic system. “The Suarez memo has nothing to do with the tabulation process,” she said. “It is very annoying that the coalition keeps equating the tabulation function with the audit function.”

Maybe I’m being overly picky here, but isn’t the vote tabulation supposed to match the audit trail? And isn’t the vote tabulation report supposed to match reality?

Very annoying, indeed.

Microsoft: No Security Updates for Infringers

Microsoft, reversing a previous decision, says it will not provide security updates to unlicensed users of Windows XP. Microsoft is obviously entitled to do this if it wants, since it has no obligation to provide product support to people who didn’t buy the product in the first place. A more interesting question is whether this was the best decision from the standpoint of Microsoft and its existing customers. The answer is far from obvious.

Before I go further, let me make two assumptions clear. First, I’m assuming Microsoft has a reliable way to tell which copies of Windows are legitimate, so that they never deny updates mistakenly to legitimate customers. Second, I’m assuming Microsoft doesn’t care about the welfare of infringers and feels no obligation at all to help them.

Helping infringers could easily hurt Microsoft’s business, if doing so makes infringement a more attractive option. If patches are one of the benefits of buying the product, then people are more likely to buy; but if they can get patches even without buying, some will choose to infringe, thereby costing Microsoft sales.

On the other hand, a sizable population of unpatched infringing copies hurts Microsoft’s legitimate customers, because a compromised infringing machine can infect a legitimate customer’s machine. A large reservoir of unpatched (infringing) machines will aggravate an already serious malware problem, by making Windows an even more attractive target for malware authors, and by speeding the spread of new malware.

But wait, it gets even more complicated. If infringing copies are susceptible to existing malware, then some of the bad guys will be satisfied to reuse old malware, since there is still a population of (infringing) machines it can attack. But if infringing copies are patched, then the bad guys may create more new malware which is not stopped by patches; and this new malware will affect legitimate and infringing copies alike. So refusing to update infringing copies may leave the infringers as decoys who draw fire away from legitimate customers.

There are even more factors in play, but I’ve probably written too much about this already. The effect of all this on Microsoft’s reputation is particularly interesting. Ultimately, I have no idea whether Microsoft made the right choice. And I doubt that Microsoft knows either.