November 22, 2024

USENIX Panel

Today I’ll be speaking on a panel at the USENIX Conference in Boston, on “The Politicization of [Computer] Security.” The panel is 10:30-noon, Eastern time. The other panelists are Jeff Grove (ACM), Gary McGraw (Cigital), and Avi Rubin (Johns Hopkins).

If you’re attending the panel, feel free to provide real-time narration/feedback/discussion in the comments section of this post. I’ll be reading the comments periodically during the panel, and I’ll encourage the other panelists to do so too.

Landsburg's Modest Proposal

Steven E. Landsburg has a somewhat creepy piece over at Slate, calling for the death penalty for computer worm authors. Ernest Miller responds.

UPDATE (12:15 AM): James Grimmelmann has some interesting thoughts on Landsburg’s proposal.

Still More About End-User Liability

At the risk of alienating readers, here is one more post about the advisability of imposing liability on end-users for harm to third parties that results from break-ins to the end-users’ computers. I promise this is the last post on this topic, at least for this week.

Rob Heverly, in a very interesting reply to my last post, focuses on the critical question in liability policy: who is in the best position to avert the harm? Assuming a scenario where an adversary breaks into Alice’s computer and uses it as a launching pad for attacks that harm Bob, the critical question is whether Alice or Bob is better positioned to prevent the harm to Bob.

Mr. Heverly (I won’t call him Rob because that’s too close to my hypothetical Bob’s name; and it’s an iron rule in security discussions that the second party in any example must be named Bob) says that it will always be easier for Bob to protect himself from the attack than for Alice to block the attack by preventing the compromise of her machine. I disagree. It’s not that his general rule is always wrong; but I think it will prove to be wrong often enough that one will have to look at individual cases. To analyze a specific case, we’ll have to look at a narrow class of attacks, evaluate the effectiveness and cost of Bob’s countermeasures against that attack, and compare that evaluation to what we know about Alice’s measures to protect herself. The result of such an evaluation is far from clear, even for straightforward attack classes such as spamming and simple denial of service attacks. Given our limited understanding of security technology, I don’t think experts will agree on the answer.

So the underlying policy question – whether to hold Alice liable for harm to Bob – depends on technical considerations that we don’t yet understand. Ultimately, the right answer may be different for different types of attacks; but drawing complicated distinctions between attack classes, and using different liability rules for different classes, would probably make the law too complicated. At this point, we just don’t know enough to mess with liability rules for end-users.

More on End-User Liability

My post yesterday on end-user liability for security breaches elicited some interesting responses.

Several people debated the legal question of whether end-users are already liable under current law. I don’t know the answer to that question, and my post yesterday was more in the nature of a hypothetical than a statement about current law. Rob Heverly, who appears to be a lawyer, says that because there is, in general, no duty to protect strangers from harm, end-users are not liable under current law for harm to others caused by intruders. Others say an unprotected machine may be an attractive nuisance. I’ll leave it to the lawyers to duke that one out.

Others objected that it would be unfair to hold an end-user liable if that user took all reasonable protective steps, or if his only lapse was failing to take some extra step. To see why this objection might be wrong, consider a hypothetical where an attacker breaks into Alice’s machine and uses it to cause harm to Bob. It seems unfair to make Alice pay for this harm. But the alternative is to leave Bob to pay for it, which may be even more unfair, depending on the circumstances. From a theoretical standpoint, it makes sense to send the bill to the party who was best situated to prevent the harm. If that turns out to be Alice, then one can argue that she should be liable for the harm. And this argument is plausible even if Alice has very little power to address the harm – as long as Bob has even less power to address it.

Others objected that novice users would be unable to protect themselves. That’s true, but by itself it’s not a good argument against liability. Imposing liability would cause many novice users to get help by hiring competent people to manage their systems. If an end-user can spend $N to reduce the expected harm to others by more than $N, then we want him to do so.
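To make that incentive argument concrete, here is a minimal sketch of the $N comparison. All of the numbers and names are hypothetical, invented purely for illustration; the point is only that liability puts the expected harm to others into the user’s own calculation.

```python
# A minimal sketch of the incentive calculation, with made-up numbers.
# An end-user weighs spending $N on protection against the expected
# reduction in harm to others that the protection would buy.

def worth_spending(cost_of_protection, p_breach_without, p_breach_with,
                   harm_to_others):
    """Return True if protection reduces expected external harm
    by more than it costs."""
    expected_harm_reduction = (p_breach_without - p_breach_with) * harm_to_others
    return expected_harm_reduction > cost_of_protection

# Hypothetical: $100 of competent system management cuts the chance of
# a damaging compromise from 20% to 2%, and a compromise causes $5,000
# of harm to third parties.
print(worth_spending(100, 0.20, 0.02, 5_000))  # True: 0.18 * 5000 = $900 > $100
```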

Others objected that liability for breaches would be a kind of reverse lottery, with a few unlucky users being hit with large bills, because their systems happened to be used to cause serious harm, while other similarly situated users got off scot-free. The solution to this problem is insurance, which is an effective mechanism for spreading this kind of risk. (Eventually, this might be a standard rider on homeowner’s or renter’s insurance policies.) Insurance companies would also have the resources to study whether particular products or practices increase or reduce expected liability. They might impose a surcharge on people who use a risky operating system, or provide a discount for the use of effective defensive tools. This, in turn, would give end-users economic incentives to make socially beneficial choices.
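Here is a rough sketch of how such a premium might be set, assuming a simple expected-loss model and invented figures; real insurance pricing would, of course, be far more involved.

```python
# A sketch of expected-loss insurance pricing, with invented figures.
# The insurer charges roughly the expected liability plus a loading
# for its own expenses, adjusted for the policyholder's risk profile.

def annual_premium(p_breach, expected_liability, loading=1.25,
                   risk_multiplier=1.0):
    """Expected annual liability cost, marked up for insurer expenses
    and scaled by the riskiness of the policyholder's setup."""
    return p_breach * expected_liability * loading * risk_multiplier

# A careful user versus one running a riskier, unpatched system:
print(annual_premium(0.05, 5_000))                       # 312.5
print(annual_premium(0.05, 5_000, risk_multiplier=2.0))  # 625.0
```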

Finally, some people responded to my statement that liability might work poorly where harm is diffuse. Seth Finkelstein suggested class-action suits as a remedy. Class actions would make sense where the aggregate harm is large and the victims are easy to identify. Rob Heverly suggests that large institutions like companies or universities would be likely lawsuit targets, because their many computers might cause enough harm to make a suit worthwhile. Both are good points, but I still believe that a great deal of harm – perhaps the majority – would be effectively shielded from recovery because of the costs of investigation and enforcement.

Should End-Users Be Liable for Security Breaches?

Eric Rescorla reports that, in a talk at WEIS, Dan Geer predicted (or possibly advocated) that end-users will be held liable for security breaches in their machines that cause harm to others.

As Eric notes, there is a good theoretical argument for this:

There are two kinds of costs to not securing your computer:

  • Internal costs: the costs to you of having your own machine broken into.
  • External costs: the costs to others of having your machine broken into, primarily from your machine being used as a platform for other attacks.

Currently, the only incentive you have is the internal costs. That incentive clearly isn’t that strong, as lots of people don’t upgrade their systems. The point of liability is to get you to also bear the external costs, which helps give you the right incentive to secure your systems.

Eric continues, astutely, by wondering whether it’s actually worthwhile, economically, for users to spend lots of money and effort trying to secure their systems. If the cost of securing your computer exceeds the cost (internal and external) of not doing so, then the optimal choice is simply to accept the cost of breaches; and that’s what you’ll do, even if you’re liable.
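Eric’s point reduces to a simple decision rule. Here is a minimal sketch of it; the probabilities and dollar figures are invented for illustration, not drawn from any real data.

```python
# A sketch of the decision rule, with hypothetical numbers. Even a
# user who bears the full (internal + external) cost of breaches will
# rationally skip security measures that cost more than they save.

def optimal_choice(cost_of_securing, p_breach, internal_harm, external_harm):
    """Secure only when securing is cheaper than the expected
    total cost of the breaches it would prevent."""
    expected_breach_cost = p_breach * (internal_harm + external_harm)
    if cost_of_securing < expected_breach_cost:
        return "secure"
    return "accept breaches"

# If thorough protection costs $2,000 but expected breach losses are
# only $700, accepting the risk is optimal, liability or not.
print(optimal_choice(2_000, 0.10, 2_000, 5_000))  # accept breaches
```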

There’s at least one more serious difficulty with end-user liability. Today, many intrusions into end-user machines lead to the installation of “bots” that the intruder uses later to send spam, launch denial of service attacks, or make other mischief. The harm caused by these bots is often diffuse.

For example, suppose Alice’s machine is compromised and the intruder uses it to send 100,000 spam emails, each of which costs its recipient five cents to delete. Alice’s insecurity has led to $5,000 of total harm. But who is going to sue Alice? No individual has suffered more than a few cents’ worth of harm. Even if all of the affected parties can somehow put together an action against Alice, the administrative and legal costs of the action (not to mention the cost of identifying Alice in the first place) will be much more than $5,000. In aggregate, all of the world’s Alices may be causing plenty of harm, but the costs of holding each particular Alice responsible may be excessive.
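The arithmetic of that example is worth spelling out. In the sketch below, the legal and investigation costs are hypothetical placeholders; the point is only the mismatch between per-victim harm and the cost of pursuing it.

```python
# A sketch of the diffuse-harm arithmetic from the spam example.
# The legal and investigation costs below are hypothetical
# placeholders, not real figures.

recipients = 100_000
harm_per_recipient = 0.05  # five cents to delete each spam email
total_harm = recipients * harm_per_recipient
print(total_harm)  # 5000.0 dollars of aggregate harm

cost_per_individual_suit = 500.0   # hypothetical cost to bring one suit
print(harm_per_recipient > cost_per_individual_suit)  # False: nobody sues alone

cost_to_identify_alice = 2_000.0   # hypothetical investigation cost
admin_cost_per_victim = 0.10       # hypothetical cost to enroll each victim
pooled_action_cost = cost_to_identify_alice + recipients * admin_cost_per_victim
print(total_harm > pooled_action_cost)  # False: spend $12,000 to recover $5,000
```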

So, to the extent that the external costs of end-user insecurity are diffuse, end-user liability may do very little good. Maybe there is another way to internalize the external costs of end-user insecurity; but I’m not sure what it might be.