My post yesterday on end-user liability for security breaches elicited some interesting responses.
Several people debated the legal question of whether end-users are already liable under current law. I don’t know the answer to that question, and my post yesterday was more in the nature of a hypothetical than a statement about current law. Rob Heverly, who appears to be a lawyer, says that because there is, in general, no duty to protect strangers from harm, end-users are not liable under current law for harm to others caused by intruders. Others say an unprotected machine may be an attractive nuisance. I’ll leave it to the lawyers to duke that one out.
Others objected that it would be unfair to hold an end-user liable if that user took all reasonable protective steps but failed to take some extra one. To see why this objection might be wrong, consider a hypothetical in which an attacker breaks into Alice's machine and uses it to cause harm to Bob. It seems unfair to make Alice pay for this harm. But the alternative is to leave Bob to pay for it, which may be even more unfair, depending on the circumstances. From a theoretical standpoint, it makes sense to send the bill to the party who was best situated to prevent the harm. If that turns out to be Alice, then one can argue that she should be liable for the harm. And this argument is plausible even if Alice has very little power to address the harm, as long as Bob has even less power to address it.
Others objected that novice users would be unable to protect themselves. That's true, but by itself it's not a good argument against liability. Imposing liability would cause many novice users to get help by hiring competent people to manage their systems. If an end-user can spend $N to reduce the expected harm to others by more than $N, then we want that user to do so.
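To make that concrete with purely hypothetical numbers: suppose a $50-per-year managed-security service cuts the chance that a user's machine is hijacked for a damaging attack from two percent to one percent, and that a hijacked machine causes $10,000 of harm on average. The service then reduces expected harm to others by one percent of $10,000, or $100, at a cost of only $50, so a liability rule would steer even a novice user toward the socially efficient choice.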
Others objected that liability for breaches would be a kind of reverse lottery, with a few unlucky users hit with large bills because their systems happened to be used to cause serious harm, while other similarly situated users got off scot-free. The solution to this problem is insurance, which is an effective mechanism for spreading this kind of risk. (Eventually, this might be a standard rider on homeowner's or renter's insurance policies.) Insurance companies would also have the resources to study whether particular products or practices increase or reduce expected liability. They might impose a surcharge on people who use a risky operating system, or provide a discount for the use of effective defensive tools. This, in turn, would give end-users economic incentives to make socially beneficial choices.
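To sketch how the arithmetic of risk-spreading might work, again with made-up numbers: if one insured machine in 10,000 is used each year in an attack causing $1 million in harm, the expected loss per machine is $100 per year. An insurer could therefore cover the risk for roughly a $100 annual premium plus overhead, converting a rare, ruinous bill into a small, predictable one.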
Finally, some people responded to my statement that liability might work poorly where harm is diffuse. Seth Finkelstein suggested class action suits as a remedy. Class actions would make sense where the aggregate harm is large and the victims easy to identify. Rob Heverly suggested that large institutions like companies or universities would be likely lawsuit targets, because their many computers might cause enough harm to make a suit worthwhile. Both are good points, but I still believe that a great deal of harm, perhaps the majority, would be effectively shielded from recovery because of the costs of investigation and enforcement.