November 10, 2024

Still More About End-User Liability

At the risk of alienating readers, here is one more post about the advisability of imposing liability on end-users for harm to third parties that results from break-ins to the end-users’ computers. I promise this is the last post on this topic, at least for this week.

Rob Heverly, in a very interesting reply to my last post, focuses on the critical question regarding liability policy: who is in the best position to avert harm? In a scenario where an adversary breaks into Alice’s computer and uses it as a launching pad for attacks that harm Bob, the question is whether Alice or Bob is better positioned to prevent the harm to Bob.

Mr. Heverly (I won’t call him Rob because that’s too close to my hypothetical Bob’s name, and it’s an iron rule in security discussions that the second party in any example must be named Bob) says that it will always be easier for Bob to protect himself from the attack than for Alice to block the attack by preventing the compromise of her machine. I disagree. It’s not that his general rule is always wrong, but I think it will prove to be wrong often enough that one will have to look at individual cases. To analyze a specific case, we’ll have to look at a narrow class of attacks, evaluate the effectiveness and cost of Bob’s countermeasures against that attack, and compare that evaluation to what we know about Alice’s measures to protect herself. The result of such an evaluation is far from clear, even for straightforward attack classes such as spamming and simple denial-of-service attacks. Given our limited understanding of security technology, I don’t think experts will agree on the answer.
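
To make that kind of comparison concrete, here is a purely illustrative back-of-the-envelope sketch in Python. The dollar figures and victim counts are invented for the sake of the example, and the cost model is deliberately crude; nothing here is a measurement of anything.

    # Illustrative only: compare total defense costs for one narrow attack class.
    # All numbers below are invented; real figures are exactly what we don't know.

    def cheaper_cost_avoider(alice_prevention_cost, bob_cost_per_victim, victims_per_compromise):
        # Total cost if every affected Bob defends himself, versus Alice preventing the compromise.
        total_bob_side = bob_cost_per_victim * victims_per_compromise
        return "Alice" if alice_prevention_cost < total_bob_side else "Bob"

    # Hypothetical class 1: spam relayed from a compromised machine to many recipients.
    print(cheaper_cost_avoider(alice_prevention_cost=150.0,
                               bob_cost_per_victim=5.0,
                               victims_per_compromise=300))   # -> Alice

    # Hypothetical class 2: a simple denial-of-service attack aimed at a single target.
    print(cheaper_cost_avoider(alice_prevention_cost=150.0,
                               bob_cost_per_victim=100.0,
                               victims_per_compromise=1))     # -> Bob

Even a toy model like this shows why the answer can come out differently for different attack classes; the hard part is that nobody knows the right numbers to plug in, which is exactly where I expect experts to disagree.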

So the underlying policy question – whether to hold Alice liable for harm to Bob – depends on technical considerations that we don’t yet understand. Ultimately, the right answer may be different for different types of attacks; but drawing complicated distinctions between attack classes, and using different liability rules for different classes, would probably make the law too complicated. At this point, we just don’t know enough to mess with liability rules for end-users.

Comments

  1. I just discovered your site today, so I hope it’s OK to jump in on this very interesting discussion….

    1. You say a critical question is “who is in the best position to avert harm?” The answer is clear: it’s the hacker. It always annoys me when a discussion about who should be liable for some criminal act focuses on everyone except the criminal.

    There needs to be some recognition that Alice and Bob are both victims here.

    Indeed, the whole discussion seems to be motivated by the fact that the hacker is unlikely to be caught. So if we can’t punish the person who is really responsible, let’s find someone else to punish.

    2. If this sort of liability is imposed on end-users, then it may be that few end-users could afford to have computers, because of the risk of financially devastating liability. The problem is that no matter what Alice does to protect herself, she can’t reduce the risk to zero. And the idea that someone will sell Alice insurance for this risk is laughable.

    An analogy is worker’s compensation laws, which generally prohibit employees from suing their employers for on-the-job injuries. Without such laws, there would be few jobs because an employer could be wiped out by even a single on-the-job injury, and there’s no way to eliminate the risk. (Yes, there is worker’s comp insurance, but it doesn’t cover the punitive damages and pain-and-suffering damages that would be assessed if such lawsuits were allowed.)

    3. It is far from clear that imposing liability on end-users is the most economically efficient way to address the problem. Let’s say that it costs Alice $1000 to secure her system. There are hundreds of millions of Alices; we’re talking about a total expenditure of hundreds of billions of dollars.

    For hundreds of billions of dollars, there are an awful lot of things that could be done to improve security on the Internet. There could be firewalls on Internet backbones; Microsoft could be induced to completely rewrite Windows and make it really secure; and so on. A hundred billion dollars, spent to improve the design of the Internet and the computers attached to it, would be far more effective and less expensive than having every newbie user try to figure out how to secure her own PC.

    4. This sort of liability would lead to a slippery slope where Alice has to spend more and more money on security, even if it offers little benefit, just to shield herself from lawsuits. You say you were running an antivirus program? My lawyer says you should have installed a second one, just to be safe. You say you had firewall software? My lawyer says you should have had a hardware firewall too. You say you were running Windows? My lawyer says you could have used Linux.

    It’s like how malpractice litigation forces doctors to practice “defensive medicine.” They order lots of unnecessary tests that are of no value to the patient, just so a lawyer can’t say in court that there’s some other test they could have ordered. Meanwhile, the cost of medical care for everyone is driven way up, with little benefit.

    5. A hacker with a grudge against Alice could use this liability as a weapon. The hacker could use Alice’s computer to attack Bob, making sure that the attack can be traced back to Alice. Then Bob sues Alice, and the hacker has a good laugh reading the court documents.

  2. Sean Ellis says

    Mr. Heverly argues that “It will always be easier for Bob to protect himself from the attack than for Alice to block the attack by preventing the compromise of her machine.”

    The cost of defense isn’t just monetary; it also includes time, system resources, effort, and goodwill.

    The economics of Mr. Heverly’s argument may stack up in favor of Bob paying for defense if Alice’s machine attacks only him. This can happen in the case of a DDoS attack.

    However, the balance shifts dramatically if Alice’s machine is actually attacking Bob, Rob, Robert, Bobby, Bobbi, Roberto, Roberta, and Robin. Zombie machines are often used to distribute spam to many hundreds of recipients, so they fall into this camp.

    For Mr. Heverly’s argument to make economic sense, the cost of defense against spam would have to be hundreds of times cheaper than the cost of defense against compromise.

    And I don’t believe it is.

  3. John Marquiss says

    I don’t think you have to worry about alienating any of us by continuing what is turning into an interesting discussion. The more I think about this, the more I come to the conclusion that end-users can sometimes, though not always, be held liable for damages caused by their systems.

    Basically, my reasoning boils down to the fact that computer networks are becoming more of a public utility, akin to roads, where the actions of one person can cause losses to other members of the community. Because of this, each user of a public network has a responsibility to use this resource in a manner that is considerate of others. Now, my reasoning is that if we agree there is a responsibility of proper conduct, then there is also liability when that responsibility is not upheld.

    There are two major conclusions that I have come to while thinking about this. First, software and hardware manufacturers should not be able to shift all of the liability for the use of their product onto the user through the acceptance of an EULA. This would be akin to an auto manufacturer producing a vehicle that was unsafe to drive and then, when an accident happened because of defects in workmanship, saying “Hey, it isn’t our fault. You drive at your own risk.” Producers of software and hardware have to take responsibility for damage caused by defects in their workmanship.

    The second conclusion is that, as I pointed out, end-users do have a responsibility and a liability in their use of public networks. Part of this responsibility is proper maintenance of their systems. Yes, the manufacturer is responsible for defects in workmanship, but once a defect is found and corrected, end-users are responsible for seeing that the fix is applied to their systems. This is technical and requires an understanding of how networks and computers interact… because of this, I would ALMOST be in favor of a system that required some form of licensing to use an open (unfirewalled) network.

    Such a system might take a form where, for a basic user, no license or certification would be required to get an Internet account with an ISP, but that account would sit behind a firewall and packet filter that only allowed benign traffic in AND out. If you wanted an open connection without any upstream firewall or packet filtering, you would have to hold a license, attached to a skills/knowledge test, certifying that you know what you are doing and that you understand you have a responsibility to manage your systems correctly to minimize any risk you might pose to other users.

    Now, I said I would only almost be in favor of such a system… while there are a lot of things I find appealing about the idea, I have strong reservations because I find it inherently dangerous to place controls and regulations on any means of communication. I am not convinced, though, that these concerns could not be adequately addressed, which would make a user licensing/system certification scheme more appealing. (Another difficulty to overcome is that it would require codifying what the responsibilities of a user are and how these are to be tested, certified, and upheld. A daunting task, but also one that may still be possible.)