Archives for November 2009

Robots and the Law

Stanford Law School held a panel Thursday on “Legal Challenges in an Age of Robotics”. I happened to be in town, so I dropped by and heard an interesting discussion.

Here’s the official announcement:

Once relegated to factories and fiction, robots are rapidly entering the mainstream. Advances in artificial intelligence translate into ever-broadening functionality and autonomy. Recent years have seen an explosion in the use of robotics in warfare, medicine, and exploration. Industry analysts and UN statistics predict equally significant growth in the market for personal or service robotics over the next few years. What unique legal challenges will the widespread availability of sophisticated robots pose? Three panelists with deep and varied expertise discuss the present, near future, and far future of robotics and the law.

The key questions are how robots differ from past technologies, and how those differences change the law and policy issues we face.

Three aspects of robots seemed to recur in the discussion: robots take action that is important in the world; robots act autonomously; and we tend to see robots as beings and not just machines.

The last issue — robots as beings — is mostly a red herring for our purposes, notwithstanding its appeal as a conversational topic. Robots are nowhere near having the rights of a person or even of a sentient animal, and I suspect that we can’t really imagine what it would be like to interact with a robot that qualified as a conscious being. Our brains seem to be wired to treat self-propelled objects as beings — witness the surprising acceptance of robot “dogs” that aren’t much like real dogs — but that doesn’t mean we should grant robots personhood.

So let’s set aside the consciousness issue and focus on the other two: acting in the world, and autonomy. These attributes are already present in many technologies today, even in the purely electronic realm. Consider, for example, the complex of computers, network equipment, and software that makes up Google’s data centers. Its actions have significant implications in the real world, and it is autonomous, at least in the sense in which the panelists seemed to be using the term “autonomous” — it exhibits complex behavior without direct, immediate human instruction, and its behavior is often unpredictable even to its makers.

In the end, it seemed to me that the legal and policy issues raised by future robots will not be new in kind, but will just be extrapolations of the issues we’re already facing with today’s complex technologies — and not a far extrapolation but more of a smooth progression from where we are now. These issues are important, to be sure, and I was glad to hear smart panelists debating them, but I’m not convinced yet that we need a law of the robot. When it comes to the legal challenges of technology, the future will be like the past, only more so.

Still, if talking about robots will get policymakers to pay more attention to important issues in technology policy, then by all means, let’s talk about robots.

Targeted Copyright Enforcement vs. Inaccurate Enforcement

Let’s continue our discussion about copyright enforcement against online infringers. I wrote last time about how targeted enforcement can deter many possible violators even if the enforcer can only punish a few violators. Clever targeting of enforcement can destroy the safety-in-numbers effect that might otherwise shelter a crowd of would-be violators.

In the online copyright context, the implication is that large copyright owners might be able to use lawsuit threats to deter a huge population of would-be infringers, even if they can only manage to sue a few infringers at a time. In my previous post, I floated some ideas for how they might do this.

Today I want to talk about the implications of this. Let’s assume, for the sake of argument, that copyright owners have better deterrence strategies available — strategies that can deter more users, more effectively, than they have managed so far. What would this imply for copyright policy?

The main implication, I think, is to cast doubt on the big copyright owners’ current arguments in favor of broader, less accurate enforcement. These proposed enforcement strategies go by various names, such as “three strikes” and “graduated response”. What defines them is that they reduce the cost of each enforcement action, while at the same time reducing the assurance that the party being punished is actually guilty.

Typically the main source of cost reduction is the elimination of due process for the accused. For example, “three strikes” policies typically cut off someone’s Internet connection if they are accused of infringement three times — the theory being that making three accusations is much cheaper than proving one.

There’s a hidden assumption underlying the case for cheap, inaccurate enforcement: that the only way to deter infringement is to launch a huge number of enforcement actions, so that most of the would-be violators will expect to face enforcement. The main point of my previous post is that this assumption is not necessarily true — that it’s possible, at least in principle, to deter many people with a moderate number of enforcement actions.

Indeed, one of the benefits of an accurate enforcement strategy — a strategy that enforces only against actual violators — is that the better it works, the cheaper it gets. If there are few violators, then few enforcement actions will be needed. A high-compliance, low-enforcement equilibrium is the best outcome for everybody.

Cheap, inaccurate enforcement can’t reach this happy state.

Let’s say there are 100 million users, and you’re using an enforcement strategy that punishes 50% of violators, and 1% of non-violators. If half of the people are violators, you’ll punish 25 million violators, and you’ll punish 500,000 non-violators. That might seem acceptable to you, if the punishments are small. (If you’re disconnecting 500,000 people from modern communications technology, that would be a different story.)

But now suppose that user behavior shifts, so that only 1% of users are violating. Then you’ll be punishing 500,000 violators (50% of the 1,000,000 violators) along with 990,000 non-violators (1% of the 99,000,000 non-violators). Most of the people you’ll be punishing are innocent, which is clearly unacceptable.
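
Here is the same arithmetic as a minimal Python sketch, so you can vary the violation rate yourself. The 50% and 1% error rates are the hypothetical ones from above, not real-world estimates:

```python
# False-positive arithmetic for a cheap, inaccurate enforcement scheme.
USERS = 100_000_000
P_PUNISH_VIOLATOR = 0.50    # scheme punishes half of all violators
P_PUNISH_INNOCENT = 0.01    # ...and 1% of non-violators

def punished(violation_rate):
    violators = USERS * violation_rate
    innocents = USERS - violators
    return violators * P_PUNISH_VIOLATOR, innocents * P_PUNISH_INNOCENT

for rate in (0.50, 0.01):
    guilty, innocent = punished(rate)
    share = innocent / (guilty + innocent)
    print(f"{rate:.0%} violating: {guilty:,.0f} guilty and "
          f"{innocent:,.0f} innocent punished "
          f"({share:.0%} of the punished are innocent)")
```

With half the users violating, about 2% of the people punished are innocent; when compliance improves so that only 1% violate, innocents become roughly two-thirds of everyone punished.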

Any cheap, inaccurate enforcement scheme will face this dilemma: it can be effective, or it can be fair, but it can’t be both. The better it works, the more unfair it gets. It can never reach the high-compliance, low-enforcement equilibrium that should be the goal of every enforcement strategy.

Targeted Copyright Enforcement: Deterring Many Users with a Few Lawsuits

One reason the record industry’s strategy of suing online infringers ran into trouble is that there are too many infringers to sue. If the industry can only sue a tiny fraction of infringers, then any individual infringer will know that he is very unlikely to be sued, and deterrence will fail.

Or so it might seem — until you read The Dynamics of Deterrence, a recent paper by Mark Kleiman and Beau Kilmer that explains how to deter a great many violators despite limited enforcement capacity.

Consider the following hypothetical. There are 26 players, whom we’ll name A through Z. Each player can choose whether or not to “cheat”. Every player who cheats gets a dollar. There’s also an enforcer. The enforcer knows exactly who cheated, and can punish one (and only one) cheater by taking $10 from him. We’ll assume that players have no moral qualms about cheating — they’ll do whatever maximizes their expected profit.

This situation has two stable outcomes, one in which nobody cheats, and the other in which everybody cheats. The everybody-cheats outcome is stable because each player figures that he has only a 1/26 chance of facing enforcement, and a 1/26 chance of losing $10 is not enough to scare him away from the $1 he can get by cheating.
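
The stability of the everybody-cheats outcome is just an expected-value calculation. Here’s a one-screen sketch using the numbers from the hypothetical:

```python
# Why everybody-cheats is stable: with 26 cheaters and one random
# punishment, each cheater's expected fine is too small to matter.
PLAYERS = 26
GAIN = 1.0    # dollars gained by cheating
FINE = 10.0   # dollars taken from the one punished cheater

p_punished = 1 / PLAYERS    # each cheater has a 1/26 chance of punishment
expected_payoff = GAIN - p_punished * FINE
print(f"expected payoff of cheating: ${expected_payoff:.2f}")
# ~$0.62 > $0, so every player is better off cheating
```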

It might seem that deterrence doesn’t work because the cheaters have safety in numbers. It might seem that deterrence can only succeed by raising the penalty to more than $26. But here comes Kleiman and Kilmer’s clever trick.

The enforcer gets everyone together and says, “Listen up, A through Z. From now on, I’m going to punish the cheater who comes first in the alphabet.” Now A will stop cheating, because he knows he’ll face certain punishment if he cheats. B, knowing that A won’t cheat, will then realize that if he cheats, he’ll face certain punishment, so B will stop cheating. Now C, knowing that A and B won’t cheat, will reason that he had better stop cheating too. And so on … with the result that nobody will cheat.
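
Here’s a small Python sketch of that unraveling (my own illustration, not code from the paper). Each player, in order, cheats only if cheating has a positive expected payoff given what the players ahead of him will do; the P_DETECT parameter anticipates the probabilistic case discussed just below:

```python
# Sketch of the ordering trick: the enforcer punishes the cheater who
# comes first in a publicly known order, so each player cheats only if
# that leaves cheating with a positive expected payoff.
GAIN = 1.0       # dollars gained by cheating
FINE = 10.0      # fine for the one punished cheater
P_DETECT = 1.0   # chance a cheater is detected (try 0.2, as discussed below)

players = [chr(ord("A") + i) for i in range(26)]

cheaters = []
for player in players:          # reason in order: A first, then B, ...
    if not cheaters:
        # Nobody earlier in the order cheats, so this player would be
        # first in line for punishment if detected.
        expected_payoff = GAIN - P_DETECT * FINE
    else:
        # An earlier cheater would absorb the punishment, so cheating
        # would be (approximately) pure profit.
        expected_payoff = GAIN
    if expected_payoff > 0:
        cheaters.append(player)

print(cheaters or "nobody cheats")   # deterrence unravels: nobody cheats
```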

Notice that the trick still works even if punishment is not certain. Suppose each cheater has an 80% chance of avoiding detection. Now A is still deterred, because even a 20% chance of being fined $10 outweighs the $1 benefit of cheating. And if A is deterred, then B is deterred for the same reason, and so on.

Notice also that this trick might work even if some of the players don’t think things through. Suppose A through J are all smart enough not to cheat, but K is clueless and cheats anyway. K will get punished. If he cheats again, he’ll get punished again. K will learn quickly, by experience, that cheating doesn’t pay. And once K learns not to cheat, the next clueless player will be exposed and will start learning not to cheat. Eventually, all of the clueless players will learn not to cheat.
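
The learning dynamic can be sketched the same way. In this toy run (the particular clueless players and the two-punishments learning threshold are hypothetical choices of mine), each clueless player keeps cheating until punishment teaches him to stop, at which point the next clueless player is exposed:

```python
# Sketch of learning by experience: rational players never cheat, and
# each clueless player cheats until punishment teaches him to stop.
LESSONS_NEEDED = 2                   # punishments before a player learns

players = [chr(ord("A") + i) for i in range(26)]
clueless = {"K", "Q", "T"}           # hypothetical slow learners
times_punished = {p: 0 for p in players}

for round_number in range(1, 11):
    # Rational players are deterred; clueless ones cheat until they learn.
    cheaters = [p for p in players
                if p in clueless and times_punished[p] < LESSONS_NEEDED]
    if not cheaters:
        print(f"round {round_number}: nobody cheats")
        break
    first = min(cheaters)            # earliest cheater in the order
    times_punished[first] += 1
    print(f"round {round_number}: {first} cheats and is punished")
```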

Finally, notice that there’s nothing special about using alphabetical order. The enforcer could use reverse alphabetical or any other order, and the same logic would apply. Any ordering will do, as long as each player knows where he is in the order.

Now let’s apply this trick to copyright deterrence. Suppose the RIAA announces that from now on they’re going to sue the violators who have the lowest U.S. IP addresses. Now users with low IP addresses will have a strong incentive to avoid infringing, which will give users with slightly higher IP addresses a stronger incentive to avoid infringing, and so on.

You might object that infringers aren’t certain to get caught, or that infringers might be clueless or irrational, or that IP address order is arbitrary. But I explained above why these objections aren’t necessarily showstoppers. Players might still be deterred even if detection is a probability rather than a certainty; clueless players might still learn by experience; and an arbitrary ordering can work perfectly well.

Alternatively, the industry could use time as an ordering, by announcing, for example, that starting at 8:00 PM Eastern time tomorrow evening, they will sue the first 1000 U.S. users they see infringing. This would make infringing at 8:00 PM much riskier than normal, which might keep some would-be infringers offline at that hour, which in turn would make infringing at 8:00 PM even riskier, and so on. The resulting media coverage (“I infringed at 8:02 and now I’m facing a lawsuit”) could make the tactic even more effective next time.

(While IP address or time ordering might work, many other orderings are infeasible. For example, they can’t use alphabetical ordering on the infringers’ names, because they don’t learn names until later in the process. The ideal ordering is one that can be applied very early in the investigative process, so that only cases at the beginning of the ordering need to be investigated. IP address and time ordering work well in this respect, as they are evident right away, both to the enforcer and to would-be infringers.)

I’m not claiming that this trick will definitely work. Indeed, it would be silly to claim that it could drive online infringement to zero. But there’s a chance that it would deter more infringers, for longer, than the usual approach of seemingly random lawsuits has managed to do.

This approach has some interesting implications for copyright policy, as well. I’ll discuss those next time.