November 21, 2024

Design is a poor guide to authorization

James Grimmelmann has a great post on the ambiguity of the concept of “circumvention” in the law. He writes about the Computer Fraud and Abuse Act (CFAA) language banning “exceeding authorized access” to a system.

There are, broadly speaking, two ways that a computer user could “exceed[] authorized access.” The computer’s owner could use words to define the limits of authorization, using terms of service or a cease-and-desist letter to say, “You may do this, but not that.” Or she could use code, by programming the computer to allow certain uses and prohibit others.

The conventional wisdom is that word-based restrictions are more problematic.

He goes on to explain the conventional wisdom that basing CFAA liability on word-based restrictions such as website Terms of Use is indeed problematic. But the alternative, as James points out, is perhaps even worse: defining authorization in terms of the technical functioning of the system. The problem is that anything the attacker gets the system to do is, by definition, something the system as actually constructed can do.

What this means, in other words, is that the “authorization” conferred by a computer program—and the limits to that “authorization”—cannot be defined solely by looking at what the program actually does. In every interesting case, the defendant will have been able to make the program do something objectionable. If a program conveys authorization whenever it lets a user do something, there would be no such thing as “exceeding authorized access.” Every use of a computer would be authorized.

The only way out of this trap—short of giving up altogether the notion of “authorization” by technology—is to say that it is the designer’s intent that matters.

[This approach] requires us to ask what a person in the defendant’s position would have understood the computer’s programmers as intending to authorize. What the program does matters, not because of what it consents to, but because of what it communicates about the programmer’s consent.

But even this underestimates the difficulty of relying on behavior. To see why, consider one of James’s examples: an ATM that was programmed so that when it did not have a network connection, it would dispense $200 cash to anyone, whether or not they even had an account at the bank. An Australian court convicted a Mr. Kennison, who withdrew money without having a valid account. Notice that everything about the system’s behavior conveys the message that cash should be dispensed to anyone when there is no network connection. This behavior was pretty clearly not an error but a deliberate choice by the designers. If the system’s behavior conveyed anything to Kennison, it was that cash was supposed to be dispensed, and that the designers had chosen to make it behave that way. If you conclude that Kennison’s use was unauthorized, then you have to get there by arguing that there was an understanding, expressed neither in words nor in the system’s behavior, that nonetheless spoke more loudly than that behavior. The lack of authorization did not stem from code, and it did not stem from words. Kennison was just supposed to know that the act was unauthorized. This seems plausible for ATM withdrawals, but it can’t extend very far into less settled technical areas.

Why did the ATM’s designers choose to make it dispense money? Presumably they figured that almost all of the users who asked for $200 would in fact have valid accounts with balances of at least $200, and they wanted to serve those customers even at the risk of dispensing some cash that they wouldn’t have dispensed under normal circumstances. But this design decision seems to assume that people won’t do what Kennison did—that people will not take advantage of the behavior. It’s tempting to argue, then, that it is precisely the lack of technical barriers to Kennison’s act that conveys the designers’ belief that acts of that type were not authorized. But this argument would prove too much—if the existence of a fence conveys lack of authorization, then the non-existence of a fence cannot also prove lack of authorization. The conclusion must be that a system’s behavior is not a very reliable signpost for authorization.
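
A minimal sketch may make this concrete. The following is a hypothetical reconstruction, assuming a simple online balance check and a fixed offline limit; the function names and toy data are illustrative, not the actual machine’s firmware. The point is that the offline branch is an ordinary, deliberately written code path, with nothing in it to mark Kennison’s withdrawal as different from anyone else’s.

    # Hypothetical sketch of the offline fallback described above.
    # The names, the $200 limit, and the toy account data are assumptions
    # made for illustration, not the real ATM's code.

    VALID_ACCOUNTS = {"1234": 500}   # card number -> balance (toy data)
    OFFLINE_DISPENSE_LIMIT = 200

    def handle_withdrawal(card, amount, network_up):
        if network_up:
            # Normal path: check the account and balance with the bank.
            if VALID_ACCOUNTS.get(card, 0) >= amount:
                return f"dispense ${amount}"
            return "decline"
        # Deliberate offline fallback: dispense up to $200 to anyone, on the
        # assumption that most requesters hold valid accounts. Nothing here
        # marks this path as off-limits.
        if amount <= OFFLINE_DISPENSE_LIMIT:
            return f"dispense ${amount}"
        return "decline"

    # A card with no account is served anyway when the network is down.
    print(handle_withdrawal("9999", 200, network_up=False))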

Is there any case where a system’s behavior is a reliable guide to authorization? One possibility is where the system is clearly designed with a particular behavior in mind, but an obvious engineering error created a loophole. For example, a system might require passwords for account access, yet the implementation treats a zero-length password as valid for every account. Contentious CFAA cases are rarely like this. Text-based definitions of authorization may be problematic, but behavior-based restrictions are often worse.
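
A hypothetical sketch of that kind of loophole follows; the account data and the buggy comparison are invented for illustration. The design plainly intends a password gate, but a sloppy prefix comparison accepts a zero-length password for every account, so the program’s behavior and its evident intent come apart in a way anyone can see.

    # Hypothetical sketch of the zero-length-password loophole; the accounts
    # and the faulty check are invented for illustration.

    ACCOUNTS = {"alice": "hunter2", "bob": "swordfish"}

    def login(user, password):
        if user not in ACCOUNTS:
            return False
        stored = ACCOUNTS[user]
        # Buggy check: compares only the first len(password) characters of
        # the stored password, so an empty password "matches" every account.
        return stored[:len(password)] == password

    print(login("alice", "hunter2"))  # True, as intended
    print(login("alice", ""))         # True: the loophole the designers never intended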

Comments

  1. Authorization by design is really authorization by hindsight. Which puts any bright explorer who gets on the wrong side of a system’s owners in a parlous position.

    I’m currently having a variation of this discussion with my 8-year-old: he finds it difficult to understand that if he clicks a few places on one of the school computers and types a few simple commands into a terminal window, the police will arrive, expecting to arrest an evil hacker.

  2. Two words: Konami Code.

    For those of us who’ve grown up playing video games, it’s not a design failure, nor a bug, nor an authorization marker, but an exploit or an easter egg. And the percentage of each cohort that consists of gamers has increased with every decade since the ’80s. This alone makes authorization by design unworkable.

    But if the law can make a distinction between criminal trespass and non-criminal trespass, then surely any future computer misuse codes can do likewise.

  3. John Millington says

    Defining authorization as whatever happens does seem silly. On the other hand, how bad would it be if the CFAA turned into a giant NOP? We probably still have whatever laws were on the books in 1970.

    Our generation didn’t _invent_ fraud. Whether it was 2012 or 1970, if someone tricked or lied to (or made an honest mistake with) a _human_ teller, or the human teller made a mistake or used a foolish policy (“give $200 to anyone who asks, whether they’re an account holder or not”), the legal system probably still handled that fairly well, didn’t it? How did our grandparents get by? Did they fail to administer fair justice, or did they know something we don’t know, about the wisdom of trying to enumerate every little detail of what counts as an “authorized” transaction and what doesn’t?