November 23, 2024

Layers

Lawrence Solum and Minn Chung have a new paper, “The Layers Principle: Internet Architecture and the Law,” in which they argue that layering is an essential part of the Internet’s architecture and that Internet regulation should therefore respect the Internet’s layered nature. It’s a long paper, so no short commentary can do it justice, but here are a few reactions.

First, there is no doubt that layering is a central design principle of the Internet, or of any well-designed multipurpose network. When we teach computer science students about networks, layering is one of the most important concepts we try to convey. Solum and Chung are right on target about the importance of layering.

They’re on shakier ground, though, when they relate their layering principle to the end-to-end principle that Lessig has popularized in the legal/policy world. (The end-to-end principle says that most of the “brains” in the Internet should be at the endpoints, e.g. in end users’ computers, rather than in the core of the network itself.) Solum and Chung say that end-to-end is a simple consequence of their layering principle. That’s true, but only because the end-to-end principle is built into their assumptions, in a subtle way, from the beginning. In their account, layering occurs only at the endpoints, and not in the network itself. While this is not entirely accurate, it’s not far wrong, since the layering is much deeper at the endpoints than in the core of the Net. But the reason this is true is that the Net is designed on the end-to-end principle. There are alternative designs that use deep layering everywhere, but those were not chosen because they would have violated the end-to-end principle. End-to-end is not necessarily a consequence of layering; but end-to-end is, tautologically, a consequence of the kind of end-to-end-style layering that Solum and Chung assume.
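
To make the layering idea concrete, here is a minimal sketch in Python of how an endpoint wraps an application message in transport- and network-layer headers, while a router in the core of the network looks only at the outermost layer. The header formats and function names are invented for illustration; they are not real TCP/IP structures.

```python
# A toy illustration of protocol layering (hypothetical formats, not real TCP/IP).

def app_layer(message: str) -> dict:
    # The application produces the actual content.
    return {"payload": message}

def transport_layer(segment: dict, port: int) -> dict:
    # The transport layer adds end-to-end delivery info (e.g., a port number).
    return {"port": port, "data": segment}

def network_layer(packet: dict, src: str, dst: str) -> dict:
    # The network layer adds addressing so routers can forward the packet.
    return {"src": src, "dst": dst, "data": packet}

def endpoint_send(message: str) -> dict:
    # An endpoint runs the full stack: application -> transport -> network.
    return network_layer(transport_layer(app_layer(message), port=8080),
                         src="10.0.0.1", dst="192.0.2.7")

def core_router_forward(packet: dict) -> str:
    # A core router looks only at the network-layer header; it never inspects
    # the transport or application layers ("brains at the edges").
    return f"forward toward {packet['dst']}"

if __name__ == "__main__":
    pkt = endpoint_send("hello")
    print(core_router_forward(pkt))   # the router's decision uses only pkt["dst"]
```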

Layering and end-to-end, then, are both useful rules of thumb for understanding how the Internet works. It follows, naturally, that regulation of the Net should be consistent with both principles. Any regulatory proposal, in any sphere of human activity, is backed by a story about how the proposed regulation will lead to a desirable result. And unless that story makes sense – unless it is rooted in an understanding of how the sphere being regulated actually works – then the proposal is almost certainly a bad one. So regulatory plans that are inconsistent with end-to-end or layering are usually unwise.

Of course, these two rules of thumb don’t give us the complete picture. The Net is more complicated, and sometimes a deeper understanding is needed to evaluate a policy proposal. For example, a few widespread and helpful practices such as Network Address Translation violate both the end-to-end principle and layering; so a ban on address translation would be consistent with end-to-end and layering, but inconsistent with how the Internet actually operates today. Rules of thumb are at best an imperfect substitute for detailed knowledge of how the Net really works. Thus far, we have done a poor job of incorporating that knowledge into the regulatory process. Solum and Chung’s paper has its flaws, but it is a step in the right direction.
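
For a concrete sense of why NAT sits uneasily with both principles, here is a minimal sketch, again with invented packet formats and addresses, of what a NAT box does: it reaches past the network-layer header into transport-layer port numbers and rewrites them in the middle of the path, exactly the kind of in-network meddling that strict layering and end-to-end would seem to forbid.

```python
# A toy NAT box (hypothetical packet format; real NAT also fixes checksums, etc.).

def nat_outbound(packet: dict, public_ip: str, nat_table: dict) -> dict:
    """Rewrite a private source address/port to a public one, recording the mapping."""
    private_src = (packet["src"], packet["data"]["src_port"])   # peeks into the transport layer
    public_port = 40000 + len(nat_table)                        # pick a fresh public port
    nat_table[(public_ip, public_port)] = private_src
    rewritten = dict(packet, src=public_ip)
    rewritten["data"] = dict(packet["data"], src_port=public_port)
    return rewritten

if __name__ == "__main__":
    table = {}
    pkt = {"src": "192.168.1.5", "dst": "203.0.113.9",
           "data": {"src_port": 5555, "dst_port": 80, "payload": "GET /"}}
    out = nat_outbound(pkt, public_ip="198.51.100.2", nat_table=table)
    print(out["src"], out["data"]["src_port"])   # 198.51.100.2 40000
    print(table)   # maps the public address/port pair back to the private host
```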

[UPDATE (Sept. 11, 2003): Looking back at this entry, I realize that by devoting most of my “ink” to my area of disagreement with Solum and Chung, I might have given the impression that I didn’t like their paper. Quite the contrary. It’s a very good paper overall, and anyone serious about Internet policy should read it.]

SearchKing Suit Dismissed

Stefanie Olsen at CNet News.com reports that SearchKing’s lawsuit against Google has been dismissed. The judge ruled, as expected, that Google’s page rankings are opinions, and that Google has a First Amendment right to state its opinions.

Here’s the background: SearchKing sells a service that claims to raise people’s page rankings on the Google search engine. Google adjusted their page ranking algorithm to demote SearchKing’s pages. SearchKing sued Google, asking the court to grant a preliminary injunction requiring Google to restore the page rankings of SearchKing’s pages. The court has now dismissed SearchKing’s suit. For a longer analysis of the case, see James Grimmelmann’s old LawMeme posting.

Aimster, Napster, and the Betamax

An interesting amicus brief has been filed in the Aimster case, on behalf of public-interest groups including EFF, PublicKnowledge, and the Home Recording Rights Coalition; library groups including the American Library Association; and industry groups including the Computing and Communications Industry Association and the Consumer Electronics Association. A trial court found Aimster liable for indirect copyright infringement for selling a sort of file-sharing product. The amicus brief was filed with the Court of Appeals that is considering Aimster’s appeal.

The brief does not take a position on whether Aimster should be found liable, but it does argue forcefully that the trial court misinterpreted the Supreme Court’s ruling in the 1984 Sony Betamax case. In Betamax, the Supreme Court found that Sony was not liable for selling VCRs, even though VCRs were often used to infringe copyrights. The Court found, essentially, that if a product has both infringing and noninfringing uses, then the product’s maker cannot be held liable simply for selling that product. The Betamax rule has been rehashed in recent cases, including Napster (which was found liable for participating in the infringing activity) and Grokster (which was found not liable under the Betamax rule). How the Betamax rule will be interpreted is one of the key legal issues for would-be designers of innovative media products. Courts have not been entirely consistent in their reading of Betamax.

The new brief urges the Court of Appeals to narrow the lower court’s reading of the Betamax rule. According to the brief, the lower court’s reading of Betamax would impose liability on the makers of common devices such as photocopiers and digital cameras, and the Court of Appeals, regardless of its ultimate decision about Aimster’s liability, should make clear that the lower court misread Betamax.

I won’t write any more on this, since the brief is relatively short and well-written – if I’m not careful, my summary of the brief will be longer than the brief itself!

Thanks go to Aimee Deep for bringing the brief to my attention; despite Frank Field’s occasional doubts, she appears to really exist.

DVDCCA v. Bunner in California Supreme Court

DVDCCA v. Bunner – the “California DVD case” – was argued yesterday in the California Supreme Court. DVDCCA, which is basically the movie industry, sued Andrew Bunner for re-publishing the DeCSS program on his web site. DeCSS, you may recall, is a program for decrypting DVDs.

A previous case in Federal court, Universal v. Reimerdes (also known as “Universal v. Corley”, the “2600 case”, or the “New York DVD case”), led to a ruling that posting DeCSS violated the Digital Millennium Copyright Act (DMCA). There was no DMCA claim in Bunner; the movie industry argued instead that DeCSS contained their trade secrets, and so it was illegal for Bunner to publish.

Bunner lost in the trial court but he won a reversal in the appeals court, with the appeals court ruling that DeCSS was speech and that an injunction against its publication would therefore be an unconstitutional prior restraint on speech.

Wired has a pretty poor story about this (bylined “Reuters”). Better is Lisa Bowman’s story at CNet News.com. Alex McGillivray was there and offers a deeper account of the legal arguments.

As usual in these cases, the plaintiffs’ lawyers offered strained analogies. California Attorney General Bill Lockyer called DeCSS a tool for “breaking, entering, and stealing”, ignoring that DeCSS only allows one to “break into” one’s own property. (The theory that using DeCSS amounts to a break-in was already rejected by a Norwegian court in the Johansen case.)

DVDCCA lawyer Robert Sugarman said something even odder. Bowman’s story quotes Sugarman as telling the court that DeCSS is designed “to allow individuals to steal a trade secret and, by virtue of that, hack into a system that protects the trade secrets of motion picture makers.” This description is wrong on several counts. First, it is at odds with the DVDCCA’s position, which is not that DeCSS protects their trade secrets, but that it contains their trade secrets. Second, the only things “protected” by the encryption that DeCSS defeats are the digital versions of the movies, and movies in broad distribution can’t be trade secrets.

In any case, I have never understood why the industry’s basic trade secret argument wasn’t laughed out of court. By the time Bunner got hold of DeCSS and re-published it, it was available at hundreds of places on the Net, and had been available for months. Anybody who cared to know this “trade secret” already knew it, through no fault of Bunner’s. (I filed a declaration to that effect with the original trial court.) The industry never claimed that Bunner did anything illegal to get the “trade secret”; nor did they even prove that anybody else had done anything illegal to get it.

Kerr on Cybercrime Laws

Orin Kerr has written an interesting paper, “Cybercrime’s Scope: Interpreting ‘Access’ and ‘Authorization’ in Computer Misuse Statutes,” in which he argues for a new way of understanding the prohibition, in the Computer Fraud and Abuse Act (CFAA) and other laws, on “access … without authorization” to a computer. It’s a long, dense law review article, but it’s definitely worth reading if you are interested in cybercrime law.

Both “access” and “authorization” turn out to be harder to interpret than one might think. Kerr argues convincingly that courts have interpreted these words inconsistently, and that the trend has been toward an overly broad interpretation that would effectively criminalize any violation of the Terms of Use of any online service. While such violations may be breaches of contract subject to civil lawsuit, it is unwise to criminalize every breach of contract. Criminal law is a sharp tool to be used only when necessary.

While he would narrow the interpretation of the CFAA, Kerr would not eliminate the CFAA entirely. He provides two main examples of the kind of acts he would still criminalize. The first example involves stealing or guessing a password to gain access to a password-protected service running on somebody else’s computer. More generally, he would ban any circumvention of an authentication mechanism used to control access to somebody else’s computer. The second example involves computer attacks that exploit a program bug (such as a buffer overflow) to seize control of a program running on somebody else’s computer.
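
To make the first of Kerr’s examples concrete, here is a minimal sketch, with hypothetical names, of the kind of code-based restriction he has in mind: a password check guarding a service on someone else’s computer. Getting past check_password by guessing or stealing the password would be circumvention in Kerr’s sense; using the service as offered would not.

```python
# A toy code-based access restriction (hypothetical service; illustration only).
import hashlib
import hmac

# The server stores only a salted hash of the password, not the password itself.
SALT = b"example-salt"
STORED_HASH = hashlib.sha256(SALT + b"correct horse battery staple").hexdigest()

def check_password(attempt: str) -> bool:
    """The code-based restriction: access is granted only if the hashes match."""
    attempt_hash = hashlib.sha256(SALT + attempt.encode()).hexdigest()
    return hmac.compare_digest(attempt_hash, STORED_HASH)

def read_private_files(password: str) -> str:
    # Under Kerr's proposal, getting past this gate without permission
    # (by guessing or stealing the password) is what the CFAA should reach.
    if not check_password(password):
        raise PermissionError("access without authorization")
    return "contents of the protected files"

if __name__ == "__main__":
    try:
        read_private_files("password123")      # a guessed password
    except PermissionError as e:
        print(e)
```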

Thus far, I was reasonably convinced by Kerr’s arguments. But now we come to the part that I found harder to swallow, in which he argues that “courts … should narrow the scope of unauthorized access statutes to circumvention of code-based restrictions on computer privileges.”

Talk of banning “circumvention” may raise ugly comparisons to the DMCA, but that’s a red herring. Kerr makes clear that he is talking only about code-based restrictions on access to other people’s computers. The egregious aspects of the DMCA, by contrast, are, first, that it allows someone to lock you out of parts of your own computer, and second, that it includes a broad ban on certain technologies. Kerr’s proposal suffers from neither of these flaws. While enshrining “circumvention” as a central concept in cybercrime law might be inconvenient rhetorically for DMCA opponents, it’s no problem substantively.

My skepticism about Kerr’s formulation is based instead on two issues. First, I suspect that “circumvention” may turn out to be just as slippery a term as “authorization.” Password-guessing is clearly circumvention, but that’s an easy case. When the facts are more complicated, judges will have a harder time figuring out what is circumvention and what is just clever action.

Here’s an example. Suppose you lock the front door of your house. If I pick the lock, that’s circumvention. But suppose I enter through the back door. Have I circumvented the front door lock? What if I crawl in an open window next to the front door? Is that a circumvention? “Circumvention,” like “authorization,” ends up entangled in a subtle calculus of expectations and social norms.

Kerr’s example of a buffer-overflow attack illustrates another problem with “circumvention.” Suppose that a bad guy sends your computer a sort of “ping of death” packet, and that because of a bug in your operating system, this packet allows him to seize control of your computer. What exactly is the “code-based restriction” that he has circumvented? You could argue that he has circumvented the absence of a method for controlling your machine from afar; but it seems like a stretch to claim that that absence is a “code-based restriction.”

What really happened in this example is that the bad guy exploited a difference between the way you thought your system worked, and the way it actually did work. This is a useful distinction that courts have recognized (as Kerr notes), but it doesn’t seem to fit neatly within Kerr’s framework.
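
Here is a hypothetical sketch of that gap between believed and actual behavior, written in a memory-safe language rather than as literal buffer-overflow mechanics. The owner believes the dispatcher exposes only the two read-only commands, but because it looks up handlers by client-supplied name, it actually exposes more; an attacker who notices this exploits the difference without getting past any obvious check. The service and its bug are invented for illustration.

```python
# A toy server command dispatcher with an unintended capability (hypothetical).

class FileService:
    def list_files(self) -> str:             # the owner intends to expose this...
        return "report.txt notes.txt"

    def read_file(self, name: str) -> str:   # ...and this,
        return f"contents of {name}"

    def delete_all(self) -> str:             # ...but NOT this.
        return "everything deleted!"

def handle_request(service: FileService, command: str, *args: str) -> str:
    # BUG: the dispatcher trusts the client-supplied command name, so any
    # public method can be invoked -- the system's actual behavior is broader
    # than the behavior its owner believes it has.
    handler = getattr(service, command)
    return handler(*args)

if __name__ == "__main__":
    svc = FileService()
    print(handle_request(svc, "list_files"))   # intended use
    print(handle_request(svc, "delete_all"))   # the gap an attacker exploits
```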

My second objection to Kerr’s conclusion is more fundamental. Kerr’s strong argument for carefully tailored cybercrime law compels him to justify having a broader “circumvention” ban rather than a set of narrower bans on specific actions, such as circumvention of certain authentication features. He does offer some justification, but I am not yet convinced. (It’s also worth noting that Kerr’s approach may be expedient, even if it’s not the best possible solution from a purely theoretical standpoint. For example, it may be easier to convince courts to adopt a “circumvention” interpretation of the CFAA than it would be to get either courts or Congress to rewrite cybercrime law around a family of narrower prohibitions.)

Finally, Kerr’s paper is a valuable reminder of how much we rely on the discretion of prosecutors and judges to make cybercrime law work. So far, this discretion has moderated the defects in current law, but that’s no excuse for complacency. We need to talk about what the law should be. Kerr’s paper is a valuable contribution to that discussion.