Lawrence Solum and Minn Chung have a new paper, “The Layers Principle: Internet Architecture and the Law,” in which they argue that layering is an essential part of the Internet’s architecture and that Internet regulation should therefore respect the Internet’s layered nature. It’s a long paper, so no short commentary can do it justice, but here are a few reactions.
First, there is no doubt that layering is a central design principle of the Internet, or of any well-designed multipurpose network. When we teach computer science students about networks, layering is one of the most important concepts we try to convey. Solum and Chung are right on target about the importance of layering.
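To make the layering idea concrete, here is a toy sketch (my illustration, not anything from the paper): each layer adds or removes only its own header and treats everything handed down from the layer above as an opaque payload. The header strings and layer names are simplified stand-ins for real protocol formats.

```python
# Toy illustration of protocol layering: each layer wraps the payload
# from the layer above with its own header, and on receipt strips only
# its own header, never peeking inside the layers above or below it.

def encapsulate(app_data: bytes) -> bytes:
    transport = b"TCP|" + app_data   # transport layer adds its header
    network = b"IP|" + transport     # network layer wraps the transport segment
    link = b"ETH|" + network         # link layer wraps the network packet
    return link

def decapsulate(frame: bytes) -> bytes:
    network = frame.removeprefix(b"ETH|")     # each layer strips only
    transport = network.removeprefix(b"IP|")  # its own header
    app_data = transport.removeprefix(b"TCP|")
    return app_data

frame = encapsulate(b"GET /index.html")
assert decapsulate(frame) == b"GET /index.html"
```

The point of the discipline is that a layer can be redesigned or replaced without the other layers noticing, as long as its interface stays the same.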
They’re on shakier ground, though, when they relate their layering principle to the end-to-end principle that Lessig has popularized in the legal/policy world. (The end-to-end principle says that most of the “brains” in the Internet should be at the endpoints, e.g. in end users’ computers, rather than in the core of the network itself.) Solum and Chung say that end-to-end is a simple consequence of their layering principle. That’s true, but only because the end-to-end principle is built into their assumptions, in a subtle way, from the beginning. In their account, layering occurs only at the endpoints, and not in the network itself. While this is not entirely accurate, it’s not far wrong, since the layering is much deeper at the endpoints than in the core of the Net. But the reason this is true is that the Net is designed on the end-to-end principle. There are alternative designs that use deep layering everywhere, but those were not chosen because they would have violated the end-to-end principle. End-to-end is not necessarily a consequence of layering; but end-to-end is, tautologically, a consequence of the kind of end-to-end-style layering that Solum and Chung assume.
Layering and end-to-end, then, are both useful rules of thumb for understanding how the Internet works. It follows, naturally, that regulation of the Net should be consistent with both principles. Any regulatory proposal, in any sphere of human activity, is backed by a story about how the proposed regulation will lead to a desirable result. And unless that story makes sense – unless it is rooted in an understanding of how the sphere being regulated actually works – then the proposal is almost certainly a bad one. So regulatory plans that are inconsistent with end-to-end or layering are usually unwise.
Of course, these two rules of thumb don’t give us the complete picture. The Net is more complicated, and sometimes a deeper understanding is needed to evaluate a policy proposal. For example, a few widespread and helpful practices such as Network Address Translation violate both the end-to-end principle and layering; and so a ban on address translation would be consistent with end-to-end and layering, but inconsistent with the actual Internet. Rules of thumb are at best a lesser substitute for detailed knowledge about how the Net really works. Thus far, we have done a poor job of incorporating that knowledge into the regulatory process. Solum and Chung’s paper has its flaws, but it is a step in the right direction.
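A toy sketch (again my illustration, with made-up addresses and table structure) shows why NAT offends both rules of thumb: a box in the middle of the network rewrites the network-layer source address, and to do so it must read and rewrite transport-layer port numbers, so the remote endpoint never sees the true sender.

```python
# Toy NAT sketch: an in-network box rewrites a packet's source address
# and port. This violates layering (a network-layer device manipulating
# transport-layer ports) and end-to-end (the peer sees the NAT box,
# not the real endpoint). Addresses below are illustrative only.

nat_table = {}           # (private_ip, private_port) -> public_port
next_public_port = 40000

def nat_outbound(packet: dict) -> dict:
    """Rewrite an outbound packet so it appears to come from the NAT box."""
    global next_public_port
    key = (packet["src_ip"], packet["src_port"])
    if key not in nat_table:                 # allocate a public port
        nat_table[key] = next_public_port    # for this private flow
        next_public_port += 1
    return {**packet, "src_ip": "203.0.113.1", "src_port": nat_table[key]}

pkt = {"src_ip": "192.168.0.7", "src_port": 5000,
       "dst_ip": "198.51.100.9", "dst_port": 80}
out = nat_outbound(pkt)
# out["src_ip"] is now the NAT's public address, not the real sender's.
```

Useful as NAT is in practice, nothing in this rewriting respects the layer boundaries, which is exactly why it makes a poor test case for a layering-based regulatory rule.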
[UPDATE (Sept. 11, 2003): Looking back at this entry, I realize that by devoting most of my “ink” to my area of disagreement with Solum and Chung, I might have given the impression that I didn’t like their paper. Quite the contrary. It’s a very good paper overall, and anyone serious about Internet policy should read it.]