December 14, 2024

Sizing Up "Code" with 20/20 Hindsight

Code and Other Laws of Cyberspace, Larry Lessig’s seminal work on Internet regulation, turns ten years old this year. To mark the occasion, the online magazine Cato Unbound (full disclosure: I’m a Cato adjunct scholar) invited Lessig and three other prominent Internet scholars to weigh in on Code’s legacy: what it got right, where it went wrong, and what implications it has for the future of Internet regulation.

The final chapter of Code was titled “What Declan Doesn’t Get,” a jab at libertarians like CNet’s Declan McCullagh who believed that government regulation of the Internet was likely to do more harm than good. It’s fitting, then, that Declan got to kick things off with an essay titled (what else?) “What Larry Didn’t Get.” There were responses from Jonathan Zittrain (largely praising Code) and my co-blogger Adam Thierer (mostly criticizing it), and then Lessig got the last word. I think each contributor will be posting a follow-up essay in the coming days.

My ideological sympathies are with Declan and Adam, but rather than pile on to their critiques, I want to focus on some of the specific technical predictions Lessig made in Code. People tend to forget that in addition to describing some key theoretical insights about the nature of Internet regulation, Lessig also made some pretty specific predictions about how cyberspace would evolve in the early years of the 21st century. I think that enough time has elapsed that we can now take a careful look at those predictions and see how they’ve panned out.

Lessig’s key empirical claim was that as the Internet became more oriented around commerce, its architecture would be transformed in ways that undermined free speech and privacy. He thought that e-commerce would require the use of increasingly sophisticated public-key infrastructure that would allow any two parties on the net to easily and transparently exchange credentials. And this, in turn, would make anonymous browsing much harder, undermining privacy and making the Internet easier to regulate.
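To make that prediction concrete: the routine, transparent credential exchange Lessig had in mind would look roughly like a web server that refuses to talk to any browser that can’t present a certificate signed by a recognized authority. Here is a minimal sketch in Python (my illustration, not anything from Code; the certificate file names and the port are placeholders):

    # Sketch of mandatory client authentication: a TLS server that refuses
    # any client that can't present a certificate signed by a trusted CA.
    # The file names and port are placeholders, not a real deployment.
    import socket
    import ssl

    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.load_cert_chain(certfile="server.pem", keyfile="server.key")
    context.load_verify_locations(cafile="trusted_client_ca.pem")
    context.verify_mode = ssl.CERT_REQUIRED   # no valid credential, no connection

    with socket.create_server(("0.0.0.0", 8443)) as listener:
        with context.wrap_socket(listener, server_side=True) as tls_listener:
            conn, addr = tls_listener.accept()   # handshake fails without a client cert
            print("authenticated client:", conn.getpeercert()["subject"])
            conn.close()

Had something like this become the default for ordinary browsing, every connection would have carried a verified identity, which is exactly the property Lessig expected regulators to come to rely on.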

This didn’t happen, although for a couple of years after the publication of Code, it looked like a real possibility. At the time, Microsoft was pushing a single sign-on service called Passport that could have been the foundation of the kind of client authentication facility Lessig feared. But then Passport flopped. Consumers weren’t enthusiastic about entrusting their identities to Microsoft, and businesses found that lighter-weight authentication processes were sufficient for most transactions. By 2005, companies like eBay had started dropping Passport from their sites. The service has been rebranded Windows Live ID and is still limping along, but no one seriously expects it to become the kind of comprehensive identity-management system Lessig feared.

Lessig concedes that he was “wrong about the particulars of those technologies,” but he points to the emergence of a new generation of surveillance technologies—IP geolocation, deep packet inspection, and cookies—as evidence that his broader thesis was correct. I could quibble about whether any of these are really new technologies. Lessig discusses cookies in Code, and the other two are straightforward extensions of technologies that existed a decade ago. But the more fundamental problem is that these examples don’t really support Lessig’s original thesis. Remember that Lessig’s prediction was that changes to Internet architecture—such as the introduction of robust client authentication to web browsers—would transform the previously anarchic network into one that’s more easily regulated. But that doesn’t describe these technologies at all. Cookies, DPI, and geolocation are all technologies that work with vanilla TCP/IP, using browser technologies that were widely deployed in 1999. They made cyberspace more susceptible to regulation without any change to the Internet’s underlying architecture.
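It is worth pausing over just how little machinery cookies actually require. The whole mechanism is a pair of plain-text HTTP headers that 1999-era browsers already understood, as this toy sketch using Python’s standard library shows (the values are made up):

    # The entire cookie "architecture": a Set-Cookie header on the response
    # and a Cookie header echoed back on the next request. Values are invented.
    from http.cookies import SimpleCookie

    # Server side: attach an identifier to the outgoing response.
    outgoing = SimpleCookie()
    outgoing["session_id"] = "abc123"
    print(outgoing.output())               # Set-Cookie: session_id=abc123

    # Client side: the browser returns the value on every later request,
    # which is all a site needs to recognize a returning visitor.
    incoming = SimpleCookie("session_id=abc123")
    print(incoming["session_id"].value)    # abc123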

Indeed, it’s hard to think of any policy or architectural change that could have forestalled the rise of these technologies. The web would be extremely inconvenient if we didn’t have something like cookies. The engineering constraints on backbone routers make roughly geographical IP assignment almost unavoidable, and if IP addresses are tied to geography it’s only a matter of time before someone builds a database of the mapping. Finally, any unencrypted networking protocol is susceptible to deep packet inspection. Short of mandating that all traffic be encrypted, no conceivable regulatory intervention could have prevented the development of DPI tools.
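To see how mundane these tools are, here is a toy sketch (my own; the table entries, rules, and sample payload are invented) of a longest-prefix geolocation lookup and a byte-pattern traffic classifier:

    # Toy versions of both tools. (1) Once address blocks are handed out by
    # region, "geolocation" is a longest-prefix table lookup. (2) Classifying
    # unencrypted traffic is pattern-matching on payload bytes.
    import ipaddress
    import re

    GEO_TABLE = [
        (ipaddress.ip_network("203.0.113.0/24"), "AU"),
        (ipaddress.ip_network("198.51.100.0/24"), "US"),
    ]

    def lookup_country(addr: str) -> str:
        ip = ipaddress.ip_address(addr)
        matches = [(net, cc) for net, cc in GEO_TABLE if ip in net]
        if not matches:
            return "unknown"
        return max(matches, key=lambda m: m[0].prefixlen)[1]   # longest prefix wins

    DPI_RULES = {
        "http-to-example.com": re.compile(rb"Host: example\.com"),
        "bittorrent-handshake": re.compile(rb"^\x13BitTorrent protocol"),
    }

    def classify(payload: bytes) -> str:
        for label, pattern in DPI_RULES.items():
            if pattern.search(payload):
                return label
        return "unclassified"

    print(lookup_country("203.0.113.7"))                              # AU
    print(classify(b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"))   # http-to-example.com

Neither requires any cooperation from the endpoints or any change to TCP/IP; they are just table lookups and pattern matches applied to traffic as it already flows.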

Of course, now that these technologies exist, we can have a debate about whether to regulate their use. But Lessig was making a much stronger claim in 1999: that the Internet’s architecture (and, therefore, its susceptibility to regulation) circa 2009 would be dramatically different depending on the choices policymakers made in 1999. I think we can now say that this wasn’t right. Or, at least, the technologies he points to now aren’t good examples of that thesis.

It seems to me that the Internet is rather less malleable than Lessig imagined a decade ago. We would have gotten more or less the Internet we got regardless of what Congress or the FCC did over the last decade. And therefore, Lessig’s urgent call to action—his argument that we must act in 1999 to ensure that we have the kind of Internet we want in 2009—was misguided. In general, it works pretty well to wait until new technologies emerge and then debate whether to regulate them after the fact, rather than trying to regulate preemptively to shape the kinds of technologies that are developed.

As I wrote a few months back, I think Jonathan Zittrain’s The Future of the Internet and How to Stop It makes the same kind of mistake Lessig made a decade ago: overestimating regulators’ ability to shape the evolution of new technologies and underestimating the robustness of open platforms. The evolution of technology is mostly shaped by engineering and economic constraints. Government policies can sometimes force new technologies underground, but regulators rarely have the kind of fine-grained control they would need to promote “generative” technologies over sterile ones, any more than they could have stopped the emergence of cookies or DPI if they’d made different policy choices a decade ago.

Comments

  1. Lee wrote: “As I wrote a few months back, I think Jonathan Zittrain’s The Future of the Internet and How to Stop It makes the same kind of mistake Lessig made a decade ago: overestimating regulators’ ability to shape the evolution of new technologies and underestimating the robustness of open platforms. The evolution of technology is mostly shaped by engineering and economic constraints. Government policies can sometimes force new technologies underground, but regulators rarely have the kind of fine-grained control they would need to promote “generative” technologies over sterile ones, any more than they could have stopped the emergence of cookies or DPI if they’d made different policy choices a decade ago.”

    Assuming regulators don’t ban specific technology, the only other tool in their arsenal is to modify economic incentives. We would have a very different Internet today if one of the following things happened in 1999:

    (1) Congress banned cross-site tracking and forced web sites to obtain opt-in consent from users before keeping PII about them. Forget cookies; cookies aren’t necessary for tracking anyway. This step would have increased advertisers’ costs and encouraged different business models. Outcome unknown. This regulatory change was discussed by Congress and could have happened.

    (2) The U.S. government requires every copy of Windows it buys to ship with a non-admin user as the default, no privilege escalation without keyboard confirmation, ActiveX disabled, JavaScript disabled in email, the browser in a locked-down VM, and a solid personal firewall enabled by default. This was also discussed. And by making this purchasing decision, it would have changed the landscape of PC virus/worm infection for years.

    (3) Certified email. Could spam survive without ease of infection and ease of impersonation?

    (4) Liability to businesses for security failures. If your server gets hacked and spews junk, you pay. Think about the economic impact that could have had.

    These are the trade-offs that regulators should be thinking about. Not whether cookies or deep packet inspection are good for America. Those issues are noise.

  2. Mitch Golden says

    I think you neglect the point that a very important reason many of the changes Lessig feared never happened was that the ISPs have been kept in check by the *fear* that the government would step in and regulate what they are doing. Certainly the big ISPs have repeatedly made noises about doing all sorts of things with DPI, or about violating network neutrality (as always with a cover of protecting copyright or fighting child porn or some such) – but the plans have repeatedly been forestalled not by technical issues or open standards but by the concern that an aroused government would take action against them. (This was, for example, what was behind Time-Warner’s recent climbdown regarding their plan to raise charges for those who use lots of bandwidth.)

    In other countries, where the public is not as vigilant or the government bends more to the will of the ISPs, the situation is not as bright. To my knowledge, for example, no one in the US is snooped on by Phorm, but British Telecom does use it. This is pretty much exactly the sort of thing Lessig feared. (See http://en.wikipedia.org/wiki/Phorm )

    And of course, Australia has a bad situation that is getting worse. They are essentially building a Chinese-like internet, where government-disapproved content is censored at the ISP level. See http://www.news.com.au/heraldsun/story/0,21985,24568137-2862,00.html and http://en.wikipedia.org/wiki/Censorship_in_Australia

    For the moment, we have in the US the best of all possible worlds – we get the benefits of government regulation without the government having to actually do anything more than glower from time to time.

    • Anonymous says

      “And of course, Australia has a bad situation that is getting worse. They are essentially building a Chinese-like internet, where government-disapproved content is censored at the ISP level.”

      But this IS government intervention in the private space, so you are just proving that the Libertarians were right all along and government should keep out.

      • Mitch Golden says

        I am not sure what you’re saying. Some government intervention is good, some bad. It was, after all, a government program that *created* the internet. Just because there are some bad interventions doesn’t mean they all are.

        • Anonymous says

          Lessig’s worst fears are coming to pass primarily BECAUSE McCullagh’s worst fears are coming to pass. Lessig has been looking for the bogey man under the wrong rock.

  3. I think that both you and Lessig (at different times) have committed a Moore’s Law fallacy. Lessig, because as a nontechnologist he didn’t really grasp what a 50-100 fold increase in processing power and memory would mean to his issues (although many technologists have gotten those issues wrong as well). You, because in hindsight it’s hard to see where the internet was at the time Lessig wrote.

    10 years ago, the internet was mostly about the leaves: packets started at one end, got routed to the destinations in their headers, delivered, processed. Replies came back the same way. Nobody but the endpoint machines cared what was in the packets. (OK, there’s a little of the myth of the pastoral in there).

    Now it’s about the fabric: packets get inspected with greater or lesser depth — destination ports, content, apparent encryption — depending on the policies of the carrier in question. They then get redirected (Akamai) or dropped (non-carrier VOIP), sometimes with forged replacements (BitTorrent under Comcast), or they get diverted for logging and processing before revised versions head in the direction of their original destination (Phorm).

    Of course all of this still happens using IP, because that’s the underlying fabric we’ve got, and changing that would be horrifically difficult (another thing Lessig missed). But claiming that the architecture of the internet is therefore unchanged is a little like claiming that Snow Leopard is a minor modification of Windows XP because Intel won the processor wars.

    (What’s interesting to me is that it’s precisely the nonproliferation of widely-used serious crypto for authentication that has enabled the particular version of surveillance ecosystem we have. Under certain circumstances a cryptographically secure internet could have been more anonymous, or at least pseudonymous.)