
Sizing Up "Code" with 20/20 Hindsight

Code and Other Laws of Cyberspace, Larry Lessig’s seminal work on Internet regulation, turns ten years old this year. To mark the occasion, the online magazine Cato Unbound (full disclosure: I’m a Cato adjunct scholar) invited Lessig and three other prominent Internet scholars to weigh in on Code’s legacy: what it got right, where it went wrong, and what implications it has for the future of Internet regulation.

The final chapter of Code was titled “What Declan Doesn’t Get,” a jab at libertarians like CNet’s Declan McCullagh who believed that government regulation of the Internet was likely to do more harm than good. It’s fitting, then, that Declan got to kick things off with an essay titled (what else?) “What Larry Didn’t Get.” There were responses from Jonathan Zittrain (largely praising Code) and my co-blogger Adam Thierer (mostly criticizing it), and then Lessig got the last word. I think each contributor will be posting a follow-up essay in the coming days.

My ideological sympathies are with Declan and Adam, but rather than pile on to their ideological critiques, I want to focus on some of the specific technical predictions Lessig made in Code. People tend to forget that in addition to describing some key theoretical insights about the nature of Internet regulation, Lessig also made some pretty specific predictions about how cyberspace would evolve in the early years of the 21st Century. I think that enough time has elapsed that we can now take a careful look at those predictions and see how they’ve panned out.

Lessig’s key empirical claim was that as the Internet became more oriented around commerce, its architecture would be transformed in ways that undermined free speech and privacy. He thought that e-commerce would require the use of increasingly sophisticated public-key infrastructure that would allow any two parties on the net to easily and transparently exchange credentials. And this, in turn, would make anonymous browsing much harder, undermining privacy and making the Internet easier to regulate.
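To make the prediction concrete, here is a minimal sketch (my illustration, not anything proposed in Code) of what pervasive client authentication might look like in practice: a web server that refuses to talk to any browser that cannot present a certificate signed by a trusted identity authority. The certificate file names are placeholders.

    # A minimal sketch, not anything from Code: a web server that refuses
    # anonymous connections by demanding a client certificate signed by a
    # trusted identity authority. If every site worked this way, browsing
    # without credentials would be impossible. File names are placeholders.
    import http.server
    import ssl

    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.load_cert_chain(certfile="server_cert.pem", keyfile="server_key.pem")
    context.load_verify_locations(cafile="trusted_identity_cas.pem")
    # CERT_REQUIRED means the TLS handshake fails unless the browser
    # presents a valid certificate: no credential, no connection.
    context.verify_mode = ssl.CERT_REQUIRED

    server = http.server.HTTPServer(("", 8443), http.server.SimpleHTTPRequestHandler)
    server.socket = context.wrap_socket(server.socket, server_side=True)
    server.serve_forever()

Nothing in today’s web requires this kind of handshake, which is precisely the point of the paragraphs that follow.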

This didn’t happen, although for a couple of years after the publication of Code, it looked like a real possibility. At the time, Microsoft was pushing a single sign-on service called Passport that could have been the foundation of the kind of client authentication facility Lessig feared. But then Passport flopped. Consumers weren’t enthusiastic about entrusting their identities to Microsoft, and businesses found that lighter-weight authentication processes were sufficient for most transactions. By 2005, companies like eBay had started dropping Passport from their sites. The service has been rebranded as Windows Live ID and is still limping along, but no one seriously expects it to become the kind of comprehensive identity-management system Lessig feared.

Lessig concedes that he was “wrong about the particulars of those technologies,” but he points to the emergence of a new generation of surveillance technologies—IP geolocation, deep packet inspection, and cookies—as evidence that his broader thesis was correct. I could quibble about whether any of these are really new technologies. Lessig discusses cookies in Code, and the other two are straightforward extensions of technologies that existed a decade ago. But the more fundamental problem is that these examples don’t really support Lessig’s original thesis. Remember that Lessig’s prediction was that changes to Internet architecture—such as the introduction of robust client authentication to web browsers—would transform the previously anarchic network into one that’s more easily regulated. But that doesn’t describe these technologies at all. Cookies, DPI, and geolocation all work with vanilla TCP/IP, using browser technologies that were widely deployed in 1999. In other words, cyberspace became more susceptible to regulation without any change to the Internet’s architecture.
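To see why no architectural change was needed, here is a bare-bones sketch (my own illustration) of the cookie mechanism as 1999-era browsers already supported it: the server hands out an identifier in an ordinary HTTP header, and the browser returns it on every subsequent request.

    # Bare-bones illustration of cookie-based tracking over plain HTTP.
    # Nothing below requires any change to TCP/IP or to the browser
    # technologies that were already deployed in 1999.
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from http.cookies import SimpleCookie
    import uuid

    class TrackingHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            cookies = SimpleCookie(self.headers.get("Cookie", ""))
            if "visitor_id" in cookies:
                visitor = cookies["visitor_id"].value   # returning visitor
            else:
                visitor = uuid.uuid4().hex              # first visit: assign an ID
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Set-Cookie", "visitor_id=" + visitor)
            self.end_headers()
            self.wfile.write(("Hello, visitor " + visitor + "\n").encode())

    HTTPServer(("", 8080), TrackingHandler).serve_forever()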

Indeed, it’s hard to think of any policy or architectural change that could have forestalled the rise of these technologies. The web would be extremely inconvenient if we didn’t have something like cookies. The engineering constraints on backbone routers make roughly geographical IP assignment almost unavoidable, and if IP addresses are tied to geography, it’s only a matter of time before someone builds a database of the mapping. Finally, any unencrypted networking protocol is susceptible to deep packet inspection. Short of mandating that all traffic be encrypted, no conceivable regulatory intervention could have prevented the development of DPI tools.
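A toy example makes the geolocation point concrete: once address blocks map to places, a lookup is nothing more than longest-prefix matching against a table. The table below is invented for illustration (it uses reserved documentation address ranges); commercial geo-IP databases are vastly larger but work on the same principle.

    # Toy longest-prefix lookup against an invented prefix-to-place table.
    import ipaddress

    GEO_TABLE = {
        ipaddress.ip_network("198.51.100.0/24"): "Denver, CO (made-up entry)",
        ipaddress.ip_network("203.0.113.0/24"): "Princeton, NJ (made-up entry)",
    }

    def locate(address):
        ip = ipaddress.ip_address(address)
        matches = [net for net in GEO_TABLE if ip in net]
        if not matches:
            return "unknown"
        # The most specific (longest) matching prefix wins.
        return GEO_TABLE[max(matches, key=lambda net: net.prefixlen)]

    print(locate("198.51.100.42"))   # -> "Denver, CO (made-up entry)"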

Of course, now that these technologies exist, we can have a debate about whether to regulate their use. But Lessig was making a much stronger claim in 1999: that the Internet’s architecture (and, therefore, its susceptibility to regulation) circa 2009 would be dramatically different depending on the choices policymakers made in 1999. I think we can now say that this wasn’t right. Or, at least, the technologies he points to now aren’t good examples of that thesis.

It seems to me that the Internet is rather less malleable than Lessig imagined a decade ago. We would have gotten more or less the Internet we got regardless of what Congress or the FCC did over the last decade. And therefore, Lessig’s urgent call to action—his argument that we must act in 1999 to ensure that we have the kind of Internet we want in 2009—was misguided. In general, it works pretty well to wait until new technologies emerge and then debate whether to regulate them after the fact, rather than trying to regulate preemptively to shape the kinds of technologies that are developed.

As I wrote a few months back, I think Jonathan Zittrain’s The Future of the Internet and How to Stop It makes the same kind of mistake Lessig made a decade ago: overestimating regulators’ ability to shape the evolution of new technologies and underestimating the robustness of open platforms. The evolution of technology is mostly shaped by engineering and economic constraints. Government policies can sometimes force new technologies underground, but regulators rarely have the kind of fine-grained control they would need to promote “generative” technologies over sterile ones, any more than they could have stopped the emergence of cookies or DPI if they’d made different policy choices a decade ago.

Adam Thierer on the First Amendment Twilight Zone

Thursday’s lunch talk here at CITP was by my co-blogger Adam Thierer of the Progress and Freedom Foundation. Adam is a leading voice in the debate over online free speech, with a particular focus on how to protect children from harmful online material while preserving First Amendment freedoms. In his lunch talk, Adam focused on the implications of technological convergence for First Amendment law. Traditionally, we’ve had completely separate regulatory regimes and constitutional standards for different media technologies—broadcast, cable, satellite, and Internet. The courts have repeatedly struck down efforts to censor the Internet. In contrast, in cases such as FCC v. Pacifica, the Supreme Court has given Congress and the FCC free rein to censor the airwaves. Adam calls broadcasting’s second-class citizenship the “First Amendment Twilight Zone.”

Adam argues that the premise at the heart of these precedents—the idea that “broadcast,” “cable,” and “Internet” are distinct categories that can be regulated differently—is rapidly being undermined by technological progress. There are far more ways to get content than in the past, and it’s far more difficult to draw clear distinctions among them. As technologies converge, the question is whether the law will converge with them. And more importantly, if the law does converge, which direction will it go? Will the Internet be subject to the more censorious standards of broadcast television? Or will Internet-based replacements for broadcast television enjoy the same robust protections as online content does today?

Adam has made a screencast of his presentation, in which he answers these questions and more. It’s a great talk and I encourage you to check it out:

RIP Rocky Mountain News

The Rocky Mountain News, Colorado’s oldest newspaper, closed its doors Friday. On its front page, the paper has posted this incredibly touching video:

Final Edition from Matthew Roberts on Vimeo.

The closing of a large institution like a daily newspaper is an incredibly sad event, and my heart goes out to all the people who find their lives upended by sudden unemployment. Many talented and dedicated employees lost their jobs today, and some of them will have to scramble to salvage their careers and support their families. The video does a great job of capturing the shock and sadness that the paper’s employees feel—not just because they lost their jobs, but also because in some sense they’re losing their life’s work.

With that said, I do think it’s unfortunate that part of the video was spent badmouthing people, like me, who don’t subscribe to newspapers. One gets the impression that newspapers are failing because kids these days are so obsessed with swapping gossip on MySpace that they’ve stopped reading “real” news. No doubt, some people fit that description, but I think the more common case is something like the opposite: those of us with the most voracious appetite for news have learned that newsprint simply can’t compete with the web for breadth, depth, or timeliness. When I pick up a newspaper, I’m struck by how limited it is: the stories are 12 to 36 hours old, the range of topics covered is fairly narrow, and there’s no way to dig deeper on the stories that interest me most. That’s not the fault of the newspaper’s editors and reporters; newsprint is just an inherently limited medium.

As more newspapers go out of business in the coming years, I think it’s important that our sympathy for individual employees not translate into the fetishization of newsprint as a medium. And it’s especially important that we not confuse newsprint as a medium with journalism as a profession. Newsprint and journalism have been strongly associated in the past, but this is an accident of technology, not something inherent to journalism. Journalism—the process of gathering, summarizing, and disseminating information about current events—has been greatly enriched by the Internet. Journalists have vastly more tools available for gathering the news, and much more flexible tools for disseminating it. The replacement of static newspapers with dynamic web pages is progress.

But that doesn’t mean it’s not a painful process. The web’s advantages are no consolation for Rocky employees who have spent their careers building skills connected to a declining technology. And the technical superiority of the web will be of little consolation to Denver-area readers who will, in the short run, have less news and information available about their local communities. So my thoughts and sympathy today are with the employees of the Rocky Mountain News.

New Site Tests Crowd-Sourced Transparency

Some of my colleagues here at CITP have written about the importance of open data formats for promoting government transparency and achieving government accountability. Another leading thinker in this area is my friend Jerry Brito, a George Mason University scholar who contributed a post here at Freedom to Tinker last year. Jerry wrote one of the first papers on the importance of mashups using government data. Now, Jerry and a few collaborators have put his ideas into action by building a site called Stimulus Watch that will facilitate crowd-sourced analysis of the hundreds of billions of dollars of deficit spending that President Obama has made a centerpiece of his economic agenda.

Jerry and his collaborators parsed a report containing more than 10,000 “shovel ready” spending proposals from the nation’s mayors. Many of these proposals will likely be funded if Congress approves Obama’s spending bill. Using the site, ordinary Americans across the country can review the proposals in their own metropolitan areas and provide feedback on which proposals deserve the highest priority. As the site grows in popularity, it may prove extremely valuable for federal officials deciding where to allocate money. And if there are turkeys like the “Bridge to Nowhere” among the mayors’ requests, the site will allow citizens to quickly identify and publicize these proposals and perhaps shame government officials into canceling them.
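For the curious, the underlying pattern is simple enough to sketch in a few lines. This is an illustration of the general crowd-sourcing idea, not Stimulus Watch’s actual code, and the field names are invented: load the proposals, collect reader votes, and rank each city’s list by net votes.

    # Illustrative sketch of crowd-sourced prioritization; not the site's code.
    import csv
    from collections import defaultdict

    def load_proposals(path):
        # Assumes a CSV with columns: city, description, cost (invented schema).
        with open(path, newline="") as f:
            return list(csv.DictReader(f))

    votes = defaultdict(int)  # proposal description -> net reader votes

    def vote(description, up=True):
        votes[description] += 1 if up else -1

    def ranked_for_city(proposals, city):
        # A city's proposals, ordered by net reader votes, highest first.
        local = [p for p in proposals if p["city"] == city]
        return sorted(local, key=lambda p: votes[p["description"]], reverse=True)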

The Supreme Court and Software Patents

I’m very excited that Doug Lichtman, a sharp law professor at UCLA, has decided to take up podcasting. His podcast, Intellectual Property Colloquium, features monthly, in-depth discussions of copyright and patent law. The first installment (mp3) featured a lively discussion between Lichtman and EFF’s inimitable Fred von Lohmann about the Cablevision decision and its implications for copyright law. November’s episode focused on In re Bilski, the widely discussed decision by the United States Court of Appeals for the Federal Circuit limiting patents on abstract concepts like software and business methods. The guests were two law professors, John Duffy (who argued the Bilski case before the Federal Circuit) and Rob Merges.

As I noted at the time it was decided, people care about Bilski largely because of what it says about the legality of software patents. Software patents are intensely controversial, with many geeks arguing that the software industry would be better off without them. What I found striking about the conversation was that both guests (and perhaps the host, although he didn’t tip his hand as much) took it as self-evident that there needed to be patents on software and business methods. As one of the guests (I couldn’t tell whether it was Merges or Duffy, but they seemed to largely agree) said around minute 47:

The easiest criticism of the [Bilski] opinion is that it invites this kind of somewhat pointless metaphysical investigation. What you say is “look, I’ve got an invention, I wrote some code, I’d like a patent for that.” Why do we have to play this kind of sophomoric philosophical game of “well, what changes in the real world when my code runs?” The [Supreme Court] case law arose fairly early in the information technology revolution. We’re kind of stuck with this artifactual, residual overhang of physicality. It’s just the price we have to pay to get a software patent these days. Someday maybe it will drop away or wither away, but that’s where we find ourselves now.

On this view, the Supreme Court’s historical hostility toward patents on software is merely an historical accident—a “residual overhang” that we’d do well to get beyond. Guided by a strong policy preference for the patentability of software and business methods, Duffy and Merges seem to feel that the Federal Circuit should give little weight to Supreme Court decisions that they regard as out of touch with the modern realities of the software industry. After all, this is “just the price we have to pay to get a software patent these days.”

I don’t agree with this perspective. I’ve long sympathized with software patent critics such as Ben Klemens who argue that the Supreme Court’s precedents place clear limits on the patenting of software. But I thought it would be interesting to take a closer look at the Supreme Court’s classic decisions and talk to some patent scholars to see if I can understand why there are such divergent opinions about the Supreme Court’s jurisprudence. The result is a new feature article for Ars Technica, where I review the Supreme Court’s classic trilogy of software patent cases and ponder how those cases should be applied to the modern world.

Like most Supreme Court decisions, these three opinions are not the clearest in the world. The justices, like most of the legal profession, seem slightly confused about the relationships among mathematical algorithms, software, and computer programs. It’s certainly possible to find phrases in these cases that support either side of the software patent debate. However, a clear theme emerges from all three cases: mathematics is ineligible for patent protection, and software algorithms are mathematics. The high court struggled with what to do in cases where software is one part of an otherwise-patentable machine. But it’s hard to avoid the conclusion that many of the “pure” software patents that have generated so much controversy in recent years cannot be reconciled with the Supreme Court’s precedents. For example, it’s hard to read those precedents in a way that would allow Amazon’s famous “one-click” patent.

I also argue that this result is a good one from a public policy perspective. Software has several important properties that make it fundamentally different from the other categories of now-patentable subject matter. First, as Klemens points out, almost every significant firm has an IT department that creates software, which means that every significant firm is a potential target for software patent lawsuits. That is a very different situation from, say, pharmaceutical patents, which affect only a tiny fraction of the American economy. Second, software is already eligible for copyright protection, rendering software patents largely redundant. Most important, we now have fifteen years of practical experience with software patents, and the empirical results have not been encouraging. I don’t think it’s a coincidence that the explosion of patent litigation over the last fifteen years has been concentrated in the software industry.

As the Federal Circuit struggles to craft new rules for software patent eligibility, it should take a close look at the far more restrictive standards that were applied in the 1970s and early 1980s.