Archives for October 2008

DMCA Week, Part I: How the DMCA Was Born

Ten years ago tomorrow, on October 28, 1998, the Digital Millennium Copyright Act was signed into law. The DMCA’s anti-circumvention provisions, which became 17 USC Section 1201, made it a crime under most circumstances to “circumvent a technological measure that effectively controls access to” a copyrighted work, or to “traffic in” circumvention tools. In the default case, the new law meant that a copyright holder who used DRM to control access to her copyrighted material could exercise broad new control over how her material was used. If an album or film were distributed with DRM allowing it to be played only on alternate Tuesdays, or only in certain geographic regions, then those limits enjoyed the force of law: going around them might not involve a violation of copyright per se, but it would involve circumventing the access control, an activity the DMCA made a felony.

Over the course of this week, Freedom to Tinker will be taking stock of the DMCA. What does ten years’ experience tell us about this law in particular, and about technology law and policy more generally?

Today, I’ll focus on the law’s creation. It passed in the Senate by unanimous consent, and in the House by a voice vote. But as Jessica Litman, among others, has pointed out, there was a lively debate leading up to that moment of seeming consensus. As a starting point for discussion, I’ll briefly summarize chapters six through nine of her 2001 book, Digital Copyright: Protecting Intellectual Property on the Internet.

In the early days of the Clinton administration, as part of a broader effort to develop policy responses to what was then known as the “Information Superhighway,” a working group was convened under Patent Commissioner Bruce Lehman to suggest changes to copyright law and policy. This group produced a 267-page white paper in September 1995. It argued that additional protections were necessary because

Creators and other owners of intellectual property rights will not be willing to put their interests at risk if appropriate systems — both in the U.S. and internationally — are not in place to permit them to set and enforce the terms and conditions under which their works are made available in the NII [National Information Infrastructure] environment.

In its section on Technological Protection (from pages 230-234), the white paper offers the meat of its recommendation for what became section 1201, the anti-circumvention rules:

Therefore, the Working Group recommends that the Copyright Act be amended to include a new Chapter 12, which would include a provision to prohibit the importation, manufacture or distribution of any device, product or component incorporated into a device or product, or the provision of any service, the primary purpose or effect of which is to avoid, bypass, remove, deactivate, or otherwise circumvent, without authority of the copyright owner or the law, any process, treatment, mechanism or system which prevents or inhibits the violation of any of the exclusive rights under Section 106. The provision will not eliminate the risk that protection systems will be defeated, but it will reduce it.

In its prediction that anti-circumvention law would “reduce” “the risk that protection systems will be defeated,” the white paper offers a concise statement of the primary rationale for section 1201. That prediction hasn’t panned out: the anti-circumvention rules were enacted, but they did not meaningfully reduce the risk of defeat faced by DRM systems. Despite the DMCA, such systems are routinely defeated soon after they are introduced.

As Professor Litman tells the story, the Lehman white paper’s recommendations met with domestic resistance, which prompted Lehman to “press for an international diplomatic conference in Geneva hosted by the World Intellectual Property Organization (WIPO).” The upshot was a new treaty incorporating many of the white paper’s elements. It required participating nations to “provide adequate legal protection and effective legal remedies against the circumvention of effective technological measures that are used by authors… [to] restrict acts… which are not authorized by the authors concerned or permitted by law.”

Did this treaty actually require something like the DMCA? Before the DMCA’s passage, copyright law already included secondary liability for those who knowingly “induce, cause, or materially contribute to” the infringing conduct of another (contributory infringement liability), or who have the right and ability to control the infringing actions of another party and receive a financial benefit from the infringement (vicarious infringement liability). Clear precedent, and subsequent decisions like MGM v. Grokster, confirm that creators of infringement-enabling technologies can be held liable under copyright law, even without the DMCA. Nonetheless, the treaty’s language was clearly intended by its American framers and promoters to provide a rationale for the DMCA’s anti-circumvention provisions.

One impact of this maneuver was to allow the DMCA to be promoted under the rubric of harmonization—aside from its merits as policy, DMCA proponents could claim that it was necessary in order to meet American treaty obligations. The fact that Clinton administration negotiators had been instrumental in creating the relevant international obligations in the first place was lost in the noise. And overall, America’s interest in meeting its international obligations in the intellectual property arena is quite strong. The economics of patents, rather than of copyright, dominate: U.S. patent holders in pharmaceuticals, high technology and elsewhere find themselves fighting foreign infringement. U.S. legislators are therefore apt to assign very high priority to encouraging global compliance with the intellectual property treaty regime, regardless of concerns they may have about the details of a particular measure.

A second long-term impact was to lead to DMCA-like laws around the world. Other countries often took a narrow reading of the treaty obligation and, on that basis, declined to adopt anti-circumvention rules. But, perhaps emboldened by the success of the international-negotiations-first approach to copyright, the U.S. executive branch has used free trade negotiations as a wedge to force other countries to adopt DMCA-like statutes. Anti-circumvention requirements make surprising cameos in the United States’ bilateral free trade agreements with Jordan, Singapore, Chile, Australia and several other countries.

What lessons can we draw from this experience? First, it is a cautionary tale about international law. One often hears appeals to international law, in domestic political debates, that attach special normative value to the fact that a given provision is required by a treaty. These appeals may be generally justified, but the DMCA/WIPO experience at least argues that they deserve to be evaluated critically rather than taken at face value. Second, it serves as a powerful reminder that the unanimous votes leading to the passage of the DMCA mask an intricate series of negotiations and controversies.

Third, and most importantly, the globalized birth of the DMCA provides a cautionary tale for the future. The currently proposed ACTA (Anti-Counterfeiting Trade Agreement) is a next-generation treaty that would cover online piracy, among other matters. Its exact contents are under wraps: the public outcry and litigation that have surrounded the measure stem mostly from a leaked memo outlining possible principles for inclusion in the treaty. Proposals include creating or strengthening penalties for those who promote infringement non-commercially, and enhancing the ability to seize and destroy infringing media at international borders. Absent the text of a proposed agreement, it’s hard to respond to ACTA in detail. But if the genesis of the DMCA teaches us anything, it is that when an agreement is created in opaque, closed-door negotiations and then presented to the legislature as a fait accompli, it deserves close and skeptical scrutiny.

Maybe "Open Source" Cars Aren't So Crazy After All

I wrote last week about the case for open source car software and, lo and behold, BMW might be pushing forward with the idea, albeit not in self-driving cars quite yet. 😉

Tangentially, I put “open source” in scare quotes because the car scenario highlights a new but important split in the open source and free software communities. The Open Source Initiative’s open source definition allows use of the term ‘open source’ to describe code which is available and modifiable but not installable on the intended device. Traditionally, the open source community assumed that if source was available, you could make modifications and install those modifications on the targeted hardware. This was a safe assumption, since the hardware in question was almost always a generative, open PC or OS. That is no longer the case: as I mentioned in my original car article, one might want to sign binaries so that not just anyone could hack on their cars, for example. Presumably even open source voting machines would have a similar restriction.
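The binary-signing idea mentioned above is simple at its core: the device refuses to run any image that doesn’t carry a valid tag from the manufacturer’s key. A minimal sketch follows; real automotive and voting-machine signing would use asymmetric signatures (the device holds only a public key), but a keyed hash keeps the sketch self-contained, and all the names here are hypothetical.

```python
import hashlib
import hmac

# Hypothetical manufacturer key, imagined as baked into the device's boot ROM.
DEVICE_KEY = b"example-manufacturer-key"

def sign_firmware(image: bytes) -> bytes:
    """Manufacturer side: tag an official firmware image."""
    return hmac.new(DEVICE_KEY, image, hashlib.sha256).digest()

def verify_firmware(image: bytes, tag: bytes) -> bool:
    """Device side: refuse to boot any image whose tag doesn't check out."""
    expected = hmac.new(DEVICE_KEY, image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

official = b"official firmware build"
tag = sign_firmware(official)

assert verify_firmware(official, tag)                 # the official build boots
assert not verify_firmware(b"my patched build", tag)  # a modified build is rejected
```

The last line is the whole dispute in miniature: the source may be public and modifiable, but without the signing key your modified build never runs on the device.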

Another example appears to be the new ‘Google phone’ (the G1, running Android). You can download several gigs of source code now, appropriately licensed, so the code can be called ‘open source’ under the OSI’s definition. But apparently you can’t yet modify that code and install the modified binaries on your own phone.

The new GPL v3 tries to address this issue by requiring (under certain circumstances) that GPL v3’d code be installable on the devices with which it is shipped. But to the best of my knowledge no other license yet requires this, and v3 is not yet widespread enough to put a serious dent in this trend.

Exactly how ‘open’ such code is remains up for discussion. It meets the official definition, but the inability to actually do much with the code seems likely to limit the growth of a constructive community around the software for these types of devices: phones, cars, or otherwise. This issue bears keeping in mind when thinking about openness for the source code of closed hardware: you will certainly see ‘open source’ tossed around a lot, but in this context it may not always mean what you think it does.

An Illustration of Wikipedia's Vast Human Resources

The Ashley Todd incident has given us a nice illustration of the points I made on Friday about “free-riding” and Wikipedia. As Clay Shirky notes, there’s a quasi-ideological divide within Wikipedia between “deletionists” who want to tightly control the types of topics that are covered on Wikipedia and “inclusionists” who favor a more liberal policy. On Friday, the Wikipedia page on Ashley Todd became the latest front in the battle between them. You can see the argument play out here. For the record, both Shirky and I came down on the inclusionists’ side. The outcome of the debate was that the article was renamed from “Ashley Todd” to “Ashley Todd mugging hoax,” an outcome I was reasonably happy with.

Notice how the Wikipedia process reverses the normal editorial process. If Britannica were considering an article on Ashley Todd, some Britannica editor would first perform a cost-benefit analysis to decide whether the article would be interesting enough to readers to justify the cost of creating it. If she thought it was, then she would commission someone to write it, and pay the writer for his work. Once the article was written, she would almost always include it in the encyclopedia, because she had paid good money for it.

In contrast, the Wikipedia process is that some people go ahead and create an article and then there is frequently an argument about whether the article should be kept. The cost of creating the article is so trivial, relative to Wikipedia’s ample resources of human time and attention, that it’s not even mentioned in the debate over whether to keep the article.

To get a sense for the magnitude of this, consider that in less than 24 hours, dozens of Wikipedians generated a combined total of about 5000 words of arguments for and against deleting an article that is itself only 319 words. The effort people (including me) spent arguing about whether to have the article dwarfed the effort required to create the article in the first place.

Not only does Wikipedia have no difficulties overcoming a “free rider” problem, but the site actually has so many contributors that it can afford to squander vast amounts of human time and attention debating whether to toss out work that has already been done but may not meet the community’s standards.

The Trouble with "Free Riding"

This week, one of my favorite podcasts, EconTalk, features one of my favorite Internet visionaries, Clay Shirky. I interviewed Shirky when his book came out back in April. The host, Russ Roberts, covered some of the same ground, but also explored some different topics, so it was an enjoyable listen.

I was struck by something Prof. Roberts said about 50 minutes into the podcast:

One of the things that fascinates me about [Wikipedia] is that I think if you’d asked an economist in 1950, 1960, 1970, 1980, 1990, even 2000: “could Wikipedia work,” most of them would say no. They’d say “well it can’t work, you see, because you get so little glory from this. There’s no profit. Everyone’s gonna free ride. They’d love to read Wikipedia if it existed, but no one’s going to create it because there’s a free-riding problem.” And those folks were wrong. They misunderstood the pure pleasure that overcomes some of that free-rider problem.

He’s right, but I would make a stronger point: the very notion of a “free-rider problem” is nonsensical when we’re talking about a project like Wikipedia. When Roberts says that Wikipedia solves “some of” the free-rider problem, he seems to be conceding that there’s some kind of “free rider problem” that needs to be overcome. I think even that is conceding too much. In fact, talking about “free riding” as a problem the Wikipedia community needs to solve doesn’t make any sense. The overwhelming majority of Wikipedia users “free ride,” and far from being a drag on Wikipedia’s growth, this large audience acts as a powerful motivator for continued contribution to the site. People like to contribute to an encyclopedia with a large readership; indeed, the enormous number of “free-riders”—a.k.a. users—is one of the most appealing things about being a Wikipedia editor.

This is more than a semantic point. Unfortunately, the “free riding” frame is one of the most common ways people discuss the economics of online content creation, and I think it has been an obstacle to clear thinking.

The idea of “free riding” is based on a couple of key 20th-century assumptions that just don’t apply to the online world. The first assumption is that the production of content is a net cost that must either be borne by the producer or compensated by consumers. This is obviously true for some categories of content—no one has yet figured out how to peer-produce Hollywood-quality motion pictures, for example—but it’s far from universal. Moreover, the real world abounds in counterexamples. No one loses sleep over the fact that people “free ride” by watching company softball games, community orchestras, or amateur poetry readings. To the contrary, it’s understood that the vast majority of musicians, poets, and athletes find these activities intrinsically enjoyable, and they’re grateful to have an audience “free ride” off of their effort.

The same principle applies to Wikipedia. Participating in Wikipedia is a net positive experience for both readers and editors. We don’t need to “solve” the free rider problem because there are more than enough people out there for whom the act of contributing is its own reward.

The second problem with the “free riding” frame is that it fails to appreciate that the sheer scale of the Internet changes the nature of collective action problems. With a traditional meatspace institution like a church, business or intramural sports league, it’s essential that most participants “give back” in order for the collective effort to succeed. The concept of “free riding” emphasizes the fact that traditional offline institutions expect and require reciprocation from the majority of their members for their continued existence. A church in which only, say, one percent of members contributed financially wouldn’t last long. Neither would an airline in which only one percent of the customers paid for their tickets.

On Wikipedia—and a lot of other online content-creation efforts—the ratio of contributors to users just doesn’t matter. Because the marginal cost of copying and distributing content is very close to zero, institutions can get along just fine with arbitrarily high “free riding” rates. All that matters is whether the absolute number of contributors is adequate. And because some fraction of new users will always become contributors, an influx of additional “free riders” is almost always a good thing.

Talking about peer production as solving a “free-rider problem,” then, gets things completely backwards. The biggest danger collaborative online projects face is not “free riding” but obscurity. A tiny free software project in which every user contributes code is in a much worse position than a massively popular software project like Firefox in which 99.9 percent of users “free ride.” Obviously, every project would like to have more of its users become contributors. But the far more important objective for an online collaborative effort is to grow the total size of the user community. New “free riders” are better than nothing.

I think this misplaced focus on free-riding relates to the Robert Laughlin talk I discussed on Wednesday. I suspect that one of the reasons Laughlin is dismissive of business models that involve giving away software is that he’s used to traditional business models in which the marginal customer always imposes non-trivial costs. Companies that sell products made out of atoms would obviously go bankrupt if they tried to give away an unlimited number of their products. We’ve never before had goods that could be replicated infinitely and distributed at close to zero cost, and so it’s not surprising that our intuitions and our economic models have trouble dealing with them. But they’re not going away, so we’re going to have to adjust our models accordingly. Dispensing with the concept of “free riding” is a good place to start.

In closing, let me recommend Mark Lemley’s excellent paper on the economics of free riding as it applies to patent and copyright debates. He argues persuasively that eliminating “free riding” is not only undesirable, but that it’s ultimately not even a coherent objective.

Abandoning the Envelope Analogy (What Your Mailman Knows Part 2)

Last time, I commented on NPR’s story about a mail carrier named Andrea in Seattle who can tell us something about the economic downturn by revealing private facts about the people she serves on her mail route. By critiquing the decision to run the story, I drew a few lessons about the way people value and weigh privacy. In Part 2 of this series, I want to tie this to NebuAd and Phorm.

It’s probably a sign of the deep level of monomania to which I’ve descended that as I listened to the story, I immediately started drawing connections between Andrea and NebuAd/Phorm. Technology policy almost always boils down to a battle over analogies, and many in the ISP surveillance/deep packet inspection debate embrace the so-called envelope analogy. (See, e.g., the comments of David Reed to Congress about DPI, and see the FCC’s Comcast/BitTorrent order.) Just as mail carriers are prohibited from opening closed envelopes, the typical argument goes, so too should packet carriers be prohibited from looking “inside” the packets they deliver.

As I explain in my article, I’m not a fan of the envelope analogy. The NPR story gives me one more reason to dislike it: envelopes (the physical kind) don’t mark as clear a line of privacy as we may have thought. Although Andrea is restricted by law from peeking inside envelopes, every day her mail route is awash in “metadata” that reveal much more than the mere words scribbled on the envelopes themselves. By analyzing all of this metadata, Andrea has many ways of inferring what is inside the envelopes she delivers, and she feels pretty confident about her guesses.

There are metadata gleaned from the envelopes themselves: certified letters usually mean bad economic news; utility bills turn from white to yellow to red as a person slides toward insolvency. She also engages in traffic analysis: fewer credit card offers might herald the credit crunch. She picks up cues from the surroundings, too: more names on a mailbox might mean that a young man who can no longer make rent has moved in with grandma. Perhaps most importantly, she interacts with the human recipients of these envelopes, reporting in the story about a guy who runs a cafe who jokes about needing credit card offers in order to pay the bill, or describing the people who watch her approach with “a real desperation in their eyes; when they see me their face falls; what am I going to bring today?”
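The packet-world version of Andrea’s vantage point is easy to make concrete. An IP datagram really does have an “outside” (the header) and an “inside” (the payload), and a carrier necessarily reads the outside just to deliver it. A minimal sketch, using a hand-built, hypothetical IPv4 packet:

```python
import struct

# A minimal, hypothetical IPv4 packet: a 20-byte header followed by a payload.
packet = (
    struct.pack(
        "!BBHHHBBH4s4s",
        0x45,  # version 4, header length 5 words
        0,     # type of service
        48,    # total length (illustrative)
        1, 0,  # identification, flags/fragment offset
        64,    # TTL
        6,     # protocol: TCP
        0,     # checksum (left zero in this sketch)
        bytes([192, 0, 2, 1]),     # source address
        bytes([198, 51, 100, 7]),  # destination address
    )
    + b"GET /private-page HTTP/1.1\r\n"
)

header, payload = packet[:20], packet[20:]

# The "outside of the envelope": metadata any carrier must read to route it.
_, _, total_len = struct.unpack("!BBH", header[:4])
src = ".".join(str(b) for b in header[12:16])
dst = ".".join(str(b) for b in header[16:20])
print(src, "->", dst, "length", total_len)

# The "inside": the payload that deep packet inspection would examine.
print(payload)
```

Note how much the “outside” alone reveals: who is talking to whom, how much, and how often. As with Andrea’s envelopes, the addressing metadata already supports a great deal of inference, which is exactly the kind of scrutiny the envelope analogy glosses over.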

So let’s stop using the envelope analogy, because it makes a comparison that doesn’t really fit. But I have a deeper objection to its use in the DPI/ISP surveillance debate: it states a problem rather than proposing a solution, and it assumes away all of the hard questions. Saying that there is an “inside” and an “outside” to a packet is the same thing as saying that we need to draw a line between permissible and impermissible scrutiny, but it offers no guidance about how or where to draw that line. The promise of the envelope analogy is that it is clear and easy to apply, but the solutions proposed to implement the analogy are rarely so clear.