
DMCA Week: Where's My DVD Jukebox?

One of the most difficult challenges in thinking about public policy is identifying which innovations have not happened as a result of bad government policies. For example, it’s generally believed that the Bell phone monopoly stifled innovation in the telecommunications sector during the 1950s and 1960s. But if we had been assessing things from the standpoint of the mid-1960s, it would have been hard to say exactly which innovations were missing. It wasn’t until after the Carterfone decision in 1968, and further liberalizations in the 1970s and 1980s, that we started to see just how many innovations could be unleashed by a competitive market: modems, answering machines, fax machines, competitive long-distance service, and more.

We face a somewhat analogous situation with the Digital Millennium Copyright Act. Like a lot of other people, I’ve made the argument that the DMCA has stifled important high-tech innovations. And the DMCA has been on the books long enough that if we’re right, then we’re probably missing out on some important innovations. But it’s difficult to say exactly what they are; they’re what Bastiat called “what is not seen”, and what Don Rumsfeld called unknown unknowns.

But while we can’t say for certain which innovations have been precluded by the DMCA, we can find plenty of hints. Just last week, Apple CEO Steve Jobs commented that “the whole category [of ‘digital living room’ video devices] is still a hobby right now. I don’t think anybody has succeeded at it. And actually the experimentation has slowed down. A lot of the early companies that were trying things have faded away.”

What’s striking about this is how different the evolution of digital home video products is from the explosion of digital audio products a decade ago. The period between the introduction of the first MP3 players in 1998 and the release of the iPod in 2001 was a period of fevered innovation in both hardware and software. Numerous companies, some of them fairly small, introduced music players. By the time Apple entered the scene in 2001, it was entering an already-crowded market. In contrast, we’ve seen only a trickle of new digital video devices. The Video iPod, Slingbox, and YouTube were all introduced in 2005. These are all great products, but they’re also curiously limited. We haven’t seen general-purpose video devices that replace DVD players and cable boxes the way the iPod has largely replaced CD players and is gradually replacing the radio. Today’s home video experience would be completely unsurprising to someone from 10 years ago.

How is the video market different from what the audio market was a decade ago? There are obviously a lot of factors, but it seems to me that the DMCA is one of the most important. It has made it effectively illegal for consumers to rip video from their DVDs the way they ripped audio from their CDs. And the ability to rip CDs into MP3 files created the foundation on which the digital music device market was built, eventually leading to the iPod.

Consider two products that have not been widely adopted, due largely to DMCA-related legal problems. One is the XBMC Media Center, which used to be known as the XBox Media Center before Microsoft’s lawyers came knocking. Two years ago, I pointed out that the XBMC had significantly more functionality than a lot of “legitimate” media players. I think that’s still true today. One of the most important features was the ability to rip DVDs and store them on your hard drive for later playback. Unfortunately, the DMCA makes it essentially impossible for mainstream technology companies to duplicate this functionality.

XBMC, or other software like it, could have been the WinAmp of video, allowing law-abiding people to build libraries of legitimate video in an open format. That, in turn, would have created a market for digital video hardware to store, play, and manipulate these files, just as WinAmp and other MP3 software made the MP3 player market possible. But because the DMCA makes DVD-ripping effectively illegal, there is no legal way for people to get their existing DVD libraries into an open format, which drastically reduces the demand for open video devices.

Or consider Kaleidescape, an innovative, and very expensive, DVD-jukebox device that was introduced almost five years ago and has faced legal trouble almost from its inception. As Ed put it back in 2004:

DVD-CCA [the cartel that controls the DVD standard] is trying to maintain its control over all technology related to DVDs. In the good old days, copyright law gave copyright owners the right to sue infringers but gave no right to stop noninfringing uses just because the copyright owner didn’t like them. These days, copyright interests seem to want broad control over technology design.

Kaleidescape ultimately won its lawsuit, but the decision turned on fairly narrow contractual grounds that don’t provide much room for others to enter the market. The bottom line is that it’s still effectively illegal to sell a product that will rip DVDs to an open video format.

It’s not like Hollywood hasn’t been trying to produce a viable video platform. Way back in 2003, Hollywood had two proprietary download services called MovieLink and CinemaNow. Unfortunately, they crashed and burned. This part of the story is actually strikingly similar to that of the music industry, which had proprietary download services of its own that did just as poorly.

What’s different is that video entrepreneurs don’t have the freedom that audio entrepreneurs did to opt out of the incumbents’ preferred platforms and build their own. It’s worth remembering that the recording industry tried to sue the first MP3 players out of existence. What we’re seeing in the video market is what the digital audio marketplace would have looked like if the recording industry had won its lawsuit against the first MP3 players. The recording industry lost that lawsuit, and entrepreneurs went on to build products that were much better than the “official” ones being pushed by the labels. Unfortunately, entrepreneurs in the digital video market don’t have that same option.

If the DMCA were not on the books, it seems likely that products like Kaleidescape and the XBMC would be growing rapidly in popularity. Many of us would have set-top boxes with 500 GB hard drives capable of ripping dozens of DVDs to an open, standard format for subsequent streaming to any display in the user’s house. The existence of those boxes would spur the creation of a wider market for other digital video products designed to interoperate with the emerging open video standard.

Unfortunately, that’s not how things have gone. Hollywood has managed to do what the recording industry was unable to do: to ban users from converting their legally-purchased content to open formats. As a result, the market for open digital video devices is a pale shadow of what it would be in a competitive market. We’re stuck with clunky, proprietary, and non-interoperable products like Apple TV that require users to re-purchase their existing movie collections in order to watch them on the new device. I think everyone would agree that it was a good thing that the courts didn’t let the recording industry shut down the MP3 player market a decade ago. So why do we tolerate a law that effectively shuts down the analogous market for DVD jukeboxes?

DMCA Week, Part I: How the DMCA Was Born

Ten years ago tomorrow, on October 28, 1998, the Digital Millennium Copyright Act was signed into law. The DMCA’s anti-circumvention provisions, which became 17 USC Section 1201, made it a crime under most circumstances to “circumvent a technological measure that effectively controls access to” a copyrighted work, or to “traffic in” circumvention tools. In the default case, the new law meant that a copyright holder who used DRM to control access to her copyrighted material could exercise broad new control over how her material was used. If an album or film were distributed with DRM allowing it to be played only on alternate Tuesdays, or only in certain geographic regions, then those limits would enjoy the force of law: going around them might not involve a violation of copyright per se, but it would involve circumventing the access control, an activity that the DMCA made a felony.

Over the course of this week, Freedom to Tinker will be taking stock of the DMCA. What does ten years’ experience tell us about this law in particular, and about technology law and policy more generally?

Today, I’ll focus on the law’s creation. It passed in the Senate by unanimous consent, and in the House by a voice vote. But as Jessica Litman, among others, has pointed out, there was a lively debate leading up to that moment of apparent consensus. As a starting point for discussion, I’ll briefly summarize chapters six through nine of her 2001 book, Digital Copyright: Protecting Intellectual Property on the Internet.

In the early days of the Clinton administration, as part of a broader effort to develop policy responses to what was then known as the “Information Superhighway,” a working group was convened under Patent Commissioner Bruce Lehman to suggest changes to copyright law and policy. This group produced a 267-page white paper in September 1995. It argued that additional protections were necessary because

Creators and other owners of intellectual property rights will not be willing to put their interests at risk if appropriate systems — both in the U.S. and internationally — are not in place to permit them to set and enforce the terms and conditions under which their works are made available in the NII [National Information Infrastructure] environment.

In its section on Technological Protection (from pages 230-234), the white paper offers the meat of its recommendation for what became section 1201, the anti-circumvention rules:

Therefore, the Working Group recommends that the Copyright Act be amended to include a new Chapter 12, which would include a provision to prohibit the importation, manufacture or distribution of any device, product or component incorporated into a device or product, or the provision of any service, the primary purpose or effect of which is to avoid, bypass, remove, deactivate, or otherwise circumvent, without authority of the copyright owner or the law, any process, treatment, mechanism or system which prevents or inhibits the violation of any of the exclusive rights under Section 106. The provision will not eliminate the risk that protection systems will be defeated, but it will reduce it.

In predicting that anti-circumvention law would “reduce” “the risk that protection systems will be defeated,” the white paper offers a concise statement of the primary rationale for section 1201. That prediction hasn’t panned out: the anti-circumvention rules were enacted, but they have not meaningfully reduced the risk of defeat faced by DRM systems. Despite the DMCA, such systems are routinely defeated soon after they are introduced.

As Professor Litman tells the story, the Lehman white paper’s recommendations met with domestic resistance, which prompted Lehman to “press for an international diplomatic conference in Geneva hosted by the World Intellectual Property Organization (WIPO).” The upshot was a new treaty incorporating many of the white paper’s elements. It required participating nations to “provide adequate legal protection and effective legal remedies against the circumvention of effective technological measures that are used by authors… [to] restrict acts… which are not authorized by the authors concerned or permitted by law.”

Did this treaty actually require something like the DMCA? Before the DMCA’s passage, copyright law already included secondary liability for those who knowingly “induce, cause, or materially contribute to” the infringing conduct of another (contributory infringement liability) and for those who have the right and ability to control the infringing actions of another party and receive a financial benefit from the infringement (vicarious infringement liability). Clear precedent, and subsequent decisions like MGM v. Grokster, confirm that creators of infringement-enabling technologies can be held liable under copyright law even without the DMCA. Nonetheless, the treaty’s language was clearly intended by its American framers and promoters to provide a rationale for the DMCA’s anti-circumvention provisions.

One impact of this maneuver was to allow the DMCA to be promoted under the rubric of harmonization: whatever its merits as policy, proponents could claim the DMCA was necessary in order to meet American treaty obligations. The fact that Clinton administration negotiators had been instrumental in creating the relevant international obligations in the first place was lost in the noise. And overall, America’s interest in meeting its international obligations in the intellectual property arena is quite strong. The economics of patents, rather than of copyright, dominate: U.S. patent holders in pharmaceuticals, high technology and elsewhere find themselves fighting foreign infringement. U.S. legislators are therefore apt to assign very high priority to encouraging global compliance with the intellectual property treaty regime, regardless of concerns they may have about the details of a particular measure.

A second long-term impact was to lead to DMCA-like laws around the world. Other countries often read the treaty obligation narrowly and, on that basis, declined to adopt anti-circumvention rules. But, perhaps emboldened by the success of the international-negotiations-first approach to copyright, the U.S. executive branch has used free trade negotiations as a wedge to force other countries to adopt DMCA-like statutes. Anti-circumvention requirements make surprising cameos in the United States’ bilateral free trade agreements with Jordan, Singapore, Chile, Australia and several other countries (more information here).

What lessons can we draw from this experience? First, it is a cautionary tale about international law. One often hears appeals to international law, in domestic political debates, that attach special normative value to the fact that a given provision is required by a treaty. These appeals may be generally justified, but the DMCA/WIPO experience at least argues that they deserve to be evaluated critically rather than taken at face value. Second, it serves as a powerful reminder that the unanimous votes leading to the passage of the DMCA mask an intricate series of negotiations and controversies.

Third, and most importantly, the globalized birth of the DMCA provides a cautionary tale for the future. The currently proposed ACTA (Anti-Counterfeiting Trade Agreement) is a next-generation treaty that would cover online piracy, among other matters. Its exact contents are under wraps; the public outcry and litigation that have surrounded the measure stem mostly from a leaked memo outlining possible principles for inclusion in the treaty. Proposals include creating or strengthening penalties for those who promote infringement non-commercially, and enhancing the ability to seize and destroy infringing media at international borders. Absent the text of a proposed agreement, it’s hard to respond to ACTA in detail. But if the genesis of the DMCA teaches us anything, it is that when an agreement is created in opaque, closed-door negotiations and then presented to the legislature as a fait accompli, it deserves close and skeptical scrutiny.

Maybe "Open Source" Cars Aren't So Crazy After All

I wrote last week about the case for open source car software and, lo and behold, BMW might be pushing forward with the idea, albeit not in self-driving cars quite yet. 😉

Tangentially, I put “open source” in scare quotes because the car scenario highlights a new but important split in the open source and free software communities. The Open Source Initiative’s open source definition allows the term ‘open source’ to describe code that is available and modifiable but not installable on the intended device. Traditionally, the open source community assumed that if source was available, you could make modifications and install those modifications on the targeted hardware. This was a safe assumption, since the hardware in question was almost always a generative, open PC or OS. That is no longer the case: as I mentioned in my original car article, one might want to sign binaries so that not just anyone could hack on their car, for example. Presumably even open source voting machines would have a similar restriction.

Another example appears to be the new ‘Google phone’ (the G1, running Android). You can download several gigabytes of source code now, appropriately licensed, so that the code can be called ‘open source’ under the OSI’s definition. But apparently you can’t yet modify that code and install the modified binaries on your own phone.

The new GPL v3 tries to address this issue by requiring (under certain circumstances) that GPL v3’d code be installable on the devices with which it ships. But to the best of my knowledge no other license yet requires this, and v3 is not yet widespread enough to put a serious dent in the trend.

Exactly how ‘open’ such code really is remains up for discussion. It meets the official definition, but the inability to actually do much with the code seems likely to limit the growth of a constructive community around the software for these kinds of devices, whether phones, cars, or something else. This issue bears keeping in mind when thinking about openness for the source code of closed hardware: you will certainly see ‘open source’ tossed around a lot, but in this context it may not always mean what you think it does.

An Illustration of Wikipedia's Vast Human Resources

The Ashley Todd incident has given us a nice illustration of the points I made on Friday about “free-riding” and Wikipedia. As Clay Shirky notes, there’s a quasi-ideological divide within Wikipedia between “deletionists” who want to tightly control the types of topics that are covered on Wikipedia and “inclusionists” who favor a more liberal policy. On Friday, the Wikipedia page on Ashley Todd became the latest front in the battle between them. You can see the argument play out here. For the record, both Shirky and I came down on the inclusionists’ side. The outcome of the debate was that the article was renamed from “Ashley Todd” to “Ashley Todd mugging hoax,” an outcome I was reasonably happy with.

Notice how the Wikipedia process reverses the normal editorial process. If Britannica were considering an article on Ashley Todd, some Britannica editor would first perform a cost-benefit analysis to decide whether the article would be interesting enough to readers to justify the cost of creating it. If she thought it was, she would commission someone to write it and pay the writer for his work. Once the article was written, she would almost always include it in the encyclopedia, because she had paid good money for it.

In contrast, the Wikipedia process is that some people go ahead and create an article and then there is frequently an argument about whether the article should be kept. The cost of creating the article is so trivial, relative to Wikipedia’s ample resources of human time and attention, that it’s not even mentioned in the debate over whether to keep the article.

To get a sense for the magnitude of this, consider that in less than 24 hours, dozens of Wikipedians generated a combined total of about 5000 words of arguments for and against deleting an article that is itself only about 319 words. The effort people (including me) spent arguing about whether to have the article dwarfed the effort required to create the article in the first place.

Not only does Wikipedia have no difficulties overcoming a “free rider” problem, but the site actually has so many contributors that it can afford to squander vast amounts of human time and attention debating whether to toss out work that has already been done but may not meet the community’s standards.

The Trouble with "Free Riding"

This week, one of my favorite podcasts, EconTalk, features one of my favorite Internet visionaries, Clay Shirky. I interviewed Shirky when his book came out back in April. The host, Russ Roberts, covered some of the same ground, but also explored some different topics, so it was an enjoyable listen.

I was struck by something Prof. Roberts said about 50 minutes into the podcast:

One of the things that fascinates me about [Wikipedia] is that I think if you’d asked an economist in 1950, 1960, 1970, 1980, 1990, even 2000: “could Wikipedia work,” most of them would say no. They’d say “well it can’t work, you see, because you get so little glory from this. There’s no profit. Everyone’s gonna free ride. They’d love to read Wikipedia if it existed, but no one’s going to create it because there’s a free-riding problem.” And those folks were wrong. They misunderstood the pure pleasure that overcomes some of that free-rider problem.

He’s right, but I would make a stronger point: the very notion of a “free-rider problem” is nonsensical when we’re talking about a project like Wikipedia. When Roberts says that Wikipedia solves “some of” the free-rider problem, he seems to be conceding that there’s some kind of “free rider problem” that needs to be overcome. I think even that is conceding too much. In fact, talking about “free riding” as a problem the Wikipedia community needs to solve doesn’t make any sense. The overwhelming majority of Wikipedia users “free ride,” and far from being a drag on Wikipedia’s growth, this large audience acts as a powerful motivator for continued contribution to the site. People like to contribute to an encyclopedia with a large readership; indeed, the enormous number of “free-riders”—a.k.a. users—is one of the most appealing things about being a Wikipedia editor.

This is more than a semantic point. Unfortunately, the “free riding” frame is one of the most common ways people discuss the economics of online content creation, and I think it has been an obstacle to clear thinking.

The idea of “free riding” is based on a couple of key 20th-century assumptions that just don’t apply to the online world. The first assumption is that the production of content is a net cost that must either be borne by the producer or compensated by consumers. This is obviously true for some categories of content—no one has yet figured out how to peer-produce Hollywood-quality motion pictures, for example—but it’s far from universal. Moreover, the real world abounds in counterexamples. No one loses sleep over the fact that people “free ride” off of watching company softball games, community orchestras, or amateur poetry readings. To the contrary, it’s understood that the vast majority of musicians, poets, and athletes find these activities intrinsically enjoyable, and they’re grateful to have an audience “free ride” off of their effort.

The same principle applies to Wikipedia. Participating in Wikipedia is a net positive experience for both readers and editors. We don’t need to “solve” the free rider problem because there are more than enough people out there for whom the act of contributing is its own reward.

The second problem with the “free riding” frame is that it fails to appreciate that the sheer scale of the Internet changes the nature of collective action problems. With a traditional meatspace institution like a church, business or intramural sports league, it’s essential that most participants “give back” in order for the collective effort to succeed. The concept of “free riding” emphasizes the fact that traditional offline institutions expect and require reciprocation from the majority of their members for their continued existence. A church in which only, say, one percent of members contributed financially wouldn’t last long. Neither would an airline in which only one percent of the customers paid for their tickets.

On Wikipedia—and a lot of other online content-creation efforts—the ratio of contributors to users just doesn’t matter. Because the marginal cost of copying and distributing content is very close to zero, institutions can get along just fine with arbitrarily high “free riding” rates. All that matters is whether the absolute number of contributors is adequate. And because some fraction of new users will always become contributors, an influx of additional “free riders” is almost always a good thing.

Talking about peer production as solving a “free-rider problem,” then, gets things completely backwards. The biggest danger collaborative online projects face is not “free riding” but obscurity. A tiny free software project in which every user contributes code is in a much worse position than a massively popular software project like Firefox in which 99.9 percent of users “free ride.” Obviously, every project would like more of its users to become contributors. But the far more important objective for an online collaborative effort is to grow the total size of the user community. New “free riders” are better than nothing.

I think this misplaced focus on free riding relates to the Robert Laughlin talk I discussed on Wednesday. I suspect that one of the reasons Laughlin is dismissive of business models that involve giving away software is that he’s used to traditional business models in which the marginal customer always imposes non-trivial costs. Companies that sell products made out of atoms would obviously go bankrupt if they tried to give away an unlimited number of their products. We’ve never before had goods that could be replicated infinitely and distributed at close to zero cost, so it’s not surprising that our intuitions and our economic models have trouble dealing with them. But they’re not going away, so we’re going to have to adjust our models accordingly. Dispensing with the concept of “free riding” is a good place to start.

In closing, let me recommend Mark Lemley’s excellent paper on the economics of free riding as it applies to patent and copyright debates. He argues persuasively that eliminating “free riding” is not only undesirable but ultimately not even a coherent objective.