
DMCA Week, Part I: How the DMCA Was Born

Ten years ago tomorrow, on October 28, 1998, the Digital Millennium Copyright Act was signed into law. The DMCA’s anti-circumvention provisions, which became 17 USC Section 1201, made it a crime under most circumstances to “circumvent a technological measure that effectively controls access to” a copyrighted work, or to “traffic in” circumvention tools. In the default case, the new law meant that a copyright holder who used DRM to control access to her copyrighted material could exercise broad new control over how her material was used. If an album or film were distributed with DRM allowing it to be played only on alternate Tuesdays, or only in certain geographic regions, then these limits enjoyed the force of law–to go around them might not involve a violation of copyright per se, but it would involve circumventing the access control, an activity that the DMCA made a felony.

Over the course of this week, Freedom to Tinker will be taking stock of the DMCA. What does ten years’ experience tell us about this law in particular, and about technology law and policy more generally?

Today, I’ll focus on the law’s creation. It passed in the Senate by unanimous consent, and in the House by a voice vote. But as Jessica Litman, among others, has pointed out, there was a lively debate leading up to that moment of seeming consensus. As a starting point for discussion, I’ll briefly summarize chapters six through nine of her 2001 book, Digital Copyright: Protecting Intellectual Property on the Internet.

In the early days of the Clinton administration, as part of a broader effort to develop policy responses to what was then known as the “Information Superhighway,” a working group was convened under Patent Commissioner Bruce Lehman to suggest changes to copyright law and policy. This group produced a 267-page white paper in September 1995. It argued that additional protections were necessary because

Creators and other owners of intellectual property rights will not be willing to put their interests at risk if appropriate systems — both in the U.S. and internationally — are not in place to permit them to set and enforce the terms and conditions under which their works are made available in the NII [National Information Infrastructure] environment.

In its section on Technological Protection (from pages 230-234), the white paper offers the meat of its recommendation for what became section 1201, the anti-circumvention rules:

Therefore, the Working Group recommends that the Copyright Act be amended to include a new Chapter 12, which would include a provision to prohibit the importation, manufacture or distribution of any device, product or component incorporated into a device or product, or the provision of any service, the primary purpose or effect of which is to avoid, bypass, remove, deactivate, or otherwise circumvent, without authority of the copyright owner or the law, any process, treatment, mechanism or system which prevents or inhibits the violation of any of the exclusive rights under Section 106. The provision will not eliminate the risk that protection systems will be defeated, but it will reduce it.

In its prediction that anti-circumvention law would “reduce” “the risk that protection systems will be defeated,” the white paper offers a concise statement of the primary rationale for section 1201. That prediction hasn’t panned out: the anti-circumvention rules were enacted, but they did not meaningfully reduce the risk of defeat faced by DRM systems. Despite the DMCA, such systems are routinely defeated soon after they are introduced.

As Professor Litman tells the story, the Lehman white paper’s recommendations met with domestic resistance, which prompted Lehman to “press for an international diplomatic conference in Geneva hosted by the World Intellectual Property Organization (WIPO).” The upshot was a new treaty incorporating many of the white paper’s elements. It required participating nations to “provide adequate legal protection and effective legal remedies against the circumvention of effective technological measures that are used by authors… [to] restrict acts… which are not authorized by the authors concerned or permitted by law.”

Did this treaty actually require something like the DMCA? Before the DMCA’s passage, copyright law already included secondary liability for those who knowingly “induce, cause, or materially contribute to” the infringing conduct of another (contributory infringement liability), or who have the right and ability to control the infringing actions of another party and receive a financial benefit from the infringement (vicarious infringement liability). Clear precedent, along with subsequent decisions like MGM v. Grokster, confirms that creators of infringement-enabling technologies can be held liable under copyright law, even without the DMCA. Nonetheless, the treaty’s language was clearly intended by its American framers and promoters to provide a rationale for the DMCA’s anti-circumvention provisions.

One impact of this maneuver was to allow the DMCA to be promoted under the rubric of harmonization—aside from its merits as policy, DMCA proponents could claim that it was necessary in order to meet American treaty obligations. The fact that Clinton administration negotiators had been instrumental in creating the relevant international obligations in the first place was lost in the noise. And overall, America’s interest in meeting its international obligations in the intellectual property arena is quite strong. The economics of patents, rather than of copyright, dominate: U.S. patent holders in pharmaceuticals, high technology and elsewhere find themselves fighting foreign infringement. U.S. legislators are therefore apt to assign very high priority to encouraging global compliance with the intellectual property treaty regime, regardless of concerns they may have about the details of a particular measure.

A second long-term impact was to lead to DMCA-like laws around the world. Other countries often read the treaty obligation narrowly and, on that reading, declined to adopt anti-circumvention rules. But, perhaps emboldened by the success of the international-negotiations-first approach to copyright, the U.S. executive branch has used free trade negotiations as a wedge to force other countries to adopt DMCA-like statutes. Anti-circumvention requirements make surprising cameos in the United States’s bilateral free trade agreements with Jordan, Singapore, Chile, Australia and several other countries (more information here).

What lessons can we draw from this experience? First, it is a cautionary tale about international law. In domestic political debates, one often hears appeals to international law that attach special normative value to the fact that a given provision is required by a treaty. These appeals may be generally justified, but the DMCA/WIPO experience at least argues that they deserve to be evaluated critically rather than taken at face value. Second, it serves as a powerful reminder that the unanimous votes leading to the passage of the DMCA mask an intricate series of negotiations and controversies.

Third, and most importantly, the globalized birth of the DMCA provides a cautionary tale for the future. The currently proposed ACTA (Anti-Counterfeiting Trade Agreement) is a next-generation treaty that would cover online piracy, among other matters. Its exact contents are under wraps–the public outcry and litigation that have surrounded the measure stem mostly from a leaked memo outlining possible principles for inclusion in the treaty. Proposals include creating or strengthening penalties for those who promote infringement non-commercially, and enhanced ability to seize and destroy infringing media at international borders. Absent the text of a proposed agreement, it’s hard to respond in detail to ACTA. But if the genesis of the DMCA teaches us anything, it is that an agreement created in opaque, closed-door negotiations and then presented to the legislature as a fait accompli deserves close and skeptical scrutiny.

Will cherry picking undermine the market for ad-supported television?

Want to watch a popular television show without all the ads? Your options are increasing. There’s the iTunes store, moving toward HD video formats, in which a growing range of shows can be bought on a per-episode or per-season basis, to be watched without advertisements on a growing range of devices at a time of your choosing. Or you could buy a Netflix subscription and Roku streaming box on top of your existing media expenditures, and stream many TV episodes directly over the web. Then there’s the growing market for DVDs or Blu-ray discs themselves, which are higher definition and particularly rewarding for those who are able to shell out for top-end home theater systems that can make the most of the added information in a disc as opposed to a broadcast. I’m sure there are yet more options for turning a willingness to pay into an ad-free viewing experience — video-on-demand over the pricey but by most accounts great FiOS service, perhaps? Finally, TiVo and other options like it reward those who can afford DVRs, and further reward those savvy enough to bother programming their remotes with the 30-second skip feature.

In any case, the growing popularity of these options and others like them poses a challenge, or at least a subtle shift in pricing incentives, for the makers of television content. Traditionally, content has been monetized by ads, where advertisers could be confident that the whole viewership of a given show would be tuned in for whatever was placed in the midst of an episode. Now, the wealthiest, best-educated, most consumer-electronics-hungry segments of the television audience–among the most valuable viewers to advertisers–are able to absent themselves from the ad-viewing public.

This problem is worse than just losing some fraction of the audience: it’s about losing a particular fraction of the audience. If x percent of the audience skips the ads for the reasons mentioned in the first paragraph, then the remaining 100-x percent of the audience is the least tech-savvy, least consumer-electronics-acquisitive part of the audience, by and large a much less attractive demographic for advertisers. (A converse version of this effect may be true for the online advertising market, where every viewer is in front of a web browser or relatively fancy phone, but I’m less confident of that because of the active interest in ad-blocking technologies. Maybe online ad viewers will be a middle slice, savvy enough to be online but not to block ads?)

What will this mean for TV? Here’s one scenario: Television bifurcates. Ad-supported TV goes after the audience that still watches ads, those toward the lower part of the socioeconomic spectrum. Ads for Walmart replace those for designer brands. The content of ad-supported TV itself trends toward options that cater to the ad-watching demographic. Meanwhile, high-end TV emerges as an always ad-free medium supported by more direct revenue channels, with more and more of it arriving by something like the HBO route. These shows are underwritten by, and ultimately directed to, the ad-skipping but high-income crowd. So there won’t be advertisers clamoring to attract the higher-income viewers, as such, but those who invest in creating the shows in the first place will learn over time to cater to the interests and viewing habits of the elite.

Another scenario, which could play out in tandem with the first, is that there may be a strong appetite for a truly universal advertising medium, either because of the ease this creates for certain advertisers or because of the increasing revenue premium as such broad audiences become rarer and are bid up in value. In this case, you could imagine a Truman Show-esque effort to integrate advertising with the TV content. The ads would be unskippable because they wouldn’t exist or, put another way, would be the only thing on (some parts of) television.

Come Join Us Next Spring

It’s been an exciting summer here at the Center for Information Technology Policy. On Friday, we’ll be moving into a brand new building. We’ll be roughly doubling our level of campus activity—lectures, symposia and other events—from last year. You’ll also see some changes to our online activities, including a new, expanded Freedom to Tinker that will be hosted by the Center and will feature an expanded roster of contributors.

One of our key goals is to recruit visiting scholars who can enrich, and benefit from, our community. We’ve already lined up several visitors for the coming year, and will welcome them soon. But we also have space for several more. With the generous support of Princeton’s Woodrow Wilson School and School of Engineering and Applied Sciences, we are able to offer limited support for visitors to join us on a semester basis in spring 2009. The announcement, available here, reads as follows:

CITP Seeks Visiting Faculty, Fellows or Postdocs for Spring 2009 Semester

The Center for Information Technology Policy (CITP) at Princeton University is seeking visiting faculty, fellows, or postdocs for the Spring 2009 semester.

About CITP

Digital technologies and public life are constantly reshaping each other—from net neutrality and broadband adoption, to copyright and file sharing, to electronic voting and beyond.

Realizing digital technology’s promise requires a constant sharing of ideas, competencies and norms among the technical, social, economic and political domains.

The Center for Information Technology Policy is Princeton University’s effort to meet this challenge. Its new home, opening in September 2008, is a state-of-the-art facility designed from the ground up for openness and collaboration. Located at the intellectual and physical crossroads of Princeton’s engineering and social science communities, the Center’s research, teaching and public programs are building the intellectual and human capital that our technological future demands.

To see what this mission can mean in practice, take a look at our website, at http://citp.princeton.edu.

One-Term Visiting Positions in Spring 2009

The Center has secured limited resources from a range of sources to support visitors this coming spring. Visitors will conduct research, engage in public programs, and may teach a seminar during their appointment. They’ll play an important role at a pivotal time in the development of this new center. Visitors will be appointed to a visiting faculty or visiting fellow position, or a postdoctoral role, depending on qualifications.

We are happy to hear from anyone who works at the intersection of digital technology and public life. In addition to our existing strengths in computer science and sociology, we are particularly interested in identifying engineers, economists, lawyers, civil servants and policy analysts whose research interests are complementary to our existing activities. Levels of support and official status will depend on the background and circumstances of each appointee. Terms of appointment will be from February 1 until either July 1 or September 1 of 2009.

If you are interested, please email a letter of interest, stating background, intended research, and salary requirements, to David Robinson, Associate Director of the Center, at . Please include a copy of your CV.

Deadline: October 15, 2008.

Beyond this particular recruiting effort, there are other ways to get involved—interested students can apply for graduate study in the 2009-2010 school year, and we continue to seek out suitable candidates for externally-funded fellowships. More information about those options is here.

Lenz Ruling Raises Epistemological Questions

Stephanie Lenz’s case will be familiar to many of you: After publishing a 29-second video on YouTube that shows her toddler dancing to the Prince song “Let’s Go Crazy,” Ms. Lenz received email from YouTube, informing her that the video was being taken down at Universal Music’s request. She filed a DMCA counter-notification claiming the video was fair use, and the video was put back up on the site. Now Ms. Lenz, represented by the EFF, is suing Universal, claiming that the company violated section 512(f) of the Digital Millennium Copyright Act. Section 512(f) creates liability for a copyright owner who “knowingly materially misrepresents… that material or activity is infringing.”

On Wednesday, the judge denied Universal’s motion to dismiss the suit. The judge held that “in order for a copyright owner to proceed under the DMCA with ‘a good faith belief that the use of the material in the manner complained of is not authorized by the copyright owner, its agent, or the law,’ the owner must evaluate whether the material makes fair use of the copyright.”

The essence of Lenz’s claim is that when Universal sent a notice claiming her use was “not authorized by… the law,” they already knew her use was actually lawful. She cites news coverage that suggests that Universal’s executives watched the video and then, at Prince’s urging, sent a takedown notice they would not have opted to send on their own. Wednesday’s ruling gives the case a chance to proceed into discovery, where Lenz and the EFF can try to find evidence to support their theory that Universal’s lawyers recognized her use was legally authorized under fair use—but caved to Prince’s pressure and sent a spurious notice anyway.

Universal’s view is very different from Lenz’s and, apparently, from the judge’s—they claim that the sense of “not authorized by… the law” required for a DMCA takedown notice is that a use is unauthorized in the first instance, before possible fair use defenses are considered. This position is very important to the music industry’s current practice of sending automated takedown notices based on recognizing copyrighted works; if copyright owners were required to form any kind of belief about the fairness of a use before asking for a takedown, then this kind of fully computer-automated mass request might not be possible, since it’s hard to imagine a computer performing the four-factor weighing test that informs a fair use determination.
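
To make that gap concrete, here is a minimal sketch of what such an automated flagging step might look like, assuming a simple fingerprint-matching scheme. Every name, data structure, and example value below is my own illustrative assumption, not a description of any actual industry system; the point is only that the match is mechanical and the fair use question never enters.

    # Hypothetical sketch of an automated takedown bot, assuming a simple
    # fingerprint-matching scheme. Every name, data structure, and value
    # here is an illustrative assumption, not any real system's design.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Upload:
        url: str
        fingerprint: str  # assume some audio-fingerprinting step produced this

    # Toy catalog mapping fingerprints of copyrighted recordings to titles.
    CATALOG = {
        "fp:lets-go-crazy": "Let's Go Crazy",
    }

    def flag_for_takedown(upload: Upload) -> Optional[str]:
        """Return the matched title if the upload's fingerprint is in the catalog.

        Note what is absent: nothing here weighs the four fair use factors
        (purpose, nature of the work, amount used, market effect). The match
        is purely mechanical.
        """
        return CATALOG.get(upload.fingerprint)

    if __name__ == "__main__":
        clip = Upload(url="http://example.com/dancing-toddler",
                      fingerprint="fp:lets-go-crazy")
        title = flag_for_takedown(clip)
        if title is not None:
            print(f"Flagged {clip.url}: matches '{title}' (fair use not evaluated)")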

Seen in this light, the case has at least as much to do with the murky epistemology of algorithmic inference as it does with fair use per se. The music industry uses takedown bots to search out and flag potentially infringing uses of songs, and then in at least some instances to send automated takedown notices. If humans at Universal manually review a random sample of the bot’s output, and the statistics and sampling issues are well handled, and they find that a certain fraction of the bot’s output is infringing material, then they can make an inference. They can infer with the statistically appropriate level of confidence that the same fraction of songs in a second sample, consisting of bot-flagged songs “behind a curtain” that have not been manually reviewed, are also infringing. If the fraction of material that’s infringing is high enough—e.g. 95 percent?—then one can reasonably or in good faith (at least in the layperson, everyday sense of those terms) believe that an unexamined item turned up by the bot is infringing.
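
To make the statistical step concrete, here is a minimal sketch of that sampling inference. The review counts, the confidence calculation, and the “good faith” threshold are all hypothetical choices for illustration; neither the DMCA nor the Lenz ruling specifies any such numbers.

    # A minimal sketch of the sampling inference described above. The review
    # counts and the threshold are hypothetical, chosen only to illustrate
    # the calculation; nothing here reflects any actual industry practice.

    from math import sqrt

    def wilson_lower_bound(successes: int, n: int, z: float = 1.96) -> float:
        """Lower endpoint of the two-sided 95% Wilson score interval
        for a binomial proportion (z = 1.96)."""
        if n == 0:
            return 0.0
        p_hat = successes / n
        denom = 1 + z * z / n
        center = p_hat + z * z / (2 * n)
        margin = z * sqrt(p_hat * (1 - p_hat) / n + z * z / (4 * n * n))
        return (center - margin) / denom

    # Suppose reviewers examine a random sample of 400 bot-flagged clips and
    # find 392 of them to be infringing (hypothetical numbers).
    reviewed, infringing = 400, 392
    lower = wilson_lower_bound(infringing, reviewed)

    # The inference: with high confidence, at least this fraction of the
    # unreviewed, bot-flagged clips are infringing as well. Whether that
    # supports a "good faith belief" depends on where the threshold sits.
    GOOD_FAITH_THRESHOLD = 0.95  # purely illustrative
    print(f"Lower bound on infringing fraction: {lower:.3f}")
    print("clears threshold" if lower >= GOOD_FAITH_THRESHOLD
          else "does not clear threshold")

The Wilson interval is used here only because it behaves sensibly for proportions near one; any standard confidence interval for a binomial proportion would make the same point.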

The same might hold true if fair use is also considered: As long as a high enough fraction of the material flagged by the bot in the first, manual human review phase turns out to be infringement-not-defensible-as-fair-use, a human can believe reasonably that a given instance flagged by the bot—still “behind the curtain” and not seen by human eyes—is probably an instance of infringement-not-defensible-as-fair-use.

The general principle here would be: If you know the bot is usually right (for some definition of “usually”), and don’t have other information about some case X on which the bot has offered a judgment, then it is reasonable to believe that the bot is right in case X—indeed, it would be unreasonable to believe otherwise, without knowing more. So it seems like there is some level of discernment, in a bot, that would suffice in order for a person to believe in good faith that any given item identified by the bot was an instance of infringement suitable for a DMCA complaint. (I don’t know what the threshold should be, who should decide, or whether or not the industry’s current bots meet it.) This view, when it leads to auto-generated takedown requests, has the strange consequence that music industry representatives are asserting that they have a “good faith belief” that certain copies of certain media are infringing, even when they aren’t aware that those copies exist.

Here’s where the sidewalk ends, and I begin to wish I had formal legal training: What are the epistemic procedures required to form a “good faith belief”? How about a “reasonable belief”? This kind of question in the law surely predates computers: It was Oliver Wendell Holmes, Jr. who first created the reasonable man, a personage Louis Menand has memorably termed “the fictional protagonist of modern liability theory.” I don’t even know to whom this question should be addressed: Is there a single standard nationally? Does it vary circuit by circuit? Statute by statute? Has it evolved in response to computer technology? Readers, can you help?

Is the New York Times a Confused Company?

Over lunch I did something old-fashioned—I picked up and read a print copy of the New York Times. I was startled to find, on the front of the business section, a large, colorfully decorated feature headlined “Is Google a Media Company?” The graphic accompanying the story shows a newspaper masthead titled “Google Today,” followed by a list of current and imagined future offerings, from Google Maps and Google Earth to Google Drink and Google Pancake. Citing the new, Wikipedia-esque service Knol, and using the example of that service’s wonderful entry on buttermilk pancakes, the Times story argues that Knol’s launch has “rekindled fears among some media companies that Google is increasingly becoming a competitor. They foresee Google’s becoming a powerful rival that not only owns a growing number of content properties, including YouTube, the top online video site, and Blogger, a leading blogging service, but also holds the keys to directing users around the Web.”

I hope the Times’s internal business staff is better grounded than its reporters and editors appear to be—otherwise, the Times is in even deeper trouble than its flagging performance suggests. Google isn’t becoming a media company—it is one now and always has been. From the beginning, it has sold the same thing that the Times and other media outlets do: Audiences. Unlike the traditional media outlets, though, online media firms like Google and Yahoo have decoupled content production from audience sales. Whether selling ads alongside search results, or alongside user-generated content on Knol or YouTube, or displaying ads on a third party blog or even a traditional media web site, Google acts as a broker, selling audiences that others have worked to attract. In so doing, they’ve thrown the competition for ad dollars wide open, allowing any blog to sap revenue (proportionately to audience share) from the big guys. The whole infrastructure is self-service and scales down to be economical for any publisher, no matter how small. It’s a far cry from an advertising marketplace that relies, as the newspaper business traditionally has, on human ad sales. In the new environment, it’s a buyer’s market for audiences, and nobody is likely to make the kinds of killings that newspapers once did. As I’ve argued before, the worrying and plausible future for high-cost outlets like the Times is a death of a thousand cuts as revenues get fractured among content sources.

One might argue that sites like Knol or Blogger are a competitive threat to established media outlets because they draw users away from those outlets. But Google’s decision to add these sites hurts its media partners only to the (small) extent that the new sites increase the total amount of competing ad inventory on the web—that is, the supply of people-reading-things to whom advertisements can be displayed. To top it all off, Knol lets authors, including any participating old-media producers, capture revenue from the eyeballs they draw. The revenues in settings like these are slimmer because they are shared with Google, as opposed to being sold directly by NYTimes.com or some other establishment media outlet. It’s hard to judge whether the Knol reimbursement would be higher or lower than the equivalent payment if an ad were displayed on the established outlet’s site, since Google does not disclose the fraction of ad revenue it shares with publishers in either case. In any event, the addition of one more user-generated content site, whether from Google or anyone else, is at most a footnote to the media industry trend: Google’s revenues come from ads, and that makes it a media company, pure and simple.