April 24, 2014


The New Yorker Covers Wikipedia

Writing in this week’s New Yorker, Stacy Schiff takes a look at the Wikipedia phenomenon. One sign that she did well: The inevitable response page at Wikipedia is almost entirely positive. Schiff’s writing is typical of what makes the New Yorker great. It has rich historical context, apt portrayals of the key characters involved in the story, and a liberal sprinkling of humor, bons mots and surprising factual nuggets. It is also, as all New Yorker pieces are, rigorously fact-checked and ably edited.

Normally, I wouldn’t use FTT as a forum to “talk shop” about a piece of journalism. But in this case, the medium really is the message – the New Yorker’s coverage of Wikipedia is itself a showcase for some of the things old-line publications still do best. As soon as I saw Schiff’s article in my New Yorker table of contents (yes, I still read it in hard copy, and yes, I splurge on getting it mailed abroad to Oxford) I knew it would be a great test case. On the one hand, Wikipedia is the preeminent example of community-driven, user-generated content. Any coverage of Wikipedia, particularly any critical coverage, is guaranteed to be the target of harsh, well-informed scrutiny by the proud community of Wikipedians. On the other, The New Yorker’s writing is, indisputably, among the best out there, and its fact checking department is widely thought to be the strongest in the business.

When reading Wikipedia, one has to react to surprising claims by entertaining the possibility that they might not be true. The less plausible a claim sounds, the more skepticism one must have when considering it. In some cases, a glance at the relevant Talk page helps, since this can at least indicate whether or not the claim has been vetted by other Wikipedians. But not every surprising claim has backstory available on the relevant talk page, and not every reader has the time or inclination to go to that level of trouble for every dubious claim she encounters in Wikipedia. The upshot is that implausible or surprising claims in Wikipedia often get taken with a grain or more of salt, and not believed – and on the other hand, plausible-sounding falsehoods are, as a result of their seeming plausibility, less likely to be detected.

On the other hand, rigorous fact-checking (at least in the magazine context where I have done it and seen it) does not simply mean that someone is trying hard to get things right: It means that someone’s job depends on their being right, and it means that the particularly surprising claims in fact-checked content can be counted on to be well documented by the intense, aspiring, nervous young person at the fact checker’s desk. At TIME, for example, every single word that goes into the magazine gets a physical check mark, on the fact-checkers’ copy, once its factual content has been verified, with the documentation of the fact’s truth filed away in an appropriate folder (the folders, in a holdover from an earlier era, are still called “carbons”). It is every bit as grueling as it sounds, and entirely worthwhile. The same system is in use across most of the Time, Inc. magazine publishing empire, which includes People, Fortune, and Sports Illustrated and represents a quarter of the U.S. consumer magazine market. It’s not perfect, of course – reports of what someone said in a one-on-one interview, for example, can only ever be as good as the reporter’s notes or tape recording. But it is very, very good. In my own case, knowing what goes into the fact-checking process at places like TIME and The New Yorker gives me a much higher level of confidence in their accuracy than I have when, as I often do, I learn something new from Wikipedia.

The guarantee of truth that backs up New Yorker copy gives its content a much deeper impact. Consider these four paragraphs from Schiff’s story:

The encyclopedic impulse dates back more than two thousand years and has rarely balked at national borders. Among the first general reference works was Emperor’s Mirror, commissioned in 220 A.D. by a Chinese emperor, for use by civil servants. The quest to catalogue all human knowledge accelerated in the eighteenth century. In the seventeen-seventies, the Germans, champions of thoroughness, began assembling a two-hundred-and-forty-two-volume masterwork. A few decades earlier, Johann Heinrich Zedler, a Leipzig bookseller, had alarmed local competitors when he solicited articles for his Universal-Lexicon. His rivals, fearing that the work would put them out of business by rendering all other books obsolete, tried unsuccessfully to sabotage the project.

It took a devious Frenchman, Pierre Bayle, to conceive of an encyclopedia composed solely of errors. After the idea failed to generate much enthusiasm among potential readers, he instead compiled a “Dictionnaire Historique et Critique,” which consisted almost entirely of footnotes, many highlighting flaws of earlier scholarship. Bayle taught readers to doubt, a lesson in subversion that Diderot and d’Alembert, the authors of the Encyclopédie (1751-80), learned well. Their thirty-five-volume work preached rationalism at the expense of church and state. The more stolid Britannica was born of cross-channel rivalry and an Anglo-Saxon passion for utility.

Wales’s first encyclopedia was the World Book, which his parents acquired after dinner one evening in 1969, from a door-to-door salesman. Wales—who resembles a young Billy Crystal with the neuroses neatly tucked in—recalls the enchantment of pasting in update stickers that cross-referenced older entries to the annual supplements. Wales’s mother and grandmother ran a private school in Huntsville, Alabama, which he attended from the age of three. He graduated from Auburn University with a degree in finance and began a Ph.D. in the subject, enrolling first at the University of Alabama and later at Indiana University. In 1994, he decided to take a job trading options in Chicago rather than write his dissertation. Four years later, he moved to San Diego, where he used his savings to found an Internet portal. Its audience was mostly men; pornography—videos and blogs—accounted for about a tenth of its revenues. Meanwhile, Wales was cogitating. In his view, misinformation, propaganda, and ignorance are responsible for many of the world’s ills. “I’m very much an Enlightenment kind of guy,” Wales told me. The promise of the Internet is free knowledge for everyone, he recalls thinking. How do we make that happen?

As an undergraduate, he had read Friedrich Hayek’s 1945 free-market manifesto, “The Use of Knowledge in Society,” which argues that a person’s knowledge is by definition partial, and that truth is established only when people pool their wisdom. Wales thought of the essay again in the nineteen-nineties, when he began reading about the open-source movement, a group of programmers who believed that software should be free and distributed in such a way that anyone could modify the code. He was particularly impressed by “The Cathedral and the Bazaar,” an essay, later expanded into a book, by Eric Raymond, one of the movement’s founders. “It opened my eyes to the possibility of mass collaboration,” Wales said.

After reading this copy, and knowing how The New Yorker works, one can be confident that a devious Frenchman named Pierre Bayle once really did propose an encyclopedia composed entirely of errors. The narrative is put together well. It will keep people reading and will not cause confusion. Interested readers can follow up on a nugget like Wales’ exposure to the Hayek essay by reading it themselves (it’s online here).

I am not a Wikipedia denialist. It is, and will continue to be, an important and valuable resource. But the expensive, arguably old-fashioned approach of The New Yorker and other magazines still delivers a level of quality I haven’t found, and do not expect to find, in the world of community-created content.


More on Meta-Freedom

Tim Lee comments:

The fact that you can waive your free speech rights via contract doesn’t mean that it would be a good idea to enact special laws backing up those contracts with criminal penalties. I think you’re missing an important middle ground here. The choice isn’t between no tinkering rights and constitutionally mandated tinkering rights. There’s a third option: the law should neither restrict nor guarantee tinkering rights. You’re welcome to tinker, but you’re also welcome to contract away your freedom to tinker.

The DMCA sticks its thumbs on the “no tinkering” side of the scale by giving DRM creators rights beyond those available to parties in ordinary contract disputes, and by roping third parties into the DRM “contract” whether they’ve agreed to it or not. If I sign a NDA, and then I break it, the company can sue me. But they can’t have me thrown in jail. And they can’t necessarily sue the journalist to whom I divulged the NDA’d information.

But repealing the DMCA would not create an inalienable right to tinker. It would simply put the freedom to tinker on the same plane as all our other rights: you’d have the right to sign them away by contract, but in the absence of a contract you would retain them.

He is right. There is an important middle-ground position that calls for DMCA repeal without calling for the contracts that restrict tinkering rights to be unenforceable. There is certainly a great deal to be said in favor of such a position. I would still say that the mixture-of-motives issue applies, because when people are allowed to sign away their tinkering rights, many of them will, and this outcome will be particularly unwelcome among power users and technology policy activists.


The Freedom to Tinker with Freedom?

Doug Lay, commenting on my last post, pointed out that the Zune buyout would help make a world of DRM-enabled music services more attractive. “Where,” he asked, “does this leave the freedom to tinker?”

Anti-DMCA activism has tended to focus on worst-case, scary scenarios that can spur people to action. It’s a standard move in politics of all kinds, aptly captured in the title of a 2005 BBC documentary about Bush and Blair, The Power of Nightmares. In the context of a world of DRM gone mad, it’s obvious why we need the freedom to tinker. We need it because (in that world) opaque, tinker-proof devices protected by restrictive laws would be extremely harmful to consumers. The only way to make sure that the experience of the average media viewer or software user doesn’t go down the tubes, in this scenario, is to make sure that consumers, either legislatively or through individual choice, never let DRM get off the ground.

But consider an alternative possibility. The Darknet is a permanent backdrop for any real-world system. The major players know this – after all, it was a team at Microsoft Research that helped to launch the Darknet idea. The big players will, in the long run, be smart enough not to drive users into the arms of the Darknet. They will compete with the Darknet, and with each other, and will end up producing systems that most consumers think are fine. Yes, consumers will (still) chafe at the restrictions on DRM-protected systems ten or twenty years from now. But on the whole, they will find that these systems are attractive, and worth investing in.

Who loses in this scenario? Ed and others have argued that all consumers will suffer to some degree because we all enjoy the benefits that come from a few intrepid power users exercising the freedom to tinker. There are educational benefits that come from tinkering and, perhaps most importantly, the freedom to tinker keeps technologies flexible and leaves room for them to interoperate in surprising ways not initially envisioned by their creators. And, as Alex has pointed out to me, the social costs of tinkerproofing are cumulative in such a way that there may be a collective bargaining problem: we may have a situation in which the freedom to tinker does not matter very much to most individuals, but we’d all be better off if, collectively, we assigned a higher value to our individual freedom to tinker than we actually do.

These arguments certainly have significant merit. Together, they (and others like them) might be enough to make it the case that we should create legal protection for the freedom to tinker, or at least build a social consensus for the importance of tinkering.

But I think the people who lose the most, in this DRM-isn’t-so-bad scenario, are the power users. People who like to poke around under the hood. People who are outliers, attaching more importance to the freedom to tinker than a typical consumer attaches to it. I’m talking, in other words, about us.

We the reader-participants of www.freedom-to-tinker.com are an unusual bunch. We really like to tinker. In my own case, I know that I care more about things like being able to time and space shift my media collection than the average person does. I derive a certain strange pleasure from being able to change the way the interface on my desktop computer looks. I buy books so I can mark them up, even though it would be much cheaper and more space-efficient to use a library.

In fact, when I think about it, I have to admit that I would find a world where DRM works and the ability to tinker can be bargained away to be a bit of a downer. I know that the equilibrium point the market reaches, in such a case, will be based on the moderate importance most people attach to tinkering, rather than the high importance that I attach to it. I’ll probably still buy in to some DRM-based music scheme in the long run, just as I still go to the movies even while wishing that they would focus more on plot and less on special effects. But I’ll miss the tinkering.

If the government were to put a legal guarantee behind the freedom to tinker, it would be reducing people’s freedom to contract by telling them they can’t bargain away their tinkering rights. It would force on consumers as a whole an outcome that they would manifestly not choose for themselves in the private market. Yes, it is possible that externalities or collective action issues could justify this coercion. But even if those considerations didn’t justify the coercion, part of me would still want it to happen, because that way, I’d get to keep tinkering rights that, under a different terrain of options, I would end up choosing to relinquish.

I apparently haven’t mastered the art of ending a blog post, so just as I closed last time with a “bottom line,” this one gets a “moral of the story.” The moral of the story is that many of us, who may find ourselves arguing based on public reasons for public policies that protect the freedom to tinker, also have a private reason to favor such policies. The private reason is that we ourselves care more about tinkering than the public at large does, and we would therefore be happier in a protected-tinkering world than the public at large would be. We all owe it to ourselves, to each other, and to the public with whom we communicate to be careful and candid about our mixture of motivations.


Rethinking DRM Dystopia

Thanks to Ed for the flattering introduction – now if only I can live up to it! It’s an honor (and a little intimidating) to be guest blogging on FTT after several years as an avid reader. I’ve never blogged before, but I am looking forward to the thoughtful, user-driven exchanges and high transparency that blogs in general, and FTT in particular, seem to cultivate. Please consider yourself, dear reader, every bit as warmly invited to comment and engage with my posts as you are with Ed’s and Alex’s.

I want to use this first post to flag something that startled me, and to speculate a little about the lessons that might be drawn from it. I was surprised to read recently that Zune, Microsoft’s new music service, will probably scan users’ iTunes libraries and automatically buy for them (at Microsoft’s expense) copies of any protected music they own on the iTunes service.

Let’s suppose, for the sake of argument, that this early report is right – that Microsoft is, in fact, going to make an offer to all iTunes users to replicate their libraries of iTunes, FairPlay-protected music on the new Zune service at no added cost to the users. There are several questions of fact that leap to mind. Did Microsoft obtain the licensing rights to all of the music that is for sale on iTunes? If not, there will be some iTunes music that is not portable to the new service. Will copyright holders be getting the same amount from Microsoft, when their songs are re-purchased on behalf of migrating iTunes users, as they will get when a user makes a normal purchase of the same track in the Zune system? The copyright holders have a substantial incentive to offer Microsoft a discount on this kind of “buy out” mass purchasing. As Ed pointed out to me, it is unlikely that users would otherwise choose to re-purchase all of their music, at full price, out of their own pockets simply in order to be able to move from iTunes to Zune. By discounting their tracks to enable migration to a new service, the copyright holders would be helping create a second viable mass platform for online music sales – a move that would, in the long run, probably increase their sales.

I have spent a fair amount of time and energy worrying about dystopian scenarios in which a single vertically integrated platform, protected by legally-reinforced DRM technologies, locks users in and deprives them not only of first-order options (like the ability to copy songs to a second computer), but also of the second-order freedom to migrate away from a platform whose DRM provisions, catalog, or other features ultimately compare unfavorably to alternative platforms.

Of course, as it has turned out, the dominant DRM platform at the moment, FairPlay, actually does let people make copies of their songs on multiple computers. It is, in general, a fair bit less restrictive than what some of us have worried consumers might ultimately be saddled with. Indeed, the relatively permissive structure of FairPlay DRM is very likely one of the factors that has contributed to Apple’s success in a marketplace that has seen many more restrictive alternative systems fail to take hold. But the dominance of Apple’s whole shiny white realm of vertical integration in the digital music market still has made it seem like it would be hard to opt against Apple, even if the platform were to get worse or if better platforms were to emerge to challenge it.

But now it seems that it may actually be easy as pie for any iTunes user to leave the Apple platform. The cost of the Zune player, which will presumably be exclusive to the Zune music service just as the iPod is to iTunes, is a significant factor, but given that reliability issues require users to replace iPods frequently, buying a new player doesn’t actually change the cost equation for a typical user over the long run.

What are the lessons here? Personally, I feel like I underestimated the power of the market to solve the possible problems raised by DRM. It appears that the “lock in” phenomenon creates a powerful incentive for competitors to invest heavily in acquiring new users, even to the point of buying them out. Microsoft is obviously the most powerful player in the technology field, and perhaps some will argue it is unique in its ability to make this kind of an offer. But I doubt that – if the Zune launch is a success, it will set a powerful precedent that DRM buyouts can be worthwhile. And even if Microsoft were unique in its ability to offer a buyout, the result in this case is that we’ll have two solid, competing platforms, each one vertically integrated. It’s no stretch of the imagination to think Apple may respond with a similar offer to lure Zune users to iTunes.

Bottom line: Markets are often surprisingly good at sorting out this kind of thing. Technology policy watchers underestimate the power of competition at our peril. It’s easy to see Microsoft or Apple as established firms coasting on their vertically integrated dominance, but the Zune buyout is a powerful reminder that that’s not what it feels like to be in this or most any other business. These firms, even the biggest, best and most dominant, are constantly working hard to outdo one another. Consumers often do very well as a result… even in a world of DRM.


Guest Blogger: David Robinson

I’m thrilled to welcome David Robinson as a guest blogger. David was a star student in my InfoTech and the Law course at Princeton a few years ago. He received a philosophy degree from Princeton and proceeded to Oxford, studying philosophy and political economy on a Rhodes Scholarship. A budding journalist, he was opinion editor of the Daily Princetonian and interned at Time and the Wall Street Journal. David will return to the States as the first managing editor of The American, a business magazine that will debut in a few months.


Banner Ads Launch Security Attacks

An online banner advertisement that ran on MySpace.com and other sites over the past week used a Windows security flaw to infect more than a million users with spyware when people merely browsed the sites with unpatched versions of Windows …

So says Brian Krebs at the Washington Post’s Security Fix blog. The ads, he says, contained a booby-trapped image that exploited a Windows security flaw to install malicious software. (Microsoft released a patch for the flaw back in January.)

Is this MySpace’s fault? I’m not asking whether MySpace is legally liable for the attack, though I’m curious what lawyers have to say about that question. I’m asking from an ethical and practical standpoint. Recognizing that the attacker himself bears primary responsibility, does MySpace bear some responsibility too?

A naive user who saw the ad displayed on a MySpace page would assume the ad was coming from MySpace. On a technical level, MySpace would not have served out the ad image, but would instead have put into the MySpace page some code directing the user’s browser to go to somebody else’s server and get an ad image; this other server would have actually provided the ad. MySpace’s business model relies on getting paid by ad agencies to embed ads in this way.
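To make the mechanics concrete, here is a minimal sketch of the embedding arrangement described above. The hostnames and the embed markup are hypothetical, invented for illustration; the point is simply that the publisher’s page contains only a pointer, and the browser fetches the actual ad bytes from a server the publisher does not control.

```python
# Hypothetical sketch of third-party ad embedding. The publisher's
# server ("myspace.example") sends only the embed markup; the user's
# browser then fetches the ad creative from the ad network's server
# ("adnetwork.example"). Both hostnames are made up for illustration.
from urllib.parse import urlparse

PAGE_HOST = "myspace.example"

# The embed code the publisher includes in its page markup:
ad_embed = '<img src="http://adnetwork.example/serve?slot=banner1">'

def ad_host(embed_html: str) -> str:
    """Extract the host the browser will contact for the ad image."""
    start = embed_html.index('src="') + len('src="')
    end = embed_html.index('"', start)
    return urlparse(embed_html[start:end]).hostname

# The ad bytes come from a different server than the page itself:
print(ad_host(ad_embed))                # adnetwork.example
print(ad_host(ad_embed) == PAGE_HOST)   # False
```

This is why a booby-trapped ad can attack visitors to a page even though the page’s operator never handled the malicious bytes.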

Of course, MySpace is in the business of displaying content submitted by other people. Any MySpace user could have put a similarly booby-trapped image on his own MySpace page; this has almost certainly happened. But it’s one thing to go to Johnny’s MySpace page and be attacked by Johnny. It’s another thing to go to your friend’s MySpace page and get attacked because of something that MySpace told you to display. If we’re willing to absolve MySpace of responsibility for Johnny’s attack – and I think we should be – it doesn’t follow that we have to hold MySpace blameless for the ad attack.

Nor does the fact that MySpace (presumably) does not vet the individual ads resolve the question. Failure to take a precaution does not in itself imply that the precaution is unnecessary. MySpace could have decided to vet every ad, at some cost, but instead they presumably decided to vet the ad agencies they are working with, and rely on those agencies to vet the ads.

The online ad business is a complicated web of relationships and deals. Some agencies don’t sell ads directly but make deals to display ads sold by others; and those others may in turn make the same kinds of deals, so that ads are placed on sites not directly but through a chain of intermediaries. The more the sale and placement of ads is automated, the fewer people there are in the loop to spot harmful or inappropriate ads. And the more complex and indirect the mechanisms of ad placement become, the harder it is for anyone to tell where an ad came from or how it ended up being displayed on a particular site. Ben Edelman has documented how these factors can cause ads for reputable companies to be displayed by spyware. Presumably the same kinds of factors enabled the display of these attack ads on MySpace and elsewhere.

If this is true, then these sorts of ad-based attacks will be a systemic problem unless the structure of the online ad business changes.


Taking Stevens Seriously

From the lowliest blogger to Jon Stewart, everybody is laughing at Sen. Ted Stevens and his remarks (1.2MB mp3) on net neutrality. The sound bite about the Internet being “a series of tubes” has come in for the most ridicule.

I’ll grant that Stevens sounds pretty confused on the recording. But let’s give the guy a break. He was speaking off the cuff in a meeting, and he sounds a bit agitated. Have you ever listened to a recording of yourself speaking in an unscripted setting? For most people, it’s pretty depressing. We misspeak, drop words, repeat phrases, and mangle sentences all the time. Normally, listeners’ brains edit out the errors.

In this light, some of the ridicule of Stevens seems a bit unfair. He said the Internet is made up of “tubes”. Taken literally, that’s crazy. But experts talk about “pipes” all the time. Is the gap between “tubes” and “pipes” really so large? And when Stevens says that his staff sent him “an Internet” and it took several days to arrive, it sounds to me like he meant to say “an email” and just misspoke.

So let’s take Stevens seriously, and consider the possibility that somewhere in his head, or in the head of a staffer telling him what to say, there was a coherent argument that was supposed to come out of Stevens’ mouth but was garbled into what we heard. Let’s try to reconstruct that argument and see if it makes any sense.

In particular, let’s look at the much-quoted core of Stevens’ argument, as transcribed by Ryan Singel. Here is my cleaned-up restatement of that part of Stevens’ remarks:

NetFlix delivers movies by mail. What happens when they start delivering them by download? The Internet will get congested.

Last Friday morning, my staff sent me an email and it didn’t arrive until Tuesday. Why? Because the Internet was congested.

You want to help consumers? Consumers don’t benefit when the Net is congested. A few companies want to flood the Internet with traffic. Why shouldn’t ISPs be able to manage that traffic, so other traffic can get through? Your regulatory approach would make that impossible.

The Internet doesn’t have infinite capacity. It’s like a series of pipes. If you try to push too much traffic through the pipes, they’ll fill up and other traffic will be delayed.

The Department of Defense had to build their own network so their time-critical traffic wouldn’t get blocked by Internet congestion.

Maybe the companies that want to dump so much traffic on the Net should pay for the extra capacity. They shouldn’t just dump their traffic onto the same network links that all of us are paying for.

We don’t have regulation now, and the Net seems to be working reasonably well. Let’s leave it unregulated. Let’s wait to see if a problem really develops.

This is a rehash of two of the standard arguments of neutrality regulation opponents: let ISPs charge sites that send lots of traffic through their networks; and it’s not broke so don’t fix it. Nothing new here, but nothing scandalous either.

His examples, on the other hand, seem pretty weak. First, it’s hard to imagine that NetFlix would really use up so much bandwidth that they or their customers weren’t already paying for. If I buy an expensive broadband connection, and I want to use it to download a few gigabytes a month of movies, that seems fine. The traffic I slow down will mostly be my own.
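A quick back-of-envelope calculation supports this. The figures below are my own illustrative assumptions (roughly 4 GB of downloads per month on a 6 Mbps connection), not numbers from Stevens or anyone else; the point is only the order of magnitude.

```python
# Back-of-envelope check: how much of a broadband link does
# "a few gigabytes a month of movies" consume, averaged over
# the month? All numbers here are illustrative assumptions.

GB = 10**9  # bytes
movie_bytes_per_month = 4 * GB        # assume ~4 GB of downloads/month
seconds_per_month = 30 * 24 * 3600

avg_bits_per_sec = movie_bytes_per_month * 8 / seconds_per_month
link_bits_per_sec = 6 * 10**6         # assume a 6 Mbps connection

print(f"average load: {avg_bits_per_sec / 1000:.1f} kbps")
print(f"share of link: {avg_bits_per_sec / link_bits_per_sec:.2%}")
```

Averaged over a month, a few gigabytes of movies works out to roughly a hundredth of a megabit per second, a fraction of one percent of even a modest broadband link. The downloads are bursty, of course, but the burst congestion falls mainly on the downloader’s own connection.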

Second, the slow email wouldn’t have been caused by general congestion on the Net. The cause must be either an inattentive person or downtime of a Senate server. My guess is that Stevens was searching his memory for examples of network delays, and this one popped up.

Third, the DoD has plenty of reasons other than congestion to have its own network. Secrecy, for example. And a need for redundancy in case of a denial-of-service attack on the Internet’s infrastructure. Congestion probably ranks pretty far down the list.

The bottom line? Stevens may have been trying to make a coherent argument. It’s not a great argument, and his examples were poorly chosen, but it’s far from the worst argument ever heard in the Senate.

Why then the shock and ridicule from the Internet public? Partly because the recording was a perfect seed for a Net ridicule meme. But partly, too, because people unfamiliar with everyday Washington expect a high level of debate in the Senate, and Stevens’ remarks, even if cleaned up, don’t nearly qualify. As Art Brodsky of Public Knowledge put it, “We didn’t [post the recording] to embarrass Sen. Stevens, but to give the public an inside view of what can go on at a markup. Just so you know.” Millions of netizens now know, and they’re alarmed.


Net Neutrality: Strike While the Iron Is Hot?

Bill Herman at the Public Knowledge blog has an interesting response to my net neutrality paper. As he notes, my paper was mostly about the technical details surrounding neutrality, with a short policy recommendation at the end. Here’s the last paragraph of my paper:

There is a good policy argument in favor of doing nothing and letting the situation develop further. The present situation, with the network neutrality issue on the table in Washington but no rules yet adopted, is in many ways ideal. ISPs, knowing that discriminating now would make regulation seem more necessary, are on their best behavior; and with no rules yet adopted we don’t have to face the difficult issues of line-drawing and enforcement. Enacting strong regulation now would risk side-effects, and passing toothless regulation now would remove the threat of regulation. If it is possible to maintain the threat of regulation while leaving the issue unresolved, time will teach us more about what regulation, if any, is needed.

Herman argues that waiting is a mistake, because the neutrality issue is in play now and that can’t continue for long. Normally, issues like these are controlled by a small group of legislative committee members, staffers, interest groups and lobbyists, but occasionally an issue opens up for wider debate, giving broader constituencies influence over what happens. That’s when most of the important policy changes happen. Herman argues that the net neutrality issue is open now, and if we don’t act it will close again and we (the public) will lose our influence on the issue.

He makes a good point: the issue won’t stay in the public eye forever, and when it leaves the public eye change will be more difficult. But I don’t think it follows that we should enact strong neutrality regulation now. There are several reasons for this.

Tim Lee offers one reason in his response to Herman. Here’s Tim:

So let’s say Herman is right and the good guys have limited resources with which to wage this fight. What happens once network neutrality is the law of the land, Public Knowledge has moved onto its next legislative issue, and the only guys in the room at FCC hearings on network neutrality implementation are telco lawyers and lobbyists? The FCC will interpret the statute in a way that’s friendly to the telecom industry, for precisely the reasons Herman identifies. Over time, “network neutrality” will be redefined and reinterpreted to mean something the telcos can live with.

But it’s worse than that, because the telcos aren’t likely to stop at rendering the law toothless. They’re likely to continue lobbying for additional changes to the rules—by the FCC or Congress—that help them exclude new competitors and cement their monopoly power. Don’t believe me? Look at the history of cable franchising. Look at the way the CAB helped cartelize the airline industry, and the ICC cartelized surface transportation. Look at FCC regulation of telephone service and the broadcast spectrum. All of those regulatory regimes were initially designed to control oligopolistic industries too, and each of them ended up becoming part of the problem.

I’m wary of Herman’s argument for other reasons too. Most of all, I’m not sure we know how to write neutrality regulations that will have the effects we want. I’m all in favor of neutrality as a principle, but it’s one thing to have a goal and another thing entirely to know how to write rules that will achieve that goal in practice. I worry that we’ll adopt well-intentioned neutrality regulations that we’ll regret later – and if the issue has closed again by then, it will be even harder to undo our mistakes. Waiting will help us learn more about the problem and how to fix it.

Finally, I worry that Congress will enact toothless rules or vague statements of principle, and then declare that the issue has been taken care of. That’s not what I’m advocating; but I’m afraid it’s what we’ll get if we insist that Congress pass a net neutrality bill this year.

In any case, odds are good that the issue will be stalemated, and we’ll have to wait for the new Congress next year before anything happens.


New Net Neutrality Paper

I just released a new paper on net neutrality, called Nuts and Bolts of Network Neutrality. It’s based on several of my earlier blog posts, with some new material.


CleanFlicks Ruled an Infringer

Joe Gratz writes,

Judge Richard P. Matsch of the United States District Court for the District of Colorado [on] Wednesday filed this opinion granting partial summary judgment in favor of the movie studios, finding that CleanFlicks infringes copyright. This is not a terribly surprising result; CleanFlicks’ business involves selling edited DVD-Rs of Hollywood movies, buying and warehousing one authorized DVD of the movie for each edited copy it sells.

CleanFlicks edited the movies by bleeping out strong language, and removing or obscuring depictions of explicit sex and violence. (Tim Lee also has interesting commentary: 1 2 3.)

The opinion is relatively short, and worth reading if you’re interested in copyright. The judge ruled that CleanFlicks violated the studios’ exclusive rights to make copies of the movies, and to distribute copies to the public. He said that what CleanFlicks did was not fair use.

There are at least four interesting aspects to the opinion.

First, the judge utterly rejected CleanFlicks’s public policy argument. CleanFlicks had argued that public policy should favor allowing its business, because it enables people with different moral standards to watch movies, and it lets people compare the redacted and unredacted versions to decide whether the language, sex, and violence are really necessary to the films. The judge noted that Congress, in debating and passing the Family Movie Act, during the pendency of this lawsuit, had chosen to legalize redaction technologies that didn’t make a new DVD copy, but had not legalized those like CleanFlicks that did make a copy. He said, reasonably, that he did not want to overrule Congress on this policy issue. But he went farther, saying that this public policy argument is “inconsequential to copyright law” (page 7).

Second, the judge ruled that the redacted copies of the movies are not derivative works. His reasoning here strikes me as odd. He says first that the redaction is not a transformative use, because it removes material but doesn’t add anything. He then says that because the redacted version is not transformative, it is not a derivative work (page 11). If it is true in general that redaction does not create a derivative work, this has interesting consequences for commercial-skipping technologies – my understanding is that the main copyright-law objection to commercial-skipping is that it creates an unauthorized derivative work by redacting the commercials.

Third, the judge was unimpressed with CleanFlicks’s argument that it wasn’t reducing the studios’ profits, and was possibly even increasing them by bringing the movie to people who wouldn’t have bought it otherwise. (Recall that for every redacted copy it sold, CleanFlicks bought and warehoused one ordinary studio-issued DVD; so every CleanFlicks sale generated a sale for the studio.) The judge didn’t much engage this economic argument but instead stuck to a moral-rights view that CleanFlicks was injuring the artistic integrity of the films:

The argument [that CleanFlicks has no impact or a positive impact on studio revenues] has superficial appeal but it ignores the intrinsic value of the right to control the content of the copyrighted work which is the essence of the law of copyright.

(page 11)

Finally, the judge notes that the studios did not make a DMCA claim, even though CleanFlicks was circumventing the encryption on DVDs in order to enable its editing. (The studios say they could have brought such a claim but chose not to.) Why they chose not to is an interesting question. I think Tim Lee is probably right here: the studios were feeling defensive about the overbreadth of the DMCA, so they didn’t want to create more conservative opponents of the DMCA by winning this case on DMCA grounds.

There also seems to have been no claim that CleanFlicks fostered infringement by releasing its copies as unencrypted DVDs, when the original studio DVDs had been encrypted with CSS (the standard, laughably weak DVD encryption scheme). The judge takes care to note that CleanFlicks and its co-parties all release their edited DVDs in unencrypted form, but his ruling doesn’t seem to rely on this fact. Presumably the studios chose not to make this argument either, perhaps for reasons similar to their DMCA non-claim.

In theory CleanFlicks can appeal this decision, but my guess is that they’ll run out of money and fold before any appeal can happen.