October 31, 2004

The Future of Filesharing

Today there’s a Senate hearing on “The Future of P2P”. On Saturday, I gave a talk with a remarkably similar title, “The Future of Filesharing,” at the ResNet 2004 conference, a gathering of about 400 people involved in running networks for residential colleges and universities. Here’s a capsule summary of my talk.

(Before starting, a caveat. Filesharing technologies have many legitimate, non-infringing uses. When I say “filesharing” below, I’m using that term as a shorthand to refer to infringing uses of filesharing systems. Rather than clutter up the text below with lots of caveats about legitimate noninfringing uses, let’s just put aside the noninfringing uses for now. Okay?)

From a technology standpoint, the future of filesharing will involve co-evolution between filesharing technology on one side, and anti-copying and anti-filesharing technology on the other. By “co-evolution” I mean that whenever one side finds a successful tactic, the other side will evolve to address that tactic, so that each side catalyzes the evolution of the other side’s technology.

The resulting arms race favors the filesharing side, for two reasons. First, the filesharing side can probably adapt faster than the anti-filesharing side; and speed is important in this kind of move-countermove game. Second, the defensive technologies that filesharing systems are likely to adopt are the same defensive technologies used in ordinary commercial distributed systems (end-to-end encryption, anti-denial-of-service tactics, reputation systems, etc.), so the filesharing side can benefit from the enormous existing R&D efforts on defensive technologies.
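To make the point about shared defensive technology concrete, here is a minimal sketch of end-to-end encryption between two peers in Python, using the widely available cryptography package. Everything here (the key handling, the payload) is a hypothetical illustration, not a description of any real filesharing client; the point is simply that the primitive is the same off-the-shelf one commercial systems already rely on.

```python
# Minimal sketch: end-to-end encryption with a shared symmetric key,
# using the "cryptography" package (pip install cryptography).
# Key distribution is assumed to happen out of band between the peers.
from cryptography.fernet import Fernet

key = Fernet.generate_key()           # shared secret between the two peers
cipher = Fernet(key)

payload = b"contents of a shared file"
ciphertext = cipher.encrypt(payload)  # all a network observer can see

# Only a holder of the key can recover the plaintext.
assert cipher.decrypt(ciphertext) == payload
```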

Given all of this, it’s a mistake for universities or ISPs to spend lots of money and effort trying to develop or deploy the One True Solution System (OTSS). Co-evolution ensures that the OTSS would sow the seeds of its own destruction, by motivating filesharing designers and users to change their systems and behavior to defeat it. At best, the OTSS would buy a little time – but not much time, given the quick reaction time of the other side. Rather than an OTSS, a series of quick-and-dirty measures might have some effect, and at least would waste fewer resources fighting a losing battle.

The best role for a university in the copyright wars is to do what a university does best: educate students. When I talk about education, I don’t mean a five-minute lecture at freshman orientation. I don’t mean adding three paragraphs on copyright to that rulebook that nobody reads. I don’t mean scare tactics. What I do mean is a real, substantive discussion of the copyright system.

My experience is that students are eager to have serious, intellectual discussions about why we have the copyright system we have. They will take seriously the economic justification for copyright, if it is explained to them in a non-hysterical way. They’ll appreciate the wisdom of the limitations on copyright, such as fair use and the idea/expression dichotomy; and in so doing they’ll come to understand why the law doesn’t carve out an exception for everything else they might want to do.

This kind of education is expensive; but all good education is. Surely, amid all of the hectoring “education” campaigns, there is room for some serious education too.

Report from RIAA v. P2P User Courtroom

Mary Bridges offers an interesting report from a court hearing yesterday, in one of the RIAA’s lawsuits against end users accused of P2P infringement. She points to an amicus brief filed by folks at Harvard’s Berkman Center, at the Court’s request, that explains some of the factual and legal issues raised in these suits.

[link credit: Derek Slater]

Must-Read Copyright Articles

Recently I read two great articles on copyright: Tim Wu’s “Copyright’s Communications Policy” and Mark Lemley’s “Ex Ante Versus Ex Post Justifications for Intellectual Property.”

Wu’s paper, which has already been praised widely in the copyright blogosphere, argues that copyright law, in addition to its well-known purpose of creating incentives for authors, has another component that amounts to a government policy on communications systems. This idea has been kicking around for some time, but Wu really nails it. His paper has a fascinating historical section describing what happened when new technologies, such as player pianos, radio, and cable TV, affected the copyright balance. In each case, after lots of legal maneuvering, a deal was cut between the incumbent industry and the challenger. Wu goes on to explain why this is the case, and what it all means for us today. There’s much more to this paper; a single paragraph can’t do it justice.

Lemley’s paper is a devastating critique of a new style of copyright-extension argument. The usual rationale for copyright is that it operates ex ante (which is lawyerspeak for beforehand): by promising authors a limited monopoly on copying and distribution of any work they might create in the future, we give them an incentive to create. After the work is created, the copyright monopoly leads to inefficiencies, but these are necessary because we have to keep our promise to the author. The goal of copyright is to keep others from free-riding on the author’s creative work.

Recently, we have begun hearing ex post arguments for copyright, saying that even for works that have already been created, the copyright monopoly is more efficient than a competitive market would be. Some of the arguments in favor of copyright term extension are of this flavor. Lemley rebuts these arguments very convincingly, arguing that they (a) are theoretically unsound, (b) are contradicted by practical experience, and (c) reflect an odd anti-market, central-planning bias. Based on this description, you might think Lemley’s article is long and dense; but it’s short and surprisingly readable. (Don’t be fooled by the number of pages in the download – they’re mostly endnotes.)

Copyright and Cultural Policy

James Grimmelmann offers another nice conference report, this time from the Seton Hall symposium on “Peer to Peer at the Crossroads”. I had expressed concern earlier about the lack of technologists on the program at the symposium, but James reports that the lawyers did just fine on their own, steering well clear of the counterfactual technology assumptions one sometimes sees at lawyer conferences.

Among other interesting bits, James summarizes Tim Wu’s presentation, based on a recent paper arguing that much of what passes for copyright policy is really just communications policy in disguise.

We’re all familiar, by now, with the argument that expansive copyright is bad because it’s destructive to innovation and allows incumbent copyright industries to prevent the birth of new competitors. Content companies tied to old distribution models are, goes this argument, strangling new technologies in their crib. We’re also familiar, by now, with the argument that changes in technology are destroying old, profitable, and socially useful businesses, without creating anything stable, profitable, or beneficial in their place. In this strain of argument, technological Boston Stranglers roam free, wrecking the enormous investments that incumbents have made and ruining the incentives for them to put the needed money into building the services and networks of the future.

Tim’s insight, to do it the injustice of a sound-bite summarization, is that these are not really arguments that are rooted in copyright policy. These are communications policy arguments; it just so happens that the body of law through which they are now being fought is copyright law. Where in the past we’d have argued about how far to turn the “antitrust exemption for ILECs” knob, or which “spectrum auction” buttons to push, now we’re arguing about where to set the “copyright” slider for optimal communications policy. That means debates about copyright are being phrased in terms of a traditional political axis in communications law: whether to favor vertically-integrated (possibly monopolist) incumbents who will invest heavily because they can capture the profits from their investments, or to favor evolutionary competition with open standards, in which the pressure for investment is driven by the need to stay ahead of one’s competitors.

The punch line: right now, our official direction in communications policy is moving towards the latter model. The big 1996 Telecommunications Act embraced these principles, and the FCC is talking them up big time. Copyright, to the extent that it is currently pushing towards the former model, is pushing us to a communications model that flourished in decades past but is now out of favor.

This is a very important point, because the failure to see copyright in the broader context of communications policy has been the root cause of many policy errors, such as the FCC’s Broadcast Flag ruling.

I would have liked to attend the Seton Hall symposium myself, but I was at the Harvard Speedbumps conference that day. And I would have produced a Grimmelmann-quality conference report – really I would – but the Harvard conference was officially off-the-record. I’ll have more to say in future posts about the ideas discussed at the speedbumps conference, but without attributing them to any particular people.

Industry to Sue Supernode Operators?

Rumor has it that the recording industry is considering yet another tactic in their war on peer-to-peer filesharing: lawsuits against people whose computers act as supernodes.

Supernodes are a feature of some P2P networks, such as the FastTrack network used by Kazaa and Grokster. Supernodes act as hubs for the P2P network, helping people find the files they search for. (Once a user finds the desired file, that file is downloaded directly from the machine that has it.)
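As a rough sketch of that hub role (with invented names, and none of the real FastTrack protocol’s complexity), a supernode behaves something like this:

```python
# Toy model of a supernode: it indexes which peers hold which files,
# answers searches, and stays out of the actual file transfer.
from collections import defaultdict

class Supernode:
    def __init__(self):
        self.index = defaultdict(set)  # file name -> addresses of peers sharing it

    def announce(self, peer_addr, file_names):
        """An ordinary peer reports the files it is sharing."""
        for name in file_names:
            self.index[name].add(peer_addr)

    def search(self, file_name):
        """Return peers that hold the file; the download then goes
        directly from one of those peers, bypassing the supernode."""
        return sorted(self.index.get(file_name, ()))

hub = Supernode()
hub.announce("10.0.0.5:1214", ["song.mp3"])
print(hub.search("song.mp3"))  # requester now connects to 10.0.0.5:1214 directly
```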

The industry tried suing the makers of Kazaa and Grokster, but the judge ruled that these P2P companies could not be punished because, unlike Napster, they did not participate in acts of infringement. In Napster, every search involved the participation of server machines that were run by Napster itself. In FastTrack networks, the same role is played by the supernodes, which are not run by the P2P vendor.

A supernode is just an ordinary end-user’s computer. The P2P software causes a user’s computer to “volunteer” to be a supernode, if the computer is fast and has a good network connection. The user may not know that his computer is a supernode. Indeed, he may not even know what a supernode is.
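The promotion decision might look something like the following sketch. The inputs and thresholds here are invented for illustration; the actual criteria used by FastTrack clients were never published.

```python
# Hypothetical "volunteer" heuristic: promote this machine to supernode
# only if it looks fast, stable, and reachable. Thresholds are made up.
def should_become_supernode(bandwidth_kbps: float,
                            uptime_hours: float,
                            behind_firewall: bool) -> bool:
    return (bandwidth_kbps >= 512      # fast network connection
            and uptime_hours >= 2      # machine tends to stay online
            and not behind_firewall)   # other peers can reach it

# A well-connected desktop quietly volunteers -- often without the
# user ever knowing what a supernode is.
print(should_become_supernode(1024, 8, False))  # True
```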

The likely theory behind a lawsuit would be that a supernode is participating in acts of infringement, just as Napster did, and so it should be held responsible as a contributory and/or vicarious infringer, just as Napster was. Regardless of the legalities, many people would think such lawsuits unfair, because at least some of the defendants would be unaware of their role as supernodes.

Perhaps the real goal of the lawsuits would be to convince people not to act as supernodes. Most of the P2P applications have a “don’t be a supernode” configuration switch. If people understood that they could avoid lawsuits by using this switch, many would.

On the other hand, the industry had hoped that the existing lawsuits against P2P direct infringers would convince people to use the “don’t upload files” configuration switch on their P2P software, even if they still use P2P for downloading. (It’s not that downloading is legal, or that the industry doesn’t object to it. It’s just that it’s much easier to catch uploaders than downloaders, and the industry’s suits thus far have been against uploaders.)
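In configuration terms, the two switches discussed above amount to something like the following hypothetical settings snippet; real clients expose these options under varying names in their preferences dialogs, not as Python code.

```python
# Hypothetical P2P client settings illustrating the two switches above.
p2p_config = {
    "allow_supernode": False,   # the "don't be a supernode" switch
    "share_files": False,       # the "don't upload files" switch
    "download_enabled": True,   # downloading still works with sharing off
}
```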

The lawsuits have been effective in teaching people that unauthorized filesharing is almost always illegal and carries potentially serious penalties. They have been far less effective, I think, in persuading people to turn off the upload feature in their P2P software. Getting people to turn off the supernode feature seems even harder.

The main effect of suits against supernode operators would be to confuse ordinary users about the law, which can’t be in the industry’s best interest. If they’re going to file suits against P2P users, going after direct infringers looks like the best strategy.