October 31, 2024

Can P2P Vendors Block Porn or Copyrighted Content?

P2P United, a group of P2P software vendors, sent a letter to Congress last week claiming that P2P vendors are unable to redesign their software to block the transmission of pornographic or copyrighted material. Others have claimed that such blocking is possible. As a technical matter, who is right?

In this post I’ll look at what is technically possible. I’ll ignore the question of whether the law does, or should, require P2P software to be redesigned in this way. Instead, I’ll just ask whether it would be technologically possible to do so. To keep this post (relatively) short, I’ll omit some technical details.

I’ll read “blocking copyrighted works” as requiring a system to block the transmission of any particular work whose copyright owner has complained through an appropriate channel. The system would be given a “block-list” of works, and it would have to block transmissions of works that are on the list. The block-list would be lengthy and would change over time.

Blocking porn is harder than blocking copyrighted works. Copyright-blocking looks for copies of a specific set of works, while porn-blocking has to recognize a potentially infinite universe of pornographic material, and today's image-analysis software is far, far too crude to tell a porn image from a non-porn one. So P2P United is correct when they say that they can't block porn. And because porn-blocking is strictly harder than copyright-blocking, I'll look only at copyright-blocking from here on.

Today’s P2P systems use a decentralized architecture, with no central machine that participates in all transactions, so that any blocking strategy must be implemented by software running on end users’ computers. Retrofitting an existing P2P network with copyright-blocking would require blocking software to be installed, somehow, on the computers of that network’s users. It seems unlikely that an existing P2P software vendor would have both the right and the ability to force the necessary installation.

(The issues are different for newly created P2P protocols, where there isn’t an installed base of programs that would need to be patched. But I’ll spare you that digression, since such protocols don’t seem to be at issue in P2P United’s letter.)

This brings us to the next question: If there were some way to install blocking software on all users' computers, would that software be able to block transmissions of works on the block-list? The answer is probably yes, but only in the short run. There are two approaches to blocking: you can ban searches for certain terms, such as the names of particular artists or songs, or you can scan the content of files as they are transmitted and block a transfer when the file matches one of the works on the block-list.
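To make the content-scanning approach concrete, here is a minimal sketch (in Python, with made-up names; no real P2P client works exactly this way) of how a blocking component might compare a file against a block-list of known works by hashing the file's contents:

```python
import hashlib

# Hypothetical block-list: content hashes of works whose owners have complained.
# In practice this list would be long and would change over time.
BLOCK_LIST = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # placeholder entry
}

def sha256_of_file(path):
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def should_block(path):
    """Refuse to transmit a file whose hash appears on the block-list."""
    return sha256_of_file(path) in BLOCK_LIST
```

Real fingerprinting systems use fuzzier acoustic matching rather than exact hashes, but the basic structure is the same: compute a signature of the file and look it up in a list supplied by the copyright owners.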

The real problem you face in trying to use search-term banning or content-scanning is that users will adopt countermeasures to evade the blocking. If you ban certain search terms, users will deliberately misspell their search terms or replace them with agreed-upon code words. (That’s how users evaded the search-term banning that Napster used after Judge Patel’s injunction.) If you try to scan content, users will distort or encrypt files before transmission, so that the scanner doesn’t recognize the files’ content, and the receiving user will automatically restore or decrypt the files after receiving them. If you find out what users are doing, you can fight back with counter-countermeasures; but users, in turn, will react to what you have done.
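To see why content-scanning is so fragile, here is an equally simple sketch (again hypothetical) of the kind of trivially reversible transformation users could apply before transmission. XOR-ing every byte with a shared value changes the file's signature completely, yet the receiver recovers the original by applying the same function:

```python
def obfuscate(data: bytes, key: int = 0x5A) -> bytes:
    """XOR every byte with a shared key. Applying the function twice restores
    the original bytes, so sender and receiver can round-trip the file, but the
    transformed bytes hash to an entirely different value and an exact-match
    scanner never fires."""
    return bytes(b ^ key for b in data)

original = b"some copyrighted song data"
sent = obfuscate(original)      # what the scanner sees on the wire
restored = obfuscate(sent)      # what the receiver reconstructs
assert restored == original
```

Fuzzier acoustic fingerprints raise the bar a little, but encryption with a key exchanged out of band defeats any scanner that can't read the plaintext.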

The result is an arms race between the would-be blockers and the users. And it looks to me like an unfavorable arms race for the blockers, in the sense that users will be able to get what they want most of the time despite spending less money and effort on the arms race than the blockers do.

The bottom line: in the short run, P2P vendors may be able to make a small dent in infringement, but in the long run, users will find a way to distribute the files they want to distribute.

Panel on Copyright and Free Speech

Lawrence Solum reports on a panel discussion at the Association of American Law Schools conference. It’s an interesting discussion, and everybody seems to agree that there are significant and increasing conflicts between copyright and free speech.

In her presentation, Jessica Litman used my experience as an example of the chilling effect of the DMCA. Somehow this reminded me of the caption (but not necessarily the title!) on this classic despair.com poster: “It could be that the purpose of your life is only to serve as a warning to others.”

RIAA Subpoena Decision, and Fallout

There’s been lots of talk about the D.C. Circuit’s ruling that the RIAA cannot compel ISPs to identify customers whom the RIAA suspects of infringing copyrights. The court ruled on narrow grounds, saying that Congress, in the text of the DMCA, did not authorize the type of subpoena that the RIAA wants to use.

This is good news, but it is not as big a deal as some people think. The subpoena provision in question was hardly the greatest injustice in the world. Yes, it was open to abuse by various bad actors; and yes, not everybody identified to the RIAA turned out to be an infringer. If I were king, I would not allow RIAA-style subpoenas without judicial approval. But unless you shed tears for the actual infringers whose names were revealed to the RIAA – and I don’t – this is not the huge privacy boon that some have suggested.

What happens next? One of two things. The RIAA may ask Congress to change the law, to allow the subpoenas in question. My guess is that Congress would give them what they want, perhaps with a few new safeguards to prevent the most egregious abuse scenarios. Alternatively, the RIAA may cut a deal with the major ISPs, in which the RIAA agrees not to ask Congress to change the law, and the ISPs agree in exchange to forward RIAA warning messages to customers who the RIAA identifies as probable infringers.

In the meantime, the RIAA says they intend to file John Doe lawsuits, in which they sue first and then use a traditional subpoena to identify the defendant.

Devil in the Details

There’s been a lot of discussion lately about compulsory license schemes for music. I’ve said before that I’m skeptical about their practicality. One reason for my skepticism is a concern about the measurement problem, and especially about the technical details of how measurement would be done.

To split up the revenue pool, compulsory license schemes all measure something – some proxy for consumer demand – and then give each copyright owner a share of the pie determined by the measured value. Most proposals require measuring how often a song is downloaded, or how often it is played.
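As a concrete illustration of the allocation step (this is just the arithmetic, not any particular proposal), here is how a pro-rata split of a revenue pool by measured play counts would work:

```python
def split_pool(pool, play_counts):
    """Divide a revenue pool among copyright owners in proportion to measured
    plays or downloads. Assumes the counts are accurate and honest, which is
    exactly the measurement problem discussed here."""
    total = sum(play_counts.values())
    return {owner: pool * plays / total for owner, plays in play_counts.items()}

# Hypothetical numbers: a $1,000,000 monthly pool
shares = split_pool(1_000_000, {"Artist A": 600_000, "Artist B": 300_000, "Artist C": 100_000})
# -> {'Artist A': 600000.0, 'Artist B': 300000.0, 'Artist C': 100000.0}
```

The division itself is trivial; everything interesting is in how the play counts are gathered.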

Most compulsory license advocates tell us what they want to measure, but as far as I know, nobody has gone into any detail about how they would do the measurement. And based on the thinking I have done on the “how” question, there doesn’t seem to be an easy answer.

So here is my challenge to compulsory enthusiasts: tell us, in technical detail, how you propose to do the measurements. You don’t have to give us working code, but do tell us which programs you would write or modify, and what specifically they would look for. Tell us how you would cope with backward compatibility, and the diverse formats in which people download and store music. Tell us how you would deal with non-PC platforms such as Macs, Linux boxes, and iPods, as well as non-traditional network setups such as public WiFi access points.
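To show the kind of detail I mean (and this is only a toy sketch with made-up names, not a proposal), a measurement scheme might instrument the music player to report a hash of each played file to a collection server:

```python
import hashlib
import json
import time
import urllib.request

COLLECTION_SERVER = "https://example.invalid/report"  # hypothetical endpoint

def report_play(path, user_id):
    """Hash the played file and send a play event to the collector.
    Every hard question remains: players on Macs, Linux boxes, and iPods that
    never run this code; files in formats the hasher doesn't recognize;
    re-encoded copies of the same song that hash differently; and users who
    simply forge the reports."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    event = {"song_hash": digest, "user": user_id, "ts": int(time.time())}
    req = urllib.request.Request(
        COLLECTION_SERVER,
        data=json.dumps(event).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```

Even this toy version runs straight into the questions above, which is the point.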

The devil is in the details, so show us the details of your plan.

Reflections on the Harvard Alternative Compensation Meeting

Yesterday I attended a daylong workshop at Harvard Law School about alternative compensation systems for digital media. It was a great meeting, with many interesting people saying interesting things. There was a high density of other bloggers, including Ernie Miller, John Palfrey, Derek Slater, Aaron Swartz, and Eugene Volokh, and I hope to read their reactions to the meeting. (Eugene has already posted a brief recap.)

The morning focused on mandatory license systems, such as those proposed by Fisher and Netanel. The conversation immediately turned to the core problem, which is strategic behavior by users, intended to channel the system’s revenues to their friends. Examples include Eugene Volokh’s “Second Amendment Blues” scenario, in which the NRA releases a song and NRA members obsessively download and play it, and my scenario in which I play and play my brother’s off-key rendition of “Feelings”. The result is that tax money gets channeled to the NRA or to my brother, rather than to real artists. Everybody agreed that this kind of gaming cannot be eliminated, but there are some things you can do to reduce the distortion it causes. (And don’t forget that the goal is only to be less inefficient than the current system.)

Two issues remained largely unexplored. First, some have suggested that social norms will cause most people to avoid gaming the system, out of a feeling of obligation to artists. We don’t know how strong those norms will prove to be. Second, some people expressed concern that users will find other perverse ways to respond to the off-kilter incentives that a mandatory license creates. It seems to me that we can predict most of the first-order effects of a mandatory license, but we haven’t thought much about second- and third-order effects.
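A back-of-the-envelope calculation shows how cheap the gaming can be. The numbers below are invented for illustration, but they capture the structure of the “Second Amendment Blues” scenario:

```python
def payout(pool, plays):
    """Pro-rata payout: each song's owner gets a share of the pool
    proportional to that song's measured plays."""
    total = sum(plays.values())
    return {song: pool * count / total for song, count in plays.items()}

# 10 million ordinary listeners averaging 100 plays a month across real artists,
# versus 1 million coordinated members looping one song 1,000 times each.
plays = {
    "all real artists combined": 10_000_000 * 100,   # 1 billion plays
    "Second Amendment Blues":    1_000_000 * 1_000,  # 1 billion plays
}
print(payout(1_000_000_000, plays))
# {'all real artists combined': 500000000.0, 'Second Amendment Blues': 500000000.0}
```

With those (made-up) numbers, the looped song captures half the pool, which is why everyone at the meeting treated strategic replaying as the central design problem.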

There was also some discussion about the “porn problem” – the fact that some of the media material consumed under the license will be pornographic, and there will be strong political opposition to any system that causes the government to send checks to porn publishers. (Excluding porn from the system raises other legal and practical problems.) One response is to propose a system in which each person gets to designate the destination of their own tax money. That helps the political problem somewhat, but I still think that some people would object to any system that treats porn as a legitimate kind of content.

At the end of the morning I was a bit less pessimistic than before about the advisability of adopting a mandatory license. But I’m still far from convinced that it’s the right course.

The afternoon discussion was about voluntary license schemes. And here an interesting thing happened. We talked for a while about how one might structure a system in which consumers can license a pool of copyrighted music contributed by artists, with the revenue being split up appropriately among the artists. Eventually it became clear that what we were really doing was setting up a record company! We were talking about how to recruit artists, what contract to sign with artists, which distribution channels to use, how to price the product, and what to do about P2P piracy of our works. Give us shiny suits, stubble, tiny earpiece phones, and obsequious personal assistants, and we could join the RIAA. This kind of voluntary scheme is not an alternative to the existing system, but just another entrant into it.

This is not to say that a few ISPs or universities can’t get together and cut a voluntary deal with the existing record companies (and other copyright owners). Such a deal would still be interesting, and it would lack some of the disadvantages of the more ambitious mandatory license schemes. Of all of the blanket license schemes, this would be both the least risky and the easiest to arrange. But it hasn’t happened yet. (Penn State’s deal with Napster doesn’t count, since it’s just a bulk purchase of subscriptions to a service, and not a blanket license that allows unrestricted use of music on the campus.)

All in all, it was a very instructive and fun meeting. Big thanks to the Harvard people for arranging it. And now, due to a big snowstorm, I get to spend an extra day or two in lovely Cambridge.