I’m taking a holiday break from blogging. I’ll be back in early January.
Archives for December 2004
Last week, in response to the MPAA lawsuits against BitTorrent trackers, I wrote that it’s impossible to sue BitTorrent itself, because it is nothing but a communications protocol. Michael Madison was skeptical, which was a fair response given what little I had written on the subject. Let me say a bit more, to clarify.
Opponents of P2P technologies often make the rhetorical move of calling the thing they oppose a “network.” The word carries connotations – especially for nonexperts – of a physical contrivance that is operated by some organization. Think of the old phone system, or the electrical power grid. Somebody has to build and manage all that equipment. The implication is that there is somebody in charge who can supervise the use of the network. Read the plaintiffs’ briefs in the Grokster case and you’ll see many references to a “network” that is “operated” by the defendants.
Computer scientists sometimes use the word “network” to refer to something more virtual. Others are now using “network” in this sense, as when people talk about the social network of friendships among the residents of a small town. Nobody owns and operates the social network. There is nobody you can sue to shut it down, because it’s not a network in the same sense the power grid is.
A communications protocol is an agreement or convention about how computer systems can cooperate to accomplish some task. It isn’t owned or operated by anybody. (People might own copyrights or patents relating to a protocol, but let’s set aside that possibility for now.) There’s a sense in which English or any other human language is a kind of protocol that people use to cooperate with each other. Again: nobody owns, operates or controls the English language, and there is nobody you can sue to shut it down. This isn’t to say that you can’t punish misuses of English, such as fraud or criminal conspiracies that use the language; but punishing misuse is not the same as attacking the language itself.
Given a lawsuit about a particular technology, how can we tell whether that technology is more like the power grid or more like a social network? Here I think the Grokster courts have gotten it right. Rather than arguing over what is a “network,” or what “network” means anyway, they looked at the nature of the technology and the defendant’s control or influence over it. That is, as lawyers say, a fact-intensive inquiry.
The MPAA, in suing the operators of BitTorrent trackers rather than trying to attack the BitTorrent protocol itself, seems to be recognizing this distinction. That in itself is good news.
TinyP2P is a functional peer-to-peer file sharing application, written in fifteen lines of code, in the Python programming language. I wrote TinyP2P to illustrate the difficulty of regulating peer-to-peer applications. Peer-to-peer apps can be very simple, and any moderately skilled programmer can write one, so attempts to ban their creation would be fruitless.
For more information about TinyP2P, see http://www.freedom-to-tinker.com/tinyp2p.html.
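To see why banning the creation of such programs would be fruitless, consider how little machinery a file-sharing peer actually needs. The sketch below is not the real TinyP2P code (which works over the network via XML-RPC); it is a deliberately simplified in-process illustration, with hypothetical names, of the same basic idea: each peer holds named files and can fetch files its neighbors hold.

```python
# A minimal sketch (not the actual TinyP2P code) of a file-sharing peer:
# each peer stores named files and can fetch any file its neighbors hold.

class Peer:
    def __init__(self):
        self.files = {}      # filename -> contents
        self.neighbors = []  # other Peer objects this peer knows about

    def share(self, name, data):
        self.files[name] = data

    def fetch(self, name):
        # Look locally first, then ask each neighbor (one hop).
        if name in self.files:
            return self.files[name]
        for peer in self.neighbors:
            if name in peer.files:
                return peer.files[name]
        return None

alice, bob = Peer(), Peer()
alice.neighbors.append(bob)
bob.share("song.mp3", b"...audio bytes...")
print(alice.fetch("song.mp3"))  # found via the neighbor
```

A real application adds networking, peer discovery, and search, but none of that changes the point: the core logic fits in a few lines that any moderately skilled programmer could reproduce.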
The MPAA has announced lawsuits against the operators of P2P index servers, such as BitTorrent trackers, according to a Wired News story by Xeni Jardin.
A BitTorrent tracker keeps track of who is downloading and/or uploading a particular file, and makes this information available to others who want to find the file. The suits will presumably allege that the person running the tracker knew that the people downloading the file were infringing, and knew that the tracker was facilitating those illegal downloads, and yet the person ran the tracker anyway.
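The tracker’s role is pure bookkeeping, which is worth seeing concretely. Here is a hypothetical sketch of that bookkeeping (my illustration, not BitTorrent’s actual wire protocol): peers announce that they are sharing a file, and anyone who wants the file asks who else has it.

```python
# Hypothetical sketch of a tracker's core bookkeeping (not the real
# BitTorrent protocol): it records which peers are sharing each file
# and answers queries about them.

class Tracker:
    def __init__(self):
        self.swarms = {}  # file identifier -> set of peer addresses

    def announce(self, file_id, peer_addr):
        # A peer reports that it is uploading and/or downloading this file.
        self.swarms.setdefault(file_id, set()).add(peer_addr)

    def peers(self, file_id):
        # Anyone who wants the file asks who else has it.
        return sorted(self.swarms.get(file_id, set()))

t = Tracker()
t.announce("movie.torrent", "1.2.3.4:6881")
t.announce("movie.torrent", "5.6.7.8:6881")
print(t.peers("movie.torrent"))
```

Note that the tracker never touches the file’s contents; it only maintains the index, which is exactly the activity the suits target.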
Previously, copyright owners had considered suing the operators of Kazaa supernodes, which also provide index information. As I wrote at the time, suing supernode operators would have been a bad idea, because ordinary user machines silently volunteer to be supernodes, often without their owners’ knowledge. It’s one thing to sue somebody for setting up an index for a given file; it’s another thing entirely to sue somebody who didn’t even know that his machine was providing index information.
The good news is that we seem to be avoiding the worst-case scenario, which is a blanket lawsuit trying to shut down BitTorrent entirely. Such a suit would be unwarranted, as there is nothing about BitTorrent’s design that seems aimed to facilitate infringement. BitTorrent is designed to allow efficient distribution of large files. If that by itself were enough to get somebody sued, then things would be pretty bad.
Of course, it’s hard to see how one could sue BitTorrent. How do you sue a communications protocol? You can sue the person who designed the protocol, but the protocol itself can’t be undesigned. Nor can the technical community unlearn the lessons it has learned.
On Friday I wrote about DVD region coding, which allows the manufacture of DVDs that (in theory) can only be played in certain regions of the world. U.S. public policy, in the form of the Digital Millennium Copyright Act (DMCA), plays an important role in shoring up the region coding mechanism. Is this good public policy? Should the U.S. want DVDs to be region coded?
Let’s look at the economic effects of region coding. These days, the main effect is to allow the studios to price discriminate by selling the same DVD at a different price in the U.S. than overseas. Generally, we can expect the U.S. price to be higher – let’s assume the price is Pu in the U.S. and Po overseas. If it weren’t for region coding, this differential pricing would be hard to sustain, because people could buy DVDs cheaply overseas and resell them in the U.S. Region coding prevents this kind of reimportation.
(Similar issues arise in the debate over drug reimportation, where we also see U.S. producers wanting to price discriminate, and reimportation posing a threat to that price discrimination strategy. The drug reimportation issue is more difficult – there, policy decisions take on a moral dimension, because drug pricing is literally a life and death issue for some patients.)
If region coding were abolished, then the U.S. price and the overseas price for a DVD would equalize, at a level below the current U.S. price and above the current overseas price. The studios could no longer price discriminate, and so would be worse off. U.S. consumers would be better off – they would spend fewer total dollars on DVDs, and would get more DVDs for those dollars. Overseas customers would see a price increase, and so would be worse off. Total welfare would decline, with the gains of U.S. consumers outweighed by the losses of U.S. studios and overseas consumers.
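The claims in the paragraph above can be checked with a toy model. The demand curves and numbers below are entirely made up (nothing here is from the post); the model just illustrates that once reimportation forces a single world price, that price lands between Po and Pu and studio profit falls.

```python
# Toy price-discrimination model with hypothetical linear demand curves.
# Marginal cost is taken as zero, so profit = revenue.

def q_us(p):        # hypothetical U.S. demand at price p
    return max(0.0, 100 - 2 * p)

def q_overseas(p):  # hypothetical overseas demand at price p
    return max(0.0, 100 - 5 * p)

prices = [p / 100 for p in range(0, 5001)]  # search $0.00 .. $50.00

# With region coding, the studio picks the best price in each market.
pu = max(prices, key=lambda p: p * q_us(p))
po = max(prices, key=lambda p: p * q_overseas(p))
profit_discrim = pu * q_us(pu) + po * q_overseas(po)

# Without region coding, reimportation forces one world price.
pw = max(prices, key=lambda p: p * (q_us(p) + q_overseas(p)))
profit_uniform = pw * (q_us(pw) + q_overseas(pw))

print(pu, po, pw)                        # pw falls between po and pu
print(profit_discrim > profit_uniform)   # the studio is worse off
```

With these particular demand curves the model gives Pu = $25, Po = $10, and a uniform price of about $14.29, with studio profit lower under the uniform price, matching the qualitative story above.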
But we shouldn’t expect U.S. policy to care much about the welfare of overseas consumers. And if we focus only on the impact on U.S. people and companies, then region coding doesn’t look nearly as good – it looks like a deliberate policy of boosting DVD prices in the U.S. Indeed, region coding acts just like a tariff of Pu-Po dollars on each reimported DVD. If we didn’t have region coding, would Congress enact such a tariff? I doubt it.
(Note: My analysis above assumes that all movie studios are located in the U.S., so that the U.S. economy captures all of the producer-side benefits of price discrimination. If overseas studios use region coding to boost their prices in the U.S., this hurts U.S. consumers while providing no countervailing U.S. benefit, so region coding looks even worse.)
(Another note: Some readers may object that the U.S. shouldn’t be so selfish as to ignore the welfare of people outside its borders. Point taken. But surely you would agree that, whatever level of U.S. aid to the world community is appropriate, that aid should be used to attack a problem more pressing than the high price of DVDs.)
As I noted yesterday, part of the license that DVD makers have to sign is available on the DVD Copy Control Association (DVD-CCA) website. It’s 48 pages of dense technolegalese, consisting mostly of a list of things that DVD players aren’t allowed to do. On reading it, three things jumped out at me.
First, DVD region coding, the mechanism designed to stop DVDs bought in one part of the world from being played in another part, is the subject of much more regulatory effort than I expected. For example, there are special robustness requirements for region coding. (In the weird Orwellian language of DRM vendors, “robustness” is a code word denoting the use of deliberately complex, nonmodular designs so as to resist diagnosis, analysis, and repair.)
Second, it seems to be impossible to build a software DVD player that complies with the requirements. According to section 184.108.40.206 (page A-20),
Specifically, [software] implementations shall include all of the [required anti-reverse-engineering characteristics] which shall be implemented in a way that it is reasonably certain they: cannot be defeated or circumvented using widely accessible tools such as but not limited to debuggers, decompilers, and similar Software development products; and can only with difficulty be defeated or circumvented using professional computer engineering equipment such as … logic analyzers …
To comply with this, one would somehow have to write a piece of software whose data and algorithms absolutely cannot be determined by a person using a debugger or decompiler. We can be “reasonably certain” that any program written today can be understood using these tools. (It seems reasonable to read “cannot” as requiring absolute impenetrability, given that the next clause says “only with difficulty”.)
Third, the document bans DVD players from taking a movie that is encoded on a DVD at one level of resolution and outputting that movie on an analog output at a higher level of resolution. (Section 220.127.116.11 (2), page A-11) This ban holds even if the DVD publisher wants to allow a higher-resolution output. I couldn’t figure out what the purpose of this restriction might be. Maybe the document’s authors just got carried away after writing pages and pages of text limiting the functionality of DVD players.
[DVD-CCA is suing Kaleidescape.] The company, which has won several recent consumer electronics awards, said it has worked closely with the DVD CCA for more than a year, and will fight the suit, filed Tuesday.
Kaleidescape creates expensive consumer electronics networks that upload the full contents of as many as 500 DVDs to a home server, and allow the owner to browse through the movies without later using the DVDs themselves. That’s exactly what the copy-protection technology on DVDs, called Content Scramble System (CSS) was meant to prevent, the Hollywood-backed group said.
“The express intent and purpose of the contract and CSS are to prevent copying of copyrighted materials such as DVD motion pictures,” Bill Coats, a DVD CCA attorney, said in a statement. “While Kaleidescape obtained a license to use CSS, the company has built a system to do precisely what the license and CSS are designed to prevent–the wholesale copying of protected DVDs.”
From the DVD-CCA rhetoric, you might think this suit is about copyright infringement. Reading the article and DVD-CCA statements carefully, though, it seems as if it’s just a contract dispute about whether Kaleidescape violated the terms of its license agreement with DVD-CCA.
(I haven’t seen DVD-CCA’s complaint yet, so I can’t be absolutely sure that there are no copyright claims. But if it were a copyright case, one would have expected the plaintiffs to include some major copyright owners, such as movie studios.)
The subtext here is that DVD-CCA is trying to maintain its control over all technology related to DVDs. In the good old days, copyright law gave copyright owners the right to sue infringers but gave no right to stop noninfringing uses just because the copyright owner didn’t like them. These days, copyright interests seem to want broad control over technology design.
It’s far too early to tell whether this lawsuit will involve big policy issues, or whether it will be confined to narrow issues of contract interpretation. Regardless, it’s a good bet we’ll learn more about how the DVD-CCA operates.
By the way, the DVD-CCA’s “Procedural Specifications” are freely available for download by anybody who provides their name and contact information. (Amusingly, the Procedural Specifications document itself says, falsely, that “[t]he Procedural Specifications are provided only to CSS Licensees, prospective CSS Licensees, and others with a business need to know consistent with the intent and purposes of the CSS licensing process.”)
Recently OCLC, a large library consortium, compiled a list of the top 1000 books, measured by the number of copies held by member libraries. In light of the earlier discussion here about must-read books on science and technology, I decided to see which sci/tech books made the OCLC top 1000.
As with the previous college presidents’ list, the results are disappointing. Here are the science/technology books in the OCLC top 1000, leaving out periodicals, general encyclopedias, and medical reference books:
| Rank | Author | Title |
|------|--------|-------|
| 115 | Darwin | Origin of Species |
| 406 | Levine | Internet for Dummies |
| 422 | Darwin | Voyage of the Beagle |
| 445 | Hawking | Brief History of Time |
| 777 | Mueller | Upgrading and Repairing PCs |
| 966 | Krol | Whole Internet Guide |
Origin of Species is a reasonable pick for the top of the science list, but it ranks surprisingly low, behind three cartoon books. (Garfield ranks 18th, tops among books by living authors. The other two are Doonesbury and Peanuts.) The ideas from Newton’s Principia pervade modern physics, but the book itself is mainly of historical interest. Voyage of the Beagle and Brief History of Time are worthy enough.
It’s the technology books that really disappoint. These books are useful, to be sure, and it’s not surprising that libraries have them. What’s really sad is that no book about the intellectual content or impact of engineering or computer technology made the list.
This stuff is important! Are we as technologists failing to write engaging books about it? Are librarians or the public failing to recognize the value of the books that are written? Probably all of these things are true.
Snocap, a company involving Napster founder Shawn Fanning, is trying to enable new peer-to-peer networks that identify copyrighted works and charge users for receiving them, according to Jeff Leeds’ story in Friday’s New York Times. Snocap is not itself building the P2P network(s), but is supplying the payment and song-identification technology.
Based on press accounts, it appears that Snocap uses audio fingerprinting technology, which reduces an audio track to a short binary description and then looks that description up in a database containing the descriptions of many known works. The Snocap application will check the fingerprints of the songs it is sharing, and will charge the user accordingly.
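The register-and-look-up half of such a system is easy to sketch. The code below is an assumed design for illustration only, not Snocap’s actual algorithm: real audio fingerprints are engineered to survive re-encoding and noise, whereas this toy reduction is fragile. All names here are hypothetical.

```python
# Toy fingerprint registry (assumed design, not Snocap's algorithm).
# A "track" is just a list of sample values; the fingerprint is a
# short binary description derived from the track.

import hashlib

def fingerprint(samples):
    # Toy reduction: keep only the coarse shape of the signal, then
    # hash it down to 8 bytes. (Real fingerprints are far more robust.)
    coarse = bytes(s // 32 for s in samples)
    return hashlib.sha256(coarse).digest()[:8]

registry = {}  # fingerprint -> (title, usage rules set by the copyright owner)

def register(samples, title, rules):
    registry[fingerprint(samples)] = (title, rules)

def lookup(samples):
    return registry.get(fingerprint(samples))  # None if unrecognized
```

A track whose owner has registered it comes back with its rules attached; anything else comes back as unrecognized, which is exactly the case the rest of this post worries about.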
In my Rip/Mix/Burn lecture, I talked about how Napster had solved one half of the digital music problem – how to distribute the music – but had ignored the other half – how to manage payment. It turned out that distribution was by far the easier problem to solve; and Napster just left the payment problem for later. You couldn’t pay on Napster, even if you wanted to. Now Snocap will give you a way to pay, at least for songs whose copyright owners register them with Snocap.
Let’s think about how a P2P system based on Snocap might work. When users want to share a file, Snocap will compute the file’s audio fingerprint and look up that fingerprint in the database. One of three things will happen:
- the file is in the database and the copyright owner has stated conditions for its use,
- the file is in the database and the copyright owner hasn’t told Snocap anything about the rules for its use, or
- the file isn’t in the database.
In the first case, the system will clearly enforce the copyright owner’s rules. In the second case, the system knows what the file is, and the file is almost certainly copyrighted, so the system would probably have to deny access to the file.
The third case is the really interesting one. One could argue that the system should deny access here too, since the file is probably copyrighted by somebody, and ignorance of the copyright owner’s identity is no excuse for infringement.
But what if the system allows the distribution of unrecognized files, arguing that the copyright owner is free to register the file with Snocap if he really wants to be paid? Is this enough to shield the P2P operator from liability if the file is infringing? This might make an interesting moot-court case.
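The three cases above can be written out as a small policy function. This is my sketch, not Snocap’s actual behavior; the flag marks the policy choice that the third case leaves open.

```python
# The three cases as a hypothetical policy function (my sketch, not
# Snocap's actual behavior).

ALLOW_UNRECOGNIZED = False  # the legally murky choice in the third case

def decide(fp, registry):
    if fp in registry:
        rules = registry[fp]
        if rules is not None:
            return ("enforce", rules)  # case 1: owner stated conditions
        return ("deny", None)          # case 2: known file, no rules on record
    # case 3: file not in the database at all
    return ("allow", None) if ALLOW_UNRECOGNIZED else ("deny", None)

registry = {"fp1": "pay $0.99", "fp2": None}
print(decide("fp1", registry))  # case 1
print(decide("fp2", registry))  # case 2
print(decide("fp3", registry))  # case 3
```

Flipping the flag to True is the “owner is free to register if he wants to be paid” position; whether that shields the operator from liability is the open question.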
But perhaps the P2P operator’s main concern is not to comply with the law, but to reduce the probability of facing a big lawsuit (whether or not that lawsuit has merit). In that case, all that really matters is whether Snocap allows the P2P vendor to kiss up to the big record companies – as long as their content is in Snocap’s database, then they won’t have grounds to sue the P2P vendor. If this is really the innovator’s best strategy, it’s a sad commentary on the state of copyright law.
At the moment we don’t know much about Snocap or how it would be used in P2P networks. Once we see P2P networks using Snocap (if we ever do), we’ll be able to see how they have chosen to address these questions.
UPDATE (7:30 PM): This post originally assumed that Snocap itself was creating a P2P network, rather than just creating the song identification and payment tools. It’s now updated to fix this error. Thanks to Derek Slater for pointing out my earlier error.
Ben Edelman offers a nice dissection of the latest End User License Agreement (EULA) from Gator. It has to be one of the worst EULAs ever written. Below are some highlights; see Ben’s post if you want more details.
[Background about Gator: Many people say Gator’s product is spyware. Gator has a habit of threatening those people, to get them to say “adware” instead of “spyware”. Draw your own conclusions.]
For starters, the EULA is nearly 6000 words, or 63 on-screen pages. Worse, Gator has taken affirmative steps to make the EULA harder to read, harder to understand, and harder to save. They eliminated helpful formatting, such as boldface section titles, and they removed a button that let you capture the EULA text in Notepad for searching or printing. (Both features were present in previous iterations of the Gator EULA.)
The EULA forbids the use of packet sniffers to determine what information the Gator software is sending out about you.
Worst of all, the EULA forbids you from removing the Gator software, except by removing all of the programs that came bundled with Gator. (It’s not clear how you’re supposed to figure out which programs those are.) Even if you remove all of the programs bundled with Gator, this would only invoke the removal program that Gator provides, which may or may not actually remove all of Gator from your system.
EULAs like this seem designed to create as many unsuspecting or inadvertent violations as possible. James Grimmelmann argues that this is just a tactic to give Gator legal ammunition in case their users sue them, the idea being that anybody suing Gator would face counterclaims for breach of the EULA. That seems plausible, but I doubt it’s the whole story.
To the extent that the EULA gives Gator legal leverage over its users, that leverage could be used to deter criticism of Gator, and not just lawsuits. Experience has shown that some companies, especially ones with dodgy products, do use what legal leverage they have against their critics. If I planned to criticize Gator in detail, I would worry about this issue.
There are two solutions to this overEULAfication problem. A court could throw out this kind of egregious EULA, or at least narrow its scope. Alternatively, users could raise the price of this behavior by refusing to use overEULAfied products. Realistically, this will only happen if users are given the tools to do so.
The best kind of tool for this purpose is information. I would love to see a “EULA doghouse” site that listed products with excessive EULAs, or that rated products by the content of their EULAs. At the very least, EULA evaluation could become standard procedure for people writing reviews of software products. Unfortunately, there hasn’t been much progress on this front.