
RIAA Suing i2hub Users

Yesterday the RIAA announced lawsuits against many college students for allegedly using a program called i2hub to swap copyrighted music files. The RIAA is trying to paint this as an important step in its anti-infringement strategy, but it looks to me like a continuation of what it has already been doing: suing individuals for direct infringement, and trying to label filesharing technologies (as opposed to infringing uses of them) as per se illegal.

The new angle in this round of suits is that i2hub traffic uses the Internet2 network. The RIAA press release is careful to call Internet2 a “specialized” network, but many press stories have depicted it as a private network, separate from the main Internet. In fact, Internet2 is not really a separate network. It’s more like a set of express lanes for the Internet, built so that network traffic between Internet2 member institutions can go faster.

(The Washington Post article gets this point seriously wrong, calling Internet2 “a faster version of the Web”, and saying that “more and more college students have moved off the Web to trade music on Internet2, a separate network …”.)

Internet2 has probably been carrying a nonzero amount of infringing traffic for a long time, just because it is part of the Internet. What’s different about i2hub is not that some of its traffic goes over Internet2, but that it was apparently structured so that its traffic would usually travel over Internet2 links. In theory, this could make transfer of any large file, whether infringing or not, faster.
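To make that concrete, here is a minimal, purely hypothetical sketch of one way a hub could keep most of its traffic on Internet2 paths: accept connections only from addresses inside member campuses’ networks. How i2hub actually enforced this has not been published, and the prefixes below are placeholder documentation ranges, not real campus networks.

```python
# Hypothetical illustration: admit only peers whose addresses fall inside
# Internet2 member networks, so peer-to-peer transfers ride Internet2 links.
# The i2hub implementation is not public; prefixes here are RFC 5737
# documentation ranges used purely as placeholders.
import ipaddress

MEMBER_PREFIXES = [
    ipaddress.ip_network("192.0.2.0/24"),      # stand-in for one member campus
    ipaddress.ip_network("198.51.100.0/24"),   # stand-in for another
]

def peer_allowed(peer_ip: str) -> bool:
    """Accept a connection only if the peer is inside a member network."""
    addr = ipaddress.ip_address(peer_ip)
    return any(addr in net for net in MEMBER_PREFIXES)

print(peer_allowed("192.0.2.17"))    # True: on-net peer, traffic stays on fast links
print(peer_allowed("203.0.113.5"))   # False: off-net peer is turned away
```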

The extra speed of Internet2 doesn’t seem like much of an issue for music files, though. Music files are quite small and can be downloaded pretty quickly on ordinary broadband connections. Any speedup from using i2hub would mainly affect movie downloads, since movie files are much larger than music files. And yet it was the music industry, not the movie industry, that brought these suits.
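A back-of-the-envelope calculation makes the point. The file sizes and link speeds below are my own round-number assumptions, not figures from the suits, but the pattern holds for any plausible numbers: the faster path saves seconds on a song and many minutes on a movie.

```python
# Rough transfer-time arithmetic under assumed sizes and link speeds.
def transfer_seconds(size_mb: float, rate_mbps: float) -> float:
    """Seconds to move size_mb megabytes over a rate_mbps link."""
    return size_mb * 8 / rate_mbps

for label, size_mb in [("5 MB song", 5), ("700 MB movie", 700)]:
    for link, rate_mbps in [("5 Mbps broadband", 5), ("100 Mbps campus path", 100)]:
        print(f"{label} over {link}: {transfer_seconds(size_mb, rate_mbps):.1f} s")

# Song: about 8 s vs. 0.4 s -- a difference nobody notices.
# Movie: about 19 minutes vs. about 1 minute -- where the extra speed matters.
```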

Given all of this, my guess is that the RIAA is pushing the Internet2 angle mostly for political and public relations reasons. By painting Internet2 as a separate network, the RIAA can imply that the transfer of infringing files over Internet2 is a new kind of problem requiring new regulation. And by painting Internet2 as a centrally-managed entity, the RIAA can imply that it is more regulable than the rest of the Internet.

Another unique aspect of i2hub is that it could only be used, supposedly, by people at universities that belong to the Internet2 consortium, which includes more than 200 schools. The i2hub website pitches it as a service just “by students, for students”. Some have characterized i2hub as a private filesharing network. That may be true in a formal sense, as not everybody could get onto i2hub. But the potential membership was so large that i2hub was, for all intents and purposes, a public system. We don’t know exactly how the RIAA or its agents got access to i2hub to gather the information behind the suits, but it’s not at all surprising that they were able to do so. If students thought that they couldn’t get caught if they shared files on i2hub, they were sadly mistaken.

[Disclaimer: Although some Princeton students are reportedly being sued, nothing in this post is based on inside information from those students (whoever they are) or from Princeton. As usual, I am not speaking for Princeton.]

Measure It, and They Will Come

The technology for measuring TV and radio audiences is about to change in important ways, according to a long and interesting article by Jon Gertner in yesterday’s New York Times Magazine. This will have implications for websites, online media, and public life as well.

Standard audience-measurement technology, as used in the past by Nielsen and Arbitron, paid a few consumers to keep diaries of which TV and radio stations they watched and listened to, and when. Newer technologies, such as Nielsen’s “people meters”, actually connect to TVs and measure when they are on and which channel they are tuned to; family members are asked to press buttons saying when they start and stop watching. People-meter results were surprisingly different from diary results, perhaps because people wrote in their diaries the shows they planned to watch, or the shows they liked, or the shows they thought others would want them to be watching, rather than the shows they really did watch.

The hot new thing in audience measurement involves putting quiet watermarks (i.e., distinctive audio markers) in the background of shows that are broadcast, and then paying consumers to wear beeper-like devices that record the watermarks they hear. A key advantage of this technology, from the audience monitor’s viewpoint, is that it records what the person hears wherever they go. For example, current Nielsen ratings for TV only measure what people see on their own television at home. Anything seen or heard in a public place, or on the Internet, doesn’t factor into the ratings. That is going to change.
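For the technically curious, the general technique usually described for such systems is correlation detection: the broadcaster adds a faint pattern derived from a per-station key, and the wearable device checks recorded audio against that pattern. The sketch below only illustrates that idea; the real Nielsen and Arbitron encoders are proprietary, and every name, parameter, and scheme in the snippet is an assumption of mine.

```python
# Minimal, hypothetical sketch of correlation-based audio watermark detection.
# Real "people meter" encoders are proprietary; this just shows the principle:
# a key-derived pattern is added faintly at broadcast time and detected later
# by correlating captured audio against the same pattern.
import numpy as np

RATE = 8_000    # samples per second (assumed)
ALPHA = 0.005   # watermark strength, kept quiet relative to the program audio

def watermark_pattern(station_id: int, n_samples: int) -> np.ndarray:
    """Pseudorandom +/-1 pattern derived from a per-station key."""
    rng = np.random.default_rng(station_id)
    return rng.choice([-1.0, 1.0], size=n_samples)

def embed(audio: np.ndarray, station_id: int) -> np.ndarray:
    """Broadcaster side: mix a faint station-specific pattern into the audio."""
    return audio + ALPHA * watermark_pattern(station_id, len(audio))

def detect(heard: np.ndarray, station_id: int, threshold: float = 0.5) -> bool:
    """Meter side: correlate what was heard against the station's pattern."""
    pattern = watermark_pattern(station_id, len(heard))
    score = np.dot(heard, pattern) / (ALPHA * len(heard))
    return score > threshold

if __name__ == "__main__":
    program = np.random.normal(0.0, 0.1, size=10 * RATE)  # stand-in for 10 s of audio
    heard = embed(program, station_id=42)
    print(detect(heard, station_id=42))  # True: station 42's pattern is present
    print(detect(heard, station_id=7))   # False: no pattern for station 7
```

The detail worth noticing is that detection needs only the key and a short stretch of audio picked up by a microphone, which is what makes a wearable, go-anywhere meter practical.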

Another use of the new technology puts a distinctive watermark in each advertisement, and then records which ads people hear. When this happens – and it seems inevitable that it will – advertisers will be willing to pay more for audio ads in public places and on the Net, because they’ll be able to measure the effect of those ads. Audio ads will no longer be coupled to radio and TV stations, but will be deliverable by anybody who has people nearby. This will mean, inevitably, that we’ll hear more audio ads in public places and on the Net. That’ll be annoying.

Worse yet, by measuring what people actually hear, the technologies will strengthen advertisers’ incentives to deliver ads in ways that defeat the standard measures we use to skip or avoid them. No longer will advertisers measure attempts to deliver audio ads; now they’ll measure success in delivering sound waves to our ears. So we’ll hear more and more audio ads in captive-audience situations like elevators, taxicabs, and doctors’ waiting rooms. Won’t that be nice?

Congressional Hearings on Music Interoperability

Yesterday a House subcommittee on “Courts, the Internet and Intellectual Property” held hearings on interoperability of music formats. (The National Journal Tech Daily has a good story, unfortunately behind a paywall.) Witnesses spoke unanimously against any government action in this area. According to the NJTD story,

[Subcommittee chair Rep. Lamar] Smith and other lawmakers who attended the hearing agreed with the panelists. The exception was Rep. Howard Berman of California, the subcommittee’s top Democrat, whose district encompasses Hollywood. He suggested that the confusing proliferation of non-compatible copy-protection technologies could be impeding the development of a legal digital-music marketplace.

What’s going on here? Rep. Smith’s opening statement gives some clues about the true purpose of the hearing.

Legitimate questions have been raised regarding the impact of digital interoperability on consumers. In the physical world, consumers didn’t expect that music audio cassettes were interoperable with CD players. Consumers switching from music cassettes to CDs bought the same music for $10 to $20 per CD that they already owned. Consumers accepted this since they felt they were getting something new with more value – a digital format that made every reproduction sound as good as the first playback.

Music is quickly becoming an online business with no connection to the physical world except for the Internet connection. Even that connection is increasingly becoming wireless. Some of the same interoperability issues that occur in the physical world are now appearing here. Consumers who want to switch from one digital music service to another must often purchase new music files and, sometimes, new music players.

For example, music purchased from the iTunes Music Store will only work on Apple’s iPod music player. Music purchased from Real cannot be accessed on the iPod. Last year, both companies became involved in a dispute over Real’s attempt to offer software called Harmony that would have allowed legal copies of music purchased from Real’s online music store to be playable on Apple’s iPod music player. Apple objected to this effort, calling it “hacker like” and invoking the DMCA. Apple blocked Real’s software from working a short time afterwards.

This interoperability issue is of concern to me since consumers who bought legal copies of music from Real could not play them on an iPod. I suppose this is a good thing for Apple, but perhaps not for consumers. Apple was invited to testify today, but they chose not to appear. Generally speaking, companies with 75% market share of any business, in this case the digital download market, need to step up to the plate when it comes to testifying on policy issues that impact their industry. Failure to do so is a mistake.

As a result of disputes like the one between Apple and Real, some have suggested that efforts to boost digital music interoperability should be encouraged by regulation or legislation. Others have urged Congress to leave the issue to the marketplace and let consumers decide what is best for them.

The hearing is clearly meant to send a “we’re watching you” message to Apple and others, urging them not to block interoperability.

Of course, if full interoperability is really the goal, we already have a solution that is hugely popular. It’s called MP3. More likely, what the subcommittee really wants to see is a kind of pseudo-interoperability that allows products from a limited set of companies to work together, while excluding everyone else. It’s hard to see how this could happen without a further reduction in competition, amounting to a cartelization of the market for digital music services.

The right public policy in this area is to foster robust competition among digital music services of all kinds. A good start would be to remove existing barriers to competition, for example by repealing or narrowing the DMCA, and to ensure that the record companies don’t act as a cartel in negotiating with music services.

Inducing Confusion

Alex, and others reporting on the Supreme Court arguments in the Grokster case, noticed that the justices seemed awfully interested in active inducement theories. Speculation has begun about what this might mean.

News.com is running a piece by John Borland, connecting the court discussion to last year’s ill-fated Induce Act. The Induce Act, which was killed by a unanimous chorus of criticism from the technology world, would have created a broad new category of liability for companies that failed to do enough (by vaguely defined standards) to prevent copyright infringement.

(The news.com piece has a terrible headline: Court mulls P2P ‘pushers’. This fails to convey the article’s content, and it drops in the loaded word “pushers”, which appears nowhere in the article. The headline writer seems to acknowledge that the word doesn’t fit, by putting it in scare-quotes, which only highlights the fact that nobody is being quoted. Don’t blame John Borland; the headline was probably written by his editor. This isn’t the first time we’ve seen a misleading headline from news.com.)

There’s a big difference between the Induce Act and the kind of narrow active inducement standard that was suggested to the court. Indeed, the main advocate to the court of an active inducement standard was IEEE-USA, which testified against the Induce Act. Here, as always, the details matter. A decision by the court to adopt an active inducement standard could be very good news, or very bad news, depending on the specifics of what the court says.

The worst case, in some respects, is probably the one Fred von Lohmann mentions in the article, in which the court endorses the general idea of an inducement standard, but doesn’t fill in the details. If that happens, we’ll be stuck with years and years of litigation to figure out what the court meant. Regardless, it seems likely that after the court announces its decision, Congress will consider Induce Act II.

ICANN Cut Secret Domain Deal

According to Michael Froomkin at ICANNWatch, evidence has come to light that ICANN secretly cut a deal with IATA, an airline industry association, to create a new “.travel” domain and give control of it to a front organization controlled by IATA. If true, this is a serious breach of ICANN’s own rules and undermines ICANN’s legitimacy. As Michael says, this is a story that deserves more attention than it is likely to get.

ICANN, depending on whom you ask, is either a technical coordination agency for Internet naming, or the closest thing we have to a government for the Net. One of ICANN’s jobs is to decide whether and how to create new Top-Level Domains (TLDs). TLDs, such as “.com”, “.edu”, and “.uk”, are the roots of the Internet’s name space. Whether ICANN is a standards body or a government, it is supposed to follow certain principles of fairness and transparency, as set down in its own bylaws. Apparently it has broken those rules in this case, and has done so in order to grant an unfair advantage in the TLD award process to a particular group.

In a normal organization, revelations like this might cause the members to revolt and elect new leadership. But ICANN doesn’t seem to have membership in the normal sense of the term, and it doesn’t seem to have a legitimate democratic process for picking its leaders. What we’ll get instead, if we get anything, is grumbling, and determination to keep ICANN from expanding its power further.

Revelations like this have to undermine ICANN’s already fragile legitimacy. People will ask why ICANN is in charge; and there’s not really a good answer. We can recount the history of how ICANN got its current position; but it’s hard to justify ICANN’s power as anything other than an accident of that history. My sense is that ICANN keeps its power mostly because nobody knows what would replace ICANN if it were deposed. That’s no way to run an Internet.

UPDATE (April 6): Edward Hasbrouck, who appears to deserve credit for uncovering much of this story, offers more details and background.