November 29, 2024

Why Use Remotely-Readable Passports?

Yesterday at CFP, I saw an interesting panel on the proposed radio-enabled passports. Frank Moss, a State Department employee and accomplished career diplomat, is the U.S. government’s point man on this issue. He had the guts to show up at CFP and face a mostly hostile audience. He clearly believes that he and the government made the right decision, but I’m not convinced.

The new passports, if adopted, will contain a chip that stores everything on the passport’s information page: name, date and place of birth, and digitized photo. This information will be readable by a radio protocol. Many people worry that bad guys will detect and read passports surreptitiously, as people walk down the street.

Mr. Moss said repeatedly that the chip can only be read at a distance of 10 centimeters (four inches, for the metric-impaired), making surreptitious reading unlikely. Later in the panel, Barry Steinhardt of the ACLU did a live demo in which he read information off the proposed radio-chip at a distance of about one meter, using a reader device about the size of a (closed) laptop. I have no doubt that this distance could be increased by engineering the reader more aggressively.
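The gap between the official 10-centimeter figure and the one-meter demo is easy to understand from the physics of inductive coupling, which is how contactless passport chips are powered. A back-of-envelope sketch (the formula is the standard on-axis field of a circular loop; the coil sizes and currents are illustrative assumptions, not measurements of any real reader):

```python
def loop_field(r, a, n_turns=1, current=1.0):
    """On-axis magnetic field strength (A/m) of a circular reader coil of
    radius a (meters) at distance r (meters) -- the standard near-field
    formula for inductively coupled chips like those proposed for passports."""
    return n_turns * current * a**2 / (2 * (a**2 + r**2) ** 1.5)

# A pocket-sized 5 cm coil versus a laptop-sized 50 cm coil, one meter away.
small = loop_field(r=1.0, a=0.05)
large = loop_field(r=1.0, a=0.5)
```

Because the field falls off roughly as 1/r³ beyond the coil radius, a bigger antenna and more power buy real range: the 10-centimeter figure describes a cheap handheld reader, not what a motivated attacker can build.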

There was lots of back-and-forth about partial safeguards that might be added, such as building some kind of foil or wires into the passport cover so that the chip could only be read when the passport was open. Such steps do reduce the vulnerability of using remotely-readable passports, but they don’t reduce it to zero.

In the Q&A session, I asked Mr. Moss directly why the decision was made to use a remotely readable chip rather than one that can only be read by physical contact. Technically, this decision is nearly indefensible, unless one wants to be able to read passports without notifying their owners – which, officially at least, is not a goal of the U.S. government’s program. Mr. Moss gave a pretty weak answer, which amounted to an assertion that it would have been too difficult to agree on a standard for contact-based reading of passports. This wasn’t very convincing, since the smart-card standard could be applied to passports nearly as-is – the only change necessary would be to specify exactly where on the passport the smart-card contacts would be. The standardization and security problems associated with contactless cards seem to be much more serious.

After the panel, I discussed this issue with Kenn Cukier of The Economist, who has followed the development of this technology for a while and has a good perspective on how we reached the current state. It seems that the decision to use contactless technology was made without fully understanding its consequences, relying on technical assurances from people who had products to sell. Now that the problems with that decision have become obvious, it’s late in the process and would be expensive and embarrassing to back out. In short, this looks like another flawed technology procurement program.

RIAA Suing i2hub Users

Yesterday the RIAA announced lawsuits against many college students for allegedly using a program called i2hub to swap copyrighted music files. RIAA is trying to paint this as an important step in their anti-infringement strategy, but it looks to me like a continuation of what they have already been doing: suing individuals for direct infringement, and trying to label filesharing technologies (as opposed to infringing uses of them) as per se illegal.

The new angle in this round of suits is that i2hub traffic uses the Internet2 network. The RIAA press release is careful to call Internet2 a “specialized” network, but many press stories have depicted it as a private network, separate from the main Internet. In fact, Internet2 is not really a separate network. It’s more like a set of express lanes for the Internet, built so that network traffic between Internet2 member institutions can go faster.

(The Washington Post article gets this point seriously wrong, calling Internet2 “a faster version of the Web”, and saying that “more and more college students have moved off the Web to trade music on Internet2, a separate network …”.)

Internet2 has probably been carrying a nonzero amount of infringing traffic for a long time, just because it is part of the Internet. What’s different about i2hub is not that some of its traffic goes over Internet2, but that it was apparently structured so that its traffic would usually travel over Internet2 links. In theory, this could make transfer of any large file, whether infringing or not, faster.

The extra speed of Internet2 doesn’t seem like much of an issue for music files, though. Music files are quite small and can be downloaded pretty quickly on ordinary broadband connections. Any speedup from using i2hub would mainly affect movie downloads, since movie files are much larger than music files. And yet it was the music industry, not the movie industry, that brought these suits.

Given all of this, my guess is that the RIAA is pushing the Internet2 angle mostly for political and public relations reasons. By painting Internet2 as a separate network, the RIAA can imply that the transfer of infringing files over Internet2 is a new kind of problem requiring new regulation. And by painting Internet2 as a centrally-managed entity, the RIAA can imply that it is more regulable than the rest of the Internet.

Another unique aspect of i2hub is that it could only be used, supposedly, by people at universities that belong to the Internet2 consortium, which includes more than 200 schools. The i2hub website pitches it as a service just “by students, for students”. Some have characterized i2hub as a private filesharing network. That may be true in a formal sense, as not everybody could get onto i2hub. But the potential membership was so large that i2hub was, for all intents and purposes, a public system. We don’t know exactly how the RIAA or its agents got access to i2hub to gather the information behind the suits, but it’s not at all surprising that they were able to do so. If students thought that they couldn’t get caught if they shared files on i2hub, they were sadly mistaken.

[Disclaimer: Although some Princeton students are reportedly being sued, nothing in this post is based on inside information from those students (whoever they are) or from Princeton. As usual, I am not speaking for Princeton.]

Measure It, and They Will Come

The technology for measuring TV and radio audiences is about to change in important ways, according to a long and interesting article by Jon Gertner in yesterday’s New York Times Magazine. This will have implications for websites, online media, and public life as well.

Standard audience-measurement technology, as used in the past by Nielsen and Arbitron, paid a few consumers to keep diaries of which TV and radio stations they watched and listened to, and when. Newer technology, such as Nielsen’s “people meters”, actually connects to TVs and measures when they are on and which channel they are tuned to; family members are asked to press buttons saying when they start and stop watching. People-meter results were surprisingly different from diary results, perhaps because people wrote in their diaries the shows they planned to watch, or the shows they liked, or the shows they thought others would want them to be watching, rather than the shows they really did watch.

The hot new thing in audience measurement involves putting quiet watermarks (i.e., distinctive audio markers) in the background of shows that are broadcast, and then paying consumers to wear beeper-like devices that record the watermarks they hear. A key advantage of this technology, from the audience monitor’s viewpoint, is that it records what the person hears wherever they go. For example, current Nielsen ratings for TV only measure what people see on their own television at home. Anything seen or heard in a public place, or on the Internet, doesn’t factor into the ratings. That is going to change.
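The watermark-and-detector idea can be sketched with a toy spread-spectrum scheme: each station buries a faint pseudorandom chip sequence under its audio, and the wearable device detects it by correlation. This is an illustration under simple assumptions (the keys, amplitude, and detection threshold are made up; real commercial systems are more sophisticated and robust):

```python
import random

def chips(key, n):
    """Deterministic +/-1 chip sequence derived from a station's key."""
    rng = random.Random(key)
    return [rng.choice((-1.0, 1.0)) for _ in range(n)]

def embed(audio, key, alpha=0.02):
    """Bury a faint keyed chip sequence beneath the audio samples."""
    return [s + alpha * c for s, c in zip(audio, chips(key, len(audio)))]

def correlation(audio, key):
    """Average product of the audio with a station's chips; this hovers
    near zero unless that station's watermark is actually present."""
    c = chips(key, len(audio))
    return sum(s * ci for s, ci in zip(audio, c)) / len(audio)

def detect(audio, key, alpha=0.02):
    return correlation(audio, key) > alpha / 2
```

Because the chips are inaudible at low amplitude but statistically distinctive over many samples, the same mechanism works whether the sound comes from a home TV, a bar, or a web stream, which is exactly why the measurement can follow people everywhere.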

Another use of the new technology puts a distinctive watermark in each advertisement, and then records which ads people hear. When this happens – and it seems inevitable that it will – advertisers will be willing to pay more for audio ads in public places and on the Net, because they’ll be able to measure the effect of those ads. Audio ads will no longer be coupled to radio and TV stations, but will be deliverable by anybody who has people nearby. This will mean, inevitably, that we’ll hear more audio ads in public places and on the Net. That’ll be annoying.

Worse yet, by measuring what people actually hear, the technologies will strengthen advertisers’ incentives to deliver ads in ways that defeat the standard measures we use to skip or avoid them. No longer will advertisers measure attempts to deliver audio ads; now they’ll measure success in delivering sound waves to our ears. So we’ll hear more and more audio ads in captive-audience situations like elevators, taxicabs, and doctors’ waiting rooms. Won’t that be nice?

Congressional Hearings on Music Interoperability

Yesterday a House subcommittee on “Courts, the Internet and Intellectual Property” held hearings on interoperability of music formats. (The National Journal Tech Daily has a good story, unfortunately behind a paywall.) Witnesses spoke unanimously against any government action in this area. According to the NJTD story,

[Subcommittee chair Rep. Lamar] Smith and other lawmakers who attended the hearing agreed with the panelists. The exception was Rep. Howard Berman of California, the subcommittee’s top Democrat, whose district encompasses Hollywood. He suggested that the confusing proliferation of non-compatible copy-protection technologies could be impeding the development of a legal digital-music marketplace.

What’s going on here? Rep. Smith’s opening statement gives some clues about the true purpose of the hearing.

Legitimate questions have been raised regarding the impact of digital interoperability on consumers. In the physical world, consumers didn’t expect that music audio cassettes were interoperable with CD players. Consumers switching from music cassettes to CDs bought the same music for $10 to $20 per CD that they already owned. Consumers accepted this since they felt they were getting something new with more value – a digital format that made every reproduction sound as good as the first playback.

Music is quickly becoming an online business with no connection to the physical world except for the Internet connection. Even that connection is increasingly becoming wireless. Some of the same interoperability issues that occur in the physical world are now appearing here. Consumers who want to switch from one digital music service to another must often purchase new music files and, sometimes, new music players.

For example, music purchased from the iTunes Music Store will only work on Apple’s iPod music player. Music purchased from Real cannot be accessed on the iPod. Last year, both companies became involved in a dispute over Real’s attempt to offer software called Harmony that would have allowed legal copies of music purchased from Real’s online music store to be playable on Apple’s iPod music player. Apple objected to this effort, calling it “hacker like” and invoking the DMCA. Apple blocked Real’s software from working a short time afterwards.

This interoperability issue is of concern to me since consumers who bought legal copies of music from Real could not play them on an iPod. I suppose this is a good thing for Apple, but perhaps not for consumers. Apple was invited to testify today, but they chose not to appear. Generally speaking, companies with 75% market share of any business, in this case the digital download market, need to step up to the plate when it comes to testifying on policy issues that impact their industry. Failure to do so is a mistake.

As a result of disputes like the one between Apple and Real, some have suggested that efforts to boost digital music interoperability should be encouraged by regulation or legislation. Others have urged Congress to leave the issue to the marketplace and let consumers decide what is best for them.

The hearing is clearly meant to send a “we’re watching you” message to Apple and others, urging them not to block interoperability.

Of course, if full interoperability is really the goal, we already have a solution that is hugely popular. It’s called MP3. More likely, what the subcommittee really wants to see is a kind of pseudo-interoperability that allows products from a limited set of companies to work together, while excluding everyone else. It’s hard to see how this could happen without a further reduction in competition, amounting to a cartelization of the market for digital music services.

The right public policy in this area is to foster robust competition among digital music services of all kinds. A good start would be to remove existing barriers to competition, for example by repealing or narrowing the DMCA, and to ensure that the record companies don’t act as a cartel in negotiating with music services.

Inducing Confusion

Alex, and others reporting on the Supreme Court arguments in the Grokster case, noticed that the justices seemed awfully interested in active inducement theories. Speculation has begun about what this might mean.

News.com is running a piece by John Borland, connecting the court discussion to last year’s ill-fated Induce Act. The Induce Act, which was killed by a unanimous chorus of criticism from the technology world, would have created a broad new category of liability for companies that failed to do enough (by vaguely defined standards) to prevent copyright infringement.

(The news.com piece has a terrible headline: Court mulls P2P ‘pushers’. This fails to convey the article’s content, and it drops in the loaded word “pushers”, which appears nowhere in the article. The headline writer seems to acknowledge that the word doesn’t fit, by putting it in scare-quotes, which only highlights the fact that nobody is being quoted. Don’t blame John Borland; the headline was probably written by his editor. This isn’t the first time we’ve seen a misleading headline from news.com.)

There’s a big difference between the Induce Act and the kind of narrow active inducement standard that was suggested to the court. Indeed, the main advocate to the court of an active inducement standard was IEEE-USA, which testified against the Induce Act. Here, as always, the details matter. A decision by the court to adopt an active inducement standard could be very good news, or very bad news, depending on the specifics of what the court says.

The worst case, in some respects, is probably the one Fred von Lohmann mentions in the article, in which the court endorses the general idea of an inducement standard, but doesn’t fill in the details. If that happens, we’ll be stuck with years and years of litigation to figure out what the court meant. Regardless, it seems likely that after the court announces its decision, Congress will consider Induce Act II.