Archives for 2005

Why Does Anybody Believe Viralg?

A story is circulating about a Finnish company called Viralg, which claims to have a product that “blocks out all illegal swapping of your data”. There is also a press release from Viralg.

This shows all the signs of being a scam or hoax. The company’s website offers virtually nothing beyond claims to be able to totally eradicate file swapping of targeted files. The “Company” page has no information about the company or who works for it. The “Customers” page does not mention any specific customers. The “Testimonials” page has no actual testimonials from customers or anybody else. The “Services” page refers to independent testing but gives no information about who did the testing or what specifically they found. The “Contacts” page lists only an email address. There is no description of the company’s technology, except to say that it is a “virtual algorithm”, whatever that means. Neither the website nor the Viralg press release nor any of the press coverage mentions the name of any person affiliated with Viralg. The press release uses nonsense technobabble like “super randomized corruption”.

The only real technical information available is in a patent application from Viralg, which describes standard, well-known methods for spoofing content in Kazaa and other filesharing networks. If this is the Viralg technology, it certainly doesn’t provide what the website and press release claim.
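To make "spoofing" concrete, here is a minimal sketch of the well-known trick. Kazaa-era networks identified files by a hash that covered only part of each file (Kazaa's UUHash), so a decoy could keep the hashed portion intact and corrupt everything else. The weak_hash function below is an illustrative stand-in, not the real UUHash, and none of this is taken from the patent filing itself.

```python
# Toy illustration of partial-coverage hash spoofing. weak_hash is a
# stand-in: pretend the network only hashes the first kilobyte of a file.
import hashlib

def weak_hash(data: bytes) -> str:
    return hashlib.sha1(data[:1024]).hexdigest()

real_song = bytes(500_000)              # stand-in for actual audio bytes
advertised_hash = weak_hash(real_song)

# The spoofer preserves the hashed prefix and fills the rest with noise;
# the network's index still matches the decoy to the genuine file.
decoy = real_song[:1024] + b"\xff" * (len(real_song) - 1024)

assert weak_hash(decoy) == advertised_hash
print("decoy passes the hash check:", weak_hash(decoy) == advertised_hash)
```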

My strong suspicion is that the headline on the Slashdot story – “Finnish Firm Claims Fake P2P Hash Technology” – is correct. But it’s not the hashes that look fake, it’s the technology.

Next-Gen DVD Encryption: Better, but Won't Stop Filesharing

Last week, specifications were released for AACS, an encryption-based system that may be used on next-generation DVDs. You may recall that CSS, which is currently used on DVDs, is badly misdesigned, to the point that I sometimes use it in teaching as an example of how not to use crypto. It’s still a mystery how CSS was bungled so badly. But whatever went wrong last time wasn’t repeated this time – AACS seems to be very competently designed.

The design of AACS seems aimed at limiting entry to the market for next-gen DVD players. It will probably succeed at that goal. What it won’t do is prevent unauthorized filesharing of movies.

To understand why it meets one goal and not the other, let’s look more closely at how AACS manages cryptographic keys. The details are complicated, so I’ll simplify things a bit. (For full details see Chapter 3 of the AACS spec, or the description of the Subset Difference Method by Naor, Naor, and Lotspiech.) Each player device is assigned a DeviceID (which might not be unique to that device), and is given decryption keys that correspond to its DeviceID. When a disc is made, a random “disc key” is generated and the video content on the disc is encrypted under the disc key. The disc key is encrypted in a special way and is then written onto the disc.

When a player device wants to read a disc, the player first uses its own decryption keys (which, remember, are specific to the player’s DeviceID) to unlock the disc key; then it uses the disc key to unlock the content.
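Here is a minimal sketch of that key hierarchy in Python. It simplifies even further than the description above: each device gets a single independent key and the disc key is wrapped once per device, whereas real AACS uses the subset difference method to compress those key blocks; and the "cipher" is a hash-based XOR stand-in, not real cryptography.

```python
# Toy model of the AACS-style key hierarchy described above. This is a
# deliberate simplification, not the AACS algorithm: real AACS derives
# per-device key sets via the subset difference method, and of course
# uses real ciphers rather than this hash-based XOR stand-in.
import hashlib
import secrets

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    # XOR the data with a keystream derived from the key (illustration only).
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

toy_decrypt = toy_encrypt  # XOR stream cipher: same operation both ways

# The licensing authority hands out DeviceIDs and per-device secret keys.
# (A real player would hold only its own key; the global dict is a toy.)
device_keys = {device_id: secrets.token_bytes(16) for device_id in range(8)}

def master_disc(content: bytes, blacklist: set[int]) -> dict:
    """Encrypt content under a fresh random disc key, then wrap the disc
    key for every DeviceID that is not on the blacklist."""
    disc_key = secrets.token_bytes(16)
    return {
        "payload": toy_encrypt(disc_key, content),
        "key_blocks": {
            dev: toy_encrypt(key, disc_key)
            for dev, key in device_keys.items() if dev not in blacklist
        },
    }

def play(disc: dict, device_id: int) -> bytes:
    """A player unwraps the disc key with its own device key, then uses
    the disc key to unlock the content."""
    wrapped = disc["key_blocks"].get(device_id)
    if wrapped is None:
        raise PermissionError(f"DeviceID {device_id} has been revoked")
    disc_key = toy_decrypt(device_keys[device_id], wrapped)
    return toy_decrypt(disc_key, disc["payload"])

movie = b"feature presentation"
disc = master_disc(movie, blacklist=set())
assert play(disc, device_id=3) == movie
```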

This scheme limits entry to the market for players, because you can’t build a player without getting a valid DeviceID and the corresponding secret keys. This allows the central licensing authority, which hands out DeviceIDs and keys, to control who can make players. But there’s another way to get that information – you could reverse-engineer another player device and extract its DeviceID and keys, and then you could make your own players, without permission from the licensing authority.

To stop this, the licensing authority will maintain a blacklist of “compromised” DeviceIDs. Newly manufactured discs will be made so that their disc keys can be unlocked only by DeviceIDs that aren’t on the blacklist. If a DeviceID is added to the blacklist today, then players with that DeviceID won’t be able to play discs that are manufactured in the future; but they will still be able to play discs manufactured in the past.
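Continuing the toy sketch above, revocation is just a matter of which key blocks get written onto newly mastered discs:

```python
# DeviceID 3 is added to the blacklist. Discs mastered from now on omit
# its key block, but discs already in the field still play -- exactly
# the asymmetry described above.
new_disc = master_disc(b"new release", blacklist={3})

assert play(disc, device_id=3) == movie      # old disc: still plays
try:
    play(new_disc, device_id=3)              # new disc: key block missing
except PermissionError as err:
    print(err)
```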

CSS used a scheme rather like this, but there were only a few distinct DeviceIDs. A large number of devices shared a DeviceID, and so blacklisting a DeviceID would have caused lots of player devices in the field to break. This made blacklisting essentially useless in CSS. AACS, by contrast, uses some fancy cryptography to increase the number of distinct DeviceIDs to about two billion (2 to the 31st power). Because of this, a DeviceID will belong to one device, or at most a few devices, making blacklisting practical.
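The "fancy cryptography" also matters for disc space: wrapping the disc key once per DeviceID, as in the naive sketch above, would put billions of key blocks on every disc. The subset difference method of Naor, Naor, and Lotspiech, cited earlier, covers all non-revoked devices with at most 2r − 1 key blocks, where r is the number of revoked DeviceIDs. The comparison below just plugs numbers into that bound:

```python
# Size of a disc's key block under two schemes, for n = 2**31 devices and
# r revoked DeviceIDs. The "at most 2r - 1" figure is the bound from the
# Naor-Naor-Lotspiech subset difference paper cited above.
n = 2**31
for r in (1, 100, 10_000):
    naive = n - r        # one wrapped disc key per non-revoked device
    subset_diff = 2 * r - 1
    print(f"r={r:>6}: naive {naive:>13,} key blocks vs. at most {subset_diff:,}")
```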

This looks like a good plan for controlling entry to the market. Suppose I want to go into the player market, without signing a license with the licensing authority. I can reverse-engineer a few players to get their DeviceIDs and keys, and then build those into my product. The licensing authority will respond by figuring out which DeviceIDs I’m using, and revoking them. Then the players I have sold won’t be able to play new discs anymore, and customers will shun me.

This plan won’t stop filesharing, though. If somebody, somewhere makes his own player using a reverse-engineered DeviceID, and doesn’t release that player to the public, then he will be able to use it with impunity to play or rip discs. His DeviceID can only be blacklisted if the licensing authority learns what it is, and the authority can’t do that without getting a copy of the player. Even if a player is released to the public, it will still make all existing discs rippable. New discs may not be rippable, at least for a while, but we can expect new reverse-engineered DeviceIDs to pop up from time to time, with each one making all existing discs rippable. And, of course, none of this stops other means of ripping or capturing content, such as capturing the output of a player or infiltrating the production process.

Once again, DRM will limit competition without reducing infringement. Companies are welcome to try tactics like these. But why should our public policy support them?

UPDATE (11:30 AM): Eric Rescorla has two nice posts about AACS, making similar arguments.

Texas Bill Would Close Meetings About Computer Security

A bill (HB 3245) introduced in the Texas state legislature would exempt meetings discussing “matters relating to computer security or the security of other information resources technologies” from the state’s Open Meetings Act.

This seems like a bad idea. Meetings can already be closed if sufficient cause is shown. The mere fact that computer security, or matters relating to it, will be discussed should not in itself be sufficient cause to close a meeting. Computer security is a topic on which Texas, or any state or national government, needs all the help it can get. The public includes many experts who are willing to help. Why shut them out?

The bill is scheduled for a hearing on Monday in the Texas House State Affairs Committee. If you live in Texas, you might want to let the committee members know what you think about this.

(Thanks to Adina Levin for bringing this to my attention.)

Why Use Remotely-Readable Passports?

Yesterday at CFP, I saw an interesting panel on the proposed radio-enabled passports. Frank Moss, a State Department employee and accomplished career diplomat, is the U.S. government’s point man on this issue. He had the guts to show up at CFP and face a mostly hostile audience. He clearly believes that he and the government made the right decision, but I’m not convinced.

The new passports, if adopted, will contain a chip that stores everything on the passport’s information page: name, date and place of birth, and digitized photo. This information will be readable by a radio protocol. Many people worry that bad guys will detect and read passports surreptitiously, as people walk down the street.

Mr. Moss said repeatedly that the chip can only be read at a distance of 10 centimeters (four inches, for the metric-impaired), making surreptitious reading unlikely. Later in the panel, Barry Steinhardt of the ACLU did a live demo in which he read information off the proposed radio-chip at a distance of about one meter, using a reader device about the size of a (closed) laptop. I have no doubt that this distance could be increased by engineering the reader more aggressively.

There was lots of back-and-forth about partial safeguards that might be added, such as building some kind of foil or wires into the passport cover so that the chip could only be read when the passport was open. Such steps do reduce the vulnerability of using remotely-readable passports, but they don’t reduce it to zero.

In the Q&A session, I asked Mr. Moss directly why the decision was made to use a remotely readable chip rather than one that can only be read by physical contact. Technically, this decision is nearly indefensible, unless one wants to be able to read passports without notifying their owners – which, officially at least, is not a goal of the U.S. government’s program. Mr. Moss gave a pretty weak answer, which amounted to an assertion that it would have been too difficult to agree on a standard for contact-based reading of passports. This wasn’t very convincing, since the smart-card standard could be applied to passports nearly as-is – the only change necessary would be to specify exactly where on the passport the smart-card contacts would be. The standardization and security problems associated with contactless cards seem to be much more serious.

After the panel, I discussed this issue with Kenn Cukier of The Economist, who has followed the development of this technology for a while and has a good perspective on how we reached the current state. It seems that the decision to use contactless technology was made without fully understanding its consequences, relying on technical assurances from people who had products to sell. Now that the problems with that decision have become obvious, it’s late in the process and would be expensive and embarrassing to back out. In short, this looks like another flawed technology procurement program.

RIAA Suing i2hub Users

Yesterday the RIAA announced lawsuits against many college students for allegedly using a program called i2hub to swap copyrighted music files. RIAA is trying to paint this as an important step in their anti-infringement strategy, but it looks to me like a continuation of what they have already been doing: suing individuals for direct infringement, and trying to label filesharing technologies (as opposed to infringing uses of them) as per se illegal.

The new angle in this round of suits is that i2hub traffic uses the Internet2 network. The RIAA press release is careful to call Internet2 a “specialized” network, but many press stories have depicted it as a private network, separate from the main Internet. In fact, Internet2 is not really a separate network. It’s more like a set of express lanes for the Internet, built so that network traffic between Internet2 member institutions can go faster.

(The Washington Post article gets this point seriously wrong, calling Internet2 “a faster version of the Web”, and saying that “more and more college students have moved off the Web to trade music on Internet2, a separate network …”.)

Internet2 has probably been carrying a nonzero amount of infringing traffic for a long time, just because it is part of the Internet. What’s different about i2hub is not that some of its traffic goes over Internet2, but that it was apparently structured so that its traffic would usually travel over Internet2 links. In theory, this could make transfer of any large file, whether infringing or not, faster.

The extra speed of Internet2 doesn’t seem like much of an issue for music files, though. Music files are quite small and can be downloaded pretty quickly on ordinary broadband connections. Any speedup from using i2hub would mainly affect movie downloads, since movie files are much larger than music files. And yet it was the music industry, not the movie industry, that brought these suits.
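Some rough arithmetic makes the point. (The file sizes and line speeds below are my own assumptions about typical numbers of the day, not figures from the RIAA's filings.)

```python
# Back-of-the-envelope download times. Assumed figures: ~5 MB for a song,
# ~700 MB for a movie rip, 3 Mbps home broadband vs. a 100 Mbps campus
# path over Internet2.
def minutes(size_mb: float, mbps: float) -> float:
    return size_mb * 8 / mbps / 60

for item, size_mb in [("song, 5 MB", 5), ("movie, 700 MB", 700)]:
    for link, mbps in [("3 Mbps broadband", 3), ("100 Mbps Internet2 path", 100)]:
        print(f"{item} over {link}: {minutes(size_mb, mbps):5.1f} min")
```

The song downloads in seconds either way; only the movie sees a dramatic speedup.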

Given all of this, my guess is that the RIAA is pushing the Internet2 angle mostly for political and public relations reasons. By painting Internet2 as a separate network, the RIAA can imply that the transfer of infringing files over Internet2 is a new kind of problem requiring new regulation. And by painting Internet2 as a centrally-managed entity, the RIAA can imply that it is more regulable than the rest of the Internet.

Another distinctive aspect of i2hub is that it could only be used, supposedly, by people at universities that belong to the Internet2 consortium, which includes more than 200 schools. The i2hub website pitches it as a service just “by students, for students”. Some have characterized i2hub as a private filesharing network. That may be true in a formal sense, as not everybody could get onto i2hub. But the potential membership was so large that i2hub was, for all intents and purposes, a public system. We don’t know exactly how the RIAA or its agents got access to i2hub to gather the information behind the suits, but it’s not at all surprising that they were able to do so. If students thought that they couldn’t get caught if they shared files on i2hub, they were sadly mistaken.

[Disclaimer: Although some Princeton students are reportedly being sued, nothing in this post is based on inside information from those students (whoever they are) or from Princeton. As usual, I am not speaking for Princeton.]