Archives for 2007

AACS: Slow Start on Traitor Tracing

[Previous posts in this series: 1, 2, 3, 4, 5, 6, 7, 8.]

Alex wrote on Thursday about the next step in the breakdown of AACS, the encryption scheme used on next-gen DVD discs (HD-DVD and Blu-ray): last week a person named Arnezami discovered and published a processing key that apparently can be used to decrypt all existing discs.

We’ve been discussing AACS encryption, on and off, for several weeks now. To review the state of play: the encryption scheme serves two purposes, key distribution and traitor tracing. Key distribution ensures that every player device, except devices that have been blacklisted, can decrypt a disc. Traitor tracing helps the authorities track down which player has been compromised, if key information is leaked. The AACS authorities encode the header information for each disc in such a way that keys are distributed properly and traitor tracing can occur.

Or that’s the theory, at least. In practice, the authorities are making very little use of the traitor tracing facilities. We’re not sure why this is. They surely have an interest in tracing traitors, and failing to encode discs to facilitate traitor tracing is just a lost opportunity.

The main traitor tracing feature is the so-called sequence key mechanism. This mechanism is not used at all on any of the discs we have seen, nor have we seen any reports of its use.

A secondary traitor tracing feature involves the use of processing keys. Each player device has a unique set of a few hundred device keys, from which it can calculate a few billion different processing keys. Each processing key is computable by only a fraction of the players in the world. Each disc’s headers include a list of the processing keys that can decrypt the disc; any one of the listed processing keys is sufficient to decrypt the disc.
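
To make the header structure concrete, here is a minimal sketch in Python. Everything in it is made up for illustration (XOR stands in for real encryption, and the names are ours, not part of AACS); it captures only the property that any one of the listed processing keys is sufficient to recover the disc’s title key.

```python
# Toy model of the header layout described above. XOR stands in for real
# encryption; the real AACS disc headers are far more elaborate.

def toy_encrypt(key: int, value: int) -> int:
    return key ^ value

def toy_decrypt(key: int, blob: int) -> int:
    return key ^ blob

def make_disc_header(title_key: int, processing_keys: dict[int, int]) -> dict[int, int]:
    # One encrypted copy of the title key per listed processing key.
    return {key_id: toy_encrypt(pk, title_key) for key_id, pk in processing_keys.items()}

def player_decrypt(header: dict[int, int], key_id: int, processing_key: int) -> int:
    # A player can compute exactly one of the listed processing keys,
    # and uses it to open its own slot in the header.
    return toy_decrypt(processing_key, header[key_id])

# Two players holding different processing keys recover the same title key.
listed_keys = {0: 0x1111, 1: 0x2222, 2: 0x3333}
header = make_disc_header(title_key=0xBEEF, processing_keys=listed_keys)
assert player_decrypt(header, 0, listed_keys[0]) == 0xBEEF
assert player_decrypt(header, 2, listed_keys[2]) == 0xBEEF
```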

For some reason, all existing discs seem to list the same set of 512 processing keys. Each player will be able to compute exactly one of these processing keys. So when Arnezami leaked a processing key, the authorities could deduce that he must have extracted it from a player that knew that particular processing key. In other words, it narrowed down the identity of his player to about 0.2% of all possible players.

Because all existing discs use the same set of processing keys, the processing key leaked by Arnezami can decrypt any existing disc. Had the authorities used different sets of processing keys on different discs – which was perfectly feasible – then a single processing key would not have unlocked so many discs. Arnezami would have had to extract and publish many processing keys, which would have made his job more difficult, and would have further narrowed down which player he had.
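
A little illustrative arithmetic shows how much tracing power is being left on the table. The assumption below, that each disc partitions the world’s players into 512 groups independently of every other disc, is ours, but it conveys the general idea:

```python
# Illustrative arithmetic only: how fast leaked processing keys would narrow
# the suspect pool if each disc split the player population into 512 groups
# independently of every other disc (an assumption, not a description of AACS).
groups_per_disc = 512

for discs_ripped in range(1, 4):
    pool = groups_per_disc ** discs_ripped
    print(f"keys leaked from {discs_ripped} disc(s): "
          f"suspect pool shrinks to about 1/{pool:,} of all players")

# keys leaked from 1 disc(s): suspect pool shrinks to about 1/512 of all players
# keys leaked from 2 disc(s): suspect pool shrinks to about 1/262,144 of all players
# keys leaked from 3 disc(s): suspect pool shrinks to about 1/134,217,728 of all players
```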

The ability to use different processing key sets on different discs is part of the AACS traitor tracing facility. In failing to do this, the authorities once again failed to use the traitor tracing mechanisms at their disposal.

Why aren’t the authorities working as hard as they can to traitor-trace compromised players? Sure, the sequence key and processing key mechanisms are a bit complex, but if the authorities weren’t going to use these mechanisms, then why would they have gone to the trouble and expense of designing them and requiring all players to implement them? It’s a mystery to us.

AACS: A Tale of Three Keys

[Previous posts in this series: 1, 2, 3, 4, 5, 6, 7.]

This week brings further developments in the gradual meltdown of AACS (the encryption scheme used for HD-DVD and Blu-ray discs). Last Sunday, a member of the Doom9 forum, writing under the pseudonym Arnezami, managed to extract a “processing key” from an HD-DVD player application. Arnezami says that this processing key can be used to decrypt all existing HD-DVD and Blu-ray discs. Though this attack is currently more powerful than previous breaks, which focused on a different kind of key, its usefulness will probably diminish as AACS implementers adapt.

To explain what’s at stake, we need to describe a few more details about the way AACS manages keys. Recall that AACS player applications and devices are assigned secret device keys. Devices can use these keys to calculate a much larger set of keys called processing keys. Each AACS movie is encrypted with a unique title key, and several copies of the title key, encrypted with different processing keys, are stored on the disc. To play a disc, a device figures out which of the encrypted title keys it has the ability to decrypt. Then it uses its device keys to compute the necessary processing key, uses the processing key to decrypt the title key, and uses the title key to extract the content.
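
A minimal sketch of that chain might look like the following. The helper name and the use of single-block AES-ECB are assumptions made for illustration; the actual AACS key derivation and content encryption are more involved than this.

```python
# Sketch of the three-key chain: processing key -> title key -> content.
# Single-block AES-ECB is an illustrative stand-in, not the actual AACS scheme.
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def aes_decrypt(key: bytes, blob: bytes) -> bytes:
    decryptor = Cipher(algorithms.AES(key), modes.ECB()).decryptor()
    return decryptor.update(blob) + decryptor.finalize()

def play_disc(processing_key: bytes, encrypted_title_key: bytes,
              encrypted_content: bytes) -> bytes:
    # Step 1: the processing key (itself computed from the player's device
    # keys, not shown here) unlocks the disc's title key.
    title_key = aes_decrypt(processing_key, encrypted_title_key)
    # Step 2: the title key unlocks the content.
    return aes_decrypt(title_key, encrypted_content)
```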

These three kinds of keys have different security properties that make them more or less valuable to attackers. Device keys are the most useful. If you know the device keys for a player, you can decrypt any disc that the player can. Title keys are the least useful, because each title key works only for a single movie. (Attacks on any of these keys will be limited by disc producers’ ability to blacklist compromised players. If they can determine which device has been compromised, they can change future discs so that the broken player, or its leaked device keys, won’t be able to decrypt them.)

To date, no device keys have been compromised. All successful breaks, before Arnezami, have involved extracting title keys from player software. These attacks are rather cumbersome: before non-technical users can decrypt a movie, somebody with the means to extract the title key needs to obtain a copy of the disc and publish its title key online. Multiple web sites for sharing title keys have been deployed, but these are susceptible to legal and technical threats.

So is the new attack on the processing key comparable to learning a venerable device key or a lowly title key? The answer is that, due to a strange quirk in the way the processing keys used on existing discs were selected, the key Arnezami published apparently can be used to decrypt every HD-DVD or Blu-ray disc on the market. For the time being, knowing Arnezami’s processing key is as powerful as knowing a device key. For instance, someone could use the processing key to build a player or ripper that is able to treat all current discs as if they were unencrypted, without relying on online services or waiting for other users to extract title keys.

Yet this power will not last long. For future discs, processing key attacks will probably be no more valuable than title key attacks, working only on a single disc or a few discs at most. We’ll explain why in tomorrow’s post.

Is there any such thing as “enough” technological progress?

Yesterday, Ed considered the idea that there may be “a point of diminishing returns where more capacity doesn’t improve the user’s happiness.” It’s a provocative concept, and one that I want to probe a bit further.

One observation that seems germane is that such thoughts have a pedigree. Henry L. Ellsworth, in his 1843 report to Congress, wrote that “the advancement of the arts, from year to year, taxes our credulity and seems to presage the arrival of that period when human improvement must end.”

It seems to me that the idea of diminishing marginal returns is most at home in settings where the task or process under consideration has well-defined boundaries. For example, making steel: Larger steel mills, up to a point, are more efficient than smaller ones. Larger furnaces reduce capital costs per unit of output, and secondary functions like logistics, training and bookkeeping can be spread across larger amounts of steel without commensurate increases in their cost. But consolidating an industry, and replacing small production facilities with a larger one, does not necessarily involve any fundamental advancement in the state of the art. (It may, of course.)

Innovation—which is the real wellspring of much of human progress—tends not to follow such predictable patterns. Science textbooks like to present sanitized stories of incremental, orderly advancement, but as Thomas Kuhn famously argued, history actually abounds with disjointed progress, serendipitous accidents, and unanticipated consequences, both good and bad.

There are areas in which incremental improvement is the norm: shaving razors, compression algorithms, mileage per gallon. But in each of these areas, the technology being advanced is task-specific. Nobody is going to use their car to shave or their Mach 3 to commute to the office.

But digital computers—Turing machines—are different. It’s an old saw that a digital computer can be used to change or analyze literally any information. When it comes to computers, advancement means faster Turing machines with larger memories, in smaller physical footprints and with lower costs (including, e.g., manufacturing expense and operational electricity needs).

Ed’s observation yesterday that there is an ultimate limit to the bandwidth leading into the human brain is well taken. But in terms of all transmission of digital content globally, the “last hop” from computer to human is already a very small part of the total traffic. Mostly, traffic flows among nodes on end-to-end computer networks, among servers in a Beowulf cluster or similar setup, or even among chips on a motherboard or cores on the same chip. Technologies that advance bandwidth capabilities are useful primarily because of the ways they change what computers can do (at the human time scale). The more they advance, the more things, and the more kinds of things, computers will be capable of. It’s very unlikely we’ve thought of them all.

It is also striking how far our capability to imagine new uses for digital technology has lagged behind the advancement of the technology itself. Blogs like this one were effectively possible from the dawn of the World Wide Web (or even before), and they now seem to be a significant part of what the web can most usefully be made to do. But it took years, after the relevant technologies were available, for people to recognize and take advantage of this possibility. Likewise, much of “web 2.0” has effectively meant harnessing relatively old technologies, such as JavaScript, in new and patently unanticipated ways.

The literature of trying to imagine far-out implications of technological advancement is at once both exciting and discouraging: Exciting because it shows that much of what we can imagine probably will happen eventually, and discouraging because it shows that the future is full of major shifts, obvious in retrospect, to which we were blind up until their arrival.

I occasionally try my hand at the “big picture” prognostication game, and enjoy reading the efforts of others. But in the end I’m left feeling that the future, though bright, is mysterious. I can’t imagine a human community, even in the distant future, that has exhausted its every chance to create, innovate and improve its surroundings.

How Much Bandwidth is Enough?

It is a matter of faith among infotech experts that (1) the supply of computing and communications will increase rapidly according to Moore’s Law, and (2) the demand for that capacity will grow roughly as fast. This mutual escalation of supply and demand causes the rapid change we see in the industry.

It seems to be a law of physics that Moore’s Law must terminate eventually – there are fundamental physical limits to how much information can be stored, or how much computing can be accomplished in a second, within a fixed volume of space. But these hard limits may be a long way off, so it seems safe to assume that Moore’s Law will keep operating for many more cycles, as long as there is demand for ever-greater capacity.

Thus far, whenever more capacity comes along, new applications are invented (or made practical) to use it. But will this go on forever, or is there a point of diminishing returns where more capacity doesn’t improve the user’s happiness?

Consider the broadband link going into a typical home. Certainly today’s homeowner wants more bandwidth, or at least can put more bandwidth to use if it is provided. But at some point there is enough bandwidth to download any reasonable webpage or program in a split second, or to provide real-time ultra-high-def video streams to every member of the household. When that day comes, do home users actually benefit from having fatter pipes?

There is a plausible argument that a limit exists. The human sensory system has limited (though very high) bandwidth, so it doesn’t make sense to direct more than a certain number of bits per second at the user. At some point, your 3-D immersive stereo video has such high resolution that nobody will notice any improvement. The other senses have similar limits, so at some point you have enough bandwidth to saturate the senses of everybody in the home. You might want to send information to devices in the home as well, but how far can that grow?
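
For a sense of scale, here is one back-of-the-envelope calculation. The resolution and frame-rate figures below are arbitrary assumptions chosen to be beyond what most viewers can distinguish, not measurements of perceptual limits; the point is only that even an absurdly generous uncompressed stream tops out at a finite bit rate.

```python
# Back-of-the-envelope only: raw bit rate of an uncompressed stereo video
# stream at deliberately generous (assumed) parameters.
width, height = 7680, 4320       # an "8K" frame
bytes_per_pixel = 3              # 24-bit color, no compression
frames_per_second = 120
streams = 2                      # stereo: one full stream per eye

bits_per_second = width * height * bytes_per_pixel * 8 * frames_per_second * streams
print(f"{bits_per_second / 1e9:.0f} Gbit/s")   # about 191 Gbit/s
```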

Such questions may not matter quite yet, but they will matter a great deal someday. The structure of the technology industries, not to mention technology policies, is built around the idea that people will keep demanding more-more-more and the industry will be kept busy providing it.

My gut feeling is that we’ll eventually hit the point of diminishing returns, but it is a long way off. And I suspect we’ll hit the bandwidth limit before we hit the computation and storage limits. I am far from certain about this. What do you think?

(This post was inspired by a conversation with Tim Brown.)

SonyBMG (Accidentally?) Giving Away MP3 of New Billy Joel Song

Billy Joel’s new song, “All My Life,” is being released in stages. Presently it’s available for free streaming from People Magazine’s site. Later in the month it will be available for purchase only at the iTunes Music Store. After that it will be released in other online stores. Or at least that was the plan of the record company, SonyBMG.

As an anonymous reader points out, although the People site looks like it is streaming the song, thus giving users no easy way to copy it, what the site actually does is download a high-quality MP3 file (unencumbered by any copy protection) to the user’s computer, and then play the MP3. The MP3 is dropped in a place where ordinary users won’t stumble across it, but if you know where to look you’ll find it sitting on your computer after you listen to the “stream”. In other words, SonyBMG is, perhaps inadvertently, giving away high-quality MP3s of “All My Life.”

(Technical details, for those who care: The “streaming” control is actually a Flash object that downloads and plays an MP3. It uses the normal browser mechanism to do the downloading, which means that the browser (Firefox, at least) automatically squirrels away a copy of the downloaded file. Result: the MP3 file is left on the user’s system.)
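
For the curious, here is a minimal sketch of how one might hunt for such a leftover file. The directory to scan is something you supply yourself (browser cache layouts vary, and this says nothing about any particular browser’s format); the script simply flags files that begin with an MP3 signature, either an ID3v2 tag or an MPEG audio frame sync.

```python
# Minimal sketch: scan a directory (e.g. a browser cache, path supplied by
# the user) for files that start with an MP3 signature. Illustrative only.
import sys
from pathlib import Path

def looks_like_mp3(path: Path) -> bool:
    with path.open("rb") as f:
        head = f.read(3)
    if head.startswith(b"ID3"):          # ID3v2-tagged MP3
        return True
    # Raw MPEG audio frames begin with an 11-bit sync pattern of all ones.
    return len(head) >= 2 and head[0] == 0xFF and (head[1] & 0xE0) == 0xE0

def find_mp3s(directory: str):
    for p in Path(directory).rglob("*"):
        if p.is_file():
            try:
                if looks_like_mp3(p):
                    yield p
            except OSError:
                continue

if __name__ == "__main__":
    for hit in find_mp3s(sys.argv[1]):
        print(hit)
```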

The obvious question is why SonyBMG did this. It could be (1) a mistake by an engineer who didn’t realize that the canned music-player control he was using operated by downloading an MP3. Or perhaps (2) the engineer didn’t realize that the browser would keep a copy of the file. Or it could be that (3) SonyBMG knew about all of this and figured users wouldn’t notice, or (4) they figured that any user who could find the MP3 could capture an ordinary stream anyway. For what it’s worth, my money is on (2).