
CD DRM: Compatibility and Software Updates

Alex and I are working on an academic paper, “Lessons from the Sony CD DRM Episode”, which will analyze several not-yet-discussed aspects of the XCP and MediaMax CD copy protection technologies, and will try to put the Sony CD episode in context and draw lessons for the future. We’ll post the complete paper here next week. Until then, we’ll post drafts of a few sections here. We have two reasons for this: we hope the postings will be interesting in themselves, and we hope your comments will help us improve the paper.

Today’s section will be (in the final paper) the last part of the technical core of the paper. Readers of the final paper will have seen the rest of our technical analysis by this point. Blog readers haven’t seen it all yet – stay tuned.

Please note that this is a draft and should not be formally quoted or cited. The final version of our entire paper will be posted here when it is ready.

Compatibility and Software Updates

Compared to other media on which software is distributed, compact discs have a very long life. Many compact discs will still be inserted into computers and other players twenty years or more after they are first bought. If a particular version of (say) active protection software is burned onto a new CD, that software version may well try to install and run itself decades after it was first developed.

The same is not true of conventional software, even when it ships on a CD-ROM. Very few if any of today’s Windows XP CDs will be inserted into computers in 2026; but CDs containing today’s CD DRM software will be. Accordingly, CD DRM software faces a much more serious issue of compatibility with future systems.

The future compatibility problem has two distinct aspects: safety, or how to avoid incompatibilities that cause crashes or malfunction of other software, and efficacy, or how to ensure that the desired anti-copying features remain effective.

Protecting Safety by Deactivating Old Software

Safety is the easier attribute to protect, and in most respects the more important. One way to protect safety is to design the DRM software so that it is likely to be inert and harmless on future systems. Both XCP and MediaMax do this by relying on the Windows Autorun feature, which is unlikely to be supported in future Windows versions for security reasons. If, say, the upcoming Windows Vista does not support Autorun (or supports it but disables it by default), then XCP and MediaMax will have no effect on Vista systems. Perhaps the use of Autorun by XCP and MediaMax was a deliberate design decision to ensure safety; but we suspect that it was a side-effect of a design choice that was expedient for other reasons.

Another way to protect safety is to build a sunset date into the software, and to program the software to be as inert as possible once the sunset date is reached. Building in a sunset after (say) three years would protect against many safety problems; and it would have little effect on the record label’s business model, as we would expect nearly all revenue from monetizing new uses of the music to have been extracted within the first three years after the disc is pressed. If a customer is ever going to pay for iPod downloading, she is likely to do so within the first three years after the CD is pressed.
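
As a rough illustration, a sunset check can be as simple as comparing today's date against a cutoff stamped into the software when the disc is mastered. The Python sketch below is ours, and the three-year cutoff is only an example; neither XCP nor MediaMax actually does anything like this.

    from datetime import date

    # Illustrative sunset date, imagined as stamped in when the disc is mastered,
    # roughly three years after pressing. Neither XCP nor MediaMax does this.
    SUNSET_DATE = date(2009, 1, 1)

    def drm_should_activate(today=None):
        """Return True only while the disc is still inside its sunset window."""
        today = today or date.today()
        # Past the sunset date the installer declines to hook anything, so the
        # software stays inert on whatever systems exist by then.
        return today < SUNSET_DATE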

Updating the Software

Like any software vendor, a DRM vendor can issue new versions of its products. A new version can be shipped on newly pressed CDs, but existing CDs cannot be modified retroactively.

Instead, the vendor can offer updates, which can be delivered either by download or on new CDs. Downloads can occur immediately, but only on machines that are connected to the Internet. CD delivery can potentially reach more machines, but is slower and less certain.

Either mode of distribution can be used straightforwardly if the user wants to cooperate. Users will generally cooperate with updates that only provide safety on new systems, or that otherwise increase the software’s value to the user. But updates that merely retain the efficacy of the software’s usage restriction mechanisms will not be welcomed by users.

Users have many ways to block the downloading or installation of updates. They can write-protect the software’s code, so that it cannot be updated. They can configure the system to block network connections to the vendor’s servers. They can use standard security tools, such as personal firewalls, to stop the downloads. System security tools are often well suited for such a task, being programmed to block unwanted network connections, downloads, and code installation. If a current security tool does not block updates of CD DRM software, the tool vendor has an incentive to make future versions do so.

A DRM vendor who wants to offer efficacy-related updates, recognizing that users will not want those updates, has two options. The vendor can offer updates and hope that many users will not bother to block them. From the record label’s standpoint, prolonging the system’s efficacy for some users is better than nothing. Alternatively, the vendor can try to force users to accept updates.

Forcing Updates

If a user can block updates of the DRM software on his machine, the vendor’s best strategy for forcing an update is somehow to convince the user that the update is in his best interest. This can be done by making a non-updated system painful to use.

If we rule out dangerous and almost certainly illegal approaches such as logic bombs that destroy a noncompliant user’s files or hold his computer hostage, the vendor’s best option is to make the DRM software block all access to protected CDs until the user updates the software. The software might check periodically with some server on the Internet, which would produce some kind of cryptographic assertion saying which versions are allowed to continue operating without an update, as of some date and time. If the software on the user’s system noticed that no recent certificate existed that allowed its own version to keep operating, it would go into a locked-down mode that blocked all access to protected discs but allowed software updates. The user would then have to update to a new version in order to get access to his protected CDs.
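
To make the scheme concrete, here is a rough Python sketch of such a version check. Everything in it is an assumption made for illustration: the version string, the thirty-day freshness window, and the HMAC that stands in for what would really be a public-key signature from the vendor. Note that the same staleness rule that forces updates is what locks down offline machines, the first drawback discussed below.

    import hmac
    import json
    from datetime import datetime, timedelta

    MY_VERSION = "2.1"                   # hypothetical version of the installed DRM software
    MAX_CERT_AGE = timedelta(days=30)    # illustrative freshness requirement
    VENDOR_KEY = b"shared secret"        # stand-in; a real scheme would use a public-key signature

    def assertion_is_genuine(blob, signature):
        # HMAC stands in for the vendor's signature over the assertion.
        expected = hmac.new(VENDOR_KEY, blob, "sha256").digest()
        return hmac.compare_digest(expected, signature)

    def decide_mode(blob, signature, now):
        """Return 'normal' or 'locked_down' based on the newest assertion seen so far."""
        if blob is None or not assertion_is_genuine(blob, signature):
            return "locked_down"                  # missing or tampered assertion
        cert = json.loads(blob)
        issued = datetime.fromisoformat(cert["issued"])
        if now - issued > MAX_CERT_AGE:
            return "locked_down"                  # assertion too stale; perhaps we are offline
        if MY_VERSION not in cert["allowed_versions"]:
            return "locked_down"                  # this version was retired; demand an update
        return "normal"                           # keep granting access to protected discs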

This approach could force updates on some users and thereby prolong the efficacy of the DRM for those users. However, it also has several drawbacks. If the computer is not connected to the Internet, the software will eventually lock down the user’s music because it cannot see any certificates that allow it to continue. (The software could continue working if it can’t see the Internet, but that would allow users to block updates indefinitely by configuring their systems to stop the DRM software from making network connections.) A bug in the software could accidentally cause it to lock itself down irreversibly. The software could also lock itself down if the vendor’s Internet site is shut down, for example if the vendor goes bankrupt.

Locking down the music, or forcing a restrictive software update, can also be counterproductive, by giving the user a reason to defeat or remove the DRM software. (Users could also defeat the timeout mechanism by misleading the DRM software about the date and time, but we expect that most users with the inclination to do that would choose instead to remove the DRM software altogether.) The software is more likely to remain on the user’s system if it does not behave annoyingly. Automatic updates can reduce the DRM system’s efficacy if they just drive users to remove the DRM software. From the user’s standpoint, every software update is a security risk, because it might carry hostile or buggy code.

Given the difficulties associated with forced updates, and the user backlash they likely would have triggered, we are not surprised that neither XCP nor MediaMax chose to use them.

Spyware Workshop, March 16-17

Helen Nissenbaum and I are co-organizing an interdisciplinary workshop on spyware, in New York on March 16 (evening) and March 17 (day). We have a great-looking lineup of speakers, reflecting a range of viewpoints on technical, legal, and policy aspects of the spyware problem.

The workshop is free and open to the public, but we ask that you let us know if you plan to attend. For more information, see the workshop announcement.

The workshop is co-organized by NYU’s Information Law Institute and Princeton’s Center for Information Technology Policy.

CD DRM: Threat Models and Business Models

Alex and I are working on an academic paper, “Lessons from the Sony CD DRM Episode”, which will analyze several not-yet-discussed aspects of the XCP and MediaMax CD copy protection technologies, and will try to put the Sony CD episode in context and draw lessons for the future. We’ll post the complete paper here next Friday. Until then, we’ll post drafts of a few sections here. We have two reasons for this: we hope the postings will be interesting in themselves, and we hope your comments will help us improve the paper.

Today’s excerpt is from a section early in the paper, where we are still setting the scene before the main technical discussion begins:

Threat Models and Business Models

Before analyzing the security of any system, we need to ask what the system is trying to accomplish: what its threat model is. In the case of CD DRM, the system’s goals are purely economic, and the technical goals of the system exist only to protect or enable the business models of the record label and the DRM vendor. Accordingly, any discussion of threat models must begin and end by talking about business models.

It is important to note that the record label and the DRM vendor are separate entities whose goals and incentives are not always aligned. Indeed, we will see that incentive differences between the label and the DRM vendor can be an important factor in understanding the design and deployment of CD DRM systems.

Record Label Goals

The record label would like to prevent music from the CD from becoming generally available on peer-to-peer file sharing networks, but this goal is clearly infeasible. If even one user succeeds in ripping an unprotected copy of the music and putting that copy onto P2P networks, then the music will be generally available. Clearly no CD DRM system can be nearly strong enough to stop this from happening; and as we will see below, real systems do not even try to achieve the kind of comprehensive coverage of all major computing platforms that would be needed as a prerequisite for stopping P2P sharing of protected music. We conclude that the goal of CD DRM systems cannot be to prevent P2P file sharing.

The record label’s goal must therefore be to stop many users from making disc-to-disc copies or from engaging in other forms of local copying or use of the music. By preventing local copying, the record company might be able to sell more copies of the music. For example, if Alice cannot make a copy of a CD to give to Bob, Bob might buy another copy from the record label.

By controlling other local uses, the record company might be able to charge extra fees for those uses. For example, if the record label can stop Alice from downloading music from a CD into her iPod, the label might be able to charge Alice an extra fee for iPod downloads. Charging extra for iPod downloads creates a new revenue stream for the label, but it also reduces the value to users of the original CD and therefore reduces the revenue that the label can extract from CD sales. Whether the new revenue stream outweighs the loss of CD revenue depends on detailed assumptions about customer preferences, which may not be easy for the label to determine in practice. For our purposes, it suffices to say that the label wants to establish control over the uses made by at least some users, because that control will tend generally to increase the label’s profit.

We note also that the record company’s profit-maximizing strategy in this regard is largely independent of the contours of copyright law. Whether the label would find it more profitable to control a use, as opposed to bundling it with the CD purchase, is a separate question from whether the law gives the label the right to file lawsuits relating to that use. Attempting to enforce copyright law exactly as written is almost certainly not the record label’s profit-maximizing strategy.

Monetizing the Platform

Even beyond its effect on controlling copying and use of content, CD DRM can generate revenue for the record label because it installs and runs software on users’ computers. The label can monetize this installed platform in various ways. For example, the DRM software comes with a special music-player application which is used to listen to the protected disc. This application can display advertisements or other promotional material that creates value for the label. Alternatively, the platform can gather information about the user’s music listening habits, and that information can be exploited for some business purpose. If these tactics are taken too far, the DRM software can become spyware. Even if these tactics are pursued more moderately, users may still object; but the record company may use these tactics anyway if it believes the benefits to it outweigh the costs.

DRM Vendor Goals

The DRM vendor’s primary goal, obviously, is to provide value to the record label, in order to maximize the price that the vendor can charge the label for using the DRM technology. If this were the only factor, then the incentives of the vendor and the label would be perfectly aligned and there would be no need to consider the vendor’s incentives separately.

However, there are at least two ways in which the DRM vendor’s incentives diverge from the record label’s. First, the vendor has a much larger tolerance for risk than the label does. The label is a large, established business with a valuable brand name. The vendor (at least in the cases at issue here) is a start-up company struggling to establish itself. The label has much more to lose than the vendor does if something goes horribly wrong. Accordingly, we can expect the vendor to be much more willing to accept security risks than the label is.

The second incentive difference is that the vendor can monetize the installed platform in ways that are not available to the record label. For example, once the vendor’s software is installed on a user’s system, the software can control copying and use of other labels’ CDs. Having a larger installed base makes the vendor’s product more
attractive to other labels. Because the vendor gets this extra benefit from installing the software, the vendor has an incentive to be more aggressive about pushing the software onto users’ computers than the label would be.

In short, the vendor’s incentives diverge from the label’s incentives in ways that make the vendor more likely to (a) cut corners and accept security and reliability risks, and (b) push its software onto more users’ computers, even in some cases where the label would prefer to do otherwise. If the label knew everything about how the vendor’s technology worked, then this would not be an issue – the label would simply insist that the vendor protect its interests. But if some aspects of the vendor’s design are withheld from the label as proprietary, or if the label is not extremely diligent in monitoring the vendor’s design choices – both of which are likely in practice – then the vendor will sometimes act against the label’s interests.

Analog Hole Bill Would Impose a Secret Law

If you’ve been reading here lately, you know that I’m no fan of the Sensenbrenner/Conyers analog hole bill. The bill would require almost all analog video devices to implement two technologies called CGMS-A and VEIL. CGMS-A is reasonably well known, but the VEIL content protection technology is relatively new. I wanted to learn more about it.

So I emailed the company that sells VEIL and asked for a copy of the specification. I figured I would be able to get it. After all, the bill would make compliance with the VEIL spec mandatory – the spec would in effect be part of the law. Surely, I thought, they’re not proposing passing a secret law. Surely they’re not going to say that the citizenry isn’t allowed to know what’s in the law that Congress is considering. We’re talking about television here, not national security.

After some discussion, the company helpfully explained that I could get the spec, if I first signed their license agreement. The agreement requires me (a) to pay them $10,000, and (b) to promise not to talk to anybody about what is in the spec. In other words, I can know the contents of the bill Congress is debating, but only if I pay $10k to a private party, and only if I promise not to tell anybody what is in the bill or engage in public debate about it.

Worse yet, this license covers only half of the technology: the VEIL decoder, which detects VEIL signals. There is no way you or I can find out about the encoder technology that puts VEIL signals into video.

The details of this technology are important for evaluating this bill. How much would the proposed law increase the cost of televisions? How much would it limit the future development of TV technology? How likely is the technology to mistakenly block authorized copying? How adaptable is the technology to the future? All of these questions are important in debating the bill. And none of them can be answered if the technology part of the bill is secret.

Which brings us to the most interesting question of all: Are the members of Congress themselves, and their staffers, allowed to see the spec and talk about it openly? Are they allowed to consult experts for advice? Or are the full contents of this bill secret even from the lawmakers who are considering it?

Google Video and Privacy

Last week Google introduced its video service, which lets users download free or paid-for videos. The service’s design is distinctive in many ways, not all of them desirable. One of the distinctive features is a DRM (anti-infringement) mechanism which is applied if the copyright owner asks for it. Today I want to discuss the design of Google Video’s DRM, and especially its privacy implications.

First, some preliminaries. Google’s DRM, like everybody else’s, can be defeated without great difficulty. Like all DRM schemes that rely on encrypting files, it is vulnerable to capture of the decrypted file, or to capture of the keying information, either of which will let an adversary rip the video into unprotected form. My guess is that Google’s decision to use DRM was driven by the insistence of copyright owners, not by any illusion that the DRM would stop infringement.

The Google DRM system works by trying to tether every protected file to a Google account, so that the account’s username and password have to be entered every time the file is viewed. From the user’s point of view, this has its pros and cons. On the one hand, an honest user can view his video on any Windows PC anywhere; all he has to do is move the file and then enter his username and password on the new machine. On the other hand, the system works only when connected to the net, and it carries privacy risks.

The magnitude of privacy risk depends on the details of the design. If you’re going to have a DRM scheme that tethers content to user accounts, there are three basic design strategies available, which differ according to how much information is sent to Google’s servers. As we’ll see, Google apparently chose the design that sends the most information and so carries the highest privacy risk for users.

The first design strategy is to encrypt files so that they can be decrypted without any participation by the server. You create an encryption key that is derived from the username and password associated with the user’s Google account, and you encrypt the video under that key. When the user wants to play the video, software on the user’s own machine prompts for the username and password, derives the key, decrypts the video, and plays it. The user can play the video as often as she likes, without the server being notified. (The server participates only when the user initially buys the video.)
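
Here is a minimal sketch of this first design, assuming a standard password-based key derivation (PBKDF2). Google has not published how its keys are actually derived or which cipher it uses, so every name and parameter below is illustrative.

    import hashlib

    def derive_account_key(username, password, salt):
        """Derive a content key purely from the account credentials, so playback
        needs no contact with the server after the initial purchase."""
        secret = (username + ":" + password).encode("utf-8")
        return hashlib.pbkdf2_hmac("sha256", secret, salt, 100_000, dklen=32)

    def play_locally(encrypted_video, username, password, salt):
        key = derive_account_key(username, password, salt)
        # A real player would decrypt the file under `key` with a symmetric
        # cipher such as AES and hand the plaintext to the codec. The server
        # never learns that a playback happened.
        ...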

This design is great from a privacy standpoint, but it suffers from two main drawbacks. First, if the user changes the password in her Google account, there is no practical way to update the user’s video files. The videos can only be decrypted with the user’s old password (the one that was current when she bought the videos), which will be confusing. Second, there is really no defense against account-sharing attacks, where a large group of users shares a single Google account, and then passes around videos freely among themselves.

The second design tries to address both of these problems. In this design, a user’s files are encrypted under a key that Google knows. Before the user can watch videos on a particular machine, she has to activate her account on that machine, by sending her username and password to a Google server, which then sends back a key that allows the unlocking of that user’s videos on that machine. Activation of a machine can last for days, or weeks, or even forever.

This design addresses the password-change problem, because the Google server always knows the user’s current password, so it can require the current password to activate an account. It also addresses the account-sharing attack, because a widely-shared account will be activated on a suspiciously large number of machines. By watching where and how often an account is activated, Google can spot sharing of the account, at least if it is shared widely.
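
A rough server-side sketch of this second design follows, with everything in it chosen for illustration: the activation lifetime, the machine threshold, and the placeholder credential and key-store functions. The server sees activations, not individual viewings.

    from datetime import datetime, timedelta

    ACTIVATION_LIFETIME = timedelta(days=30)   # illustrative
    MAX_ACTIVE_MACHINES = 5                    # illustrative sharing threshold

    # account -> {machine_id: last activation time}; a real service would use a database
    activations = {}

    def check_password(account, password):
        return True                            # placeholder credential check

    def lookup_account_key(account):
        return b"\x00" * 32                    # placeholder key that unlocks this account's videos

    def activate(account, password, machine_id):
        """Record an activation and hand back the unlocking key for this account."""
        if not check_password(account, password):   # always checks the *current* password
            raise PermissionError("bad credentials")
        now = datetime.utcnow()
        machines = activations.setdefault(account, {})
        machines[machine_id] = now
        # Widely shared accounts show up as activations on many machines at once.
        live = [m for m, t in machines.items() if now - t < ACTIVATION_LIFETIME]
        if len(live) > MAX_ACTIVE_MACHINES:
            raise PermissionError("account active on suspiciously many machines")
        return lookup_account_key(account)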

In this second design, more information flows to Google’s servers – Google learns which machines the user watches videos on, and when the user first uses each of the machines. But they don’t learn which videos were watched when, or which videos were watched on which machine, or exactly when the user watches videos on a given machine (after the initial activation). This design does have privacy drawbacks for users, but I think few users would complain.

In the third design, the user’s computer contacts Google’s server every time the user wants to watch a protected video, transmitting the username and password, and possibly the identity of the video being watched. The server then provides the decryption key needed to watch that particular video; after showing the video the software on the user’s computer discards the key, so that another handshake with the server is needed if the user wants to watch the same video later.
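
Here is a sketch of the per-playback handshake in this third design, again with placeholder helpers; the point is how much information accumulates in the server's log, especially if the handshake includes the video's identity.

    from datetime import datetime

    playback_log = []   # the privacy issue: the server can keep a row for every viewing

    def check_password(account, password):
        return True                        # placeholder credential check

    def lookup_video_key(video_id):
        return b"\x00" * 16                # placeholder per-video decryption key

    def request_playback_key(account, password, video_id, machine_id):
        """Hand out the key for one playback; the client discards it afterwards."""
        if not check_password(account, password):
            raise PermissionError("bad credentials")
        # Everything below is information the server learns on every single viewing.
        playback_log.append({
            "account": account,
            "video": video_id,             # only if the video's identity is transmitted
            "machine": machine_id,
            "time": datetime.utcnow(),
        })
        return lookup_video_key(video_id)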

Google hasn’t revealed whether or not they send the identity of the video to the server. There are two pieces of evidence to suggest that they probably do send it. First, sending it is the simplest design strategy, given the other things we know about Google’s design. Second, Google has not said that they don’t send it, despite some privacy complaints about the system. It’s a bit disappointing that they haven’t answered this question one way or the other, either to disclose what information they’re collecting, or to reassure their users. I’d be willing to bet that they do send the identity of the video, but that bet is not a sure thing. [See update below.]

This third design is the worst one from a privacy standpoint, giving the server a full log of exactly where and when the user watches videos, and probably which videos she watches. Compared to the second design, this one creates more privacy risk but has few if any advantages. The extra information sent to the server seems to have little if any value in stopping infringement.

So why did Google choose a less privacy-friendly solution, even though it provided no real advantage over a more privacy-friendly one? Here I can only speculate. My guess is that Google is not as attuned to this kind of privacy issue as they should be. The company is used to logging lots of information about how customers use its services, so a logging-intensive solution would probably seem natural, or at least less unnatural, to its engineers.

In this regard, Google’s famous “don’t be evil” motto, and customers’ general trust that the company won’t be evil, may get Google into trouble. As more and more data builds up in the company’s disk farms, the temptation to be evil only increases. Even if the company itself stays non-evil, its data trove will be a massive temptation for others to do evil. A rogue employee, an intruder, or just an accidental data leak could cause huge problems. And if customers ever decide that Google might be evil, or cause evil, or carelessly enable evil, the backlash would be severe.

Privacy is for Google what security is for Microsoft. At some point Microsoft realized that a chain of security disasters was one of the few things that could knock the company off its perch. And so Bill Gates famously declared security to be job one, thousands of developers were retrained, and Microsoft tried to change its culture to take security more seriously.

It’s high time for Google to figure out that it is one or two privacy disasters away from becoming just another Internet company. The time is now for Google to become a privacy leader. Fixing the privacy issues in its video DRM would be a small step toward that goal.

[Update (Feb. 9): A Google representative confirms that in the current version of Google Video, the identity of the video is sent to their servers. They have updated the service’s privacy policy to disclose this clearly.]