Archives for 2006

USACM Policy Statement on DRM

I’m pleased to post here a new policy statement on DRM, issued by USACM, the U.S. public policy committee of ACM, the leading professional society for computer scientists. It’s a balanced yet strong statement of principles that can be applied to many public policy questions relating to DRM. I helped to draft it, and I support it. USACM has posted the original document in PDF format.

USACM is a valuable resource on infotech policy issues. And they have a useful policy blog too.

USACM Policy Recommendations on Digital Rights Management

February 2006

BACKGROUND:

New technologies have remade the consumer entertainment landscape, allowing creative content – such as movies, television, and radio programming – to be delivered in digital form. Because exact copies of digital content can be widely and quickly distributed, some content distributors are employing technical protection systems, often characterized as “digital rights management (DRM)” technology, to manage consumer uses of copyrighted content. In theory, DRM may prevent the making and distribution of infringing copies of digital works. However, use of these technologies has created controversy, especially regarding issues of “fair use” and the public interest. In some cases, DRM technologies have been found to undermine consumers’ rights, infringe consumer privacy, and damage the security of consumers’ computers. One notable example was the software distributed with compact discs in 2005 by Sony BMG, which created security and privacy vulnerabilities on consumers’ computers; Sony subsequently withdrew the product in response to public criticism and legal action.

The marketplace should determine the success or failure of DRM technologies, but content distributors are increasingly turning to legislatures or the courts to erect new legal mandates in place of long-standing copyright regimes. DRM systems should be mechanisms for reinforcing existing legal constraints on behavior, not mechanisms for creating new legal constraints. Striking a balance among consumers’ rights, the public interest, and protection of valid copyright interests is no simple task for technologists or policymakers. For this reason, USACM has developed the following recommendations on this important issue.

RECOMMENDATIONS:

Competition: Public policy should enable a variety of DRM approaches and systems to emerge, should allow and facilitate competition among them, and should encourage interoperability among them. No proprietary DRM technology should be mandated for use in any medium.

Copyright Balance: Because lawful use (including fair use) of copyrighted works is in the public’s best interest, a person wishing to make lawful use of copyrighted material should not be prevented from doing so. As such, DRM systems should be mechanisms for reinforcing existing legal constraints on behavior (arising from copyright law or from reasonable contracts), not mechanisms for creating new legal constraints. Appropriate technical and/or legal safeguards should be in place to preserve lawful uses in cases where DRM systems cannot distinguish lawful uses from infringing ones.

Consumer Protection: DRM should not be used to interfere with the rights of consumers. Neither should DRM technologies interfere with any technology or use of consumer systems that are unrelated to the copyrighted items being managed. Policymakers should actively monitor actual use of DRM and amend policies as necessary to protect these rights and interests.

Privacy and Consent: Public policy should ensure that DRM systems may collect, store, and redistribute private information about users only to the extent required for their proper operation, that they follow fair information practices, and that they are subject to informed consent by users.

Research and Public Discourse: DRM systems and policies should not interfere with legitimate research, with discourse about research results, or with other matters of public concern. Laws and regulations concerning DRM should contain explicit provisions to protect this principle.

Targeted Policies: Public policies meant to reinforce copyright should be limited to applications where copyright interests are actually at stake. Laws and regulations concerning DRM should have limited scope, applying only where there is a realistic risk of copyright infringement.

Nuts and Bolts of Network Discrimination

One of the reasons the network neutrality debate is so murky is that relatively few people understand the mechanics of traffic discrimination. I think that in reasoning about net neutrality it helps to understand how discrimination would actually be put into practice. That’s what I want to explain today. Don’t worry, the details aren’t very complicated.

Think of the Internet as a set of routers (think: metal boxes with electronics inside) connected by links (think: long wires). Packets of data get passed from one router to another, via links. A packet is forwarded from router to router, until it arrives at its destination.

Focus now on a single router. It has several incoming links on which packets arrive, and several outgoing links on which it can send packets. When a packet shows up on an incoming link, the router will figure out (by methods I won’t describe here) on which outgoing link the packet should be forwarded. If that outgoing link is free, the packet can be sent out on it immediately. But if the outgoing link is busy transmitting another packet, the newly arrived packet will have to wait – it will be “buffered” in the router’s memory, waiting its turn until the outgoing link is free.

Buffering lets the router deal with temporary surges in traffic. But if packets keep showing up faster than they can be sent out on some outgoing link, the number of buffered packets will grow and grow, and eventually the router will run out of buffer memory.

At that point, if one more packet shows up, the router has no choice but to discard a packet. It can discard the newly arriving packet, or it can make room for the new packet by discarding something else. But something has to be discarded.
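To make the mechanics concrete, here is a minimal Python sketch of an output link with a fixed-size buffer. The names and the drop-the-newcomer policy are mine for illustration; real routers are far more elaborate.

    from collections import deque

    class OutputLink:
        """One outgoing link with a fixed-size packet buffer."""

        def __init__(self, capacity):
            self.capacity = capacity   # maximum packets we can buffer
            self.queue = deque()       # packets waiting for the link

        def enqueue(self, packet):
            """Buffer an arriving packet; drop it if the buffer is full."""
            if len(self.queue) >= self.capacity:
                return False           # "drop-tail": discard the newcomer
            self.queue.append(packet)
            return True

        def transmit(self):
            """When the link is free, send the packet that waited longest."""
            return self.queue.popleft() if self.queue else None

Dropping the newcomer is only one policy; as just noted, the router could instead make room by discarding a packet already in the buffer.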

(This is one illustration of the “best effort” principle, which is one of the clever engineering decisions that made the Internet feasible. The Internet will do its best to deliver each packet promptly, but it doesn’t make any guarantees. It’s up to software that uses the Internet Protocol to detect dropped packets and recover. The software you’re using to retrieve these words can, and probably often does, recover from dropped packets.)

When a router is forced to discard a packet, it can discard any packet it likes. One possibility is that it assigns priorities to the packets, and always discards the packet with lowest priority. The technology doesn’t constrain how packets are prioritized, as long as there is some quick way to find the lowest-priority packet when it becomes necessary to discard something.
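As a sketch of that idea (hypothetical Python again, not any vendor's code), the buffer can keep packets in a heap keyed on priority, so the lowest-priority packet is always cheap to find and evict:

    import heapq

    class PriorityBuffer:
        """Buffer that, when forced to discard, evicts the lowest priority.
        Convention: a larger number means higher priority."""

        def __init__(self, capacity):
            self.capacity = capacity
            self.heap = []        # min-heap ordered by (priority, arrival)
            self.arrivals = 0     # tie-breaker keeps equal priorities FIFO

        def enqueue(self, packet, priority):
            """Buffer a packet; return the packet discarded, if any."""
            self.arrivals += 1
            heapq.heappush(self.heap, (priority, self.arrivals, packet))
            if len(self.heap) > self.capacity:
                # Discard only under pressure: this is what makes the
                # discrimination "minimal" in the sense defined below.
                return heapq.heappop(self.heap)[2]   # lowest priority
            return None

This sketch covers only the discard decision; the order in which buffered packets are transmitted is a separate question (see the postscript at the end of this post).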

This mechanism defines one type of network discrimination, which prioritizes packets and discards low-priority packets first, but only discards packets when that is absolutely necessary. I’ll call it minimal discrimination, because it only discriminates when it can’t serve everybody.

With minimal discrimination, if the network is not crowded, lots of low-priority packets can get through. Only when there is an unavoidable conflict with high-priority packets is a low-priority packet inconvenienced.

Contrast this with another, more drastic form of discrimination, which discards some low-priority packets even when it is possible to forward or deliver every packet. A network might, for example, limit low-priority packets to 20% of the network’s capacity, even if part of the other 80% is idle. I’ll call this non-minimal discrimination.
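A sketch of that hypothetical 20% cap, mostly to show how little machinery it needs (the numbers and names are illustrative only):

    class CappedLink:
        """Non-minimal discrimination: low-priority traffic may use at
        most a fixed share of transmission slots, even when the link
        would otherwise sit idle."""

        def __init__(self, low_share=0.20):
            self.low_share = low_share
            self.slots = 0       # transmission slots seen so far
            self.low_used = 0    # slots consumed by low-priority packets

        def may_send(self, is_low_priority):
            """Decide whether the waiting packet may use the next slot."""
            self.slots += 1
            if is_low_priority:
                if self.low_used >= self.low_share * self.slots:
                    return False   # slot idles despite a waiting packet
                self.low_used += 1
            return True

The telling detail is the False branch: the slot goes unused even though no high-priority packet wanted it, which is why non-minimal discrimination needs an economic justification rather than an engineering one.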

One of the basic questions to ask about any network discrimination regime is whether it is minimal in this sense. And one of the basic questions to ask about any rule limiting discrimination is how it applies to minimal versus non-minimal discrimination. We can imagine a rule, for example, that allows minimal discrimination but limits or bans non-minimal discrimination.

This distinction matters, I think, because minimal and non-minimal discrimination are supported by different arguments. Minimal discrimination may be an engineering necessity. But non-minimal discrimination is not technologically necessary – it makes service worse for low-priority packets, but doesn’t help high-priority packets – so it could only be justified by a more complicated economic argument, for example that non-minimal discrimination allows forms of price discrimination that increase social welfare. Vague arguments that you have to reserve some fraction of capacity for some purpose won’t cut it.

[Postscript for networking geeks: You might complain that it matters not only which packets are dropped but also which packets are forwarded first, and so on. True enough. I simplified things a bit to fit within a blog post; but it should be fairly obvious how to expand the principle I’m describing here to deal with the issues you’re raising.]

Obligatory Summers Post

According to Section 4.3(c)(iii)(g) of the Code of Academic Blogging, I am required, on pain of banishment from the faculty club, to post about the departure of Lawrence Summers as Harvard president. Much e-ink has been spilled on this topic, and I for one feel no wiser for it. With some trepidation, let me offer a few thoughts.

I am the first to admit that I don’t know much about how to run Harvard, or about what kind of job Summers did. I read the same press articles as everybody else, but in my experience outsiders commenting on university matters hugely overweight things they read in the newspapers, and underweight less flashy details of management. For example, I as a Princeton professor don’t much care what Princeton’s president thinks about the Iraq war, even though the local newspaper will print any offhand remark she makes on that topic. I care more about who she appointed to the committee to pick the dean of engineering, or about whether she wants to change the administrative status of sixth-year grad students. You’ll never read about those things in the newspaper.

So I’m pretty sure Summers wasn’t bounced because he dissed Cornel West and made some ill-considered remarks in a speech. Not having followed the details of Summers’ management of Harvard, I won’t pretend to know the detailed reasons for his ouster, or whether the Harvard Corporation showed good judgment in (apparently) deciding he should leave.

What is clear is that he is gone because the Corporation (what most other schools would call the Trustees or Regents) decided he should go. A faculty vote of no confidence last year had no real effect, and another one now would not have mattered either, had the Corporation thought Summers was on the right track. If the faculty were involved in ousting Summers, their role was to convince the Corporation that Summers was doing a bad job.

Some commentators argue that the opinions of Harvard faculty shouldn’t matter. But even if Harvard faculty members know nothing about how Harvard should be run (which is pretty unlikely, if you ask me), it still matters what they think.

Consider a corporation run by a CEO who reports to a board of directors. If the majority of vice presidents think the CEO is doing a bad job, that should be a matter of concern for the board. They should talk to the CEO and the managers about what is happening. Then they should decide if people are unhappy because the CEO is making difficult but necessary decisions, or whether the CEO is just doing a bad job.

Maybe the CEO is doing an okay job at most things, but he seems to have a knack for angering and disappointing vice presidents. This is a problem for the company if it causes vice presidents to leave or makes it harder to recruit new ones. This problem is especially serious if other companies are eager to hire away vice presidents, and if the competence of the vice presidents is a big factor in the quality of the company’s output.

None of this depends on whether the vice presidents have the formal power to fire the CEO, or to do anything else for that matter. If employees make a difference in the company’s output and the labor market is competitive, the employees have power.

Which is why the call from some commentators to strip the faculty of their power is pointless. At most universities, the faculty have little or no formal power. All the Harvard Arts and Sciences faculty did was (a) pass non-binding resolutions, and (b) talk to people. To the extent they had power over the real decision makers, that power was granted not by Harvard but by the market. That is not something Harvard can change by amending its bylaws.

How Watermarks Fail

I wrote Wednesday about Randy Picker’s suggestion of using digital watermarks to embed users’ personal financial information into media files, to discourage users from sharing the files. Today, I want to talk more generally about watermarks and how they tend to fail.

First, some background. Watermarks are subtle signals embedded in the background of media files. They are supposed to be unobtrusive but easy to detect if you know where to look. Different media have different kinds of watermarks. In a photo, the watermark might be hidden in subtle patterns of shading. In music, it might be in a very soft background buzz, or a barely audible echo.
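As a toy illustration (not any deployed scheme), a watermark can ride in the least-significant bits of selected audio samples, where it is inaudible but easy to read back if you know where to look. All names and the placement scheme here are mine:

    def embed_watermark(samples, bits, positions):
        """Hide watermark bits in the low bit of chosen audio samples.
        `positions` is the (ideally secret) list of sample indices used."""
        marked = list(samples)
        for pos, bit in zip(positions, bits):
            marked[pos] = (marked[pos] & ~1) | bit   # overwrite the low bit
        return marked

    def read_watermark(samples, positions):
        """Recover the bits, given knowledge of where they hide."""
        return [samples[pos] & 1 for pos in positions]

Real schemes spread the mark across subtler features (echoes, shading, frequency-domain coefficients) so it survives re-encoding; the low-bit version is just the simplest to show.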

In many applications, a watermark must resist attempts by an adversary to remove it. For example, in Randy’s scheme, a user might want to remove the identifying watermark from a media file because he wants to share the file illegally, or because he doesn’t want his personal information exposed to cyber-intruders. It is often important to know how resistant a particular watermark is to removal. There has been plenty of research on this topic, from which we can draw lessons about how watermark removal tends to work.

One theme is the power of Rosetta Stone attacks. The original Rosetta Stone was a stone tablet with the same text written in three ancient languages. This gave scholars who understood one of the languages a big boost in deciphering another one that they didn’t understand. Similarly, watermarks tend to be defeated if an adversary can get his hands on a watermarked file, and the same file without the watermark. By comparing the two, the adversary can determine where the watermark lives, which is usually sufficient to remove the watermark from other files. Alex used this method in deciphering the MediaMax watermark (as described in our Sony CD DRM paper), and my colleagues and I used it also in analyzing the SDMI watermarks back in 2000.
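Once you have both versions, the attack is almost embarrassingly simple. A sketch, continuing the toy low-bit scheme above:

    def locate_watermark(marked, unmarked):
        """Rosetta Stone attack: every position where the marked and
        unmarked versions differ is a place the watermark lives."""
        return [i for i, (a, b) in enumerate(zip(marked, unmarked))
                if a != b]

Knowing those positions, the adversary can perturb the same spots in other files and destroy their marks too.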

Almost as powerful as a Rosetta Stone attack is a comparison attack, where the adversary does not have an unwatermarked file, but does have the same file with several different watermarks in it. Any place where two of the files differ is a place where watermark information lives. Given several marked files, an attacker can locate all or most of the places the watermark is hidden, which is again the first step in removing the watermark.
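A sketch of the comparison attack, under the same toy assumptions: with several differently marked copies, the attacker unions the positions where any pair disagrees.

    def locate_by_comparison(copies):
        """Comparison attack: positions where differently watermarked
        copies of the same file disagree must carry watermark bits."""
        reference, suspects = copies[0], set()
        for other in copies[1:]:
            for i, (a, b) in enumerate(zip(reference, other)):
                if a != b:
                    suspects.add(i)
        return sorted(suspects)

The more copies the attacker gathers, the more hiding places are exposed.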

(In theory it might be possible to stop an adversary with access to a limited number of individually watermarked files from completely removing the watermark, if the watermark has lots of places to hide and is constructed cleverly. There is an interesting body of theory about how to do this and when it works. But in practice the assumptions underlying that theory rarely hold.)

Even if the adversary cannot get access to multiple versions of a file (so that Rosetta Stone or comparison attacks are not possible), he can usually still defeat a watermark if he has access to a device that can detect watermarks. By reverse engineering the device, he can figure out where it is looking for the watermark, which again puts him in a position to remove it. (Even if he can’t dissect the device, he can use it as an oracle that tells him whether a particular file has a detectable watermark. Oracles are very helpful in attacking watermarks – Alex used one in his MediaMax watermark analysis, and my colleagues and I used one in our SDMI analysis.)
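An oracle attack can be sketched the same way; `detector` below stands for the assumed black-box checker, and the attacker simply degrades the file a little at a time until the mark no longer reads:

    import random

    def oracle_attack(samples, detector, step=64, max_rounds=1000):
        """Flip a few low bits at random, ask the oracle, and repeat
        until the watermark is no longer detected (or we give up).
        `detector` is a black-box function: samples -> bool."""
        current = list(samples)
        for _ in range(max_rounds):
            if not detector(current):
                return current      # mark destroyed, file still usable
            for i in random.sample(range(len(current)),
                                   min(step, len(current))):
                current[i] ^= 1     # small, hopefully imperceptible damage
        return None                 # gave up; oracle still sees the mark

Real attacks are cleverer (for instance, narrowing down which regions the detector examines), but in every variant the oracle is doing the attacker's hard work.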

All of this helps us to understand where watermarks are likely to be effective and where they’re not. The best case for watermarking is where each file is published in a single version, with a watermark in a location that is not disclosed to the public and is not implemented in a device available to the public. This would hold true, for example, in a system that put a distinctive mark into all released versions of a file, and then looked for such watermarks in content broadcast on the radio or TV or downloaded from the net.

Not nearly as strong is a system where there is a single watermark per file, and consumer devices check for the mark – it is subject to reverse engineering and oracle attacks.

Weaker yet is a system where files are watermarked individually for each consumer – it is subject to comparison attacks.

Weakest of all is a system where files are watermarked individually for each consumer and everyone is told how to read the watermarks. Here the adversary can use comparison attacks, and reverse engineering is not even necessary because the inner workings of the watermark detector are well known.

Alert readers will have noticed that all of the uses of watermarks for DRM (copy protection) seem to fall into the weak categories. That is because DRM applications require either that all devices check for the watermark – opening up reverse engineering and oracle attacks – or alternatively that a file be given separate watermarks for separate consumers – opening up comparison attacks. Watermarking has its uses, but it doesn’t seem well suited for DRM.

Mistrust-Based DRM

Randy Picker has an interesting post on the Chicago Law Faculty blog, describing what he calls “mistrust-based DRM”. The idea is that when an online music store gives you a song, it embeds into the song a watermark that contains your credit card number, or some other information that would let a (dishonest) person spend your money. This gives you an incentive not to distribute the song.

This is an instructive idea, but not a practical one.

In analyzing this idea, it’s helpful to divide it into two pieces: (1) embed a watermark that identifies the user, and (2) have that watermark carry a secret of the user’s, readable by anyone who gets the file. Piece (1), taken alone, is a widely discussed DRM strategy which has not been used much in practice, for reasons I plan to discuss tomorrow. Today, I want to focus on the second piece.

Specifically, I want to compare two systems. In the more traditional system, the watermark is secret – it can be read only by the copyright owner or its agents – and users fear being sued for infringement if their files end up on P2P. In Randy’s system, the watermark is public – anybody can read it – and users fear being victimized by fraud if their files end up on P2P. I’ll call these two alternatives “secret-watermark” and “public-watermark”.

How do they compare? For starters, a secret watermark is much harder for an adversary to find and remove. If a watermark is public, everybody knows exactly where in the music it is stored. Common sense, and experience too, says that if you know where in a file information is stored, you can modify that part of the file and obliterate the information. But if the watermark is secret, then an adversary isn’t told where to look for it or how to change the file to remove it. Robustness of the watermark is an important issue that has been the downfall of past watermark systems.
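For a public watermark the removal step is nearly a one-liner. A sketch, reusing the toy low-bit placement from the previous post (the positions being public is the whole point):

    import random

    def scrub_public_watermark(samples, public_positions):
        """If the watermark's location and format are public knowledge,
        re-randomizing the bits at those positions obliterates it."""
        out = list(samples)
        for i in public_positions:
            out[i] = (out[i] & ~1) | random.getrandbits(1)
        return out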

A bigger problem with the public-watermark design, I think, is the set of forces unleashed when your design principle is to enable fraud. For example, the system will lose its force if unrelated anti-fraud measures become more effective, or if the financial system acts to protect users from fraud. Today, a consumer’s liability for fraudulent credit card transactions is capped at $50, and credit card companies often forgive even that $50. (You could use some other account information instead of the credit card number, but similar issues would still apply.) Copyright owners would be the only online merchants who wanted a higher level of fraud on the Net.

Worse yet, even law-abiding consumers would face a higher risk of fraud, because any loss or theft of their music or movie files would expose their financial information. Spyware programs could collect this information from users’ computers – and studies show that at least half of end-user PCs are infected with spyware. Law-abiding users would have a strong incentive to scrub the information out of their files, even if they had no intention of infringing. Alert anti-virus or anti-spyware vendors would be eager to provide this service.

Given the disadvantages of a public-watermark scheme, what are the arguments for it? Randy Picker argues that it gives end users an incentive to distrust fly-by-night purveyors of ripping software, worrying that they might steal the user’s information from the files and commit fraud. This isn’t entirely convincing: some such tools already contain heinous spyware that could cause users lots of harm, and reputable security suppliers are likely to provide watermark-scrubbing tools anyway. I think the threat of secret watermarks hidden in files, which fly-by-night vendors have no incentive to remove, would probably scare users enough.

On the whole, then, I think a secret-watermark scheme is better than a public-watermark one. But it should be noted that secret-watermark schemes themselves aren’t looking too good. They have mostly failed in the market, for reasons I’ll start digging into tomorrow.