November 21, 2024

Self-Destructing DVDs

Last week a company called FlexPlay announced Self-Destructing DVDs (SD-DVDs), which oxidize themselves – and so become unplayable – 48 hours after removal from their package. (The official name is, amusingly, “EZ-D”.) The idea is to provide the equivalent of a rental, while saving the consumer the trouble of returning the disk to the rental store afterwards.

This is an interesting kind of Digital Restrictions Management (DRM). Unlike most uses of DRM, this one does nothing to prevent copying or access to the disk. Consumers will be able to copy these DVDs as easily as any other DVDs. (Copying DVDs is often illegal, but many consumers are apparently willing to do it anyway.) SD-DVDs don’t do anything to make copying harder, and in fact their limited lifetime may create a new incentive to copy. While the use of DRM to (try to) control copying and access has gotten lots of attention, SD-DVDs are a nice illustration of the use of DRM to enable business models.

SD-DVDs may be a convenience for DVD-rental customers, but I doubt they will catch on, because consumers will find them offensive. Consumers hate planned obsolescence. The idea that a company would deliberately make a product worse, or make it wear out sooner than necessary, offends their sense of fairness. If Universal can press a regular DVD for one dollar, then why, ordinary consumers will ask, would they spend the same dollar to make a product that breaks? Fancy-pants economic arguments about efficiency and market segmentation won’t overcome this basic sense of unfairness.

Worse yet (and despite a claim to the contrary in FlexPlay’s press release), the nature of a chemical process like oxidation seems to imply that the disk’s decay will be gradual. Since DVDs use error correction, FlexPlay’s engineers can make the disk reliable for any desired period; but after that there will be an inevitable period of intermittent glitches as the disk gets worse and worse, until it becomes unusable. Seeing the decay, even if it lasts only for a short time, will only make consumers angrier.
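To see why, consider a toy model of the decay (every number below is invented for illustration, not taken from FlexPlay): error correction masks read errors completely until the raw error rate crosses a threshold, so the disk plays perfectly for a while, then glitches, then dies.

```python
# Toy model of a gradually oxidizing disk (all numbers are invented for illustration).
# Error correction hides raw read errors up to some threshold, so the disk seems
# perfect at first; as oxidation pushes the raw error rate past that threshold,
# playback gets glitchy before failing entirely.

def raw_error_rate(hours):
    """Hypothetical raw error rate that grows as the disk oxidizes."""
    return 0.0001 * (1.12 ** hours)   # slow exponential growth after unsealing

CORRECTABLE = 0.01    # below this, error correction hides everything
UNPLAYABLE  = 0.10    # above this, too many errors to recover anything

for hours in range(0, 97, 8):
    rate = raw_error_rate(hours)
    if rate < CORRECTABLE:
        status = "plays perfectly (errors fully corrected)"
    elif rate < UNPLAYABLE:
        status = "intermittent glitches"
    else:
        status = "unplayable"
    print(f"{hours:3d} h  raw error rate {rate:.4f}  -> {status}")
```

However the chemistry is tuned, there is a band of error rates between “fully correctable” and “hopeless,” and the disk will spend some hours in that band.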

The underlying problem is that SD-DVDs, because they will sell for less than ordinary DVDs, will draw consumers’ attention to the fact that ordinary DVDs are priced well above the marginal cost of producing them. That seems unfair to many consumers.

At this point, readers who are armchair economists (or real ones, for that matter) are raising their hands and bouncing in their seats, eager to point out that marginal-cost pricing isn’t sustainable in the movie business, given the high fixed cost of making a movie and the very low marginal cost of distributing a copy of it. That’s true, but I think consumers’ sense of fairness is based on a different kind of market in which variable costs of production dominate fixed costs.
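To put invented numbers on the economists’ point: suppose a movie costs $60 million to make and each additional disk costs a dollar to press and ship.

```python
# Hypothetical numbers, purely to illustrate the fixed-cost vs. marginal-cost gap.
fixed_cost    = 60_000_000   # cost of making the movie
marginal_cost = 1            # cost of pressing and shipping one more disk
units_sold    = 10_000_000   # copies the studio expects to sell

break_even_price = fixed_cost / units_sold + marginal_cost
print(f"Break-even price per copy: ${break_even_price:.2f}")   # $7.00
print(f"Marginal cost per copy:    ${marginal_cost:.2f}")      # $1.00
```

The studio must price copies well above the one-dollar marginal cost to recover the fixed cost, and that is exactly the gap the consumer standing in the store notices.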

As long as it seemed inherently expensive to manufacture and distribute a copy of a recorded movie, consumers tended not to notice that the copy was priced above marginal cost. As marginal cost approaches zero, the gap between marginal cost and price becomes much more apparent, and consumers increasingly conclude that the studios are ripping them off.

I see this as a big problem for the studios. The last thing they should want, at this point, is to introduce a product like the Self-Destructing DVD that heightens consumers’ sensitivity to “unfair” pricing.

UPDATE (12:25 PM): Eric Rescorla has an interesting follow-up about consumer psychology. He also points out, in a separate post, that it is possible, at least in theory, to make an SD-DVD that fails cleanly and suddenly, rather than gradually.

Rounding

Cory Doctorow writes on Cruelty to Analog about an MPAA presentation to the ARDG, the group that is trying to bring Digital Restrictions Management (DRM) to analog content.

The presentation talks about a “rounding problem” that arises because of an assumption that analog DRM is unable to micromanage the use of content to the same degree that digital DRM can. When a work is converted from digital to analog form, the detailed DRM restrictions from the digital domain are supposed to be “rounded off” to some roughly equivalent analog restrictions, so that “equivalence” can be maintained between the digital and analog domains. A fight is brewing over whether to “round down” (so that the analog rules are more restrictive than the digital) or to “round up” (so that the analog rules are looser than the digital).
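To picture what is being argued about, here is a purely hypothetical sketch (the permission scale and its levels are invented, not drawn from the ARDG documents): the digital side expresses many permission levels, the analog side only a few, and each digital rule must be mapped onto one of them.

```python
# Purely hypothetical sketch of the "rounding problem".  Suppose the digital side can
# express many fine-grained permission levels, while the analog side can express only a
# few coarse ones; every digital rule must be "rounded" to some analog level.  The scale
# and the specific levels are invented for illustration.

ANALOG_LEVELS = [0, 5, 10]   # permissiveness: 0 = most restricted, 10 = unrestricted

def round_down(digital_level):
    """Round down: analog treatment no looser than the digital rule (more restrictive)."""
    return max(level for level in ANALOG_LEVELS if level <= digital_level)

def round_up(digital_level):
    """Round up: analog treatment no stricter than the digital rule (looser)."""
    return min(level for level in ANALOG_LEVELS if level >= digital_level)

for digital in range(11):
    print(f"digital permission {digital:2d}  ->  round down: {round_down(digital):2d},"
          f"  round up: {round_up(digital):2d}")
```

Rounding down never gives the analog copy more freedom than the digital rule allowed; rounding up never gives it less. That is the whole dispute.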

This debate is a wonderful illustration of how far off the rails the DRM “standardization” groups have gone. Rather than worrying about the lack of any effective digital DRM scheme, or about the lack of any effective analog DRM scheme, the group chooses instead to just assume that both exist, and to further assume that the two are incompatible. They then proceed to argue about the consequences of that incompatibility. Rather than arguing about strategy for resolving a hypothetical incompatibility between two hypothetical products, why not worry first about whether analog DRM can work at all?

There is a well-known pathology in standardization processes, in which the group argues obsessively over some trivial detail which becomes a proxy for deeper philosophical disagreements. The antidote is for somebody to yank the group back to reality by pointing out all of the deployed products that operate perfectly well without accounting for that detail. But this antidote only works when there are deployed systems that are known to work well. Perhaps it is inevitable that if you try to standardize a conjectural product category, you will become hopelessly entangled in minutiae.

DRM, and the First Rule of Security Analysis

When I teach Information Security, the first lecture is dedicated to the basics of security analysis. And the first rule of security analysis is this: understand your threat model. Experience teaches that if you don’t have a clear threat model – a clear idea of what you are trying to prevent and what technical capabilities your adversaries have – then you won’t be able to think analytically about how to proceed. The threat model is the starting point of any security analysis.

Advocates of DRM (technology that restricts copying and usage) often fail to get their threat model straight. And as Derek Slater observes, this leads to incoherent rhetoric, and incoherent action.

If you’re a copyright owner, you have two threat models to choose from. The first, which I’ll call the Napsterization model, assumes that there are many people, some of them technically skilled, who want to redistribute your work via peer-to-peer networks; and it assumes further that once your content appears on a p2p network, there is no stopping these people from infringing. The second threat model, which I’ll call the casual-copying model, assumes that you are worried about widespread, but small-scale and unorganized, copying among small groups of ordinary consumers.

If you choose the Napsterization threat model, then you fail if even one of your customers can defeat your DRM technology, because that one customer will inject your content into a p2p network and all will be lost. So if this is your model, your DRM technology must be strong enough to stymie even the most clever and determined adversary.

If you choose the casual-copying threat model, then it’s enough for your DRM technology to frustrate most would-be infringers, most of the time. If a few people can defeat your DRM, that’s not the end of the world, because you have chosen not to worry about widespread redistribution of any one infringing copy.
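A back-of-the-envelope calculation (the numbers are invented) shows how different the two models really are:

```python
# Back-of-the-envelope comparison of the two threat models (all numbers invented).
# Napsterization model: one successful break anywhere is enough, so what matters is the
# chance that at least one customer out of N defeats the DRM.
# Casual-copying model: harm scales with the fraction of customers who copy, so a scheme
# that stops most of them still does most of the job.

customers = 5_000_000          # hypothetical number of copies sold
p_break   = 0.000001           # hypothetical chance that any one customer defeats the DRM

p_napsterized = 1 - (1 - p_break) ** customers
print(f"Napsterization model: P(at least one break) = {p_napsterized:.3f}")   # ~0.993

fraction_deterred = 0.95       # hypothetical: DRM stops 95% of casual copiers
casual_copies = customers * (1 - fraction_deterred)
print(f"Casual-copying model: roughly {casual_copies:,.0f} scattered copies remain")
```

Under the Napsterization model, even a one-in-a-million break rate makes leakage essentially certain; under the casual-copying model, the very same technology looks like a success.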

Many DRM advocates make the classic mistake of refusing to choose a threat model. When they complain about the problem, they seem to be using the Napsterization model – they talk about one infringing copy propagating across the world. But when they propose solutions they seem to be solving the casual-copying problem, asking only that the technology keep the majority of customers from ripping content. So naturally the systems they are building don’t solve the problem they complain about.

If you’re a DRM advocate, the first rule of security analysis says that you have to choose a threat model, and stick to it. Either you choose the Napsterization model, and accept that your technology must be utterly bulletproof; or you choose the casual-copying model, and accept that you will not prevent Napsterization. You can’t have it both ways.

DRM in Cell Phones?

Elisa Batista at Wired News reports on the Cellular Telecommunications and Internet Association (CTIA) trade show. Rep. Billy Tauzin gave his perspective in a speech:

But Tauzin did offer [CTIA CEO Tom] Wheeler some advice in order to avoid more regulation: Have the industry clean up its act. If it doesn’t want to be hit by legislation, it should improve cell-phone coverage, roll out enhanced 911 service in a timely fashion so that anyone who dials 911 on a cell phone can get help immediately, and build a mechanism to protect content from piracy over wireless devices, he said.

That’s right, folks – DRM for cell phones.

(Thanks to Mark Seecof for the link.)

DRM and the Regulatory Ratchet

Regular readers know that one of my running themes is the harm caused when policy makers don’t engage with technical realities. One of the most striking examples of this has to do with DRM (or copy-restriction) technologies. Independent technical experts agree almost universally that DRM is utterly unable to prevent the leakage of copyrighted material onto file sharing networks. And yet many policy-makers act as if DRM is the solution to the file-sharing problem.

The result is a kind of regulatory ratchet effect. When DRM seems not to be working, perhaps it can be rescued by imposing a few regulations on technology (think: DMCA). When somehow, despite the new regulations, DRM still isn’t working, perhaps what is needed is a few more regulations to backstop it further (think: broadcast flag). When even these expanded regulations prove insufficient, the answer is yet another layer of regulations (think: consensus watermark). The level of regulation ratchets up higher and higher – but DRM still doesn’t work.

The advocates of regulation argue at each point that just one more level of regulation will solve the problem. In a rational world, the fact that they were wrong last time would be reason to doubt them this time. But if you simply take on faith that DRM can prevent infringement, the failure of each step becomes, perversely, evidence that the next step is needed. And so the ratchet clicks along, restricting technical progress more and more, while copyright infringement goes on unabated.