December 23, 2024

Encryption and Copying

Last week I criticized Richard Posner for saying that labeling content and adding filtering to P2P apps would do much to reduce infringement on P2P networks. In responding to comments, Judge Posner unfortunately makes a very similar mistake:

Several pointed out correctly that tags on software files, indicating that the file is copyrighted, can probably be removed; and this suggests that only encryption, preventing copying, is likely to be effective in protecting the intellectual property rights of the owner of the copyright.

The error is rooted in the phrase “encryption, preventing copying”. Encryption does nothing to prevent copying – nor is it intended to. Encrypted data can be readily copied. Once decrypted, the plaintext data can again be readily copied. Encryption prevents one and only one thing – decryption without knowledge of the secret key.
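
To make this concrete, here is a toy sketch (using the third-party cryptography package's Fernet recipe; the sample content is made up) of the one thing encryption actually controls. Anyone can copy the ciphertext perfectly; only decryption depends on the key.

```python
# Toy illustration: encryption gates decryption, not copying.
# Assumes the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()                      # the secret the whole scheme rests on
ciphertext = Fernet(key).encrypt(b"some copyrighted content")

# The ciphertext is just bytes; nothing stops anyone from duplicating it.
pirate_copy = bytes(ciphertext)
assert pirate_copy == ciphertext                 # a perfect copy, no key required

# The one and only thing the key controls is decryption.
try:
    Fernet(Fernet.generate_key()).decrypt(pirate_copy)   # attempt with the wrong key
except InvalidToken:
    print("cannot decrypt without the key")

print(Fernet(key).decrypt(pirate_copy))          # with the key, the plaintext comes right back
```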

It’s easy to see, then, why encryption has so little value in preventing infringement. You can ship content to customers in encrypted form, and the content won’t be decrypted in transit. But if you want to play the content, you have to decrypt it. And this means two things. First, the decrypted content will exist on the customer’s premises, where it can be readily copied. Second, the decryption key (and any other knowledge needed to decrypt) will exist on the customer’s premises, where it can be reverse-engineered. Either of these facts by itself would allow decrypted content to leak onto the Internet. So it’s not surprising that every significant encryption-based anticopying scheme has failed.
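
Here is an equally toy sketch of those two facts (the Player class and its attributes are hypothetical, not modeled on any real DRM system): to play the content, the customer's machine has to hold both the key and, at least momentarily, the decrypted plaintext, and either one is enough for the content to leak.

```python
# Toy sketch of why client-side decryption leaks; hypothetical names only.
# Assumes the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

class Player:
    """A 'secure' player shipped to the customer's premises."""
    def __init__(self, key: bytes):
        self._key = key                          # the secret travels with the player

    def play(self, ciphertext: bytes) -> bytes:
        # To render the content, the player must produce the plaintext.
        return Fernet(self._key).decrypt(ciphertext)

# The vendor encrypts the content and ships it, plus a player, to the customer.
key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(b"feature film")
player = Player(key)

# Fact 1: decrypted content exists on the customer's machine, ready to be saved.
captured_plaintext = player.play(ciphertext)

# Fact 2: the key is also on the customer's machine, one reverse-engineering
# step away (here, just reading an attribute).
extracted_key = player._key
print(Fernet(extracted_key).decrypt(ciphertext) == captured_plaintext)   # True
```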

We need to recognize that these are not failures of implementation. Nor do they follow from the (incorrect) claim that every code can be broken. The problem is more fundamental: encryption does not stop copying.

Why do copyright owners keep building encryption-based systems? The answer is not technical but legal and economic. Encryption does not prevent infringement, but it does provide a basis for legal stratagems. If content is encrypted, then anyone who wants to build a content-player device needs to know the decryption key. If you make the decryption key a trade secret, you can control entry to the market for players by giving the key only to acceptable parties who will agree to your licensing terms. This ought to raise antitrust concerns in some cases, but the antitrust authorities have not shown much interest in scrutinizing such arrangements.

To his credit, Judge Posner recognizes the problems that result from anticompetitive use of encryption technology.

But this in turn presents the spectre of overprotection of [copyright owners’] rights. Copyright is limited in term and, more important (given the length of the term), is limited in other ways as well, such as by the right to make one copy for personal use and, in particular, the right of “fair use,” which permits a significant degree of unauthorized copying. To the extent that encryption creates an impenetrable wall to copying, it eliminates these limitations on copyright. In addition, encryption efforts generate countervailing circumvention efforts, touching off an arms race that may create more costs than benefits.

Once we recognize this landscape, we can get down to the hard work of defining a sensible policy.

Regulation by Software

The always interesting James Grimmelmann has a new paper, Regulation by Software (.pdf), on how software relates to law. He starts by dissecting Lessig’s “code is law” argument. Lessig argues that code is a form of “architecture” – part of the environment in which we live. And we know that the shape of our living environment regulates behavior, in the sense that we would behave differently if our environment were different.

Orin Kerr at Volokh wrote about Grimmelmann’s paper, leading to a vigorous discussion. Commenters, including Dan Simon, argued that if all designed objects regulate, then the observation that software regulates in the same way isn’t very useful. If toothpicks regulate, and squeaky tennis shoes regulate, what makes software so special?

Which brings us to the point of Grimmelmann’s paper. He argues that software is very different from ordinary physical objects, so that software-based regulation is not the same animal as object-based regulation. It’s best, he says, to think of software as a different medium of regulation.

Software-based regulation, according to Grimmelmann, has four characteristics: it is extremely formal and rule-bound; it can impose rules without disclosing what those rules are; its rules are always applied and cannot be ignored by mutual agreement; and it is fragile, because software tends to be insecure and buggy.
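
A trivial, invented example shows what that formal, rule-bound quality looks like in practice: a limit hard-coded into software applies every single time, even when everyone involved would happily waive it, and nothing in the program's behavior announces what the rule actually is.

```python
# Invented illustration of software as a medium of regulation: the rule is
# buried in the code, applied uniformly, and cannot be negotiated around.
_MAX_ATTACHMENT_BYTES = 5_000_000    # undisclosed to users; they just see rejections

def accept_upload(data: bytes) -> bool:
    """Reject oversized uploads, always, for everyone, with no appeal."""
    return len(data) <= _MAX_ATTACHMENT_BYTES

# Even if sender and recipient both want the transfer, the check still runs.
print(accept_upload(b"x" * 4_000_000))   # True
print(accept_upload(b"x" * 6_000_000))   # False, regardless of mutual consent
```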

Regulation by software will work best, Grimmelmann argues, where these four characteristics are consistent with the regulator’s goals. He looks at two case studies, and finds that software is ill-suited for controlling access to copyrighted works, but software does work well for managing online marketplaces. Both findings are consistent with reality.

This is a useful contribution to the discussion, and it couldn’t have come at a better time for Freedom to Tinker book club members.

CDT Closes Eyes, Wishes for Good DRM

The Center for Democracy and Technology just released a new copyright policy paper. Derek Slater notes, astutely, that it tries at all costs to take the middle ground. It’s interesting to see what CDT sees as the middle ground.

Ernest Miller gives the paper a harsh review. I think Ernie is too harsh in some areas.

Rather than reviewing the whole paper, I’ll look here at the section on DRM. Here CDT’s strategy is essentially to wish that we lived on a planet where DRM could be consumer-friendly while preventing infringement. They’re smart enough not to claim that we live on such a planet now, only that people hope that we will soon:

While DRM systems can be very restrictive, much work is underway to create content protections that allow expansive consumer uses, while still protecting against widespread distribution.

(They footnote this by referring to FairPlay, TivoToGo, and AACS-LA, which all fall well short of their goal.) CDT asserts that if DRM systems that made everyone happy did exist, it would be good to use them. Fair enough. But what should we do in the actual world, where DRM that everyone loves is about as likely as teleportation or perpetual motion?

This means producers must be free to experiment with various models of digital distribution, using different content protection technologies and offering different sets of permissions and limitations. [Government DRM mandates are bad.]

Consumers, meanwhile, must have real options for purchasing different bundles of rights at different price points.

Producers should be free to experiment. Consumers should be free to buy. Gee, thanks.

Actually, this would be fine if CDT really meant that producers were free to experiment with DRM systems. Nowadays, everybody is a producer. If you take photographs, you’re a producer of copyrighted images. If you take home movies, you’re a producer of copyrighted video. If you write, you’re a producer of copyrighted text. We’re all producers. A world where we could all experiment would be good.

What they really mean, of course, is that some producers are more equal than others. Those who are expected to sell a few works to many people – or, given the way policy really gets made, those who have done so in the recent past – are called “producers”, while those who produce the vast majority of new copyrighted works are somehow called “consumers”. (And don’t say that big media produces the only works of value. Quick: Which still images do you value most in the world? I’ll bet they’re photos, and that they weren’t taken by a big media company.)

Here’s the bottom line: In the real world, DRM policy involves tradeoffs, and requires choices. Wishing for a magical DRM technology that will please everyone is not a strategy.

MacIntel: It’s Not About DRM

The big tech news today is that Apple will begin using Intel microprocessors (the same brand used in PCs) in its Macintosh computers starting next year. Some have speculated that the move might be motivated by DRM. The theory is that Apple wants the anticopying features that will be built into the hardware of future Intel processors.

The theory is wrong.

Though they’re not talking much about it, savvy people in the computer industry have already figured out that hardware DRM support is a non-starter on general-purpose computers. At most, hardware DRM can plug one hole in a system with many holes, by preventing attacks that rely on running an operating system on top of an emulator rather than on a real hardware processor. Plenty of other attacks still work: exploiting insecure operating systems or applications, using the analog hole, or capturing content during production or distribution. Hardware DRM blocks one of the less likely attacks, which makes little if any difference.
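
The arithmetic here is the usual weakest-link story. As a purely conceptual sketch (the attack list is illustrative, not a real threat catalog), closing the emulator path changes nothing so long as any other path stays open:

```python
# Conceptual sketch: hardware DRM closes one leak path among many.
# The attack names are illustrative, not a real threat model.
attack_paths = {
    "run the OS on an emulator instead of real hardware": False,   # blocked by hardware DRM
    "exploit an insecure OS or application": True,
    "capture the content through the analog hole": True,
    "leak during production or distribution": True,
}

# Content reaches the darknet if even one path remains open.
print(any(attack_paths.values()))   # True
```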

If DRM is any part of Apple’s motivation – which I very much doubt – the reason can only be as a symbolic gesture of submission to Hollywood. One of the lessons of DVD copy protection is that Hollywood still seems to need the security blanket of DRM to justify accepting a new distribution medium. DVD copy protection didn’t actually keep any content from appearing on the darknet, but it did give Hollywood a false sense of security that seemed to be necessary to get them to release DVDs. It’s awfully hard to believe that Hollywood is so insistent on symbolic DRM that it could induce Apple to pay the price of switching chip makers.

Most likely, Apple is switching to Intel chips for the simplest of reasons: Intel’s chips meet Apple’s needs better than IBM’s do. Some stories report that Intel had an advantage in producing fast chips that run cool and conserve battery power, which matters for laptops. Perhaps Apple just believes that Intel, which makes many more chips than IBM, is a better bet for the future. Apple has its reasons, but DRM isn’t one of them.

Broadcast Flag and Compatibility

National Journal Tech Daily (an excellent publication, but behind a paywall) has an interesting story, by Sarah Lai Stirland, about an exchange between Mike Godwin of Public Knowledge and some entertainment industry lobbyists, at a DC panel last week. Godwin argued that the FCC’s broadcast flag rule, if it is reinstated, will end up regulating a very broad range of devices.

Godwin said any regulations concerning digital television copy-protection schemes would necessarily have to affect any devices that hook up to digital television receivers. That technical fact could have far-reaching implications, such as making gadgets incompatible with each other and crimping technology companies’ ability to innovate, he said.

“I don’t want to be the legislator or the legislative staff person in charge of shutting off connectivity and compatibility for consumers, and I don’t think you want to do that either,” he told a roomful of technology policy lobbyists and congressional staffers. “It’s going to make consumers’ lives hell.”

Godwin’s talk drew a sharp protest from audience member Rick Lane, vice president of government affairs at News Corp.

“Compatibility is not a goal,” he said, pointing out that there are currently a plethora of consumer electronics and entertainment products that are not interoperable. Lane was seconded by NBC Universal’s Senior Counsel for Government Relations Alec French, who also was in the audience.

To consumers, compatibility is a goal. When devices don’t work together, that is a problem to be solved, not an excuse to mandate even more incompatibility.

The FCC and Congress had better be careful in handling the digital TV issue, or they’ll be blamed for breaking the U.S. television system. Mandating incompatibility, via the Broadcast Flag, will not be a popular policy, especially at a time when Congress is talking about shutting off analog TV broadcasts.

The most dangerous place in Washington is between Americans and their televisions.