July 19, 2024

"Network-Based" Copy Protection

One more comment on Lessig’s Red Herring piece, then I’ll move on to something else. Really I will.

Lessig argues that one kind of DRM is less harmful than another. He says:

To see the point, distinguish between DRM systems that control copying (copy-protection systems) and DRM systems that control who can do what with a particular copy (“token” systems that Palladium would enable). Copy-protection systems regulate whether machine X can copy content Y. Token systems regulate whether, and how, machine X is allowed to use content Y.

The difference can be critical to network design: if a technology could control who used what content, there would be little need to control how many copies of that content lived on the Internet. Peer-to-peer systems, for example, depend upon many copies of the same content living in many different places across the Net. Copy-protection systems defeat this design; token systems that respect the network’s end-to-end design need not.

This relies on the assumption that copy-protection systems would be implemented in the network rather than in the end-hosts. From an engineering standpoint, that assumption looks wrong to me.

Consider a peer-to-peer system like Aimster. (I know: they have changed the name to Madster. But most people know it as Aimster, so I’ll use that name.) Aimster runs on end hosts, and it encrypts all files in transit. Assuming Aimster does its crypto correctly, a network-based system has no hope of knowing what is being transferred. It has no hope even of identifying which encrypted connections are Aimster transfers and which are not. Any network-based copy- or transfer-prevention system will be totally flummoxed by basic crypto. Even ordinary encrypted Web traffic (HTTPS) will defeat it.
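To make the point concrete, here is a minimal Python sketch. It uses a toy stream cipher (SHA-256 in counter mode, for illustration only, not a vetted cipher) and a hypothetical content "signature" that a network-based filter might scan for. The filter can spot the signature in a plaintext transfer, but the same bytes sent encrypted look like noise, while the end host holding the key recovers the file intact.

```python
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    """Toy keystream: SHA-256 in counter mode. Illustrative only."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # XOR with the keystream; the same function also decrypts.
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # an XOR stream cipher is its own inverse

def network_filter_matches(payload: bytes, signature: bytes) -> bool:
    """All a network-based copy-prevention box can do: scan bytes on the wire."""
    return signature in payload

key = secrets.token_bytes(32)          # shared only by the two end hosts
song = b"...header...COPYRIGHTED-SONG-CONTENT...data..."
signature = b"COPYRIGHTED-SONG-CONTENT"  # hypothetical fingerprint the filter scans for

# Plaintext transfer: the network filter spots the content.
assert network_filter_matches(song, signature)

# Encrypted transfer (as an Aimster-style client would send): the filter sees noise.
ciphertext = encrypt(key, song)
assert not network_filter_matches(ciphertext, signature)

# The receiving end host, which holds the key, gets the file back unchanged.
assert decrypt(key, ciphertext) == song
```

The point of the sketch is that the filter's only vantage point is the ciphertext, and nothing short of the endpoints' key lets it decide what the bytes are, which is exactly why enforcement has to move to the end hosts.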

If copy-protection is to have any hope at all of working, it must operate on the end hosts. It must try to keep Aimster from running, or to keep it from getting access to files containing copyrighted material.

I am making a classic end-to-end argument here. As the original end-to-end paper says,

In reasoning about [whether to provide a function in the network or in the endpoints], the requirements of the application provide the basis for a class of arguments, which go as follows:

The function in question can completely and correctly be implemented only with the knowledge and help of the application standing at the end points of the communication system. Therefore, providing that questioned function as a feature of the communication system itself is not possible….

We call this line of reasoning against low-level function implementation the “end-to-end argument.”

Ironically, my end-to-end argument contradicts Lessig’s end-to-end argument.
How can this happen? It’s not because Lessig is a heretic against the true end-to-end religion. His argument is based just as firmly in the end-to-end scriptures as mine. The problem is that those scriptures teach more than one lesson.

(I’m currently working on a paper that untangles the various types of end-to-end arguments made in tech/policy/law circles. The gist of my argument is that there are really three separate principles that call themselves “end-to-end,” and that we need to work harder to keep them separate in our collective heads.)