
Archives for 2002

Etzioni: Reply to Spammers

Oren Etzioni has an op-ed in today’s New York Times about spam. His proposal:

Though spammers hope to lure us with their dubious propositions (“URGENT AND CONFIDENTIAL BUSINESS PROPOSAL”), they rely on those of us who don’t want to participate to delete their messages quietly and go about our daily business. What would happen if recipients instead replied en masse to each message?

… Faced with hundreds of thousands of responses, the spammer would have to use substantial resources to store the responses, sift through them and identify those registering genuine interest.

This is a well-known Bad Idea. The return addresses on spam emails are often forged, so the hundreds of thousands of replies might well end up in an innocent bystander’s inbox. If replying to spam became common practice, then forged spam would provide an easy denial of service attack against anybody’s email service: an attacker would simply send out spam that claims to come from the victim.

Like it or not, email messages are easy to forge, so any method of retaliation against the purported sender of spam is bound to backfire.
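To see how easy forgery is, consider that the From: header of an email is just text chosen by the sender; the basic mail protocols do nothing to verify it. The sketch below (a minimal illustration using Python's standard email library; the addresses are hypothetical) shows that replies to such a message would be directed at whatever address the spammer chose to write in:

```python
from email.message import EmailMessage

# The From: header is just a text field filled in by the sender.
# SMTP does not check it, so a spammer can name any innocent party here.
msg = EmailMessage()
msg["From"] = "innocent-bystander@example.com"   # forged
msg["To"] = "recipient@example.net"
msg["Subject"] = "URGENT AND CONFIDENTIAL BUSINESS PROPOSAL"
msg.set_content("Dubious proposition goes here.")

# A mass "reply to the spammer" campaign would aim at the forged address,
# flooding the bystander rather than the actual spammer.
reply_target = msg["From"]
print(reply_target)
```

Nothing in the message itself tells the recipient that the address is forged, which is exactly why retaliation aimed at the purported sender misfires.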

It’s disappointing to see a suggestion this lame in the Paper of Record, even on the op-ed page.

"Network-Based" Copy Protection

One more comment on Lessig’s Red Herring piece, then I’ll move on to something else. Really I will.

Lessig argues that one kind of DRM is less harmful than another. He says

To see the point, distinguish between DRM systems that control copying (copy-protection systems) and DRM systems that control who can do what with a particular copy (“token” systems that Palladium would enable). Copy-protection systems regulate whether machine X can copy content Y. Token systems regulate whether, and how, machine X is allowed to use content Y.

The difference can be critical to network design: if a technology could control who used what content, there would be little need to control how many copies of that content lived on the Internet. Peer-to-peer systems, for example, depend upon many copies of the same content living in many different places across the Net. Copy-protection systems defeat this design; token systems that respect the network’s end-to-end design need not.

This relies on the assumption that copy-protection systems would be implemented in the network rather than in the end-hosts. From an engineering standpoint, that assumption looks wrong to me.

Consider a peer-to-peer system like Aimster. (I know: they have changed the name to Madster. But most people know it as Aimster, so I’ll use that name.) Aimster runs on end hosts, and it encrypts all files in transit. Assuming Aimster does its crypto correctly, a network-based system has no hope of knowing what is being transferred. It has no hope even of identifying which encrypted connections are Aimster transfers and which are not. Any network-based copy- or transfer-prevention system will be totally flummoxed by basic crypto. Even Secure HTTP will defeat it.
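The point can be made concrete with even a toy cipher. In the sketch below (a deliberately simplified one-time-pad illustration, not Aimster's actual protocol, which presumably uses real cryptography), the recognizable header bytes of a music file vanish from the traffic a network observer sees; only the endpoints, which hold the key, can recover the content:

```python
import os

def xor_stream(data: bytes, keystream: bytes) -> bytes:
    """XOR data against a keystream; applying it twice recovers the data."""
    return bytes(d ^ k for d, k in zip(data, keystream))

# A file with a telltale header a network filter might look for.
song = b"ID3\x03\x00 ... copyrighted MP3 data ..."

# Key material known only to the two end hosts.
key = os.urandom(len(song))

# What the network sees in transit: ciphertext with no usable pattern.
ciphertext = xor_stream(song, key)

# The receiving end host, holding the key, recovers the file exactly.
recovered = xor_stream(ciphertext, key)
assert recovered == song
```

A box sitting in the middle of the network sees only the ciphertext; without the key it cannot tell a copyrighted song from a vacation photo, or even from random noise.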

If copy-protection is to have any hope at all of working, it must operate on the end hosts. It must try to keep Aimster from running, or to keep it from getting access to files containing copyrighted material.

I am making a classic end-to-end argument here. As the original end-to-end paper says,

In reasoning about [whether to provide a function in the network or in the endpoints], the requirements of the application provide the basis for a class of arguments, which go as follows:

The function in question can completely and correctly be implemented only with the knowledge and help of the application standing at the end points of the communication system. Therefore, providing that questioned function as a feature of the communication system itself is not possible….

We call this line of reasoning against low-level function implementation the “end-to-end argument.”

Ironically, my end-to-end argument contradicts Lessig’s end-to-end argument.
How can this happen? It’s not because Lessig is a heretic against the true end-to-end religion. His argument is based just as firmly in the end-to-end scriptures as mine. The problem is that those scriptures teach more than one lesson.

(I’m currently working on a paper that untangles the various types of end-to-end arguments made in tech/policy/law circles. The gist of my argument is that there are really three separate principles that call themselves “end-to-end,” and that we need to work harder to keep them separate in our collective heads.)

Lessig/DRM/Palladium Summary, at Copyfight

Donna Wentworth offers a pithy summary of the commentary on Lessig’s DRM piece, over at Copyfight.

Rebecca Mercuri on the Florida Voting Fiasco

Rebecca Mercuri writes, in the RISKS Forum:

Well, Florida’s done it again.

Tuesday’s Florida primary election marked its first large-scale roll-out of tens of thousands of brand-new voting machines that were promised to resolve the problems of the 2000 Presidential election. Instead, from the very moment the polls were supposed to open, problems emerged throughout the state, especially in counties that had spent millions of dollars to purchase touchscreen electronic balloting devices.

Mercuri goes on to discuss the problems in detail. She is perhaps the leading independent expert on voting technology, and is well worth reading if you’re interested in that topic.

Voting poses a particularly difficult information security problem, because so much is at stake, and because the requirements are so difficult. (For example, the secret ballot is a particularly troublesome requirement.) My sense is that we are still far from having an all-electronic system that deserves our trust.

[Link credit: Dan Gillmor]

Lessig, DRM, and Palladium

As I noted yesterday, Lessig’s Red Herring piece on Palladium has generated a lot of interesting talk among techno-law-bloggers. (See e.g. Copyfight, Ernie the Attorney, Lessig, and Frank Field.)

This is all interesting, but it’s very speculative. As Bruce Schneier points out, in the best technical perspective on Palladium I’ve seen, we really know very little about how Palladium will actually work. When it comes to security, the devil is in the details; and so far we have only the barest outline.

Even if we did know the technical details of Palladium, it is far from obvious what effect it would have on the everyday practice of computing. My own view is that Palladium will make less difference than people expect. It won’t do much to prevent viruses and network attacks, since it doesn’t address the vulnerabilities that those attacks usually exploit.

More to the point, even if we assume that Palladium is totally bulletproof, I doubt that it will enable the kind of pervasive DRM that some people seem to want – at least, it won’t do so without making the PC essentially useless for ordinary computing tasks. (I plan to elaborate on this argument in a future posting.) A pervasive-DRM “computer” will be more like a CD player than like a computer.

Real computers are so useful that people will insist on having them, and the market will continue to provide them. Most likely it will provide them by pressuring software vendors into not using any draconian features of Palladium.