
Archives for April 2004

Off-the-record Conferences

In writing about the Harvard Speedbump conference, I noted that its organizers declared it to be off the record, so that statements made or positions expressed at the conference would not be attributed publicly to any particular person or organization. JD Lasica asks, quite reasonably, why this was done: “Can someone explain to me why a conference needs to be ‘off the record’ in order for people to exchange ideas freely? What kind of society are we living in?”

This is the second off-the-record conference I have been to in my twenty years as a researcher. The first was a long-ago conference on parallel computing. Why that one was off the record was a mystery to me then, and it still is now. Nobody there had anything controversial to say, and no participant was important enough that anyone outside a small research community would even care what was said.

As to the recent Speedbump conference, I can at least understand the motivation for putting it off the record. Some of the participants, like Cary Sherman from RIAA and Fritz Attaway from MPAA, would be understood as speaking for their organizations; and the hope was that such people might depart from their talking points and speak more freely if they knew their statements wouldn’t leave that room.

Overall, there was less posturing at this meeting than one usually sees at similar meetings. My guess is that this wasn’t because of the off-the-record rule, but just because some time has passed in the copyright wars and cooler heads are starting to prevail. Nobody at the meeting took a position that really surprised me.

As far as I could tell, there were only two or three brief exchanges that would not have happened in an on-the-record meeting. These were discussions of various deals that either might be made between different entities, or that one entity had quietly offered to another in the past. For me, these discussions were less interesting than the rest of the meeting: clearly no deal could be made in a room with thirty bystanders, and the deals that were discussed were of the sort that savvy observers of the situation might have predicted anyway.

In retrospect, it looks to me like the conference needn’t have been off the record. We could just as easily have followed the rule used in at least one other meeting I have attended, with everything on the record by default, but speakers allowed to place specific statements off the record.

To some extent, the off-the-record rule at the conference was a consequence of blogging. In pre-blog days, this issue could have been handled by not inviting any reporters to the meeting. Nowadays, at any decent-sized meeting, odds are good that several of the participants have blogs; and odds are also good that somebody will blog the meeting in real time. On the whole this is a wonderful thing; nobody has the time or money to go to every interesting conference.

I have learned a lot from bloggers’ conference reports. It would be a shame to lose them because people are afraid of being quoted.

[My plan still calls for one more post on the substance of the conference, as promised yesterday.]

Stopgap Security

Another thing I learned at the Harvard Speedbumps conference (see here for a previous discussion) is that most people have poor intuition about how to use stopgap measures in security applications. By “stopgap measures” I mean measures that will fail in the long term, but might do some good in the short term while the adversary figures out how to work around them. For example, copyright owners use simple methods to identify the people who are offering files for upload on P2P networks. It’s only a matter of time before P2P designers deploy better methods for shielding their users’ identities so that today’s methods of identifying P2P users no longer work.

Standard security doctrine says that stopgap measures are a bad idea – that the right approach is to look for a long-term solution that the bad guys can’t defeat simply by changing their tactics. Standard doctrine doesn’t demand an impregnable mechanism, but it does insist that a good mechanism must not become utterly useless once the adversary adapts to it.

Yet sometimes, as in copyright owners’ war on P2P infringement, there is no good solution, and stopgap measures are the only option you have. Typically you’ll have many stopgaps to choose from. How should you decide which ones to adopt? I have three rules of thumb to suggest.

First, you should look carefully at the lifetime cost of each stopgap measure, compared to the value it will provide. Since a measure will have a limited – and possibly quite short – lifetime, any measure that is expensive or time-consuming to deploy will be a loser. Equally unwise is any measure that incurs a long-term cost, such as a measure that requires future devices to implement obsolete stopgaps in order to remain compatible. A good stopgap can be fully undeployed once it has become obsolete.
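To make the first rule concrete, here is a rough back-of-the-envelope sketch; the function and every number in it are invented purely for illustration, not drawn from any real deployment.

```python
def stopgap_net_value(monthly_benefit, expected_lifetime_months,
                      deployment_cost, monthly_upkeep, undeployment_cost=0):
    """Rough net value of a stopgap over its expected useful life.
    All figures are hypothetical; the point is only that a short lifetime
    makes any large up-front or trailing cost hard to justify."""
    benefit = monthly_benefit * expected_lifetime_months
    cost = (deployment_cost
            + monthly_upkeep * expected_lifetime_months
            + undeployment_cost)
    return benefit - cost

# A cheap measure that lasts only a few months can still pay off...
print(stopgap_net_value(monthly_benefit=50, expected_lifetime_months=4,
                        deployment_cost=20, monthly_upkeep=5))    # 160
# ...while an expensive one that leaves a long compatibility tail is a loser.
print(stopgap_net_value(monthly_benefit=50, expected_lifetime_months=4,
                        deployment_cost=300, monthly_upkeep=5,
                        undeployment_cost=100))                   # -220
```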

Second, recognize that when the adversary adapts to one stopgap, he may thereby render a whole family of potential stopgaps useless. So don’t plan on rolling out an endless sequence of small variations on the same method. For example, if you encrypt data in transit, the adversary may shift to a strategy of observing your data at the destination, after the data has been decrypted. Once the adversary has done this, there is no point in changing cryptographic keys or shifting to different encryption methods. Plan to use different kinds of tactics, rather than variations on a single theme.
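To illustrate why variations on a single theme buy so little extra time, here is a deliberately crude toy simulation; the tactic names, the monthly adaptation probability, and the time horizon are all made up.

```python
import random

def months_effective(tactic_families, adapt_prob=0.5, horizon=24, seed=1):
    """Toy model: each month the adversary adapts to each surviving *family*
    of tactics with probability adapt_prob; adapting kills every variant in
    that family at once. Returns how many months some tactic still worked."""
    rng = random.Random(seed)
    alive = set(tactic_families)
    months = 0
    for _ in range(horizon):
        if not alive:
            break
        months += 1
        alive = {fam for fam in alive if rng.random() >= adapt_prob}
    return months

# Ten small variations on one theme (say, rotating encryption keys) live
# and die together, because they all belong to the same family...
print(months_effective(["encrypt-in-transit"] * 10))
# ...while ten genuinely different kinds of tactics have to be defeated
# one by one.
print(months_effective([f"tactic-{i}" for i in range(10)]))
```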

Third, remember that the adversary will rarely attack a stopgap head-on. Instead, he will probably work around it, by finding a tactic that makes it irrelevant. So don’t worry too much about how well your stopgap resists direct attack, and don’t choose a more expensive stopgap just because it stands up marginally better against direct attacks. If you’re throwing an oil slick onto the road in front of your adversary, you needn’t worry too much about the quality of the oil.

There are some hopeful signs that the big copyright owners are beginning to use stopgaps more effectively. But their policy prescriptions still reflect a poor understanding of stopgap strategy. In the third and final installment of my musings on speedbumps, I’ll talk about the public policy implications of the speedbump/stopgap approach to copyright enforcement.

Extreme Branding

Yesterday I saw something so odd that I just can’t let it pass unrecorded.

I was on a plane from Newark to Seattle, and I noticed that I was sitting next to Adidas Man. Nearly everything about this guy bore the Adidas brand, generally both the name and the logo. His shirt. His pants. His shoes. His jacket. His suitcase. His watch. His CD player. And – I swear I’m not making this up – his wedding ring. Yes, the broad silver band worn on the fourth finger of his left hand was designed in classic wedding-band style, except for the addition of the Adidas logo, and the letters a-d-i-d-a-s embossed prominently on the outside.

Princeton Faculty Passes Grade Quota

Yesterday the Princeton faculty passed the proposed grade inflation resolution (discussed here), establishing a quota on A-level grades. From now on, no more than 35% of the course grades awarded by any department may be A-level grades, and no more than 55% of independent work grades may be A-level.

I had to miss the meeting due to travel, so I can't report directly on the debate. I'll update this post later if I hear anything interesting about it.

What is a Speedbump?

One thing I learned at the Harvard Speedbumps conference is that many people agree that “speedbump DRM” is a good idea; but they seem to have very different opinions of what “speedbump DRM” means. (The conference was declared “off the record” so I can’t attribute specific opinions to specific people or organizations.)

One vision of speedbump DRM tries to delay the leakage of DRM'ed content onto the darknet (i.e., onto open peer-to-peer systems where it's available to anybody). By delaying this leakage for long enough, say for three months, this vision tries to protect a time window in which a copyrighted work can be sold at a premium price.

The problem with this approach is that it assumes you can actually build a DRM system that will prevent leakage of the content for a suitable length of time. So far, that has not been the case – not even close. Most DRM systems are broken within hours, or within a few days at most. And even if they're not broken, the content leaks out in other ways, through leaks in the production process or via the analog hole. Once content is available on the darknet, DRM is nearly useless, since would-be infringers will ignore the DRM'ed content and get unconstrained copies from the darknet instead.
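A toy calculation shows how little of the hoped-for window survives; the window length and the survival times below are purely illustrative assumptions, and the model pretends that content leaks the instant the DRM is first broken.

```python
def protected_fraction(premium_window_days, drm_survival_days):
    """Fraction of the premium-priced sales window that the DRM actually
    protects, under the simplifying assumption that the content leaks the
    moment the DRM is first broken and that leakage ends premium sales."""
    return min(drm_survival_days, premium_window_days) / premium_window_days

window = 90  # the hoped-for three-month premium window (illustrative)
for survival in (0.2, 2, 14, 90):  # survival times in days, hours to months
    print(f"DRM survives {survival:>5} days -> "
          f"{protected_fraction(window, survival):.0%} of the window protected")
```

With break times measured in hours or days, essentially none of a three-month window is protected.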

In any case, this approach isn't really trying to build a speedbump; it's trying to build a safe. (Even top-of-the-line office safes can only stand up to skilled safecrackers for hours.) A speedbump does delay passing cars, but only briefly. A three-month speedbump isn't really a speedbump at all.

A real speedbump doesn't stop drivers from following a path that they're determined to follow. Its purpose, instead, is to make one path less convenient than another. A speedbump strategy for copyright holders, then, tries to make illegal acquisition of content (via P2P, say) less convenient than the legitimate alternative.

There are several methods copyright owners can (and do) use to frustrate P2P infringers. Copyright owners can flood the P2P systems with spoofed files, so that users have to download multiple instances of a file before they get a real one. They can identify P2P uploaders offering copyrighted files, and send them scary warning messages, to reduce the supply of infringing files. These methods make it harder for P2P users to get the copyrighted files they want – they act as speedbumps.
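To see why spoofing acts as a speedbump, consider a simple back-of-the-envelope model (the spoof fractions below are invented for illustration): if a fraction p of the copies on the network are spoofs and users pick copies at random, the expected number of downloads per genuine copy is 1/(1-p).

```python
def expected_downloads(spoof_fraction):
    """Expected number of downloads until the first genuine copy, assuming
    copies are chosen independently at random (a geometric distribution)."""
    return 1.0 / (1.0 - spoof_fraction)

for p in (0.5, 0.9, 0.99):
    print(f"{p:.0%} spoofed -> about "
          f"{expected_downloads(p):.0f} downloads per real file")
```

Even a modest spoof fraction multiplies the effort required, without stopping a determined downloader – exactly the speedbump effect.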

These kinds of speedbumps are very feasible. They can make a significant difference, if they’re coupled with a legitimate alternative that’s really attractive. And if they’re done carefully, these measures have the virtue of inflicting little or no pain on noninfringers.

From an analytical, information security viewpoint, looking for speedbumps rather than impregnable walls requires us to think differently. How exactly we must change our thinking, and how the speedbump approach impacts public policy, are topics for another day.