April 17, 2004

Off-the-record Conferences

In writing about the Harvard Speedbumps conference, I noted that its organizers declared it to be off the record, so that statements made or positions expressed at the conference would not be attributed publicly to any particular person or organization. JD Lasica asks, quite reasonably, why this was done: “Can someone explain to me why a conference needs to be ‘off the record’ in order for people to exchange ideas freely? What kind of society are we living in?”

This is the second off-the-record conference I have been to in my twenty years as a researcher. The first was a long-ago conference on parallel computing. Why that one was off the record was a mystery to me then, and it still is now. Nobody there had anything controversial to say, and no participant was important enough that anyone outside a small research community would even care what was said.

As to the recent Speedbumps conference, I can at least understand the motivation for putting it off the record. Some of the participants, like Cary Sherman of the RIAA and Fritz Attaway of the MPAA, would be understood as speaking for their organizations; and the hope was that such people might depart from their talking points and speak more freely if they knew their statements wouldn’t leave that room.

Overall, there was less posturing at this meeting than one usually sees at similar meetings. My guess is that this wasn’t because of the off-the-record rule, but just because some time has passed in the copyright wars and cooler heads are starting to prevail. Nobody at the meeting took a position that really surprised me.

As far as I could tell, there were only two or three brief exchanges that would not have happened in an on-the-record meeting. These were discussions of various deals that either might be made between different entities, or that one entity had quietly offered to another in the past. For me, these discussions were less interesting than the rest of the meeting: clearly no deal could be made in a room with thirty bystanders, and the deals that were discussed were of the sort that savvy observers of the situation might have predicted anyway.

In retrospect, it looks to me like the conference needn’t have been off the record. We could just as easily have followed the rule used in at least one other meeting I have attended, with everything on the record by default, but speakers allowed to place specific statements off the record.

To some extent, the off-the-record rule at the conference was a consequence of blogging. In pre-blog days, this issue could have been handled by not inviting any reporters to the meeting. Nowadays, at any decent-sized meeting, odds are good that several of the participants have blogs; and odds are also good that somebody will blog the meeting in real time. On the whole this is a wonderful thing; nobody has the time or money to go to every interesting conference.

I have learned a lot from bloggers’ conference reports. It would be a shame to lose them because people are afraid of being quoted.

[My plan still calls for one more post on the substance of the conference, as promised yesterday.]

Stopgap Security

Another thing I learned at the Harvard Speedbumps conference (see here for a previous discussion) is that most people have poor intuition about how to use stopgap measures in security applications. By “stopgap measures” I mean measures that will fail in the long term, but might do some good in the short term while the adversary figures out how to work around them. For example, copyright owners use simple methods to identify the people who are offering files for upload on P2P networks. It’s only a matter of time before P2P designers deploy better methods for shielding their users’ identities so that today’s methods of identifying P2P users no longer work.

Standard security doctrine says that stopgap measures are a bad idea – that the right approach is to look for a long-term solution that the bad guys can’t defeat simply by changing their tactics. Standard doctrine doesn’t demand an impregnable mechanism, but it does insist that a good mechanism must not become utterly useless once the adversary adapts to it.

Yet sometimes, as in copyright owners’ war on P2P infringement, there is no good solution, and stopgap measures are the only option you have. Typically you’ll have many stopgaps to choose from. How should you decide which ones to adopt? I have three rules of thumb to suggest.

First, you should look carefully at the lifetime cost of each stopgap measure, compared to the value it will provide you. Since a measure will have a limited – and possibly quite short – lifetime, any measure that is expensive or time-consuming to deploy will be a loser. Equally unwise is any measure that incurs a long-term cost, such as a measure that requires future devices to implement obsolete stopgaps in order to remain compatible. A good stopgap can be undeployed fully once it has become obsolete.

Second, recognize that when the adversary adapts to one stopgap, he may thereby render a whole family of potential stopgaps useless. So don’t plan on rolling out an endless sequence of small variations on the same method. For example, if you encrypt data in transit, the adversary may shift to a strategy of observing your data at the destination, after the data has been decrypted. Once the adversary has done this, there is no point in changing cryptographic keys or shifting to different encryption methods. Plan to use different kinds of tactics, rather than variations on a single theme.

Third, remember that the adversary will rarely attack a stopgap head-on. Instead, he will probably work around it, by finding a tactic that makes it irrelevant. So don’t worry too much about how well your stopgap resists direct attack, and don’t choose a more expensive stopgap just because it stands up marginally better against direct attacks. If you’re throwing an oil slick onto the road in front of your adversary, you needn’t worry too much about the quality of the oil.
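
To make the first rule concrete, here is a minimal back-of-the-envelope sketch. Every stopgap name and figure in it is hypothetical, invented only to illustrate how a measure’s lifetime value compares against its deployment and long-term costs:

```python
# Hypothetical cost/benefit comparison of candidate stopgap measures.
# Every figure below is invented for illustration; substitute your own estimates.

CANDIDATES = [
    # (name, deploy_cost, value_per_month, expected_lifetime_months, long_term_cost)
    ("spoof files on P2P networks",   50_000, 40_000,  6,       0),
    ("watermark every release",      400_000, 30_000, 12, 250_000),  # future players must keep supporting it
    ("rotate the encryption scheme", 150_000, 25_000,  2,       0),
]

def net_value(deploy_cost, value_per_month, lifetime_months, long_term_cost):
    # Value accrues only until the adversary adapts; the costs are paid regardless.
    return value_per_month * lifetime_months - deploy_cost - long_term_cost

for name, *figures in sorted(CANDIDATES, key=lambda c: net_value(*c[1:]), reverse=True):
    print(f"{name:30s} net value = {net_value(*figures):>10,}")
```

The point is simply that a cheap, fully removable measure can come out ahead of a sturdier one once you count what it costs to deploy, and what it costs to keep supporting after it has become obsolete.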

There are some hopeful signs that the big copyright owners are beginning to use stopgaps more effectively. But their policy prescriptions still reflect a poor understanding of stopgap strategy. In the third and final installment of my musings on speedbumps, I’ll talk about the public policy implications of the speedbump/stopgap approach to copyright enforcement.

Extreme Branding

Yesterday I saw something so odd that I just can’t let it pass unrecorded.

I was on a plane from Newark to Seattle, and I noticed that I was sitting next to Adidas Man. Nearly everything about this guy bore the Adidas brand, generally both the name and the logo. His shirt. His pants. His shoes. His jacket. His suitcase. His watch. His CD player. And – I swear I’m not making this up – his wedding ring. Yes, the broad silver band worn on the fourth finger of his left hand was designed in classic wedding-band style, except for the addition of the Adidas logo, and the letters a-d-i-d-a-s embossed prominently on the outside.

Princeton Faculty Passes Grade Quota

Yesterday the Princeton faculty passed the proposed grade inflation resolution (discussed here), establishing a quota on A-level grades. From now on, no more than 35% of the course grades awarded by any department may be A-level grades, and no more than 55% of independent work grades may be A-level.

I had to miss the meeting due to travel, so I can’t report directly on the debate at the faculty meeting. I’ll update this post later if I hear anything interesting about the debate.

What is a Speedbump?

One thing I learned at the Harvard Speedbumps conference is that many people agree that “speedbump DRM” is a good idea; but they seem to have very different opinions of what “speedbump DRM” means. (The conference was declared “off the record” so I can’t attribute specific opinions to specific people or organizations.)

One vision of speedbump DRM tries to delay the leakage of DRM’ed content onto the darknet (i.e., onto open peer-to-peer systems where it is available to anybody). By delaying this leakage for long enough, say for three months, this vision tries to protect a time window in which a copyrighted work can be sold at a premium price.

The problem with this approach is that it assumes that you can actually build a DRM system that will prevent leakage of the content for a suitable length of time. So far, that has not been the case – not even close. Most DRM systems are broken within hours, or within a few days at most. And even if they’re not broken, the content leaks out in other ways, through leaks in the production process or via the analog hole. Once content is available on the darknet, DRM is nearly useless, since would-be infringers will ignore the DRM’ed content and get unconstrained copies from the darknet instead.

In any case, this approach isn’t really trying to build a speedbump, it’s trying to build a safe. (Even top-of-the-line office safes can only stand up to skilled safecrackers for hours.) A speedbump does delay passing cars, but only briefly. A three-month speedbump isn’t really a speedbump at all.

A real speedbump doesn’t stop drivers from following a path that they’re determined to follow. Its purpose, instead, is to make one path less convenient than another. A speedbump strategy for copyright holders, then, tries to make illegal acquisition of content (via P2P, say) less convenient than the legitimate alternative.

There are several methods copyright owners can (and do) use to frustrate P2P infringers. Copyright owners can flood the P2P systems with spoofed files, so that users have to download multiple instances of a file before they get a real one. They can identify P2P uploaders offering copyrighted files, and send them scary warning messages, to reduce the supply of infringing files. These methods make it harder for P2P users to get the copyrighted files they want – they act as speedbumps.
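
To get a rough sense of how much friction spoofing alone adds, here is a minimal sketch. The spoof fractions are hypothetical, and the model simply assumes that each download attempt independently returns a spoof with probability p, so the number of attempts needed is geometrically distributed:

```python
# Expected number of download attempts before hitting a genuine file, assuming
# each attempt independently retrieves a spoof with probability p. The attempt
# count is then geometric, so E[attempts] = 1 / (1 - p).

def expected_attempts(spoof_fraction: float) -> float:
    return 1.0 / (1.0 - spoof_fraction)

# Hypothetical spoof fractions, chosen only for illustration.
for p in (0.5, 0.8, 0.9, 0.99):
    print(f"spoof fraction {p:4.0%}: about {expected_attempts(p):5.1f} downloads per real file")
```

Even a heavily spoofed network costs a determined downloader only a handful of extra attempts, which is exactly the speedbump point: the goal is not to stop him, but to make the legitimate alternative comparatively convenient.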

These kinds of speedbumps are very feasible. They can make a significant difference, if they’re coupled with a legitimate alternative that’s really attractive. And if they’re done carefully, these measures have the virtue of inflicting little or no pain on noninfringers.

From an analytical, information security viewpoint, looking for speedbumps rather than impregnable walls requires us to think differently. How exactly we must change our thinking, and how the speedbump approach impacts public policy, are topics for another day.

How Much Information Do Princeton Grades Convey?

One of the standard arguments against grade inflation is that inflated grades convey less information about students’ performances to employers, graduate schools, and the students themselves.

In light of the grade inflation debate at Princeton, I decided to apply information theory, a branch of computer science theory, to the question of how much information is conveyed by students’ course grades. I report the results in a four-page memo, in which I conclude that Princeton grades convey 11% less information than they did thirty years ago, and that imposing a 35% quota on A-level grades, as Princeton is proposing to do, would increase the information content of grades by 10% at most.
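
The memo spells out the actual calculation; the standard information-theoretic way to do this kind of analysis is to measure a grading scheme’s information content by the Shannon entropy of its grade distribution. Here is a minimal sketch of that computation, with two invented distributions that are not the Princeton figures analyzed in the memo:

```python
# Shannon entropy of a grade distribution, in bits: H = -sum(p * log2(p)).
# The two distributions below are invented for illustration; they are not the
# actual Princeton figures analyzed in the memo.
from math import log2

def entropy_bits(distribution):
    return -sum(p * log2(p) for p in distribution.values() if p > 0)

inflated = {"A": 0.46, "B": 0.40, "C": 0.10, "D": 0.03, "F": 0.01}  # hypothetical current mix
quota    = {"A": 0.35, "B": 0.48, "C": 0.13, "D": 0.03, "F": 0.01}  # hypothetical mix under a 35% A cap

h_before, h_after = entropy_bits(inflated), entropy_bits(quota)
print(f"without quota:    {h_before:.2f} bits per grade")
print(f"with 35% A quota: {h_after:.2f} bits per grade")
print(f"change:           {100 * (h_after - h_before) / h_before:+.1f}%")
```

Because entropy depends only on how evenly grades are spread across the available levels, piling most students into the A range is what squeezes the information out, and capping the A fraction recovers only some of it.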

I’m trying to convince the Dean of the Faculty to distribute my memo to the faculty before the Monday vote on the proposed A quota.

Today’s Daily Princetonian ran a story, by Alyson Zureick, about my study.

California Panel Recommends Decertifying One Diebold System

The State of California’s Voting Systems Panel has voted to recommend the decertification of Diebold’s TSx e-voting system, according to a release from verifiedvoting.org. The final decision will be made by Secretary of State Kevin Shelley, but he is expected to approve the recommendation within the next week.

The TSx is only one of the Diebold e-voting systems used in California, but this is still an important step.

Copyright and Cultural Policy

James Grimmelmann offers another nice conference report, this time from the Seton Hall symposium on “Peer to Peer at the Crossroads”. I had expressed concern earlier about the lack of technologists on the program at the symposium, but James reports that the lawyers did just fine on their own, steering well clear of the counterfactual technology assumptions one sometimes sees at lawyer conferences.

Among other interesting bits, James summarizes Tim Wu’s presentation, based on a recent paper arguing that much of what passes for copyright policy is really just communications policy in disguise.

We’re all familiar, by now, with the argument that expansive copyright is bad because it’s destructive to innovation and allows incumbent copyright industries to prevent the birth of new competitors. Content companies tied to old distribution models are, goes this argument, strangling new technologies in their crib. We’re also familiar, by now, with the argument that changes in technology are destroying old, profitable, and socially useful businesses, without creating anything stable, profitable, or beneficial in their place. In this strain of argument, technological Boston Stranglers roam free, wrecking the enormous investments that incumbents have made and ruining the incentives for them to put the needed money into building the services and networks of the future.

Tim’s insight, to do it the injustice of a sound-bite summarization, is that these are not really arguments that are rooted in copyright policy. These are communications policy arguments; it just so happens that the relevant law affecting communications policy is copyright law. Where in the past we’d have argued about how far to turn the “antitrust exemption for ILECs” knob, or which “spectrum auction” buttons to push, now we’re arguing about where to set the “copyright” slider for optimal communications policy. That means debates about copyright are being phrased in terms of a traditional political axis in communications law: whether to favor vertically-integrated (possibly monopolist) incumbents who will invest heavily because they can capture the profits from their investments, or to favor evolutionary competition with open standards in which the pressure for investment is driven by the need to stay ahead of one’s competitors.

The punch line: right now, our official direction in communications policy is moving towards the latter model. The big 1996 act embraced these principles, and the FCC is talking them up big time. Copyright, to the extent that it is currently pushing towards the former model, is pushing us to a communications model that flourished in decades past but is now out of favor.

This is a very important point, because the failure to see copyright in the broader context of communications policy has been the root cause of many policy errors, such as the FCC’s Broadcast Flag ruling.

I would have liked to attend the Seton Hall symposium myself, but I was at the Harvard Speedbumps conference that day. And I would have produced a Grimmelmann-quality conference report – really I would – but the Harvard conference was officially off-the-record. I’ll have more to say in future posts about the ideas discussed at the speedbumps conference, but without attributing them to any particular people.

Another Form of Grade Inflation

You may recall Princeton’s proposal to fight grade inflation by putting a quota on the number of A’s that can be awarded. Joe Barillari made a brilliant followup proposal in yesterday’s Daily Princetonian, to fight the “problem” of inflation in students’ ratings of their professors’ teaching.

Diebold Misled Officials about Certification

Diebold Election Systems knowingly used uncertified software in California elections, despite warnings from its lawyers that doing so was illegal and might subject the company to criminal sanctions and decertification in California, according to Ian Hoffman’s story in the Oakland Tribune.

The story says that Diebold made false representations about certification to state officials:

The drafts [of letters to the state] show [Diebold's lawyers] staked out a firm position that a critical piece of Diebold’s voting system – its voter-card encoders – didn’t need national or state approval because they were commercial-off-the-shelf products, never modified by Diebold.

But on the same day the letter was received, Diebold-hired techs were loading non-commercial Diebold software into voter-card encoders in a West Sacramento warehouse for shipment to Alameda and San Diego counties.

Many of these encoders failed on election day, causing voters to be turned away from the polls in San Diego and Alameda Counties.

This brings Diebold one step closer to being decertified in California:

“Diebold may suffer from gross incompetence, gross negligence. I don’t know whether there’s any malevolence involved,” said a senior California elections official who spoke on condition of anonymity. “I don’t know why they’ve acted the way they’ve acted and the way they’re continuing to act. Notwithstanding their rhetoric, they have not learned any lessons in terms of dealing with this secretary (of state).”

California voting officials will discuss Diebold’s behavior at a two-day hearing that starts today.

[link via Dan Gillmor]