November 27, 2024

Self-Help for Consumers

Braden Cox at Technology Liberation Front writes about a law school symposium on “The Economics of Self-Help and Self-Defense in Cyberspace”. Near the end of an interesting discussion, Cox says this:

The conference ended with Dan Burk at Univ of Minnesota Law School giving a lefty analysis for how DRM will be mostly bad for consumers unless the government steps in and sets limits that preserve fair use. I had to challenge him on this one, and asked where is the market failure here? Consumers will get what they demand, and if some DRM is overly restrictive there will be companies that will provide more to consumers. He said that the consumers of DRM technology are not the general public, but the recording companies, and because society-at-large is not properly represented in this debate the government needs to play a larger role.

I would answer Cox’s question a bit differently. I’m happy to agree with Cox that the market, left to itself, would find a reasonable balance between the desires of media publishers and consumers. But the market hasn’t been left to itself. Congress passed the DMCA, which bans some products that let consumers exercise their rights to make noninfringing use (including fair use) of works.

The best solution would be to repeal the DMCA, or at least to create a real exemption for technologies that enable fair use and other lawful uses. If that’s not possible, and Congress continues to insist on decreeing which media player technologies can exist, the second-best solution is to make those decrees more wisely.

Because of the DMCA, consumers have not gotten what they demand. For example, many consumers demand a DVD player that runs on Linux, but when somebody tried to build one it was deemed illegal.

Perhaps the Technology Liberation Front can help us liberate these technologies.

Security by Obscurity

Adam Shostack points to a new paper by Peter Swire, entitled “A Model for When Disclosure Helps Security”. How, Swire asks, can we reconcile the pro-disclosure “no security by obscurity” stance of crypto weenies with the pro-secrecy, “loose lips sink ships” attitude of the military? Surely both communities understand their own problems; yet they come to different conclusions about the value of secrecy.

Swire argues that the answer lies in the differing characteristics of security problems. For example, when an attacker can cheaply probe a system to learn how it works, secrecy doesn’t help much; but when probing is impossible, expensive, or pointless, secrecy makes more sense.

This is a worthwhile discussion, but I think it slightly misses the point of the “no security by obscurity” principle. The point is not to avoid secrecy altogether; that would almost never be feasible. Instead, the point is to be very careful about what kind of secrecy you rely on.

“Security by obscurity” is really just a pejorative term for systems that violate Kerckhoffs’ Principle, which says that you should not rely on keeping an algorithm secret, but should only rely on keeping a numeric key secret. Keys make better secrets than algorithms do, for at least two reasons. First, it’s easy to use different keys at different times and places, thereby localizing the effect of lost secrets; but it’s hard to vary your algorithms. Second, if keys are generated randomly then we can quantify the effort required for an adversary to guess them; but we can’t predict how hard it will be for an adversary to guess which algorithm we’re using.
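To make the second point concrete, here’s a back-of-the-envelope sketch (the numbers and code are mine, purely illustrative, not from any cryptographic standard). A randomly generated 128-bit key takes about 2^127 guesses to find on average, a figure we can state exactly, and minting a replacement key costs essentially nothing; no comparable bound exists for an adversary guessing which algorithm you chose.

```python
# Sketch: why random keys are quantifiable secrets. All figures are
# illustrative assumptions, not benchmarks.
import secrets

key = secrets.token_bytes(16)            # a fresh 128-bit key, free to mint
print(f"new key: {key.hex()}")           # easy to vary per time and place

expected_guesses = 2 ** 127              # average brute-force work
tries_per_sec = 10 ** 12                 # generous guess: a trillion per second
years = expected_guesses / (tries_per_sec * 3600 * 24 * 365)
print(f"average time to guess: {years:.1e} years")   # roughly 5.4e18 years
```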

So cryptographers do believe in keeping secrets, but are very careful about which kinds of secrets they keep. True, the military’s secrets sometimes violate Kerckhoffs’ Principle, but this is mainly because there is no alternative. After all, if you have to get a troopship safely across an ocean, you can’t just encrypt the ship under a secret key and beam it across the water. Your only choice is to rely on keeping the algorithm (i.e., the ship’s route) secret.

In the end, I think there’s less difference between the methods of cryptographers and the military than one might think. Cryptographers have more options, so they can be pickier about which secrets to keep; the military has to make do with the options it has.

Absentee Voting Horror Stories

Absentee ballots are a common vector for election fraud, and several U.S. states have inadequate safeguards for handling them, according to a story by Michael Moss in today’s New York Times. The story recounts many examples of absentee ballot fraud, including blatant vote-buying.

For in-person voting, polling-place procedures help to authenticate voters and to ensure that votes are cast secretly and are not lost in transit. Absentee voting has weaker safeguards all around. In some states, party workers are even allowed to “help” voters fill out their ballots and to transport completed ballots to election officials. (The latter is problematic because ballots for the “wrong” candidate might be “lost” in transit.)

Traditional voting security relies on having many eyes in the polling place, watching what happens. Of course, the observers don’t see how each voter votes, but they do see that the vote is cast secretly and by the correct voter. Moving our voting procedures behind closed doors, as with absentee ballots, or inside a technological black box, as with paperless e-voting, undermines these protections.

Without safeguards, absentee ballots are too risky. Even with proper safeguards, they are at best a necessary compromise for voters who genuinely can’t make it to the polls.

Privacy and Toll Transponders

Rebecca Bolin at LawMeme discusses novel applications for the toll transponder systems that are used to collect highway and bridge tolls.

These systems, such as the E-ZPass system used in the northeastern U.S., operate by putting a tag device in each car. When a car passes through a tollbooth, a reader in the tollbooth sends a radio signal to the tag. The tag identifies itself (by radio), and the system collects the appropriate toll (by credit card charge) from the tag’s owner.

This raises obvious privacy concerns: a third party could build its own reader that mimics a tollbooth and quietly collect information about who drives where.

Rebecca notes that Texas A&M engineers built a useful system that reads toll transponders at various points on Houston-area freeways, and uses the results to calculate the average traffic speed on each stretch of road. This is then made available to the public on a handy website.
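The computation involved is simple enough to sketch. Here’s a toy version (my illustration, not the actual Texas A&M system): match the same tag at two readers a known distance apart, and average the speeds implied by the travel times.

```python
# Toy speed map: reads_a and reads_b map tag IDs to the time (seconds)
# each tag was seen at two readers `miles_apart` miles from each other.
def avg_speed_mph(reads_a, reads_b, miles_apart):
    speeds = [miles_apart / ((reads_b[tag] - reads_a[tag]) / 3600.0)
              for tag in reads_a.keys() & reads_b.keys()   # tags seen at both
              if reads_b[tag] > reads_a[tag]]
    return sum(speeds) / len(speeds) if speeds else None

reads_a = {"tag1": 0, "tag2": 10}
reads_b = {"tag1": 120, "tag2": 190}       # seconds later, 2 miles down the road
print(avg_speed_mph(reads_a, reads_b, 2.0))   # (60 + 40) / 2 = 50.0 mph
```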

The openness of the toll transponder system to third-party applications is both a blessing and a curse, since it allows good applications like the real-time traffic map, and bad applications like privacy-violating vehicle tracking.

Here’s where things get interesting. The tradeoff that Rebecca notes is not a necessary consequence of using toll transponders. It’s really the result of technical design decisions that could have been made differently. Want a toll transponder system that can’t be read usefully by third parties? We can design it that way. Want a system that allows only authorized third parties to track vehicles? We can design it that way. Want a system that lets anyone tell that the same vehicle has passed two points, without revealing which particular vehicle it was? We can design it that way, too.
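To make that last design concrete, here’s a minimal sketch (my invention, not any deployed E-ZPass protocol; all names and key handling are illustrative). The tag broadcasts a pseudonym that is stable within a day, so anyone can link two sightings of the same car; but only the toll authority, which holds the master key, can map a pseudonym back to a billing account, and yesterday’s pseudonyms don’t link to today’s.

```python
# Hypothetical rotating-pseudonym transponder; purely a design sketch.
import hmac, hashlib

MASTER_KEY = b"held-only-by-the-toll-authority"   # toy value

def tag_key(vehicle_id):
    """Per-vehicle key, derived by the authority and burned into the tag."""
    return hmac.new(MASTER_KEY, vehicle_id.encode(), hashlib.sha256).digest()

def pseudonym(k, date):
    """What the tag broadcasts: stable within a day, unlinkable across days."""
    return hmac.new(k, date.encode(), hashlib.sha256).hexdigest()[:16]

k = tag_key("NJ-ABC123")                   # stored inside this car's tag
p1 = pseudonym(k, "2004-09-13")            # seen by a reader at point A
p2 = pseudonym(k, "2004-09-13")            # seen by a reader at point B
assert p1 == p2                            # anyone can link the two sightings

def identify(p, date, accounts):
    """Only the authority, knowing MASTER_KEY and its accounts, can bill."""
    return next((v for v in accounts if pseudonym(tag_key(v), date) == p), None)

print(identify(p1, "2004-09-13", ["NY-XYZ789", "NJ-ABC123"]))  # NJ-ABC123
```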

Often, apparent tradeoffs in new technologies are not inherent, but could have been eliminated by thinking more carefully in advance about what the technology is supposed to do and what it isn’t supposed to do.

Even if it’s too late to change the deployed system, we can often learn by turning back the clock and thinking about how we would have designed a technology if we knew then what we know now about the technology’s implications. And on the first day of classes (e.g., today, here at Princeton) this is also a useful source of homework problems.

When Wikipedia Converges

Many readers, responding to my recent quality-check on Wikipedia, have argued that over time the entries in question will improve, so that in the long run Wikipedia will outpace conventional encyclopedias like Britannica. It seems to me that this is the most important claim made by Wikipedia boosters.

If a Wikipedia entry gets enough attention, then it will likely change over time. When the entry is new, it will almost certainly improve as contributors fill in more detail. But once it matures, it seems likely to reach some level of quality and then level off, executing a quality-neutral random walk in which the changes reflect nothing more than minor substitutions of one contributor’s style or viewpoint for another.
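As a toy model of that trajectory (a simulation of the claim, not data from Wikipedia; the thresholds are arbitrary): early edits mostly add quality, and after maturity each edit just nudges quality up or down at random.

```python
# Simulated entry quality over 500 edits; all parameters are made up.
import random

quality, plateau = 0.0, 10.0
for edit in range(500):
    if quality < plateau:
        quality += random.uniform(0.0, 0.2)    # immature: detail still missing
    else:
        quality += random.uniform(-0.1, 0.1)   # mature: quality-neutral walk

print(f"quality after 500 edits: {quality:.2f} (plateau near {plateau})")
```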

I’d expect a similar story for Wikipedia as a whole, with early effort spent mostly on expanding the scope of the site, and later effort spent more on improving (or at least changing) existing entries. Given enough effort spent on the site, more and more entries should approach maturity, and the rate of improvement in Wikipedia as a whole should approach zero.

This leaves us with two questions: (1) Will enough effort be spent on Wikipedia to cause it to reach the quality plateau? (2) How high is the quality plateau anyway?

We can shed light on both questions by studying the evolution of individual entries over time. Such a study is possible today, since Wikipedia tracks the history of every entry. I would like to see the results of such a study, but unfortunately I don’t have time to do it myself.
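For anyone tempted to run that study, here’s a minimal starting point (the MediaWiki API endpoint and parameters below reflect today’s tooling, my assumption rather than anything from the discussion above): pull each revision’s timestamp and size for an entry, and see whether the size, a crude proxy for maturity, levels off.

```python
# Sketch of a single-entry history pull; article size is only a rough
# proxy for quality.
import requests

API = "https://en.wikipedia.org/w/api.php"

def revision_sizes(title, limit=500):
    params = {
        "action": "query", "format": "json", "prop": "revisions",
        "titles": title, "rvprop": "timestamp|size",
        "rvlimit": limit, "rvdir": "newer",       # oldest revisions first
    }
    pages = requests.get(API, params=params).json()["query"]["pages"]
    page = next(iter(pages.values()))
    return [(r["timestamp"], r.get("size", 0)) for r in page["revisions"]]

for ts, size in revision_sizes("Cryptography")[::50]:   # every 50th revision
    print(ts, size)
```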