
Reading Code in 2005

[This post is part of the Book Club reading Lawrence Lessig’s Code and Other Laws of Cyberspace. Please use the comments area below to discuss the Preface and Chapter 1. For next Friday, we’ll read Chapter 2.]

“Code is law.” Lawrence Lessig’s dictum pervades our thinking about Internet policy. Sometimes it’s hard, reading Code in 2005, to remember the impact the book had when it was published back in 1999. Six years is a long time on the net: four iterations of Moore’s Law. Dealing with that gap in time – the dotcom bubble, 9/11, the Induce Act, the Broadcast Flag, and everything else that has happened – is one of the challenges for us reading Code today.

To understand Code, we need to turn back the clock to 1999. Naive cyberlibertarianism ruled the day. The ink on Barlow’s Declaration of Independence of Cyberspace had barely dried. As Lessig puts it,

The claim now was that government could not regulate cyberspace, that cyberspace was essentially, and unavoidably, free. Governments could threaten, but behavior could not be controlled; laws could be passed, but they would be meaningless. There was no choice about which government to install—none could reign. Cyberspace would be a society of a very different sort. There would be definition and direction, but built from the bottom up, and never through the direction of a state. The society of this space would be a fully self-ordering entity, cleansed of governors and free from political hacks.

Most everyone seemed to believe this. Then Lessig’s Code appeared. Suddenly, people could see alternative versions of cyberspace that weren’t inherently free and uncontrolled. To many, including Lessig, the future of cyberspace held more control, more constraint.

Today, Internet policy looks like a war of attrition between freedom and constraint. So Code won’t give us the “Aha!” reaction that it might have given our younger selves six years ago. “Code is law” is the new conventional wisdom.

In turning back to Code, I wondered how well it would hold up. Would it seem fresh, or dated? Would it still have things to teach us?

Based on the Preface and Chapter 1 – a truncated portion, to be sure – the book holds up pretty well. The questions Lessig asks in the Preface still matter to us today.

The challenge of our generation is to reconcile these two forces. How do we protect liberty when the architectures of control are managed as much by the government as by the private sector? How do we assure privacy when the ether perpetually spies? How do we guarantee free thought when the push is to propertize every idea? How do we guarantee self-determination when the architectures of control are perpetually determined elsewhere?

Lessig’s crafty comparison, in Chapter 1, of cyberspace to the newly freed countries of Eastern Europe may be even more intriguing today than it was in 1999, given what has happened in Eastern Europe in the intervening years. I’ll leave it to all of you to unpack this analogy.

Lessig once said, “Pessimism is my brand.” His pessimism is on display at the end of Chapter 1.

I end by asking whether we—meaning Americans—are up to the challenge that these choices present. Given our present tradition in constitutional law and our present faith in representative government, are we able to respond collectively to the changes I describe?

My strong sense is that we are not.

We face serious challenges, but I suspect that Lessig is a bit more hopeful today.

Welcome to the Book Club. Let the discussion begin!

Analysis of Fancy E-Voting Protocols

Karlof, Sastry, and Wagner have an interesting new paper looking at fancy voting protocols designed by Neff and Chaum, and finding that they’re not yet ready for use.

The protocols try to use advanced cryptography to make electronic voting secure. The Neff scheme (I’ll ignore the Chaum scheme, for brevity) produces three outputs: a paper receipt for each voter to take home, a public list of untabulated scrambled ballots, and a final tabulation. These all have special cryptographic properties that can be verified to detect fraud. For example, a voter’s take-home receipt allows the voter to verify that his vote was recorded correctly. But to prevent coercion, the receipt does not allow the voter to prove to a third party how he voted.
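To fix intuition, here is a toy sketch of the “verifiable but not provable” idea. It is my own illustration, not Neff’s actual construction (which relies on much richer zero-knowledge machinery): the machine posts a salted commitment to the ballot on a public list, and the voter’s receipt lets the voter check that the commitment is there without revealing the vote to anyone else.

```python
# Toy commitment-style receipt, for intuition only. This is NOT Neff's actual
# construction, which relies on much richer zero-knowledge machinery.
import hashlib
import secrets

def cast_ballot(vote):
    """The machine commits to the vote; the salt stays with the machine/trustees."""
    salt = secrets.token_hex(16)
    commitment = hashlib.sha256((salt + vote).encode()).hexdigest()
    receipt = commitment           # the voter takes this home
    bulletin_entry = commitment    # this is posted on the public list
    return receipt, bulletin_entry

def voter_checks(receipt, bulletin_board):
    """The voter can confirm that their ballot appears on the public list..."""
    return receipt in bulletin_board

# ...but the receipt contains neither the vote nor the salt, so it cannot be
# used to prove to a coercer or vote-buyer how the vote was cast.
receipt, entry = cast_ballot("candidate A")
print(voter_checks(receipt, [entry]))   # True
```

The real protocol adds in-booth challenges so the voter can also check that the commitment actually encodes the chosen vote; the toy only shows why a receipt can confirm inclusion without helping a coercer.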

The voting protocols are impressive cryptographic results, but the new paper shows that when the protocols are embedded into full voting systems, serious problems arise.

Some of these problems are pretty simple. For example, a voter who keeps his receipt can ensure that crooked election officials don’t alter his vote. But if the voter discards his receipt at the polling place, an official who notices this can change the voter’s vote. Or if the voter is coerced into handing over his receipt to his employer or union boss, then his vote can be altered.

Another simple problem is that the protocols allow some kinds of vote-counting errors to be detected but not corrected. In other words, we will be able to tell that the result is not the true vote count, but we may not be able to recover the true vote count. This means that somebody who doesn’t like the way the election is going can cause the technology to make errors, thereby invalidating the election. A malicious voting machine could even do this if it sees too many votes being cast for the wrong candidate.

There are also more subtle problems, such as subliminal channels by which malicious voting-machine software could encode information into seemingly random parts of the voter’s receipt. Since some information from the voter’s receipt is posted on the public list, this information would be available to anybody who was in cahoots with the malicious programmer. A malicious voting machine could secretly encode the precise time a vote was cast, and how it was cast, in a way that a malicious person could secretly decode later. Since most polling places record when each voter casts a ballot, this would allow individual voters’ votes to be leaked. Just the possibility of this happening would cause voters to doubt that their votes were really secret.
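Here is a hypothetical sketch of how such a channel could work, using invented names and a trivial keyed encoding rather than anything from the paper: the bytes that are supposed to be random are instead an encryption of the timestamp and the vote under a key the programmer shares with an accomplice.

```python
# Hypothetical subliminal channel (my own toy, not taken from the paper): the
# machine's "random" nonce secretly encodes when and how the vote was cast,
# readable only by someone who knows the shared key.
import hashlib
import struct
import time

SHARED_KEY = b"secret shared by the malicious programmer and an accomplice"

def _keystream(n):
    # A real channel would derive a fresh keystream per ballot to avoid
    # detectable patterns; a single SHA-256 output keeps the toy short.
    return hashlib.sha256(SHARED_KEY).digest()[:n]

def malicious_nonce(candidate_index):
    """Produce 12 'random-looking' bytes that encode (timestamp, vote)."""
    payload = struct.pack(">dI", time.time(), candidate_index)  # 8 + 4 bytes
    return bytes(a ^ b for a, b in zip(payload, _keystream(len(payload))))

def accomplice_decodes(nonce):
    payload = bytes(a ^ b for a, b in zip(nonce, _keystream(len(nonce))))
    return struct.unpack(">dI", payload)   # (time the vote was cast, candidate)

nonce = malicious_nonce(candidate_index=2)   # published along with receipt data
print(accomplice_decodes(nonce))             # accomplice learns when and how
```

In a real attack the encoding would be crafted to look statistically random, which is exactly what makes subliminal channels so hard to rule out by inspection.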

Interestingly, many of these problems can be mitigated by adding a voter-verified paper ballot, which is generated by the voting machine and dropped into an old-fashioned ballot box. (This is in addition to the cryptographically generated paper receipt that the voter would take home.) The paper ballots provide an additional check against fraud, an audit mechanism to gauge the accuracy of the cryptographic system, and a fallback in case of failure. Perhaps the best solution is one that uses both cryptography and voter-verified paper ballots, as independent anti-fraud measures.

The take-home lesson of this paper is that cryptographic voting protocols are promising, but more work is needed to make them ready for use. It seems likely that cryptographic protocols will help to improve the accuracy of elections someday.

[Thanks to Joe Hall for pointing me to the paper.]

CDT Closes Eyes, Wishes for Good DRM

The Center for Democracy and Technology just released a new copyright policy paper. Derek Slater notes, astutely, that it tries at all costs to take the middle ground. It’s interesting to see what CDT sees as the middle ground.

Ernest Miller gives the paper a harsh review. I think Ernie is too harsh in some areas.

Rather than reviewing the whole paper, I’ll look here at the section on DRM. Here CDT’s strategy is essentially to wish that we lived on a planet where DRM could be consumer-friendly while preventing infringement. They’re smart enough not to claim that we live on such a planet now, only that people hope that we will soon:

While DRM systems can be very restrictive, much work is underway to create content protections that allow expansive consumer uses, while still protecting against widespread distribution.

(They footnote this by referring to FairPlay, TivoToGo, and AACS-LA, which all fall well short of their goal.) CDT asserts that if DRM systems that made everyone happy did exist, it would be good to use them. Fair enough. But what should we do in the actual world, where DRM that everyone loves is about as likely as teleportation or perpetual motion?

This means producers must be free to experiment with various models of digital distribution, using different content protection technologies and offering different sets of permissions and limitations. [Government DRM mandates are bad.]

Consumers, meanwhile, must have real options for purchasing different bundles of rights at different price points.

Producers should be free to experiment. Consumers should be free to buy. Gee, thanks.

Actually, this would be fine if CDT really meant that all producers were free to experiment with DRM systems. Nowadays, everybody is a producer. If you take photographs, you’re a producer of copyrighted images. If you take home movies, you’re a producer of copyrighted video. If you write, you’re a producer of copyrighted text. We’re all producers. A world where we could all experiment would be good.

What they really mean, of course, is that some producers are more equal than others. Those who are expected to sell a few works to many people – or, given the way policy really gets made, those who have done so in the recent past – are called “producers”, while those who produce the vast majority of new copyrighted works are somehow called “consumers”. (And don’t say that big media produces the only works of value. Quick: Which still images do you value most in the world? I’ll bet they’re photos, and that they weren’t taken by a big media company.)

Here’s the bottom line: In the real world, DRM policy involves tradeoffs, and requires choices. Wishing for a magical DRM technology that will please everyone is not a strategy.

Intellectual Property, Innovation, and Decision Architectures

Tim Wu has an interesting new draft paper on how public policy in areas like intellectual property affects which innovations are pursued. It’s often hard to tell in advance which innovations will succeed. Organizational economists distinguish centralized decision structures, in which one party decides whether to proceed with a proposed innovation, from decentralized structures, in which any one of several parties can decide to proceed.

This distinction gives us a new perspective on when intellectual property rights should be assigned, and what their optimal scope is. In general, economists favor decentralized decision structures in economic systems, based on the observation that free market economies perform better than planned centralized economies. This suggests – even accepting the useful incentives created by intellectual property – at least one reason to be cautious about the assignment of broad rights. The danger is that centralization of investment decision-making may block the best or most innovative ideas from coming to market. This concern must be weighed against the desirable ex ante incentives created by an intellectual property grant.
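To make the intuition concrete, here is a small back-of-the-envelope simulation, entirely my own illustration rather than anything from Wu’s paper: each idea has an unknown true value, every decision-maker sees only a noisy estimate of it, and we compare what a single gatekeeper funds against the best of what several independent parties fund.

```python
# Back-of-the-envelope simulation (my own illustration, not from Wu's paper):
# ideas have unknown true values, every decision-maker sees only a noisy
# estimate, and we compare one gatekeeper against several independent funders.
import random

def simulate(n_ideas=20, n_parties=5, noise=1.0, trials=5000):
    central_total = decentral_total = 0.0
    for _ in range(trials):
        true_values = [random.gauss(0, 1) for _ in range(n_ideas)]

        # Centralized: a single gatekeeper funds the idea it (noisily) likes best.
        estimates = [v + random.gauss(0, noise) for v in true_values]
        central_total += true_values[estimates.index(max(estimates))]

        # Decentralized: each party funds its own noisy favorite; the market
        # then rewards whichever funded idea turns out to be truly best.
        funded = set()
        for _ in range(n_parties):
            estimates = [v + random.gauss(0, noise) for v in true_values]
            funded.add(estimates.index(max(estimates)))
        decentral_total += max(true_values[i] for i in funded)

    return central_total / trials, decentral_total / trials

central, decentral = simulate()
print(f"gatekeeper: {central:.2f}   independent parties: {decentral:.2f}")
```

With noisy judgments, the decentralized structure tends to surface a better idea, at the cost of funding several losers along the way, which is the tradeoff Wu describes.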

Wu’s observation opens up a whole series of questions, which he discusses briefly. I can’t do his discussion justice here, so I’ll just extract two issues he raises.

The first issue is whether the problems with centralized management can be overcome by licensing. Suppose Alice owns a patent that is needed to build useful widgets. Alice has centralized control over any widget innovation, and she might make bad decisions about which innovations to invest in. Suppose Bob believes that quabbling widgets will be a big hit, but Alice doesn’t like them and decides not to invest in them. If Bob can pay Alice for the right to build quabbling widgets, then perhaps Bob’s good sense (in this case) can overcome Alice’s doubts. Alice is happy to take Bob’s money in exchange for letting him sell a product that she thinks will fail; and quabbling widgets get built. If the story works out this way, then the centralization of decisionmaking by Alice isn’t much of a problem, because anyone who has a better idea (or thinks they do) can just cut a deal with Alice.

But exclusive rights won’t always be licensed efficiently. The economic literature considers the conditions under which efficient licensing will occur. Suffice it to say that this is a complicated question, and that one should not simply assume that efficient licensing is a given. Disruptive technologies are especially likely to go unlicensed.
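As a toy illustration of why, with invented numbers rather than anything from the paper, consider an incumbent who expects the licensed product to cannibalize her existing sales: there may simply be no price the innovator can afford that she is willing to accept.

```python
# Invented numbers, not from the paper: licensing can fail when the incumbent
# expects the new product to cannibalize her existing sales.
bob_expected_profit = 10.0   # what Bob thinks quabbling widgets will earn him
alice_expected_loss = 15.0   # sales Alice expects to lose if the widgets succeed

max_bob_will_pay  = bob_expected_profit   # Bob won't pay more than he expects to earn
min_alice_accepts = alice_expected_loss   # Alice won't accept less than she expects to lose

if max_bob_will_pay >= min_alice_accepts:
    print("A mutually acceptable license price exists; the widgets get built.")
else:
    print("No deal: the exclusive right blocks the innovation.")
```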

Wu also discusses, based on his analysis, which kinds of industries are the best candidates for strong grants of exclusive rights.

An intellectual property regime is most clearly desirable for mature industries, by definition technologically stable, and with low or negative economic growth…. [I]f by definition profit margins are thin in a declining industry, it will be better to have only the very best projects come to market…. By the same logic, the case for strong intellectual property protections may be at its weakest in new industries, which can be described as industries that are expanding rapidly and where technologies are changing quickly…. A [decentralized] decision structure may be necessary to uncover the innovative ideas that are the most valuable, at the costs of multiple failures.

As they say in the blogosphere, read the whole thing.

MacIntel: It’s Not About DRM

The big tech news today is that Apple will start using Intel microprocessors (the same brand used in PCs) in its Macintosh computers, starting next year. Some have speculated that this might be motivated by DRM. The theory is that Apple wants the anticopying features that will be built into the hardware of future Intel processors.

The theory is wrong.

Though they’re not talking much about it, savvy people in the computer industry have already figured out that hardware DRM support is a non-starter on general-purpose computers. At most, hardware DRM can plug one hole in a system with many holes, by preventing attacks that rely on running an operating system on top of an emulator rather than on top of a real hardware processor. Plenty of other attacks still work, by defeating insecure operating systems or applications, or by exploiting the analog hole, or by capturing content during production or distribution. Hardware DRM blocks one of the less likely attacks, which makes little if any difference.

If DRM is any part of Apple’s motivation – which I very much doubt – the reason can only be as a symbolic gesture of submission to Hollywood. One of the lessons of DVD copy protection is that Hollywood still seems to need the security blanket of DRM to justify accepting a new distribution medium. DVD copy protection didn’t actually keep any content from appearing on the darknet, but it did give Hollywood a false sense of security that seemed to be necessary to get them to release DVDs. It’s awfully hard to believe that Hollywood is so insistent on symbolic DRM that it could induce Apple to pay the price of switching chip makers.

Most likely, Apple is switching to Intel chips for the most basic reason: the Intel chips meet Apple’s needs better than IBM chips do. Some stories report that Intel had an advantage in producing fast chips that run cool and preserve battery power in laptops. Perhaps Apple just believes that Intel, which makes many more chips than IBM, is a better bet for the future. Apple has its reasons, but DRM isn’t one of them.