Archives for June 2005

A 'Darknet' backgrounder

OK, time to dive in here from my hotel room. A little while ago I posted a guest entry on the Berkman blog that offers a few details about how Darknet and Ourmedia came to be. It’s hard to summarize a book’s major themes in a paragraph or two, but the basic thrust is:

– Increasingly we’re becoming creators and co-creators of our media experiences instead of merely passive receptacles for big media content. I call this the personal media revolution.

– The law is rapidly falling out of sync with what people want to do with media – to reclaim it, borrow from it, remix it and recirculate it.

– As such, the law is turning millions of us into a nation of digital felons. I cite example after example of people (a Boston pastor, an Intel vice president) using media in reasonable ways and yet finding themselves on the wrong side of the law because they’ve broken the encryption on a DVD or tried to apply the precepts of fair use to our increasingly visual culture.

– But most of “Darknet” is not about the law – it’s about the future of media (movies, television, music, computing, games) and what kind of media we want as a society: spoonfed, one-way, traditional media or a more vibrant, interactive form of media filled with grassroots, shared experiences?

I suspect you can tell where I come down.

Guest Blogger: JD Lasica

I’m happy to welcome JD Lasica, author of the new book Darknet: Hollywood’s War Against the Digital Generation, and co-founder of Ourmedia, who will be guest-blogging here this coming week. This is part of JD’s virtual book tour.

I’ll be taking the week off, but the Friday book club will go on as normal.

Reading Code in 2005

[This post is part of the Book Club reading Lawrence Lessig’s Code and Other Laws of Cyberspace. Please use the comments area below to discuss the Preface and Chapter 1. For next Friday, we’ll read Chapter 2.]

“Code is law.” Lawrence Lessig’s dictum pervades our thinking about Internet policy. Sometimes it’s hard, reading Code in 2005, to remember the impact the book had when it was published back in 1999. Six years is a long time on the net: four iterations of Moore’s Law. Dealing with that gap in time – the dotcom bubble, 9/11, the Induce Act, the Broadcast Flag, and everything else that has happened – is one of the challenges for us reading Code today.

To understand Code, we need to turn back the clock to 1999. Naive cyberlibertarianism ruled the day. The ink on Barlow’s Declaration of Independence of Cyberspace had barely dried. As Lessig puts it,

The claim now was that government could not regulate cyberspace, that cyberspace was essentially, and unavoidably, free. Governments could threaten, but behavior could not be controlled; laws could be passed, but they would be meaningless. There was no choice about which government to install—none could reign. Cyberspace would be a society of a very different sort. There would be definition and direction, but built from the bottom up, and never through the direction of a state. The society of this space would be a fully self-ordering entity, cleansed of governors and free from political hacks.

Most everyone seemed to believe this. Then Lessig’s Code appeared. Suddenly, people could see alternative versions of cyberspace that weren’t inherently free and uncontrolled. To many, including Lessig, the future of cyberspace held more control, more constraint.

Today, Internet policy looks like a war of attrition between freedom and constraint. So Code won’t give us the “Aha!” reaction that it might have given our younger selves six years ago. “Code is law” is the new conventional wisdom.

In turning back to Code, I wondered how well it would hold up. Would it seem fresh, or dated? Would it still have things to teach us?

Based on the Preface and Chapter 1 – a truncated portion, to be sure – the book holds up pretty well. The questions Lessig asks in the Preface still matter to us today.

The challenge of our generation is to reconcile these two forces. How do we protect liberty when the architectures of control are managed as much by the government as by the private sector? How do we assure privacy when the ether perpetually spies? How do we guarantee free thought when the push is to propertize every idea? How do we guarantee self-determination when the architectures of control are perpetually determined elsewhere?

Lessig’s crafty comparison, in Chapter 1, of cyberspace to the newly freed countries of Eastern Europe may be even more intriguing today than it was in 1999, given what has happened in Eastern Europe in the intervening years. I’ll leave it to all of you to unpack this analogy.

Lessig once said, “Pessimism is my brand.” His pessimism is on display at the end of Chapter 1.

I end by asking whether we—meaning Americans—are up to the challenge that these choices present. Given our present tradition in constitutional law and our present faith in representative government, are we able to respond collectively to the changes I describe?

My strong sense is that we are not.

We face serious challenges, but I suspect that Lessig is a bit more hopeful today.

Welcome to the Book Club. Let the discussion begin!

Analysis of Fancy E-Voting Protocols

Karlof, Sastry, and Wagner have an interesting new paper looking at fancy voting protocols designed by Neff and Chaum, and finding that they’re not yet ready for use.

The protocols try to use advanced cryptography to make electronic voting secure. The Neff scheme (I’ll ignore the Chaum scheme, for brevity) produces three outputs: a paper receipt for each voter to take home, a public list of untabulated scrambled ballots, and a final tabulation. These all have special cryptographic properties that can be verified to detect fraud. For example, a voter’s take-home receipt allows the voter to verify that his vote was recorded correctly. But to prevent coercion, the receipt does not allow the voter to prove to a third party how he voted.
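To make the shape of the scheme concrete, here is a minimal Python sketch of those three outputs. It is emphatically not Neff’s construction (the real scheme uses verifiable mixes and zero-knowledge proofs, not bare hash commitments), and every name in it is invented; it only illustrates how a take-home receipt can confirm that a ballot was recorded without letting the voter prove how he voted.

```python
import hashlib
import os
from collections import Counter

def commit(vote: str, nonce: bytes) -> str:
    """Hash commitment: hides the vote from anyone who lacks the nonce."""
    return hashlib.sha256(nonce + vote.encode()).hexdigest()

class ToyVotingMachine:
    """Toy stand-in for the three outputs of a receipt-based scheme."""
    def __init__(self):
        self._openings = []    # (vote, nonce) pairs, held by election trustees
        self.public_list = []  # committed ("scrambled") ballots, posted publicly

    def cast(self, vote: str) -> str:
        nonce = os.urandom(16)            # the voter never sees the nonce
        receipt = commit(vote, nonce)     # the take-home receipt
        self._openings.append((vote, nonce))
        self.public_list.append(receipt)  # a real scheme would also shuffle
        return receipt

    def tabulate(self) -> Counter:
        # Final tabulation; the openings can be checked against the list.
        assert all(commit(v, n) == c
                   for (v, n), c in zip(self._openings, self.public_list))
        return Counter(v for v, _ in self._openings)

machine = ToyVotingMachine()
my_receipt = machine.cast("alice")
assert my_receipt in machine.public_list  # my ballot was recorded...
print(machine.tabulate())                 # ...and counted; but the receipt
# alone proves nothing about my vote to a coercer, since I lack the nonce.
```

Because the voter never holds the nonce, the receipt works as an inclusion check but not as a proof of vote, which is the flavor of the coercion-resistance property that Neff achieves with far heavier machinery.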

The voting protocols are impressive cryptographic results, but the new paper shows that when the protocols are embedded into full voting systems, serious problems arise.

Some of these problems are pretty simple. For example, a voter who keeps his receipt can ensure that crooked election officials don’t alter his vote. But if the voter discards his receipt at the polling place, an official who notices this can change the voter’s vote. Or if the voter is coerced into handing over his receipt to his employer or union boss, then his vote can be altered.

Another simple problem is that the protocols allow some kinds of vote-counting problems to be detected but not corrected. In other words, we will be able to tell that the result is not the true vote count, but we may not be able to recover the true vote count. This means that somebody who doesn’t like the way the election is going can cause the technology to make errors, thereby invalidating the election. A malicious voting machine could even do this if it sees too many votes being cast for the wrong candidate.
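Here is a toy sketch of that attack, building on the invented commitment scheme above: the machine watches the running tally and, if its favored candidate is behind, corrupts one posted ballot so that verification fails for the whole election.

```python
from collections import Counter

def sabotage_if_losing(public_list: list, tally: Counter, favorite: str) -> None:
    """Toy 'spoil the election' attack: corrupt one posted commitment
    whenever the machine's favored candidate is not in the lead."""
    leader, _ = tally.most_common(1)[0]
    if leader != favorite and public_list:
        public_list[0] = "00" * 32  # no opening will ever match this value

posted = ["3a9f" * 16, "b72c" * 16]  # stand-ins for posted commitments
sabotage_if_losing(posted, Counter({"alice": 2, "bob": 0}), favorite="bob")
# Verification now detects that the list is inconsistent, but nothing
# allows the true count to be recovered: the election is simply spoiled.
```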

There are also more subtle problems, such as subliminal channels by which malicious voting-machine software could encode information into seemingly random parts of the voter’s receipt. Since some information from the voter’s receipt is posted on the public list, this information would be available to anybody who was in cahoots with the malicious programmer. A malicious voting machine could secretly encode the precise time a vote was cast, and how it was cast, in a way that a malicious person could secretly decode later. Since most polling places allow the time of a particular voter’s vote to be recorded, this would allow individual voters’ votes to be leaked. Just the possibility of this happening would cause voters to doubt that their votes were really secret.
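Here is a minimal sketch of how such a subliminal channel might work, assuming the malicious machine gets to choose a “random” nonce whose bits end up, directly or indirectly, on the receipt and the public list. The key, the encoding, and the function names are all invented for illustration:

```python
import hashlib
import struct
import time

ATTACKER_KEY = b"known only to the conspirators"  # hypothetical shared secret

def keystream(key: bytes, n: int) -> bytes:
    """Pseudorandom bytes derived from the key (SHA-256 in counter mode)."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + struct.pack(">I", counter)).digest()
        counter += 1
    return out[:n]

def malicious_nonce(vote_code: int) -> bytes:
    """Looks like 16 random bytes, but encodes when and how the vote was cast."""
    payload = struct.pack(">dI", time.time(), vote_code)  # 12 bytes
    padded = payload + b"\x00" * 4
    return bytes(a ^ b for a, b in zip(padded, keystream(ATTACKER_KEY, 16)))

def decode_nonce(nonce: bytes):
    """Anyone holding the key can recover the timestamp and the vote."""
    padded = bytes(a ^ b for a, b in zip(nonce, keystream(ATTACKER_KEY, 16)))
    return struct.unpack(">dI", padded[:12])

nonce = malicious_nonce(vote_code=1)
print(nonce.hex())          # looks random to anyone without the key
print(decode_nonce(nonce))  # the conspirator reads (timestamp, vote) back out
```

To everyone without the key, the nonce is indistinguishable from a genuinely random value; that is what makes subliminal channels so hard to rule out by inspecting the outputs alone.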

Interestingly, many of these problems can be mitigated by adding a voter-verified paper ballot, which is generated by the voting machine and dropped into an old-fashioned ballot box. (This is in addition to the cryptographically generated paper receipt that the voter would take home.) The paper ballots provide an additional check against fraud, an audit mechanism to gauge the accuracy of the cryptographic system, and a fallback in case of failure. Perhaps the best solution is one that uses both cryptography and voter-verified paper ballots, as independent anti-fraud measures.
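A sketch of what that cross-check might look like, with invented names: the cryptographic tabulation and a hand count of the paper ballots are independent records of the same election, and any disagreement between them is a signal to fall back on the paper.

```python
from collections import Counter

def audit(paper_ballots, crypto_tally: Counter) -> bool:
    """Compare a hand count of the paper ballots against the
    cryptographic tabulation; report any candidate-level discrepancy."""
    paper_tally = Counter(paper_ballots)
    ok = True
    for candidate in set(paper_tally) | set(crypto_tally):
        if paper_tally[candidate] != crypto_tally[candidate]:
            print(f"discrepancy for {candidate}: "
                  f"paper={paper_tally[candidate]}, crypto={crypto_tally[candidate]}")
            ok = False
    return ok  # on failure, the paper ballots can simply be recounted by hand

crypto = Counter({"alice": 100, "bob": 98})
paper = ["alice"] * 100 + ["bob"] * 97   # one paper ballot short
print(audit(paper, crypto))              # flags bob, returns False
```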

The take-home lesson of this paper is that cryptographic protocols are promising, but that more work is needed to make them ready for use. It seems likely that cryptographic protocols will help to improve the accuracy of elections some day.

[Thanks to Joe Hall for pointing me to the paper.]

CDT Closes Eyes, Wishes for Good DRM

The Center for Democracy and Technology just released a new copyright policy paper. Derek Slater notes, astutely, that it tries at all costs to take the middle ground. It’s interesting to see what CDT sees as the middle ground.

Ernest Miller gives the paper a harsh review. I think Ernie is too harsh in some areas.

Rather than reviewing the whole paper, I’ll look here at the section on DRM. Here CDT’s strategy is essentially to wish that we lived on a planet where DRM could be consumer-friendly while preventing infringement. They’re smart enough not to claim that we live on such a planet now, only that people hope that we will soon:

While DRM systems can be very restrictive, much work is underway to create content protections that allow expansive consumer uses, while still protecting against widespread distribution.

(They footnote this by referring to FairPlay, TivoToGo, and AACS-LA, which all fall well short of their goal.) CDT asserts that if DRM systems that made everyone happy did exist, it would be good to use them. Fair enough. But what should we do in the actual world, where DRM that everyone loves is about as likely as teleportation or perpetual motion?

This means producers must be free to experiment with various models of digital distribution, using different content protection technologies and offering different sets of permissions and limitations. [Government DRM mandates are bad.]

Consumers, meanwhile, must have real options for purchasing different bundles of rights at different price points.

Producers should be free to experiment. Consumers should be free to buy. Gee, thanks.

Actually, this would be fine if CDT really meant that producers were free to experiment with DRM systems. Nowadays, everybody is a producer. If you take photographs, you’re a producer of copyrighted images. If you take home movies, you’re a producer of copyrighted video. If you write, you’re a producer of copyrighted text. We’re all producers. A world where we could all experiment would be good.

What they really mean, of course, is that some producers are more equal than others. Those who are expected to sell a few works to many people – or, given the way policy really gets made, those who have done so in the recent past – are called “producers”, while those who produce the vast majority of new copyrighted works are somehow called “consumers”. (And don’t say that big media produces the only works of value. Quick: Which still images do you value most in the world? I’ll bet they’re photos, and that they weren’t taken by a big media company.)

Here’s the bottom line: In the real world, DRM policy involves tradeoffs, and requires choices. Wishing for a magical DRM technology that will please everyone is not a strategy.