Swarthmore Students Re-Publish Diebold Memos

A group of Swarthmore students has published a damning series of internal memos from electronic-voting vendor Diebold. The memos appear to document cavalier treatment of security issues by Diebold, and the use of non-certified software in real elections. Diebold, claiming that the students are infringing copyright, has sent a series of DMCA takedown letters to Swarthmore. Swarthmore is apparently shutting off the Internet access of students who publish the memos. The students are responding by finding new places to repost the memos, setting up what Ernest Miller calls a “whack-a-mole game”. (See, for example, posts from Ernest Miller and Aaron Swartz.)

Here is my question for the lawyers: Is this really copyright infringement? I know that copyright attaches even to pedestrian writings like business memos. But don’t the students have some kind of fair use argument? It seems to me that their purpose is noncommercial; and it can hardly be said that they are depriving Diebold of the opportunity to sell the memos to the public. So the students would seem to have a decent argument on at least two of the four fair-use factors, and therefore at least a plausible claim that their copying is fair use.

Even if the students are breaking the law, what Diebold is doing in trying to suppress the memos certainly doesn’t further the goals underlying copyright law. A trade secret argument from Diebold would seem to make more sense here, although the students would seem to have a free-speech counterargument, bolstered by the strong public interest in knowing how our votes are counted.

Can any of my lawyer readers (or fellow bloggers) help clear up these issues?

Bizarro Compliments

To a technologist, law and policy debates sometimes seem to be held in a kind of bizarro world, where words and concepts lose their ordinary meanings. Some technologists never get used to the bizarro rules, but some of us do catch on eventually.

One of the bizarro rules is that you should be happy when the other side accuses you of lying or acting in bad faith. In the normal world, such accusations will make you angry; but in bizarro world they indicate that the other side has lost confidence in its ability to win the argument on the merits. And so you learn to swallow your outrage and smile when people call you a scoundrel.

Which brings us to Brigid Schulte’s electronic voting article in this morning’s Washington Post. The article reports that the computer scientists’ campaign for more secure (and less secret) electronic voting technology is getting some real traction, especially in light of the recent Johns Hopkins report detailing severe flaws in a Diebold e-voting product. The computer scientists’ progress is certified, bizarro style, by none other than the head of the Federal Election Commission’s Office of Election Administration:

“The computer scientists are saying, ‘The machinery you vote on is inaccurate and could be threatened; therefore, don’t go. Your vote doesn’t mean anything,’ ” said Penelope Bonsall, director of the Office of Election Administration at the Federal Election Commission. “That negative perception takes years to turn around.”

You can’t buy that kind of bizarro endorsement!

Layers

Lawrence Solum and Minn Chung have a new paper, “The Layers Principle: Internet Architecture and the Law,” in which they argue that layering is an essential part of the Internet’s architecture and that Internet regulation should therefore respect the Internet’s layered nature. It’s a long paper, so no short commentary can do it justice, but here are a few reactions.

First, there is no doubt that layering is a central design principle of the Internet, or of any well-designed multipurpose network. When we teach computer science students about networks, layering is one of the most important concepts we try to convey. Solum and Chung are right on target about the importance of layering.
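
For readers outside networking, here is a minimal sketch, in Python, of what layering means in practice. The protocol names are toy stand-ins, not real ones: each layer treats the data handed to it as an opaque payload, prepends its own header, and hands the result down; the receiving stack peels the headers off in reverse order.

    # Toy illustration of layering (not real protocol code): each layer adds
    # exactly one header on the way down and strips exactly one on the way up.

    def app_send(message):
        return transport_send("APP|" + message)

    def transport_send(segment):
        return network_send("TCPISH|" + segment)

    def network_send(packet):
        return link_send("IPISH|" + packet)

    def link_send(frame):
        return "ETHISH|" + frame          # what actually goes "on the wire"

    def receive(frame):
        # The receiving stack peels the headers off in reverse order.
        for expected in ("ETHISH", "IPISH", "TCPISH", "APP"):
            header, _, frame = frame.partition("|")
            assert header == expected
        return frame

    wire = app_send("hello")
    print(wire)           # ETHISH|IPISH|TCPISH|APP|hello
    print(receive(wire))  # hello

Each layer needs to know nothing about the contents of the payload it carries; that independence is what makes the stack easy to reason about and easy to swap pieces of.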

They’re on shakier ground, though, when they relate their layering principle to the end-to-end principle that Lessig has popularized in the legal/policy world. (The end-to-end principle says that most of the “brains” in the Internet should be at the endpoints, e.g. in end users’ computers, rather than in the core of the network itself.) Solum and Chung say that end-to-end is a simple consequence of their layering principle. That’s true, but only because the end-to-end principle is built into their assumptions, in a subtle way, from the beginning. In their account, layering occurs only at the endpoints, and not in the network itself. While this is not entirely accurate, it’s not far wrong, since the layering is much deeper at the endpoints than in the core of the Net. But the reason this is true is that the Net is designed on the end-to-end principle. There are alternative designs that use deep layering everywhere, but those were not chosen because they would have violated the end-to-end principle. End-to-end is not necessarily a consequence of layering; but end-to-end is, tautologically, a consequence of the kind of end-to-end style layering that Solum and Chung assume.
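
To make the end-to-end idea concrete, here is a minimal Python sketch under toy assumptions (the function names are mine, not from any real protocol stack): the “network” is a dumb, lossy forwarder, and the sending endpoint supplies the intelligence by retransmitting until the receiver acknowledges.

    import random

    def lossy_network_forward(packet):
        # The core of the Net just forwards (or drops) packets; no smarts here.
        return packet if random.random() > 0.3 else None

    def reliable_send(packet, deliver):
        # Endpoint logic: keep retransmitting until the receiver acknowledges.
        while True:
            arrived = lossy_network_forward(packet)
            if arrived is not None and deliver(arrived):
                return

    received = []
    reliable_send({"seq": 1, "data": "hello"},
                  lambda p: received.append(p) or True)
    print(received)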

Layering and end-to-end, then, are both useful rules of thumb for understanding how the Internet works. It follows, naturally, that regulation of the Net should be consistent with both principles. Any regulatory proposal, in any sphere of human activity, is backed by a story about how the proposed regulation will lead to a desirable result. And unless that story makes sense – unless it is rooted in an understanding of how the sphere being regulated actually works – then the proposal is almost certainly a bad one. So regulatory plans that are inconsistent with end-to-end or layering are usually unwise.

Of course, these two rules of thumb don’t give us the complete picture. The Net is more complicated, and sometimes a deeper understanding is needed to evaluate a policy proposal. For example, a few widespread and helpful practices such as Network Address Translation violate both the end-to-end principle and layering (see the sketch below); and so a ban on address translation would be consistent with end-to-end and layering, but inconsistent with the actual Internet. Rules of thumb are at best an imperfect substitute for detailed knowledge about how the Net really works. Thus far, we have done a poor job of incorporating that knowledge into the regulatory process. Solum and Chung’s paper has its flaws, but it is a step in the right direction.
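
To see why NAT offends both principles, consider a toy sketch (hypothetical names and addresses, not a real implementation): a box in the middle of the network rewrites the network-layer source address and the transport-layer source port, so it is neither transparent to the endpoints nor confined to a single layer.

    import itertools

    PUBLIC_IP = "203.0.113.1"
    next_port = itertools.count(40000)
    translations = {}   # (private ip, private port) -> public port

    def nat_outbound(packet):
        key = (packet["src_ip"], packet["src_port"])
        if key not in translations:
            translations[key] = next(next_port)
        rewritten = dict(packet)
        rewritten["src_ip"] = PUBLIC_IP             # network-layer field
        rewritten["src_port"] = translations[key]   # transport-layer field
        return rewritten

    pkt = {"src_ip": "10.0.0.7", "src_port": 5000,
           "dst_ip": "198.51.100.9", "dst_port": 80}
    print(nat_outbound(pkt))

The receiving endpoint never sees the sender’s real address, and a nominally network-layer device is reaching into transport-layer headers; useful in practice, but exactly the kind of behavior the two principles would forbid.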

[UPDATE (Sept. 11, 2003): Looking back at this entry, I realize that by devoting most of my “ink” to my area of disagreement with Solum and Chung, I might have given the impression that I didn’t like their paper. Quite the contrary. It’s a very good paper overall, and anyone serious about Internet policy should read it.]

DRM and Black Boxes

Lisa Rein has posted (with permission) a video of my short presentation at the Berkeley DRM conference. I talked about the push to turn technologies into “black boxes” that the public is not allowed to study, understand, or discuss, and how that paralyzes public debate on important issues such as electronic voting.