January 18, 2025

Bad Protocol

Dan Wallach from Rice University was here on Monday and gave a talk on e-voting. One of the examples in his talk was interesting enough that I thought I would share it with you, both as an introductory example of how security analysts think, and as an illustration of how badly Diebold botched the design of their voting system.

One of the problems in voting system design is making sure that each voter who signs in is allowed to vote only once. In the Diebold AccuVote-TS system, this is done using smartcards. (Smartcards are the size and shape of credit cards, but they have tiny computers inside.) After signing in, a voter would be given a smartcard – the “voter card” – that had been activated by a poll worker. The voter would slide the voter card into a voting machine. The voting machine would let the voter cast one vote, and would then cause the voter card to deactivate itself so that the voter couldn’t vote again. The voter would return the deactivated voter card after leaving the voting booth.

This sounds like a decent plan, but Diebold botched the design of the protocol that the voting terminal used to talk to the voter card. The protocol involved a series of six messages, as follows:

terminal to card: “My password is [8 byte value].”
card to terminal: “Okay.”
terminal to card: “Are you a valid card?”
card to terminal: “Yes.”
terminal to card: “Please deactivate yourself.”
card to terminal: “Okay.”

Can you spot the problem here? (Hint: anybody can make their own smartcard that sends whatever messages they like.)

As most of you probably noticed – and Diebold’s engineers apparently did not – the smartcard doesn’t actually do anything surprising in this protocol. Anybody can make a smartcard that sends the three messages “Okay; Yes; Okay” and use it to cast an extra vote. (Do-it-yourself smartcard kits cost less than $50.)

Indeed, anybody can make a smartcard that sends the three-message sequence “Okay; Yes; Okay” over and over, and can thereby vote as many times as desired, at least until a poll worker asks why the voter is spending so long in the booth.
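The flaw is easy to see in code. Here is a minimal sketch of the exchange (the message strings and function names are illustrative, not Diebold's actual implementation): because the terminal simply trusts the card's answers, a forged card that always replies “Okay; Yes; Okay” passes every check, every time.

```python
class ForgedCard:
    """A do-it-yourself smartcard that mimics a valid voter card."""
    def receive(self, message):
        if message.startswith("My password is"):
            return "Okay"   # accept any password without checking it
        if message == "Are you a valid card?":
            return "Yes"    # claim validity; nothing forces honesty
        if message == "Please deactivate yourself.":
            return "Okay"   # promise to deactivate, but stay active
        return "Okay"

def terminal_allows_vote(card):
    """Terminal-side logic: every check is just asking the card."""
    card.receive("My password is [8 byte value]")
    if card.receive("Are you a valid card?") != "Yes":
        return False
    card.receive("Please deactivate yourself.")
    return True

card = ForgedCard()
# The same forged card "votes" as many times as it is inserted.
votes = sum(terminal_allows_vote(card) for _ in range(5))
```

Nothing in the protocol distinguishes a genuine card from this fake one, which is exactly the problem.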

One problem with the Diebold protocol is that rather than asking the card to prove that it is valid, the terminal simply asks the card whether it is valid, and accepts whatever answer the card gives. If a man calls you on the phone and says he is me, you can’t just ask him “Are you really Ed Felten?” and accept the answer at face value. But that’s the equivalent of what Diebold is doing here.
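The standard remedy, which Diebold did not use, is a challenge-response protocol: the terminal sends a fresh random challenge, and the card must answer with a value that only a holder of a secret key can compute. Here is a minimal sketch using an HMAC over the challenge (the shared-key setup and class names are my illustration, not any deployed design):

```python
import hmac
import hashlib
import os

# Illustrative: a key provisioned only into genuine voter cards.
SECRET_KEY = b"key-provisioned-into-genuine-cards"

class GenuineCard:
    def __init__(self, key):
        self._key = key
    def respond(self, challenge):
        # Prove possession of the key without revealing it.
        return hmac.new(self._key, challenge, hashlib.sha256).digest()

class ForgedCard:
    def respond(self, challenge):
        # Without the key, no forged answer will verify.
        return b"Yes"

def terminal_accepts(card, key):
    challenge = os.urandom(16)  # fresh random challenge every time
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(card.respond(challenge), expected)
```

Now “Are you a valid card?” becomes a question the card must answer with proof, not a yes/no it can simply assert. (Even this sketch has a weakness worth noting: a single shared key extracted from one card compromises all cards, so a real design would use per-card keys or public-key techniques.)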

This system was apparently used in a real election in Georgia in 2002. Yikes.

Experimental Use Exception Evaporating?

Doug Tygar points to a front-page article in yesterday’s Wall Street Journal about a lawsuit that raises troubling questions about researchers’ ability to use patented technologies for experimental purposes.

Patent law, which makes it illegal to make or use a patented invention without permission of the patent owner, has an exception for experimental use. The exception, as I understand it, applies only to non-commercial, curiosity-driven experiments.

John Madey invented, and patented, an important technology called the free-electron laser (FEL). He was a professor at Duke University, where he headed an FEL laboratory. Then he was ousted after a nasty squabble with Duke, and he moved to another university. Duke continued to operate the FEL.

Madey sued Duke for patent infringement, for using the FEL without his permission. Duke wrapped itself in the experimental use exception, but Madey argued that Duke, in its use of the FEL, was not engaged in idle inquiry but was carrying on its business of research and education. The Federal Circuit Court of Appeals agreed with Madey that Duke was not eligible for the exception:

Our precedent clearly does not immunize use that is in any way commercial in nature. Similarly, our precedent does not immunize any conduct that is in keeping with the alleged infringer’s legitimate business, regardless of commercial implications. For example, major research universities, such as Duke, often sanction and fund research projects with arguably no commercial application whatsoever. However, these projects unmistakably further the institutions’ legitimate business objectives, including educating and enlightening students and faculty participating in these projects. These projects also serve, for example, to increase the status of the institution and lure students, faculty, and lucrative research grants.

It’s hard to see, in light of this decision, how anybody could ever qualify for the experimental use exception.

If this decision stands, it could have a big impact on university researchers. Up to now, researchers have been free to concentrate on discovery rather than patent negotiations, and to build and use whatever equipment was necessary for their experiments without worrying that somebody would sue to shut down their labs. Now that may have to change.

Here’s a tip for law students: current trends indicate hiring growth in research universities’ general counsel offices.

Latest Induce Act Draft Still Buggy

Reportedly the Induce Act has stalled, after the breakdown of negotiations over statutory language. Ernest Miller has the last draft offered by the entertainment industry.

(Notice how the entertainment industry labels its draft as the “copyright owners'” proposal. It takes some chutzpah to call your side the “copyright owners” when the largest copyright-owning industry – the software industry – is on the other side.)

The draft makes yet another attempt to define “peer-to-peer”. While the last draft’s definition was too broad, including, for example, the Web, this one is too narrow. It probably encompasses most or all of the P2P systems currently being used, but its narrowness allows those systems to be redesigned to evade the definition.

Here’s the definition:

The term “covered peer-to-peer product” shall mean a widely available device, or computer program for execution on a large number of devices, communicating over the Internet or any other publicly available network and performing or causing the performance at each such device all of the following functions:

(i) providing search information relating to copies or phonorecords available for transmission to other devices;

(ii) locating other devices that provide information relating to copies or phonorecords available for transmission that is responsive to search requests describing desired copies or phonorecords; and

(iii) transmitting a requested copy or phonorecord to another device that located the copy or phonorecord through such other device’s performance of the function described in clause (ii);

unless the provider of the device or computer program has the right and ability to control the copies or phonorecords that may be located by its use.

It looks like there are several ways to design a P2P system that evades this definition:

The definition requires each device to do all three of the enumerated functions. A system could have some devices do a subset of the functions.

The product must be a device or a program, which would appear to exempt systems that use multiple programs to perform different functions.

Function (iii) requires that the copy be transmitted to another device, and that other device must have located the copy to be transmitted via function (ii). Data could move through intermediaries that don’t use function (ii).

As I’ve written before, it’s awfully hard to come up with a statutory definition of peer-to-peer, because many popular and completely legitimate services on the net are designed in a peer-to-peer style; and because there is nothing special about the particular design strategy used by today’s P2P filesharing systems.

Business Week on Chilled Researchers

Heather Green at Business Week has a nice new piece, “Commentary: Are the Copyright Wars Chilling Innovation?” Despite the question mark in the title, it’s clear from the piece that innovation is being chilled, especially in the research community.

The piece starts out by retelling the story of the legal smackdown threatened against my colleagues and me over a paper on digital watermarking technology. It goes on to discuss the chilling effect of copyright-related overregulation on others:

Intimidation isn’t hard to spot in academia. Aviel Rubin, a Johns Hopkins University professor who last year uncovered flaws in electronic-voting software developed by Diebold Inc. (DBD), says he spends precious time plotting legal strategies before publishing research connected in any way to copyrights. Matthew Blaze, a computer scientist at the University of Pennsylvania, avoids certain types of computer security-related research because the techniques are also used in copy protection.

The pall has spread over classrooms as well. Eugene H. Spafford, a professor and digital-security expert at Purdue University, and David Wagner, an associate professor of computer science at the University of California at Berkeley, are refusing to take on teaching assignments in certain areas relating to computer security. “The problem isn’t that we’re worried about prosecution from the government. The problem is the civil lawsuits from the movie and music industries,” Spafford says. “I don’t have the resources to deal with that.”

Rubin, Blaze, Spafford, and Wagner are all leaders in the field, and all are avoiding legitimate and useful research and/or teaching because of the DMCA and laws like it.

The movie industry, as usual, offers nothing but the suspension of disbelief. Fritz Attaway: “It’s easy to assert you feel chilled, but I don’t see any evidence to support that”. This from an industry with a long record of suing technical innovators.

[link via SNTReport.com]

Recent Induce Act Draft

Reportedly, the secret negotiations to rewrite the Induce Act are ongoing. I got hold of a recent staff discussion draft of the Act. It’s labeled “10/1” but I understand that the negotiators were working from it as late as yesterday.

I’ll be back later with comment.

UPDATE (8 PM): This draft is narrower than previous ones, in that it tries to limit liability to products related to “peer-to-peer” infringement. Unfortunately, the definition of peer-to-peer is overbroad. Here’s the definition:

the term “peer-to-peer” shall mean any generally available product or service that enables individual consumers’ devices or computers, over a publicly available network, to make a copy or phonorecord available to, and locate and obtain a copy or phonorecord from, the computers or devices of other consumers who make such content publicly available by means of the same or an interoperable product or service, where –

(1) such content is made publicly available among individuals whose actual identities [and electronic mail address] are unknown to one another; and

(2) such program is used in a manner in which there is no central operator of a central repository, index or [directory] who can remove or disable access to allegedly infringing content.

By this definition, the Web is clearly a peer-to-peer system. Arguably, the Internet itself may be a peer-to-peer system as well.