Archives for October 2004

Fast-Forwarding Becomes a Partisan Issue

Remember when I suggested that Republicans might be more prone to copyright sanity than Democrats? Perhaps I was on to something. Consider a recent Senate exchange that was caught by Jason Schultz and Frank Field.

Senator John McCain (Republican from Arizona) has placed a block on two copyright-expansion bills, H.R. 2391 and H.R. 4077, because they contain language implying that it’s not legal to fast-forward through the commercials when you’re watching a recorded TV show. McCain says he won’t unblock the bills unless the language is removed. (As I understand it, the block makes it extremely difficult to bring the bill up for a vote.)

Sen. Patrick Leahy (Democrat from Vermont) responded by blasting McCain, saying he had blocked the bill for partisan reasons. Here’s Leahy:

In blocking this legislation, these Republicans are failing to practice what they have so often preached during this Congress. For all of their talk about jobs, about allowing the American worker to succeed, they are now placing our economy at greater risk through their inaction. It is a failure that will inevitably continue a disturbing trend: our economy loses literally hundreds of billions of dollars every year to various forms of piracy.

Instead of making inroads in this fight, we have the Republican intellectual property roadblock.

Do the Democrats really want to be known as the party that would ban fast-forwarding?

Another Broken Diebold Protocol

Yesterday I wrote about a terribly weak security protocol in the Diebold AccuVote-TS system (at least as it existed in 2002), as reported in a talk by Dan Wallach. That wasn’t the only broken Diebold protocol Dan discussed. Here’s another one which may be even scarier.

The Diebold system allows a polling place administrator to use a smartcard to control a voting machine, performing operations such as closing the polls for the day. The administrator gets a special administrator smartcard (a credit-card-sized computing device) and puts it into the voting machine. The machine uses a special protocol to validate the card, and then accepts commands from the administrator.

This is a decent plan, but Diebold botched the design of the protocol. Here’s the protocol they use:

terminal to card: “What kind of card are you?”
card to terminal: “Administrator”
terminal to card: “What’s the password?”
card to terminal: [Value1]
terminal to user: “What’s the password?”
user to terminal: [Value2]

If Value1=Value2, then the terminal allows the user to execute administrative commands.

Like yesterday’s protocol, this one fails because malicious users can make their own smartcard. (Smartcard kits cost less than $50.) Suppose Zeke is a malicious voter. He makes a smartcard that answers “Administrator” to the first question and (say) “1234” to the second question. He shows up to vote, signs in, goes into the voting booth, and inserts his malicious smartcard. The malicious smartcard tells the machine that the secret password is 1234; when the machine asks Zeke himself for the secret password, he enters 1234. The machine will then execute any administrative command Zeke wants to give it. For example, he can tell the machine that the election is over.
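The flaw is easy to see when the terminal’s logic is written out. Here’s a minimal sketch (the class and function names are hypothetical, invented for illustration) of the password check as reported: the terminal compares a value supplied by the card against a value typed by the user, and nothing ties either value to a secret that only genuine administrator cards know.

```python
class MaliciousAdminCard:
    """A homemade smartcard that claims to be an administrator card."""

    def card_type(self):
        return "Administrator"

    def password(self):
        # The attacker picks any value he likes when he programs the card.
        return "1234"


def terminal_accepts_admin(card, password_typed_by_user):
    """The terminal's check, as reported: Value1 (from the card)
    must equal Value2 (from the user). The terminal never verifies
    that the card itself is genuine."""
    if card.card_type() != "Administrator":
        return False
    return card.password() == password_typed_by_user


# Zeke inserts his homemade card and types the same value
# he programmed into it.
zeke_card = MaliciousAdminCard()
print(terminal_accepts_admin(zeke_card, "1234"))  # True: admin access granted
```

Because the card supplies its own reference password, the comparison only proves that the person at the machine built (or knows the contents of) the card in the slot, which an attacker always does.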

This system was apparently used in the Georgia 2002 election. Has Diebold fixed this problem, or the one I described yesterday? We don’t know.

UPDATE (1:30 PM): Just to be clear, telling a machine that the election is over is harmful because it puts the machine in a mode where it won’t accept any votes. Getting the machine back into vote-accepting mode, without zeroing the vote counts, will likely require a visit from a technician, which could keep the voting machine offline for a significant period. (If there are other machines at the same precinct, they could be targeted too.) This attack could affect an election result if it is targeted at a precinct or a time of day in which votes are expected to favor a particular candidate.

Bad Protocol

Dan Wallach from Rice University was here on Monday and gave a talk on e-voting. One of the examples in his talk was interesting enough that I thought I would share it with you, both as an introductory example of how security analysts think, and as an illustration of how badly Diebold botched the design of their voting system.

One of the problems in voting system design is making sure that each voter who signs in is allowed to vote only once. In the Diebold AccuVote-TS system, this is done using smartcards. (Smartcards are the size and shape of credit cards, but they have tiny computers inside.) After signing in, a voter would be given a smartcard – the “voter card” – that had been activated by a poll worker. The voter would slide the voter card into a voting machine. The voting machine would let the voter cast one vote, and would then cause the voter card to deactivate itself so that the voter couldn’t vote again. The voter would return the deactivated voter card after leaving the voting booth.

This sounds like a decent plan, but Diebold botched the design of the protocol that the voting terminal used to talk to the voter card. The protocol involved a series of six messages, as follows:

terminal to card: “My password is [8 byte value]”
card to terminal: “Okay”
terminal to card: “Are you a valid card?”
card to terminal: “Yes.”
terminal to card: “Please deactivate yourself.”
card to terminal: “Okay.”

Can you spot the problem here? (Hint: anybody can make their own smartcard that sends whatever messages they like.)

As most of you probably noticed – and Diebold’s engineers apparently did not – the smartcard doesn’t actually do anything surprising in this protocol. Anybody can make a smartcard that sends the three messages “Okay; Yes; Okay” and use it to cast an extra vote. (Do-it-yourself smartcard kits cost less than $50.)

Indeed, anybody can make a smartcard that sends the three-message sequence “Okay; Yes; Okay” over and over, and can thereby vote as many times as desired, at least until a poll worker asks why the voter is spending so long in the booth.

One problem with the Diebold protocol is that rather than asking the card to prove that it is valid, the terminal simply asks the card whether it is valid, and accepts whatever answer the card gives. If a man calls you on the phone and says he is me, you can’t just ask him “Are you really Ed Felten?” and accept the answer at face value. But that’s the equivalent of what Diebold is doing here.
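To make the contrast concrete, here’s a sketch (all names hypothetical) of the reported “just ask the card” check next to a standard challenge-response design, in which the card must prove knowledge of a secret shared with the terminal. This is not Diebold’s code, just an illustration of the difference, assuming a shared secret provisioned only into genuine cards.

```python
import hmac
import hashlib
import os

# Assumption for illustration: genuine cards hold a secret
# the terminal also knows, and homemade cards do not.
SHARED_SECRET = b"known-only-to-genuine-cards"


class FakeVoterCard:
    """A homemade card that just answers 'Okay; Yes; Okay'."""

    def is_valid(self):
        return True  # anyone can answer "Yes."

    def respond(self, challenge):
        return b"garbage"  # but it cannot compute the real MAC


class GenuineVoterCard:
    def respond(self, challenge):
        return hmac.new(SHARED_SECRET, challenge, hashlib.sha256).digest()


def broken_terminal_check(card):
    """The reported design: trust whatever the card says."""
    return card.is_valid()


def challenge_response_check(card):
    """A standard fix: the card must MAC a fresh random challenge."""
    challenge = os.urandom(16)
    expected = hmac.new(SHARED_SECRET, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(card.respond(challenge), expected)


print(broken_terminal_check(FakeVoterCard()))        # True: fake card passes
print(challenge_response_check(FakeVoterCard()))     # False: fake card fails
print(challenge_response_check(GenuineVoterCard()))  # True
```

Because the challenge is fresh and random, a fake card can’t replay an old answer; it would have to know the secret itself.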

This system was apparently used in a real election in Georgia in 2002. Yikes.

Experimental Use Exception Evaporating?

Doug Tygar points to a front-page article in yesterday’s Wall Street Journal about a lawsuit that raises troubling questions about researchers’ ability to use patented technologies for experimental purposes.

Patent law, which makes it illegal to make or use a patented invention without permission of the patent owner, has an exception for experimental use. The exception, as I understand it, applies only to non-commercial, curiosity-driven experiments.

John Madey invented, and patented, an important technology called the free-electron laser (FEL). He was a professor at Duke University, where he headed an FEL laboratory. Then he was ousted after a nasty squabble with Duke, and he moved to another university. Duke continued to operate the FEL.

Madey sued Duke for patent infringement, for using the FEL without his permission. Duke wrapped itself in the experimental use exception, but Madey argued that Duke, in its use of the FEL, was not engaged in idle inquiry but was carrying on its business of research and education. The Federal Circuit Court of Appeals agreed with Madey that Duke was not eligible for the exception:

Our precedent clearly does not immunize use that is in any way commercial in nature. Similarly, our precedent does not immunize any conduct that is in keeping with the alleged infringer’s legitimate business, regardless of commercial implications. For example, major research universities, such as Duke, often sanction and fund research projects with arguably no commercial application whatsoever. However, these projects unmistakably further the institutions’ legitimate business objectives, including educating and enlightening students and faculty participating in these projects. These projects also serve, for example, to increase the status of the institution and lure students, faculty, and lucrative research grants.

It’s hard to see, in light of this decision, how anybody could ever qualify for the experimental use exception.

If this decision stands, it could have a big impact on university researchers. Up to now, researchers have been free to concentrate on discovery rather than patent negotiations, and to build and use whatever equipment was necessary for their experiments without worrying that somebody would sue to shut down their labs. Now that may have to change.

Here’s a tip for law students: current trends indicate hiring growth in research universities’ general counsel offices.

Latest Induce Act Draft Still Buggy

Reportedly the Induce Act has stalled, after the breakdown of negotiations over statutory language. Ernest Miller has the last draft offered by the entertainment industry.

(Notice how the entertainment industry labels its draft as the “copyright owners'” proposal. It takes some chutzpah to call your side the “copyright owners” when the largest copyright-owning industry – the software industry – is on the other side.)

The draft makes yet another attempt to define “peer-to-peer”. While the last draft’s definition was too broad, including, for example, the Web, this one is too narrow. It probably encompasses most or all of the P2P systems currently being used, but its narrowness allows those systems to be redesigned to evade the definition.

Here’s the definition:

The term “covered peer-to-peer product” shall mean a widely available device, or computer program for execution on a large number of devices, communicating over the Internet or any other publicly available network and performing or causing the performance at each such device all of the following functions:

(i) providing search information relating to copies or phonorecords available for transmission to other devices;

(ii) locating other devices that provide information relating to copies or phonorecords available for transmission that is responsive to search requests describing desired copies or phonorecords; and

(iii) transmitting a requested copy or phonorecord to another device that located the copy or phonorecord through such other device’s performance of the function described in clause (ii);

unless the provider of the device or computer program has the right and ability to control the copies or phonorecords that may be located by its use.

It looks like there are several ways to design a P2P system that evades this definition:

The definition requires each device to do all three of the enumerated functions. A system could have some devices do a subset of the functions.

The product must be a device or a program, which would appear to exempt systems that use multiple programs to perform different functions.

Function (iii) requires that the copy be transmitted to another device, and that other device must have located the copy to be transmitted via function (ii). Data could move through intermediaries that don’t use function (ii).

As I’ve written before, it’s awfully hard to come up with a statutory definition of peer-to-peer, because many popular and completely legitimate services on the net are designed in a peer-to-peer style; and because there is nothing special about the particular design strategy used by today’s P2P filesharing systems.