Preemptive Blame-Shifting by the E-Voting Industry

The November 2nd election hasn’t even happened yet, and already the e-voting industry is making excuses for the election-day failures of their technology. That’s right – they’re rebutting future reports of future failures. Here’s a sample:

Problem

Voting machines will not turn on or operate.

Explanation

Voting machines are not connected to an active power source. Machines may have been connected to a power strip that has been turned off or plugged into an outlet controlled by a wall switch. Power surges or outages caused by electrical storms or other natural occurrences are not unheard of. If the power source to the machine has been lost, voting machines will generally operate on battery power for brief periods. Once battery power is lost, however, the machines will cease to function (although votes cast on such machines will not be lost). Electronic voting machines may require the election official or precinct worker to enter a password in order to operate. Lost or forgotten passwords may produce lengthy delays as this information is retrieved from other sources.

In the past, of course, voting machines have failed to operate for other reasons, as in the 2003 California gubernatorial recall election, when Diebold machines, which turned out to be uncertified, failed to boot properly at many polling places in San Diego and Alameda counties. (Verified-voting.org offers a litany of these and other observed e-voting failures.)

The quote above comes from a document released by the Election Technology Council, a trade group of e-voting vendors. (The original, tellingly released only in the not-entirely-secure Word format, is here.)

The tone of the ETC document is clear – our technology is great, but voters and poll workers aren’t smart enough to use it correctly. Never mind that the technology is deeply flawed (see, e.g., my discussion of Diebold’s insecure protocols, not to mention all of the independent studies of the technology). Never mind that the vendors are the ones who design the training regimes whose inadequacy they blame. Never mind that it is their responsibility to make their products usable.

[Link credit: Slashdot]

Privacy, Recording, and Deliberately Bad Crypto

One reason for the growing concern about privacy these days is the ever-decreasing cost of storing information. The cost of storing a fixed amount of data seems to be dropping at the Moore’s Law rate, that is, by a factor of two every 18 months, or equivalently a factor of about 100 every decade. When storage costs less, people will store more information. Indeed, if storage gets cheap enough, people will store even information that has no evident use, as long as there is even a tiny probability that it will turn out to be valuable later. In other words, they’ll store everything they can get their hands on. The result is that more information about our lives will be accessible to strangers.

(Some people argue that the growth in available information is on balance a good thing. I want to put that argument aside here, and ask you to accept only that technology is making more information about us available to strangers, and that an erosion of our legitimate privacy interests is among the consequences of that trend.)

By default, information that is stored can be accessed cheaply. But it turns out that there are technologies we can use to make stored information (artificially) expensive to access. For example, we can encrypt the information using a weak encryption method that can be broken by expending some predetermined amount of computation. To access the information, one would then have to buy or rent sufficient computer time to break the encryption method. The cost of access could be set to whatever value we like.

(For techies, here's how it works; there are fancier methods, but this one is the simplest to explain. You encrypt the data, using a strong cipher, under a randomly chosen key K. You provide a hint about the value of K (e.g. upper and lower bounds on its value), and then you discard K. Reconstructing the data now requires an exhaustive search for K, and the size of that search depends on how precise the hint is.)
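For the especially curious, here is a rough Python sketch of that scheme. The cipher below is a toy stand-in built from SHA-256, and every function name and parameter is invented for illustration; a real system would use AES or another strong cipher, but the cost dial works the same way.

import os
import hashlib

def keystream(key: int, length: int) -> bytes:
    # Expand an integer key into a keystream by hashing key || counter.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key.to_bytes(16, "big") +
                              counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor(data: bytes, stream: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, stream))

def seal(plaintext: bytes, work_factor_bits: int):
    # Encrypt under a random key, publish only a range (the "hint") that
    # the key lies in, and discard the key itself.
    key = int.from_bytes(os.urandom(8), "big")
    low = key - (key % (1 << work_factor_bits))
    hint = (low, low + (1 << work_factor_bits))
    checksum = hashlib.sha256(plaintext).digest()[:8]  # lets the searcher recognize success
    return xor(plaintext, keystream(key, len(plaintext))), hint, checksum

def open_by_search(ciphertext: bytes, hint, checksum) -> bytes:
    # Recovering the data means trying every key in the hinted range.
    low, high = hint
    for guess in range(low, high):
        candidate = xor(ciphertext, keystream(guess, len(ciphertext)))
        if hashlib.sha256(candidate).digest()[:8] == checksum:
            return candidate
    raise ValueError("key not found in hinted range")

# The work factor sets the price of access: 2**16 trial decryptions here,
# but it can be dialed up to whatever cost you like.
ct, hint, chk = seal(b"snapshot, camera 7, 14:05", work_factor_bits=16)
print(open_by_search(ct, hint, chk))

Turning work_factor_bits up or down is exactly the knob described above: the data is never truly secret, just priced.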

This method has many applications. For example, suppose the police want to take snapshots of public places at fixed intervals, and we want them to be able to see any serious crimes that happen in front of their cameras, but we don’t want them to be able to browse the pictures arbitrarily. (Again, I’m putting aside the question of whether it’s wise for us to impose this requirement.) We could require them to store the pictures in such a way that retrieving any one picture carried some moderate cost. Then they would be able to access photos of a few crimes being committed, but they couldn’t afford to look at everything.

One drawback of this approach is that it is subject to Moore’s Law. The price of accessing a data item is paid not in dollars but in computing cycles, a resource whose dollar cost is cut in half every 18 months. So what is expensive to access now will be relatively cheap in, say, ten years. For some applications, that’s just fine, but for others it may be a problem.

Sometimes this drop in access cost may be just what you want. If you want to make a digital time capsule that cannot be opened now but will be easy to open 100 years from now, this method is perfect.
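To put rough, purely illustrative numbers on that decay, here is a small calculation assuming the cost of a fixed amount of computation halves every 18 months:

def relative_cost(years: float, doubling_period_years: float = 1.5) -> float:
    # Fraction of today's cost remaining after the given number of years.
    return 0.5 ** (years / doubling_period_years)

for years in (0, 10, 20, 100):
    print(f"after {years:3d} years: {relative_cost(years):.2e} of today's cost")

# After 10 years the same search costs about 1/100 of what it does now;
# after 100 years, about 10**-20 of today's cost, which is why a search
# that is hopeless now will be trivial for whoever opens the capsule.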

DoJ To Divert Resources to P2P Enforcement

Last week the Department of Justice issued a report on intellectual property enforcement. Public discussion has been slow to develop, since the report seems to be encoded in some variant of the PDF format that stops many people from reading it. (I could read it fine on one of my computers, but on the rest of my machines I got an error message saying the file was encrypted. Does anybody have a non-crippled version?)

The report makes a strong case for the harmfulness of intellectual property crimes, and then proceeds to suggest some steps to strengthen enforcement. I couldn’t help noticing, though, that the enforcement effort is not aimed at the most harmful crimes cited in the report.

The report leads with the story of a criminal who sold counterfeit medicines, which caused a patient to die because he was not taking the medicines he (and his doctors) thought he was. This is a serious crime. But what makes it serious is the criminal’s lying about the chemical composition of the medicines, not his lying about their brand name. This kind of counterfeiting is best treated as an attack on public safety rather than a violation of trademark law.

(This is not to say that counterfeiting of non-safety-critical products should be ignored, only that counterfeiting of safety-critical products can be much more serious.)

Similarly, the report argues that for-profit piracy, mostly of physical media, should be treated seriously. It claims that such piracy funds organized crime, and it hints (without citing evidence) that physical piracy might fund terrorism too. All of which argues for a crackdown on for-profit distribution of copied media.

But when it comes to action items, the report’s target seems to shift away from counterfeiting and for-profit piracy, and toward P2P file sharing. Why else, for example, would the report bother to endorse the Induce Act, which does not apply to counterfeiters or for-profit infringers but only to the makers of products, such as P2P software, that merely allow not-for-profit infringement?

It’s hard to believe, in today’s world, that putting P2P users in jail is the best use of our scarce national law-enforcement resources. Copyright owners can already bring down terrifying monetary judgments on P2P infringers. If we’re going to spend DoJ resources on attacking IP crime, let’s go after counterfeiters (especially of safety-critical products) and large-scale for-profit infringers. As Adam Shostack notes, to shift resources to enforcing less critical IP crimes, at a time when possible-terrorist wiretaps go unheard and violent fugitive cases go uninvestigated, is to lose track of our priorities.

Fast-Forwarding Becomes a Partisan Issue

Remember when I suggested that Republicans might be more prone to copyright sanity than Democrats? Perhaps I was on to something. Consider a recent Senate exchange that was caught by Jason Schultz and Frank Field.

Senator John McCain (Republican from Arizona) has placed a block on two copyright-expansion bills, H.R. 2391 and H.R. 4077, because they contain language implying that it’s not legal to fast-forward through the commercials when you’re watching a recorded TV show. McCain says he won’t unblock the bills unless the language is removed. (As I understand it, the block makes it extremely difficult to bring the bills up for a vote.)

Sen. Patrick Leahy (Democrat from Vermont) responded by blasting McCain, saying he had blocked the bill for partisan reasons. Here’s Leahy:

In blocking this legislation, these Republicans are failing to practice what they have so often preached during this Congress. For all of their talk about jobs, about allowing the American worker to succeed, they are now placing our economy at greater risk through their inaction. It is a failure that will inevitably continue a disturbing trend: our economy loses literally hundreds of billions of dollars every year to various forms of piracy.

Instead of making inroads in this fight, we have the Republican intellectual property roadblock.

Do the Democrats really want to be known as the party that would ban fast-forwarding?

Another Broken Diebold Protocol

Yesterday I wrote about a terribly weak security protocol in the Diebold AccuVote-TS system (at least as it existed in 2002), as reported in a talk by Dan Wallach. That wasn’t the only broken Diebold protocol Dan discussed. Here’s another one which may be even scarier.

The Diebold system allows a polling place administrator to use a smartcard to control a voting machine, performing operations such as closing the polls for the day. The administrator gets a special administrator smartcard (a credit-card-sized computing device) and puts it into the voting machine. The machine uses a special protocol to validate the card, and then accepts commands from the administrator.

This is a decent plan, but Diebold botched the design of the protocol. Here’s the protocol they use:

terminal to card: “What kind of card are you?”
card to terminal: “Administrator”
terminal to card: “What’s the password?”
card to terminal: [Value1]
terminal to user: “What’s the password?”
user to terminal: [Value2]

If Value1=Value2, then the terminal allows the user to execute administrative commands.

Like yesterday’s protocol, this one fails because malicious users can make their own smartcard. (Smartcard kits cost less than $50.) Suppose Zeke is a malicious voter. He makes a smartcard that answers “Administrator” to the first question and (say) “1234” to the second question. He shows up to vote, signs in, goes into the voting booth, and inserts his malicious smartcard. The malicious smartcard tells the machine that the secret password is 1234; when the machine asks Zeke himself for the secret password, he enters 1234. The machine will then execute any administrative command Zeke wants to give it. For example, he can tell the machine that the election is over.
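Here is a minimal Python sketch of that attack; the class and function names are invented for illustration, and this models the logic described above rather than Diebold’s actual code.

class HomemadeCard:
    # An attacker-built smartcard: it claims to be an administrator card
    # and reports whatever password its owner programmed into it.
    def card_type(self) -> str:
        return "Administrator"

    def stored_password(self) -> str:
        return "1234"  # chosen by the attacker when building the card

def terminal_accepts_admin(card, password_typed_at_terminal: str) -> bool:
    # The terminal compares two values that are both under the attacker's
    # control; nothing ties either one to a secret held by election officials.
    return (card.card_type() == "Administrator" and
            card.stored_password() == password_typed_at_terminal)

print(terminal_accepts_admin(HomemadeCard(), "1234"))  # True: admin mode granted

Because Value1 comes from a card the voter built and Value2 comes from the voter’s own fingers, the comparison proves nothing about who is standing at the machine.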

This system was apparently used in the Georgia 2002 election. Has Diebold fixed this problem, or the one I described yesterday? We don’t know.

UPDATE (1:30 PM): Just to be clear, telling a machine that the election is over is harmful because it puts the machine in a mode where it won’t accept any votes. Getting the machine back into vote-accepting mode, without zeroing the vote counts, will likely require a visit from a technician, which could keep the voting machine offline for a significant period. (If there are other machines at the same precinct, they could be targeted too.) This attack could affect an election result if it is targeted at a precinct or a time of day in which votes are expected to favor a particular candidate.