"Hotel Minibar" Keys Open Diebold Voting Machines

Like other computer scientists who have studied Diebold voting machines, we were surprised at the apparent carelessness of Diebold’s security design. It can be hard to convey this to nonexperts, because the examples are technical. To security practitioners, the use of a fixed, unchangeable encryption key and the blind acceptance of every software update offered on removable storage are rookie mistakes; but nonexperts have trouble appreciating this. Here is an example that anybody, expert or not, can appreciate:

The access panel door on a Diebold AccuVote-TS voting machine – the door that protects the memory card that stores the votes, and is the main barrier to the injection of a virus – can be opened with a standard key that is widely available on the Internet.

On Wednesday we did a live demo for our Princeton Computer Science colleagues of the vote-stealing software described in our paper and video. Afterward, Chris Tengi, a technical staff member, asked to look at the key that came with the voting machine. He noticed an alphanumeric code printed on the key, and remarked that he had a key at home with the same code on it. The next day he brought in his key and sure enough it opened the voting machine.

This seemed like a freakish coincidence – until we learned how common these keys are.

Chris’s key was left over from a previous job, maybe fifteen years ago. He said the key had opened either a file cabinet or the access panel on an old VAX computer. A little research revealed that the exact same key is used widely in office furniture, electronic equipment, jukeboxes, and hotel minibars. It’s a standard part, and like most standard parts it’s easily purchased on the Internet. We bought several keys from an office furniture key shop – they open the voting machine too. We ordered another key on eBay from a jukebox supply shop. The keys can be purchased from many online merchants.

Using such a standard key doesn’t provide much security, but it does allow Diebold to assert that their design uses a lock and key. Experts will recognize the same problem in Diebold’s use of encryption – they can say they use encryption, but they use it in a way that neutralizes its security benefits.
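The fixed-key problem can be made concrete with a small sketch (illustrative Python, not Diebold's actual code; the trivial XOR "cipher" here stands in for any algorithm, since the flaw is the key management, not the cipher choice). Because every machine ships with the same unchangeable key, anyone who extracts it from one unit – or reads it in leaked source code – can decrypt and forge records on all of them:

```python
# Hypothetical illustration of a fixed, hardcoded encryption key.
# The key value is the hardcoded DES key reported in earlier
# published analyses of Diebold's leaked source code.
HARDCODED_KEY = b"F2654hD4"

def xor_crypt(data: bytes, key: bytes) -> bytes:
    """Toy repeating-key XOR; encryption and decryption are the same op."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

record = b"candidate=A;votes=1"
ciphertext = xor_crypt(record, HARDCODED_KEY)

# An attacker with access to ANY machine (or the source) has the same key,
# so the ciphertext offers no secrecy...
assert xor_crypt(ciphertext, HARDCODED_KEY) == record

# ...and the attacker can mint a "validly encrypted" fraudulent record:
forged = xor_crypt(b"candidate=B;votes=1", HARDCODED_KEY)
```

Encryption here is present in name only: the checkbox is ticked, but the security benefit is zero.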

The bad guys don’t care whether you use encryption; they care whether they can read and modify your data. They don’t care whether your door has a lock on it; they care whether they can get it open. The checkbox approach to security works in press releases, but it doesn’t work in the field.
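The same checkbox pattern shows up in update handling. A minimal sketch (all names invented, not Diebold's code) of the gap between blindly installing whatever appears on removable media and requiring even a basic authenticity check, here a keyed MAC with a per-machine secret:

```python
import hashlib
import hmac

# Hypothetical per-machine secret, provisioned individually --
# the opposite of a value hardcoded into every unit.
MACHINE_SECRET = b"provisioned-per-machine-secret"

def blind_install(update: bytes) -> bool:
    """The rookie mistake: any file on the memory card is trusted."""
    return True

def mac(update: bytes) -> bytes:
    """Authentication tag the vendor computes over a legitimate update."""
    return hmac.new(MACHINE_SECRET, update, hashlib.sha256).digest()

def checked_install(update: bytes, tag: bytes) -> bool:
    """Install only if the tag matches; constant-time comparison."""
    return hmac.compare_digest(mac(update), tag)

legit = b"vendor-firmware-v2"
assert blind_install(b"vote-stealing-code")           # anything goes
assert checked_install(legit, mac(legit))             # authorized update
assert not checked_install(b"vote-stealing-code", mac(legit))  # rejected
```

Even this minimal check defeats the one-minute memory-card attack described above; accepting every update offered is a door with no lock at all.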

Update (Oct. 28): Several people have asked whether this entry is a joke. Unfortunately, it is not a joke.

Security Analysis of the Diebold AccuVote-TS Voting Machine

Today, Ari Feldman, Alex Halderman, and I released a paper on the security of e-voting technology. The paper is accompanied by a ten-minute video that demonstrates some of the vulnerabilities and attacks we discuss. Here is the paper’s abstract:

Security Analysis of the Diebold AccuVote-TS Voting Machine

Ariel J. Feldman, J. Alex Halderman, and Edward W. Felten
Princeton University

This paper presents a fully independent security study of a Diebold AccuVote-TS voting machine, including its hardware and software. We obtained the machine from a private party. Analysis of the machine, in light of real election procedures, shows that it is vulnerable to extremely serious attacks. For example, an attacker who gets physical access to a machine or its removable memory card for as little as one minute could install malicious code; malicious code on a machine could steal votes undetectably, modifying all records, logs, and counters to be consistent with the fraudulent vote count it creates. An attacker could also create malicious code that spreads automatically and silently from machine to machine during normal election activities – a voting-machine virus. We have constructed working demonstrations of these attacks in our lab. Mitigating these threats will require changes to the voting machine’s hardware and software and the adoption of more rigorous election procedures.

Silver Bullet Podcast

Today we’re getting hep with the youngsters, and offering a podcast in place of the regular blog entry. Technically speaking, it’s somebody else’s podcast – Gary McGraw’s Silver Bullet – but it is a twenty-minute interview with me, much of it discussing blog-related issues. Excerpts will appear in an upcoming issue of IEEE Security & Privacy. Enjoy.

Attacks on a Plane

Last week’s arrest of a gang of would-be airplane bombers unleashed a torrent of commentary, including much of the “I told you so” variety. One question that I haven’t heard discussed is why the group wanted to attack planes.

The standard security narrative has attackers striking a system’s weak points, and defenders trying to identify and remedy weak points before the attackers hit them. Surely if you were looking for a poorly secured place with a high density of potential victims, an airplane wouldn’t be your first choice. Airplanes have to be the best-secured places that most people go, and they only hold a few hundred people. A ruthless attacker who was trying to maximize expected death and destruction would attack elsewhere.

(9/11 was an attack against office buildings, using planes as weapons. That type of attack is very unlikely to work anymore, now that passengers will resist hijackers rather than cooperating.)

So why did last week’s arrestees target planes? Perhaps they weren’t thinking too carefully – Perry Metzger argues that their apparent peroxide-bomb plan was impractical. Or perhaps they were trying to maximize something other than death and destruction. What exactly? Several candidates come to mind. Perhaps they were trying to instill maximum fear, exploiting our disproportionate fear of plane crashes. Perhaps they were trying to cause economic disruption by attacking the transportation infrastructure. Perhaps planes are symbolic targets representing modernity and globalization.

Just as interesting as the attackers’ plans is the government response of beefing up airport security. The immediate security changes made sense in the short run, on the theory that the situation was uncertain and the arrests might trigger immediate attacks by unarrested co-conspirators. But it seems likely that at least some of the new restrictions will continue indefinitely, even though they’re mostly just security theater.

Which suggests another reason the bad guys wanted to attack planes: perhaps it was because planes are so intensively secured; perhaps they wanted to send the message that nowhere is safe. Let’s assume, just for the sake of argument, that this speculation is right, and that visible security measures actually invite attacks. If this is right, then we’re playing a very unusual security game. Should we reduce airport security theater, on the theory that it may be making air travel riskier? Or should we beef it up even more, to draw attacks away from more vulnerable points? Fortunately (for me) I don’t have space here to suggest answers to these questions. (And don’t get me started on the flaws in our current airport screening system.)

The bad guys’ decision to attack planes tells us something interesting about them. And our decision to exhaustively defend planes tells us something interesting about ourselves.

Banner Ads Launch Security Attacks

An online banner advertisement that ran on MySpace.com and other sites over the past week used a Windows security flaw to infect more than a million users with spyware when people merely browsed the sites with unpatched versions of Windows …

So says Brian Krebs at the Washington Post’s Security Fix blog. The ads, he says, contained a booby-trapped image that exploited a Windows security flaw to install malicious software. (Microsoft released a patch for the flaw back in January.)

Is this MySpace’s fault? I’m not asking whether MySpace is legally liable for the attack, though I’m curious what lawyers have to say about that question. I’m asking from an ethical and practical standpoint. Recognizing that the attacker himself bears primary responsibility, does MySpace bear some responsibility too?

A naive user who saw the ad displayed on a MySpace page would assume the ad was coming from MySpace. On a technical level, MySpace would not have served out the ad image, but would instead have put into the MySpace page some code directing the user’s browser to go to somebody else’s server and get an ad image; this other server would have actually provided the ad. MySpace’s business model relies on getting paid by ad agencies to embed ads in this way.

Of course, MySpace is in the business of displaying content submitted by other people. Any MySpace user could have put a similarly booby-trapped image on his own MySpace page; this has almost certainly happened. But it’s one thing to go to Johnny’s MySpace page and be attacked by Johnny. It’s another thing to go to your friend’s MySpace page and get attacked because of something that MySpace told you to display. If we’re willing to absolve MySpace of responsibility for Johnny’s attack – and I think we should be – it doesn’t follow that we have to hold MySpace blameless for the ad attack.

Nor does the fact that MySpace (presumably) does not vet the individual ads resolve the question. Failure to take a precaution does not in itself imply that the precaution is unnecessary. MySpace could have decided to vet every ad, at some cost, but instead they presumably decided to vet the ad agencies they are working with, and rely on those agencies to vet the ads.

The online ad business is a complicated web of relationships and deals. Some agencies don’t sell ads directly but make deals to display ads sold by others; and those others may in turn make the same kinds of deals, so that ads are placed on sites not directly but through a chain of intermediaries. The more the sale and placement of ads is automated, the fewer people there are in the loop to spot harmful or inappropriate ads. And the more complex and indirect the mechanisms of ad placement become, the harder it is for anyone to tell where an ad came from or how it ended up being displayed on a particular site. Ben Edelman has documented how these factors can cause ads for reputable companies to be displayed by spyware. Presumably the same kinds of factors enabled the display of these attack ads on MySpace and elsewhere.
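A toy model (all names hypothetical) of this chain of intermediaries: each party knows only its immediate partner, so the site that ultimately displays the ad never inspects the creative that finally renders in the visitor's browser.

```python
# Hypothetical chain of ad-placement deals; each entry maps a party
# to the partner it delegates the ad slot to. No single party sees
# the whole chain.
AD_CHAIN = {
    "publisher-page": "agency.example",       # site sells the slot to an agency
    "agency.example": "reseller.example",     # agency resells the slot
    "reseller.example": "attacker.example",   # reseller's partner serves the ad
}

def resolve_ad(source: str) -> list[str]:
    """Follow delegations to the server that actually delivers the ad."""
    hops = [source]
    while hops[-1] in AD_CHAIN:
        hops.append(AD_CHAIN[hops[-1]])
    return hops

print(resolve_ad("publisher-page"))
# Every extra hop is one more handoff where no human vetted the ad.
```

The publisher vets the agency, the agency vets the reseller, and nobody vets the final image – which is exactly the gap a booby-trapped ad exploits.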

If this is true, then these sorts of ad-based attacks will be a systemic problem unless the structure of the online ad business changes.