May 7, 2024

Diebold Shows How to Make Your Own Voting Machine Key

By now it should be clear that Diebold’s AccuVote-TS electronic voting machines have lousy security. Our study last fall showed that malicious software running on the machines can invisibly alter votes, and that this software can be installed in under a minute by inserting a new memory card into the side of the machine. The last line of defense against such attacks is a cheap lock covering the memory card door. Our video shows that the lock can be picked in seconds, and, infamously, it can also be opened with a key that is widely sold for use in hotel minibars and jukeboxes.

(Some polling places cover the memory card with tamper-evident seals, but these provide little real security. In practice, the seals are often ignored or accidentally broken. If broken seals are taken seriously and affected machines are taken offline for inspection, an attacker could launch a cheap denial-of-service attack by going around breaking the seals on election day.)

According to published reports, nearly all the machines deployed around the country use the exact same key. Up to this point we’ve been careful not to say precisely which key or show the particular pattern of the cuts. The shape of a key is like a password – it only provides security if you keep it secret from the bad guys. We’ve tried to keep the shape secret so as not to make an attacker’s job even marginally easier, and you would expect a security-conscious vendor to do the same.

Not Diebold. Ross Kinard of SploitCast wrote to me last month to point out that Diebold offers the key for sale on their web site. Of course, they won’t sell it to just anybody – only Diebold account holders can order it online. However, as Ross observed, Diebold’s online store shows a detailed photograph of the key.

Here is a copy of the page. The original showed the entire key, but we have blacked out the compromising part.

Could an attacker create a working key from the photograph? Ross decided to find out. Here’s what he did:

I bought three blank keys from Ace. Then a drill vise and three cabinet locks that used a different type of key from Lowes. I hoped that the spacing and depths on the cabinet locks’ keys would be similar to those on the voting machine key. With some files I had I then made three keys to look like the key in the picture.

Ross sent me his three homemade keys, and, amazingly, two of them can open the locks on the Diebold machine we used in our study!
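
Why could keys filed by eye from a photograph work at all? Pin-tumbler locks tolerate small errors in cut depth, so a copy only has to be close, not exact. Here is a toy model of that tolerance (my illustration; the bitting depths and tolerance value are invented, not Diebold's):

```python
# Toy model (not Diebold-specific): a pin-tumbler lock accepts any key whose
# cut depths are all within a small tolerance of the true bitting.
TRUE_BITTING = [3, 5, 2, 4, 1]   # hypothetical cut depths for the lock's key
TOLERANCE = 0.5                  # how far off a cut can be and still set its pin

def opens(lock_bitting, key_cuts, tol=TOLERANCE):
    """True if every cut on the key is within tolerance of the lock's bitting."""
    return len(lock_bitting) == len(key_cuts) and all(
        abs(a - b) <= tol for a, b in zip(lock_bitting, key_cuts)
    )

# Three hand-filed copies: two are close enough everywhere, one is not.
hand_filed = [
    [3.2, 4.9, 2.1, 4.3, 1.0],   # all cuts within tolerance -> opens
    [2.8, 5.1, 1.7, 3.9, 1.2],   # all cuts within tolerance -> opens
    [3.0, 5.0, 2.1, 4.8, 2.0],   # last two cuts too far off -> fails
]
results = [opens(TRUE_BITTING, k) for k in hand_filed]
assert results == [True, True, False]
```

The same tolerance that makes locks usable despite wear and manufacturing slop is what makes a photograph of a key sufficient to duplicate it.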

This video shows one of Ross’s keys opening the lock on the memory card door:

Ross says he has tried repeatedly to bring this to Diebold’s attention over the past month. However, at the time of this posting, the image was still on their site.

Security experts advocate designing systems with “defense in depth,” multiple layers of barriers against attack. The Diebold electronic voting systems, unfortunately, seem to exhibit “weakness in depth.” If one mode of attack is blocked or simply too inconvenient, there always seems to be another waiting to be exposed.

[UPDATE (Jan. 25): As of this morning, the photo of the key is no longer on Diebold’s site.]

Voting, Secrecy, and Phonecams

Yesterday I wrote about the recent erosion of the secret ballot. One cause is the change in voting technology, especially voting by mail. But even if we don’t change our voting technology at all, changes in other technologies are still eroding the secret ballot.

Phonecams are a good example. You probably carry into the voting booth a silent camera, built into a mobile phone, that can transmit photos around the world within seconds. Many phones can shoot movies, making it even easier to document your vote. Here is an example shot in 2004.

Could such a video be faked? Probably. But if your employer or union boss threatens your job unless you deliver a video of yourself voting “correctly”, will you bet your job that your fake video won’t be detected? I doubt it.

This kind of video recording subverts the purpose of the voting booth. The booth is designed to ensure the secret ballot by protecting voters from being observed while voting. Now a voter can exploit the privacy of the voting booth to create evidence of his vote. It’s not an exact reversal – at least the phonecam attack requires the voter’s participation – but it’s close.

One oft-suggested approach to fighting this problem is to have a way to revise your vote later, or to vote more than once with only one of the votes being real. This approach sounds promising at first, but it seems to cause other problems.

For example, imagine that you can get as many absentee ballots as you want, but only one of them counts and the others will be ignored. Now if somebody sees you complete and mail in a ballot, they can’t tell whether they saw your real vote. But if this is going to work, there must be no way to tell, just by looking at a ballot, whether it is real. The Board of Elections can’t send you an official letter saying which ballot is the real one – if they did, you could show that letter to a third party. (They could send you multiple letters, but that wouldn’t help – how could you tell which letter was the real one?) They can notify you orally, in person, but that makes it harder to get a ballot and lets the clerk at the Board of Elections quietly disenfranchise you by lying about which ballot is real.
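
The indistinguishability requirement can be made concrete with a toy model (my illustration; the serial numbers and field names are invented, not any real protocol): the Board records privately which serial counts, and the ballots themselves are identical in form, so no inspection of a ballot reveals whether it is the real one.

```python
# Toy model: the Board issues several absentee ballots but keeps the
# "which serial counts" fact only in its own private records.
import secrets

def issue_ballots(n: int):
    """Issue n ballots of identical form; privately record the real serial."""
    serials = [secrets.token_hex(8) for _ in range(n)]
    real_serial = secrets.choice(serials)  # lives only in the Board's records
    ballots = [{"serial": s, "vote": None} for s in serials]
    return ballots, real_serial

ballots, real_serial = issue_ballots(3)

# Every ballot carries exactly the same fields -- there is no "real" marker
# on the ballot for a coercer to check, so guessing which serial counts
# succeeds only by chance.
fields_per_ballot = [sorted(b.keys()) for b in ballots]
assert fields_per_ballot.count(fields_per_ballot[0]) == len(ballots)
assert real_serial in {b["serial"] for b in ballots}
```

The hard part, as the paragraph above argues, is notifying the voter of the real serial without creating a transferable proof: any document that convinces the voter would equally convince a coercer.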

(I’m not saying this problem is impossible to solve, only that (a) it’s harder than you might expect, and (b) I don’t know a solution.)

Approaches where you can cancel or revise your vote later have similar problems. There can’t be a “this is my final answer” button, because you could record yourself pushing it. But if there is no way to rule out later revisions to your vote, then you have to worry about somebody else coming along later and changing your vote.

Perhaps the hardest problem in voting system design is how to reconcile the secret ballot with accuracy. Methods that protect secrecy tend to undermine accuracy, and vice versa. Clever design is needed to get enough secrecy and enough accuracy at the same time. Technology seems to be making this tradeoff even nastier.

One More on Biometrics

Simson Garfinkel offers a practical perspective on biometrics, at CSO Magazine.

Washington Post on Biometrics

Today’s Washington Post has an article about the use of biometric technology, and civil-liberties resistance against it.

Interestingly, the article conflates two separate ideas: biometrics (the use of physical bodily characteristics to identify someone), and covert identification (identifying someone in a public place without their knowledge or consent). There are good civil-liberties arguments against covert identification. But the overt use of biometrics, especially in situations where identification already is expected and required, such as entry to an airplane, should be much less controversial.

There might even be ways of using biometrics that are more protective of privacy than existing identification measures are. Sometimes, you might be more comfortable having your face scanned than you would be revealing your name. Biometrics could give you that choice.

By implicitly assuming that biometric systems will be covert, the article, and apparently some of the sources it quotes, are missing the real potential of biometrics.

(Caveat: Biometrics aren’t worth much if they can’t reliably identify people, and there are good reasons to question the reliability of some biometrics.)

Wireless Tracking of Everything

Arnold Kling at The Bottom Line points to upcoming technologies that allow the attachment of tiny tags, which can be tracked wirelessly, to almost anything. He writes:

In my view, which owes much to David Brin, we should be encouraging the use of [these tags], while making sure that no single agency or elite has a monopoly on the ability to engage in tracking. Brin’s view is that tracking ability needs to be symmetric. We need to be able to keep track of politicians, government officials, and corporate executives. The danger is living in a society where one side can track but not be tracked.

Kling’s vision is of a world where nearly every object emits a kind of radio beacon identifying itself, and where these beacons are freely observable, allowing any person or device to take a census of the objects around it. It’s easy to see how this might be useful. Whether it is wise is another question entirely (which I’ll leave aside for now).

One thing is for sure: this vision is wildly implausible. Yes, tracking technology is practical, and may be inevitable. But tracking technology will evolve quickly to make Kling’s vision impossible.

First-generation tracking technology works by broadcasting a simple beacon, detectable by anyone, saying something like, “Device #67532712 is here.” If that were the end of the technological story, Kling might be right.

Like all technologies, tracking tags will evolve rapidly. Later generations won’t be so open. A tag might broadcast its identity in encrypted form, so that only authorized devices can track it. It might “lurk,” staying quiet until an authorized device sends it a wakeup signal. It might gossip with other tags across encrypted channels. Rather than being a passive identity tag, it will be an active agent, doing whatever it is programmed to do.
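
One way a later-generation tag could hide from unauthorized trackers is to broadcast a fresh keyed fingerprint instead of a plaintext serial number. Here is a minimal sketch (my illustration, using an HMAC-based scheme; the names are invented): an eavesdropper sees unlinkable random-looking bytes, while a reader holding the tag's secret key can still recognize which tag spoke.

```python
# Sketch: a tag emits (nonce, HMAC(key, nonce)) rather than "Device #N is here".
# Without the key, successive broadcasts from the same tag cannot be linked.
import hashlib
import hmac
import secrets

def tag_broadcast(tag_key: bytes):
    """The tag's side: a fresh nonce and its keyed fingerprint."""
    nonce = secrets.token_bytes(16)
    mac = hmac.new(tag_key, nonce, hashlib.sha256).digest()
    return nonce, mac

def reader_identify(known_tags: dict, nonce: bytes, mac: bytes):
    """Authorized reader: try each key it holds; return the matching tag id."""
    for tag_id, key in known_tags.items():
        expected = hmac.new(key, nonce, hashlib.sha256).digest()
        if hmac.compare_digest(expected, mac):
            return tag_id
    return None  # no key for this tag: the broadcast is opaque

# Demo: two broadcasts from the same tag look unrelated to an eavesdropper,
# but a reader holding the key links both to the tag "umbrella".
key = secrets.token_bytes(32)
readers_keys = {"umbrella": key}
b1, b2 = tag_broadcast(key), tag_broadcast(key)
assert b1 != b2                                       # unlinkable without the key
assert reader_identify(readers_keys, *b1) == "umbrella"
assert reader_identify({"umbrella": secrets.token_bytes(32)}, *b1) is None
```

This is exactly the economic shift described below: anyone can hear the tag, but only whoever holds the key can track it.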

Once this happens, economics will determine what can be tracked by whom. It will be cheap and easy to put a tag into almost anything, but tracking the tag will be impossible without getting a cryptographic secret key that only the owner of the object, or the distributor of the beacon, can provide. And this key will be provided only if doing so is in the interest of the provider.

It’s interesting to contemplate what kinds of products and services will develop in such a world. The one thing that seems pretty certain is that it won’t be the simple, open world that Kling envisions.