December 27, 2024

Sarasota Voting Machines Insecure

The technical team commissioned by the State of Florida to study the technology used in the ill-fated Sarasota election has released its report. (Background: see our earlier posts on the Sarasota election problems and on the study.)

One revelation from the study is that the iVotronic touch-screen voting machines are terribly insecure. The machines are apparently susceptible to viruses, and there are many bugs a virus could exploit to gain entry or spread:

We found many instances of [exploitable buffer overflow bugs]. Misplaced trust in the election definition file can be found throughout the iVotronic software. We found a number of buffer overruns of this type. The software also contains array out-of-bounds errors, integer overflow vulnerabilities, and other security holes. [page 57]

The equation is simple: sloppy software + removable storage = virus vulnerability. We saw the same thing with the Diebold touchscreen voting system.

Another example of poor security is in the passwords that protect crucial operations such as configuring the voting machine and modifying its software. There are separate passwords for different operations, but the system has a single backdoor that allows all of the passwords to be bypassed by an adversary who knows a one-byte secret; with only 256 possibilities, the secret is easily found by trial and error (p. 67). For example, an attacker who gets private access to the machine for just a few minutes can apparently use the backdoor to install malicious software onto a machine.

Though the machines’ security is poor and needs to be fixed before they are used in another election, I agree with the study team that the undervotes were almost certainly not caused by a security attack. The reason is simple: only a brainless attacker would cause undervotes. An attack that switched votes from one candidate to another would be more effective and much harder to detect.

So if it wasn’t a security attack, what was the cause of the undervotes?

Experience teaches that systems that are insecure tend to be unreliable as well – they tend to go wrong on their own even if nobody is attacking them. Code that is laced with buffer overruns, array out-of-bounds errors, integer overflow errors, and the like tends to be flaky. Sporadic undervotes are the kind of behavior you would expect to see from a flaky voting technology.

The study claims to have ruled out reliability problems as a cause of the undervotes, but their evidence on this point is weak, and I think the jury is still out on whether voting machine malfunctions could be a significant cause of the undervotes. I’ll explain why, in more detail, in the next post.

Diebold Shows How to Make Your Own Voting Machine Key

By now it should be clear that Diebold’s AccuVote-TS electronic voting machines have lousy security. Our study last fall showed that malicious software running on the machines can invisibly alter votes, and that this software can be installed in under a minute by inserting a new memory card into the side of the machine. The last line of defense against such attacks is a cheap lock covering the memory card door. Our video shows that the lock can be picked in seconds, and, infamously, it can also be opened with a key that is widely sold for use in hotel minibars and jukeboxes.

(Some polling places cover the memory card with tamper evident seals, but these provide little real security. In practice, the seals are often ignored or accidentally broken. If broken seals are taken seriously and affected machines are taken offline for inspection, an attacker could launch a cheap denial-of-service attack by going around breaking the seals on election day.)

According to published reports, nearly all the machines deployed around the country use the exact same key. Up to this point we’ve been careful not to say precisely which key or show the particular pattern of the cuts. The shape of a key is like a password – it only provides security if you keep it secret from the bad guys. We’ve tried to keep the shape secret so as not to make an attacker’s job even marginally easier, and you would expect a security-conscious vendor to do the same.

Not Diebold. Ross Kinard of SploitCast wrote to me last month to point out that Diebold offers the key for sale on their web site. Of course, they won’t sell it to just anybody – only Diebold account holders can order it online. However, as Ross observed, Diebold’s online store shows a detailed photograph of the key.

Here is a copy of the page. The original showed the entire key, but we have blacked out the compromising part.

Could an attacker create a working key from the photograph? Ross decided to find out. Here’s what he did:

I bought three blank keys from Ace. Then a drill vise and three cabinet locks that used a different type of key from Lowes. I hoped that the spacing and depths on the cabinet locks’ keys would be similar to those on the voting machine key. With some files I had I then made three keys to look like the key in the picture.

Ross sent me his three homemade keys, and, amazingly, two of them can open the locks on the Diebold machine we used in our study!

A video shows one of Ross’s keys opening the lock on the memory card door.

Ross says he has tried repeatedly to bring this to Diebold’s attention over the past month. However, at the time of this posting, the image was still on their site.

Security experts advocate designing systems with “defense in depth,” multiple layers of barriers against attack. The Diebold electronic voting systems, unfortunately, seem to exhibit “weakness in depth.” If one mode of attack is blocked or simply too inconvenient, there always seems to be another waiting to be exposed.

[UPDATE (Jan. 25): As of this morning, the photo of the key is no longer on Diebold’s site.]

Voting, Secrecy, and Phonecams

Yesterday I wrote about the recent erosion of the secret ballot. One cause is the change in voting technology, especially voting by mail. But even if we don’t change our voting technology at all, changes in other technologies are still eroding the secret ballot.

Phonecams are a good example. You probably carry into the voting booth a silent camera, built into a mobile phone, that can transmit photos around the world within seconds. Many phones can shoot movies, making it even easier to document your vote. Here is an example shot in 2004.

Could such a video be faked? Probably. But if your employer or union boss threatens your job unless you deliver a video of yourself voting “correctly”, will you bet your job that your fake video won’t be detected? I doubt it.

This kind of video recording subverts the purpose of the voting booth. The booth is designed to ensure the secret ballot by protecting voters from being observed while voting. Now a voter can exploit the privacy of the voting booth to create evidence of his vote. It’s not an exact reversal – at least the phonecam attack requires the voter’s participation – but it’s close.

One oft-suggested approach to fighting this problem is to have a way to revise your vote later, or to vote more than once with only one of the votes being real. This approach sounds promising at first, but it seems to cause other problems.

For example, imagine that you can get as many absentee ballots as you want, but only one of them counts and the others will be ignored. Now if somebody sees you complete and mail in a ballot, they can’t tell whether they saw your real vote. But if this is going to work, there must be no way to tell, just by looking at a ballot, whether it is real. The Board of Elections can’t send you an official letter saying which ballot is the real one – if they did, you could show that letter to a third party. (They could send you multiple letters, but that wouldn’t help – how could you tell which letter was the real one?) They can notify you orally, in person, but that makes it harder to get a ballot and lets the clerk at the Board of Elections quietly disenfranchise you by lying about which ballot is real.

(I’m not saying this problem is impossible to solve, only that (a) it’s harder than you might expect, and (b) I don’t know a solution.)

Approaches where you can cancel or revise your vote later have similar problems. There can’t be a “this is my final answer” button, because you could record yourself pushing it. But if there is no way to rule out later revisions to your vote, then you have to worry about somebody else coming along later and changing your vote.

Perhaps the hardest problem in voting system design is how to reconcile the secret ballot with accuracy. Methods that protect secrecy tend to undermine accuracy, and vice versa. Clever design is needed to get enough secrecy and enough accuracy at the same time. Technology seems to be making this tradeoff even nastier.

One More on Biometrics

Simson Garfinkel offers a practical perspective on biometrics, at CSO Magazine.

Washington Post on Biometrics

Today’s Washington Post has an article about the use of biometric technology, and civil-liberties resistance against it.

Interestingly, the article conflates two separate ideas: biometrics (the use of physical bodily characteristics to identify someone), and covert identification (identifying someone in a public place without their knowledge or consent). There are good civil-liberties arguments against covert identification. But the overt use of biometrics, especially in situations where identification already is expected and required, such as entry to an airplane, should be much less controversial.

There might even be ways of using biometrics that are more protective of privacy than existing identification measures are. Sometimes, you might be more comfortable having your face scanned than you would be revealing your name. Biometrics could give you that choice.

By implicitly assuming that biometric systems will be covert, the article, and apparently some of the sources it quotes, are missing the real potential of biometrics.

(Caveat: Biometrics aren’t worth much if they can’t reliably identify people, and there are good reasons to question the reliability of some biometrics.)