November 24, 2024

Sarasota Voting Machines Insecure

The technical team commissioned by the State of Florida to study the technology used in the ill-fated Sarasota election has released its report. (For background, see my earlier posts on the Sarasota election problems and on the study.)

One revelation from the study is that the iVotronic touch-screen voting machines are terribly insecure. The machines are apparently susceptible to viruses, and there are many bugs a virus could exploit to gain entry or spread:

We found many instances of [exploitable buffer overflow bugs]. Misplaced trust in the election definition file can be found throughout the iVotronic software. We found a number of buffer overruns of this type. The software also contains array out-of-bounds errors, integer overflow vulnerabilities, and other security holes. [page 57]

The equation is simple: sloppy software + removable storage = virus vulnerability. We saw the same thing with the Diebold touchscreen voting system.
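To make the report’s phrase “misplaced trust in the election definition file” concrete, here is a minimal Python sketch of the pattern it describes. The file layout, field names, and buffer size are invented for illustration; the actual iVotronic firmware is written in C, where the unchecked copy becomes a real buffer overrun rather than merely an oversized list.

import struct

MAX_CANDIDATES = 32  # size of the fixed buffer a C implementation would write into

def load_candidates(blob):
    # The candidate count comes straight from the (attacker-writable) file.
    (count,) = struct.unpack_from(">H", blob, 0)
    # Misplaced trust: nothing compares count to MAX_CANDIDATES before the
    # loop below copies count records. In C, the equivalent writes would run
    # past the end of a fixed-size buffer.
    names = []
    offset = 2
    for _ in range(count):
        names.append(blob[offset:offset + 16].rstrip(b"\x00"))
        offset += 16
    return names

def load_candidates_safely(blob):
    (count,) = struct.unpack_from(">H", blob, 0)
    if count > MAX_CANDIDATES:  # the missing bounds check
        raise ValueError("definition file claims %d candidates" % count)
    return load_candidates(blob[:2 + 16 * count])

An attacker who controls the file controls count, which is the kind of opening the report describes a virus exploiting to gain entry.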

Another example of poor security is in the passwords that protect crucial operations such as configuring the voting machine and modifying its software. There are separate passwords for different operations, but the system has a single backdoor that allows all of the passwords to be bypassed by an adversary who can learn or guess a one-byte secret, which is easily guessed since there are only 256 possibilities. (p. 67) For example, an attacker who gets private access to the machine for just a few minutes can apparently use the backdoor to install malicious software onto a machine.
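For a sense of how little a one-byte secret protects, here is a hedged sketch. The try_password function is a stand-in I invented for whatever check the firmware actually performs; the point is only the size of the search space.

def guess_backdoor_byte(try_password):
    # try_password is a placeholder for the machine's own check; it is not
    # part of the iVotronic software and exists here only for illustration.
    for candidate in range(256):  # 2**8 possibilities in total
        if try_password(chr(candidate)):
            return candidate      # succeeds after at most 256 attempts
    return None

Even at one attempt per second, the loop finishes in under five minutes; done in software, it finishes instantly.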

Though the machines’ security is poor and needs to be fixed before they are used in another election, I agree with the study team that the undervotes were almost certainly not caused by a security attack. The reason is simple: only a brainless attacker would cause undervotes. An attack that switched votes from one candidate to another would be more effective and much harder to detect.

So if it wasn’t a security attack, what was the cause of the undervotes?

Experience teaches that systems that are insecure tend to be unreliable as well – they tend to go wrong on their own even if nobody is attacking them. Code that is laced with buffer overruns, array out-of-bounds errors, integer overflow errors, and the like tends to be flaky. Sporadic undervotes are the kind of behavior you would expect to see from a flaky voting technology.

The study claims to have ruled out reliability problems as a cause of the undervotes, but their evidence on this point is weak, and I think the jury is still out on whether voting machine malfunctions could be a significant cause of the undervotes. I’ll explain why, in more detail, in the next post.

Why Understanding Programs is Hard

Senator Sam Brownback has reportedly introduced a bill that would require the people rating videogames to play the games in their entirety before giving a rating. This reflects a misconception common among policymakers: that it’s possible to inspect a program and figure out what it’s going to do.

It’s true that some programs can be completely characterized by inspection, but this is untrue for many programs, including some of the most interesting and useful ones. Even very simple programs can be hard to understand.

Here, for example, is a three-line Python program I just wrote:

import sys, sha
h = sha.new(sha.new(sys.argv[1]).digest()[:9]).digest()
if h.startswith("abcdefghij"): print "Drat"

(If you don’t speak Python, here’s what the program does: it takes the input you give it, and performs some standard, complicated mathematical operations on the input to yield a character-string. If the first ten characters of that string happen to be “abcdefghij”, then the program prints the word “Drat”; otherwise it doesn’t print anything.)

Will this program ever print a four-letter word? There’s no practical way to tell. It’s obvious that the program’s behavior depends on what input you give it, but is there any input that causes h to start with the magic value abcdefghij? You can run the program and inspect the code but you won’t be able to tell. Even I don’t know, and I wrote the program!
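To see why “no practical way to tell” is not an exaggeration, consider brute-force search, which is essentially the only option when a cryptographic hash behaves like a random function. Each digest byte matches a chosen value with probability 1/256, so a given input matches all ten prefix bytes with probability one in 256^10, that is, one in 2^80, or roughly one in 10^24; you would expect to try about that many inputs before ever seeing “Drat”. Here is a sketch of the search, written with the hashlib module in place of the older sha module (both compute the same SHA-1 digest):

import hashlib, os

TARGET = b"abcdefghij"  # the ten-byte prefix the program tests for

def search(max_tries):
    # Each random input matches with probability about 2**-80, so any
    # max_tries you can actually afford will almost certainly return None.
    for _ in range(max_tries):
        guess = os.urandom(16)
        h = hashlib.sha1(hashlib.sha1(guess).digest()[:9]).digest()
        if h.startswith(TARGET):
            return guess
    return None

No shortcut dramatically better than this kind of search is known for finding SHA-1 inputs with a chosen digest prefix of that length, which is why inspecting the code cannot settle the question.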

Now you might object that even if we can’t tell whether the program will output a four-letter word, the presence of print “Drat” in the code shows that the programmer at least wanted “Drat” to be a possible output.

Fair enough; but here’s a slightly more complicated program that might or might not print “Drat”.

import sys, sha
h = sha.new(sha.new(sys.argv[1]).digest()[:9]).digest()
if h.startswith("abcdef"): print h[6:10]

The behavior of this program again depends on its input. For some inputs, it will produce no output. For other inputs, it will produce an output that depends in a complicated way on the input it got. Can this program ever print “Drat”? You can’t tell, and neither can I.
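The same back-of-the-envelope arithmetic applies, assuming SHA-1’s output bytes behave like uniform random values. Roughly one input in 256^6 gets past the startswith test and prints something, but for that something to be “Drat” the next four bytes must also match, which brings the odds back to one in 256^10 = 2^80:

p_prints_anything = 256.0 ** -6   # about 3.6e-15: rare, but reachable with real effort
p_prints_drat     = 256.0 ** -10  # about 8.3e-25, i.e. 2**-80: far out of reach

So inputs that make this program print something almost certainly exist and could even be found by a very determined search, yet whether any of them spells out “Drat” is beyond anyone’s ability to check.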

Nonexperts are often surprised to learn that programs can do things the programmers didn’t expect. These surprises can be vexing; but they’re also the main reason computer science is fun.

Sarasota: Limited Investigations

As I wrote last week, a voting machine malfunction is one of the two plausible theories that could explain the mysterious undervotes in Sarasota’s congressional race. To get a better idea of whether malfunctions could be the culprit, we would have to investigate – to inspect the machines and their software for any relevant errors in design or operation. A well-functioning electoral system ought to be able to do such investigations in an open and thorough manner.

Two attempts have been made to investigate. The first was by representatives of Christine Jennings (the officially losing candidate) and a group of voters, who filed lawsuits challenging the election results and asked, as part of the suits’ discovery process, for access by their experts to the machines and their code. The judge denied their request, in a curious order that seemed to imply that they would first have to prove that there was probably a malfunction before they could be granted access to the evidence needed to tell whether there was a malfunction.

The second attempt was by the Department of State (DOS) of the state of Florida, who commissioned a study by outside experts. Oddly, I am listed in the official Statement of Work (SOW) as a principal investigator on the study team, even though I am not a member of the team. Many people have asked how this happened. The short answer is that I discussed with representatives of DOS the possibility of participating, but eventually it became clear that the study they wanted to commission was far from the complete, independent study I had initially thought they wanted.

The biggest limitation on the study is that DOS is withholding information and resources needed for a complete study. Most notably, they are not providing access to voting machines. You don’t have to be a rocket scientist to realize that if you want to understand the behavior of voting machines, it helps to have a voting machine to examine. DOS could have provided or facilitated access to a machine, but it apparently chose not to do so. [Correction (Feb. 28): The team’s final report revealed that DOS had changed its mind and given the team access to voting machines.] The Statement of Work is clear that the study is to be “a … static software analysis on the iVotronics version 8.0.1.2 firmware source code”.

(In computer science, “static” analysis of software refers to methods that examine the text of the software; “dynamic” methods observe and measure the software while it is running.)

The good news is that the team doing the study is very strong technically, so there is some hope of a useful result despite the limited scope of the inquiry. There have been some accusations of political bias against team members, but knowing several members of the team I am confident that these charges are misguided and the team won’t be swayed by partisan politics. The limits on the study aren’t coming from the team itself.

The results of the DOS-sponsored study should be published sometime in the next few months.

What we have not seen, and probably won’t, is a full, independent study of the iVotronic machines. The voters of Sarasota County – and everyone who votes on paperless machines – are entitled to a comprehensive study of what happened. Sadly, it looks like lawyers and politics will stop that from happening.

Why So Many Undervotes in Sarasota?

The big e-voting story from November’s election was in Sarasota, Florida, where a congressional race was decided by about 400 votes, with 18,412 undervotes. That’s 18,412 voters who cast votes in other races but not, according to the official results, in that congressional race. Among voters who used the ES&S iVotronic machines – that is, non-absentee voters in Sarasota County – the undervote rate was about 14%. Something went very wrong. But what?

Since the election there have been many press releases, op-eds, and blog posts about the undervotes, not to mention some lawsuits and scholarly studies. I want to spend the rest of the week dissecting the Sarasota situation, which I have been following closely. I’m doing this now for two reasons: (1) enough time has passed for the dust to settle a bit, and (2) I’m giving a joint talk on the topic next week and I want to work through some thoughts.

There’s no doubt that something about the iVotronic caused the undervotes. Undervote rates differed so starkly in the same race between iVotronic and non-iVotronic voters that the machines must be involved somehow. (For example, absentee voters had a 2.5% undervote rate in the congressional race, compared to 14% for iVotronic voters.) Several explanations have been proposed, but only two are at all plausible: ballot design and machine malfunction.

The ballot design theory says that the ballot offered to voters on the iVotronic’s screen was misdesigned in a way that caused many voters to miss that race. Looking at screenshots of the ballot, one can see how voters might miss the congressional race at the top of the second page. (Depressingly, some sites show a misleading photo that the photographer angled and lit to make the misdesign look worse than it really was.) It’s very plausible that this kind of problem caused some undervotes; and that is consistent with the reports of many voters that the machine did not show them the congressional race.

It’s one thing to say that ballot design could have caused some undervotes, but it’s another thing entirely to say it was the sole cause of so elevated an undervote rate. Each voter, before finalizing his vote, was shown a confirmation screen that listed his choices and clearly displayed a no-candidate-selected message for the congressional race. Did so many voters miss that too? And what about the many voters who reported choosing a candidate in the congressional race, only to have the no-candidate-selected message show up on the confirmation screen anyway?

The malfunction theory postulates a problem or malfunction with the voting machines that caused votes not to be recorded. There are many types of problems that could have caused lost votes. The best way to evaluate the malfunction theory is to conduct a careful and thorough study of the machines themselves. In the next entry I’ll talk about the efforts that have been made toward that end. For now, suffice it to say that no suitable study is available to us.

If we had a voter-verified paper trail, we could immediately tell which theory is correct, by comparing the paper and electronic records. If the voter-verified paper records show the same high undervote rate, then the ballot design theory is right. If the paper and electronic records show significantly different undervote rates, then something is wrong with the machines. But of course the advocates of paperless voting argued that paper trails were unnecessary – while also arguing that touchscreen systems reduce undervotes.
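A minimal sketch of that comparison, assuming hypothetical per-ballot records keyed by race name (the record format and the race label are invented here; the point is only that once voter-verified paper exists, the check is mechanical):

def undervote_rate(ballots, race):
    # ballots: list of dicts mapping race name -> recorded choice (or None)
    total = len(ballots)
    undervotes = sum(1 for b in ballots if not b.get(race))
    return float(undervotes) / total if total else 0.0

def compare_records(paper_ballots, electronic_ballots, race="FL-13 House"):
    paper = undervote_rate(paper_ballots, race)
    electronic = undervote_rate(electronic_ballots, race)
    # Similar rates support the ballot-design theory; a large gap between
    # the two points to a machine problem.
    return paper, electronic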

Several studies have tried to use statistical analyses of undervote patterns in different races, precincts, and machines to evaluate the two theories. Frisina, Herron, Honaker, and Lewis say the data support the ballot design theory; Mebane and Dill say the data point to malfunction as a likely cause of at least some of the undervotes. Reading these studies, I can’t reach a clear conclusion.

What would convince me, one way or the other, is a good study of the machines. I’ll talk next time about the fight over whether and how to look at the machines.

Diebold Shows How to Make Your Own Voting Machine Key

By now it should be clear that Diebold’s AccuVote-TS electronic voting machines have lousy security. Our study last fall showed that malicious software running on the machines can invisibly alter votes, and that this software can be installed in under a minute by inserting a new memory card into the side of the machine. The last line of defense against such attacks is a cheap lock covering the memory card door. Our video shows that the lock can be picked in seconds, and, infamously, it can also be opened with a key that is widely sold for use in hotel minibars and jukeboxes.

(Some polling places cover the memory card with tamper evident seals, but these provide little real security. In practice, the seals are often ignored or accidentally broken. If broken seals are taken seriously and affected machines are taken offline for inspection, an attacker could launch a cheap denial-of-service attack by going around breaking the seals on election day.)

According to published reports, nearly all the machines deployed around the country use the exact same key. Up to this point we’ve been careful not to say precisely which key or show the particular pattern of the cuts. The shape of a key is like a password – it only provides security if you keep it secret from the bad guys. We’ve tried to keep the shape secret so as not to make an attacker’s job even marginally easier, and you would expect a security-conscious vendor to do the same.

Not Diebold. Ross Kinard of SploitCast wrote to me last month to point out that Diebold offers the key for sale on their web site. Of course, they won’t sell it to just anybody – only Diebold account holders can order it online. However, as Ross observed, Diebold’s online store shows a detailed photograph of the key.

Here is a copy of the page. The original showed the entire key, but we have blacked out the compromising part.

Could an attacker create a working key from the photograph? Ross decided to find out. Here’s what he did:

I bought three blank keys from Ace. Then a drill vise and three cabinet locks that used a different type of key from Lowes. I hoped that the spacing and depths on the cabinet locks’ keys would be similar to those on the voting machine key. With some files I had I then made three keys to look like the key in the picture.

Ross sent me his three homemade keys, and, amazingly, two of them can open the locks on the Diebold machine we used in our study!

This video shows one of Ross’s keys opening the lock on the memory card door:

Ross says he has tried repeatedly to bring this to Diebold’s attention over the past month. However, at the time of this posting, the image was still on their site.

Security experts advocate designing systems with “defense in depth,” multiple layers of barriers against attack. The Diebold electronic voting systems, unfortunately, seem to exhibit “weakness in depth.” If one mode of attack is blocked or simply too inconvenient, there always seems to be another waiting to be exposed.

[UPDATE (Jan. 25): As of this morning, the photo of the key is no longer on Diebold’s site.]