
Sarasota: Could a Bug Have Lost Votes?

At this point, we still don’t know what caused the high undervote rate in Sarasota’s Congressional election. [Background: 1, 2.] There are two theories. The State-commissioned study released last week argues for the theory that a badly designed ballot caused many voters not to see that race and therefore not to cast a vote in it.

Today I want to make the case for the other theory: that a malfunction or bug in the voting machines caused votes not to be recorded. The case rests on four pillars: (1) The postulated behavior is consistent with a common type of computer bug. (2) Similar bugs have been found in voting machines before. (3) The state-commissioned study would have been unlikely to find such a bug. (4) Studies of voting data show patterns that point to the bug theory.

(1) The postulated behavior is consistent with a common type of computer bug.

Programmers know the kind of bug I’m talking about: an error in memory management, or a buffer overrun, or a race condition, which causes subtle corruption in a program’s data structures. Such bugs are maddeningly hard to find, because the problem isn’t evident immediately but the corrupted data causes the program to go wrong in subtle ways later. These bugs often seem to be intermittent or “random”, striking sometimes but lying dormant at other times, and seeming to strike more or less frequently depending on the time of day or other seemingly irrelevant factors. Every experienced programmer tells horror stories about such bugs.
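To make the failure mode concrete, here is a purely illustrative C sketch; the race_tally structure and record_race function are invented for the example and are not taken from any voting system’s code. It shows how a buffer overrun can silently corrupt a neighboring counter, so the visible symptom appears later and only for certain inputs.

```c
#include <stdio.h>
#include <string.h>

/* Purely illustrative: a hypothetical per-race tally structure in which a
 * buffer overrun in one field silently corrupts the vote counter next to it. */
struct race_tally {
    char race_name[16];   /* too small for some race names */
    int  votes_recorded;  /* sits right after the buffer in memory */
};

void record_race(struct race_tally *t, const char *name) {
    /* BUG: strcpy does not check the destination size. A name longer than
     * 15 characters spills past race_name into votes_recorded. */
    strcpy(t->race_name, name);
}

int main(void) {
    struct race_tally t = { "", 42 };

    /* Short names behave correctly, so casual testing may never trip the bug... */
    record_race(&t, "Governor");
    printf("%s: %d votes\n", t.race_name, t.votes_recorded);

    /* ...but a longer name silently overwrites part of the counter.
     * The symptom (a wrong count, or a crash) shows up later, far from
     * the line of code that caused it. */
    record_race(&t, "Congress Dist. 13");
    printf("%s: %d votes\n", t.race_name, t.votes_recorded);
    return 0;
}
```

With short inputs the program behaves correctly, which is exactly why bugs of this kind slip past ordinary testing and strike only intermittently in the field.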

Such a bug is consistent with the patterns we saw in the election. Undervotes didn’t happen to every voter, but they did happen in every precinct, though with different frequency in different places.

(2) Similar bugs have been found in voting machines before.

We know of at least two examples of similar bugs in voting machines that were used in real elections. After a hardware design error in Maryland voting machines caused intermittent “freezing” behavior, the vendor recalled the motherboards of 4,700 machines to remedy the problem.

Another example, this time caused by a software bug, was described by David Jefferson:

In the volume testing of 96 Diebold TSx machines … in the summer of 2005, we had an enormously high crash rate: over 20% of the machines crashed during the course of one election day’s worth of votes. These crashes always occurred either at the end of one voting transaction when the voter touched the CAST button, or right at the beginning of the next voter’s session when the voter SmartCard was inserted.

It turned out that, after a huge effort on Diebold’s part, a [Graphical User Interface] bug was discovered. If a voter touched the CAST button sloppily, and dragged his/her finger from the button across a line into another nearby window (something that apparently happened with only one of every 400 or 500 voters), an exception would be signaled. But the exception was not handled properly, leading to stack corruption or heap corruption (it was never clear to us which), which apparently invariably led to the crash. Whether it caused other problems also, such as vote corruption, or audit log corruption, was never determined, at least to my knowledge. Diebold fixed this bug, and at least TSx machines are free of it now.

These are the two examples we know about, but note that neither was made known to the public right away.

(3) The State-commissioned study would have been unlikely to find such a bug.

The State of Florida study team included some excellent computer scientists, but they had only a short time to do their study, and the scope of their study was limited. They did not perform the kind of time-consuming dynamic testing that one would use in an all-out hunt for such a bug. To their credit, they did the best they could given the limited time and tools they had, but they would have had to get lucky to find such a bug if it existed. Their failure to find such a bug is not strong evidence that a bug does not exist.

(4) Studies of voting data show patterns that point to the bug theory.

Several groups have studied detailed data on the Sarasota election results, looking for patterns that might help explain what happened.

One of the key questions is whether there are systematic differences in undervote rate between individual voting machines. The reason this matters is that if the ballot design theory is correct, then the likelihood that a particular voter undervoted would be independent of which specific machine the voter used – all voting machines displayed the same ballot. But an intermittent bug might well manifest itself differently depending on the details of how each voting machine was set up and used. So if undervote rates depend on attributes of the machines, rather than attributes of the voters, this tends to point toward the bug theory.

Of course, one has to be careful to disentangle the possible causes. For example, if two voting machines sit in different precincts, they will see different voter populations, so their undervote rate might differ even if the machines are exactly identical. Good data analysis must control for such factors or at least explain why they are not corrupting the results.
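One way to make this kind of analysis concrete is the sketch below. The per-machine counts are synthetic, invented purely for illustration, and this is not the method used by any of the studies discussed here. It computes a chi-square statistic for the hypothesis that every machine shares a single undervote rate; a large value suggests machine-dependent rates, though, as just noted, a real analysis would still have to control for differences in the voter populations the machines served.

```c
#include <stdio.h>

#define NUM_MACHINES 5

int main(void) {
    /* Synthetic counts, not the real Sarasota data. */
    int ballots[NUM_MACHINES]    = { 310, 295, 320, 305, 290 };
    int undervotes[NUM_MACHINES] = {  40,  45,  38,  70,  41 };

    int total_ballots = 0, total_under = 0;
    for (int i = 0; i < NUM_MACHINES; i++) {
        total_ballots += ballots[i];
        total_under   += undervotes[i];
    }
    double pooled_rate = (double)total_under / total_ballots;

    /* Chi-square test of the 2 x k table (undervoted vs. voted, per machine)
     * against the hypothesis of one shared undervote rate. */
    double chi_sq = 0.0;
    for (int i = 0; i < NUM_MACHINES; i++) {
        double expected_under = ballots[i] * pooled_rate;
        double expected_voted = ballots[i] * (1.0 - pooled_rate);
        double du = undervotes[i] - expected_under;
        double dv = (ballots[i] - undervotes[i]) - expected_voted;
        chi_sq += du * du / expected_under + dv * dv / expected_voted;
        printf("machine %d: undervote rate %.3f (pooled %.3f)\n",
               i, (double)undervotes[i] / ballots[i], pooled_rate);
    }
    /* Compare chi_sq against the chi-square distribution with
     * NUM_MACHINES - 1 degrees of freedom to judge significance. */
    printf("chi-square = %.2f with %d degrees of freedom\n",
           chi_sq, NUM_MACHINES - 1);
    return 0;
}
```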

There are two serious studies that point to machine-dependent results. First, Mebane and Dill found that machines that had a certain error message in their logs had a higher undervote rate. According to the State study, this error message was caused by a particular method used by poll workers to wake the machines up in the morning; so the use of this method correlated with a higher undervote rate.

Second, Charles Stewart, an MIT political scientist testifying for the Jennings campaign in the litigation, looked at how the undervote rate depended on when the voting machine was “cleared and tested”, an operation used to prepare the machine for use. Stewart found that machines that were cleared and tested later (closer to Election Day) had a higher undervote rate, and that machines that were cleared and tested on the same day as many other machines also had a higher undervote rate. One possibility is that clearing and testing a machine in a hurry, as the election deadline approached or just on a busy day, contributed to the undervote rate somehow.

Both studies indicate a link between the details of how a machine was set up and used, and the undervote rate on that machine. That’s the kind of thing we’d expect to see with an intermittent bug, but not if undervotes were caused strictly by ballot design and user confusion.

Conclusion

What conclusion can we draw? Certainly we cannot say that a bug definitely caused undervotes. But we can say with confidence that the bug theory is still in the running, and needs to be considered alongside the ballot design theory as a possible cause of the Sarasota undervotes. If we want to get to the bottom of this, we need to investigate further, by looking more deeply into undervote patterns, and by examining the voting machine hardware and software.

[Correction (Feb. 28): I changed part (3) to say that the team “had” only a short time to do their study. I originally wrote that they “were given” only a short time, which left the impression that the state had set a time limit for the study. As I understand it, the state did not impose such a time limit. I apologize for the error.]

Sarasota Voting Machines Insecure

The technical team commissioned by the State of Florida to study the technology used in the ill-fated Sarasota election has released its report. (Background: on the Sarasota election problems; on the study.)

One revelation from the study is that the iVotronic touch-screen voting machines are terribly insecure. The machines are apparently susceptible to viruses, and there are many bugs a virus could exploit to gain entry or spread:

We found many instances of [exploitable buffer overflow bugs]. Misplaced trust in the election definition file can be found throughout the iVotronic software. We found a number of buffer overruns of this type. The software also contains array out-of-bounds errors, integer overflow vulnerabilities, and other security holes. [page 57]

The equation is simple: sloppy software + removable storage = virus vulnerability. We saw the same thing with the Diebold touchscreen voting system.
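To illustrate the class of flaw the report describes, misplaced trust in data read from the election definition file, here is a hypothetical C sketch. The candidate_record structure and load_candidate function are invented for the example; this is not code from the report or from the actual firmware.

```c
#include <stdio.h>
#include <string.h>

struct candidate_record {
    char name[32];
};

/* The record claims its own length; the parser believes it. */
int load_candidate(struct candidate_record *out, const unsigned char *data) {
    unsigned int claimed_len = data[0];   /* read from removable media */

    /* BUG: no check that claimed_len <= sizeof(out->name).
     * A crafted election definition file can overrun 'name' and overwrite
     * whatever lies beyond it. A safe version would reject or truncate
     * oversized records. */
    memcpy(out->name, data + 1, claimed_len);
    return 0;
}

int main(void) {
    /* A well-formed record works as intended... */
    unsigned char good[] = { 5, 'A', 'l', 'i', 'c', 'e' };
    struct candidate_record rec = { "" };
    load_candidate(&rec, good);
    printf("loaded: %s\n", rec.name);

    /* ...but a record whose claimed length exceeds 32 would write past
     * the buffer. With removable media carrying such files from machine
     * to machine, this is the raw material for a voting-machine virus. */
    return 0;
}
```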

Another example of poor security is in the passwords that protect crucial operations such as configuring the voting machine and modifying its software. There are separate passwords for different operations, but the system has a single backdoor that allows all of the passwords to be bypassed by an adversary who can learn or guess a one-byte secret; guessing is easy, since there are only 256 possibilities. (p. 67) For example, an attacker who gets private access to the machine for just a few minutes can apparently use the backdoor to install malicious software onto the machine.
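To see why a one-byte secret offers essentially no protection, consider the sketch below. The try_unlock function and the secret value are hypothetical stand-ins, since the machine’s real check is not public; the point is only that exhausting all 256 possible values takes a trivial loop.

```c
#include <stdio.h>

/* Hypothetical stand-in for whatever check the machine performs. */
int try_unlock(unsigned char guess) {
    const unsigned char secret = 0xA7;   /* made-up value */
    return guess == secret;
}

int main(void) {
    /* A one-byte secret has only 256 possible values, so exhaustive
     * guessing succeeds almost instantly. */
    for (int guess = 0; guess < 256; guess++) {
        if (try_unlock((unsigned char)guess)) {
            printf("backdoor secret found: 0x%02X after %d tries\n",
                   guess, guess + 1);
            return 0;
        }
    }
    return 1;
}
```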

Though the machines’ security is poor and needs to be fixed before they are used in another election, I agree with the study team that the undervotes were almost certainly not caused by a security attack. The reason is simple: only a brainless attacker would cause undervotes. An attack that switched votes from one candidate to another would be more effective and much harder to detect.

So if it wasn’t a security attack, what was the cause of the undervotes?

Experience teaches that systems that are insecure tend to be unreliable as well – they tend to go wrong on their own even if nobody is attacking them. Code that is laced with buffer overruns, array out-of-bounds errors, integer overflow errors, and the like tends to be flaky. Sporadic undervotes are the kind of behavior you would expect to see from a flaky voting technology.

The study claims to have ruled out reliability problems as a cause of the undervotes, but its evidence on this point is weak, and I think the jury is still out on whether voting machine malfunctions could be a significant cause of the undervotes. I’ll explain why, in more detail, in the next post.

Sarasota: Limited Investigations

As I wrote last week, malfunctioning voting machines are one of the two plausible theories that could explain the mysterious undervotes in Sarasota’s congressional race. To get a better idea of whether malfunctions could be the culprit, we would have to investigate – to inspect the machines and their software for any relevant errors in design or operation. A well-functioning electoral system ought to be able to do such investigations in an open and thorough manner.

Two attempts have been made to investigate. The first was by representatives of Christine Jennings (the officially losing candidate) and a group of voters, who filed lawsuits challenging the election results and asked, as part of the suits’ discovery process, for access by their experts to the machines and their code. The judge denied their request, in a curious order that seemed to imply that they would first have to prove that there was probably a malfunction before they could be granted access to the evidence needed to tell whether there was a malfunction.

The second attempt was by the Department of State (DOS) of the state of Florida, who commissioned a study by outside experts. Oddly, I am listed in the official Statement of Work (SOW) as a principal investigator on the study team, even though I am not a member of the team. Many people have asked how this happened. The short answer is that I discussed with representatives of DOS the possibility of participating, but eventually it became clear that the study they wanted to commission was far from the complete, independent study I had initially thought they wanted.

The biggest limitation on the study is that DOS is withholding information and resources needed for a complete study. Most notably, they are not providing access to voting machines. You don’t have to be a rocket scientist to realize that if you want to understand the behavior of voting machines, it helps to have a voting machine to examine. DOS could have provided or facilitated access to a machine, but it apparently chose not to do so. [Correction (Feb. 28): The team’s final report revealed that DOS had changed its mind and given the team access to voting machines.] The Statement of Work is clear that the study is to be “a … static software analysis on the iVotronics version 8.0.1.2 firmware source code”.

(In computer science, “static” analysis of software refers to methods that examine the text of the software; “dynamic” methods observe and measure the software while it is running.)

The good news is that the team doing the study is very strong technically, so there is some hope of a useful result despite the limited scope of the inquiry. There have been some accusations of political bias against team members, but knowing several members of the team I am confident that these charges are misguided and the team won’t be swayed by partisan politics. The limits on the study aren’t coming from the team itself.

The results of the DOS-sponsored study should be published sometime in the next few months.

What we have not seen, and probably won’t, is a full, independent study of the iVotronic machines. The voters of Sarasota County – and everyone who votes on paperless machines – are entitled to a comprehensive study of what happened. Sadly, it looks like lawyers and politics will stop that from happening.

Why So Many Undervotes in Sarasota?

The big e-voting story from November’s election was in Sarasota, Florida, where a congressional race was decided by about 400 votes, with 18,412 undervotes. That’s 18,412 voters who cast votes in other races but not, according to the official results, in that congressional race. Among voters who used the ES&S iVotronic machines – that is, non-absentee voters in Sarasota County – the undervote rate was about 14%. Something went very wrong. But what?

Since the election there have been many press releases, op-eds, and blog posts about the undervotes, not to mention some lawsuits and scholarly studies. I want to spend the rest of the week dissecting the Sarasota situation, which I have been following closely. I’m doing this now for two reasons: (1) enough time has passed for the dust to settle a bit, and (2) I’m giving a joint talk on the topic next week and I want to work through some thoughts.

There’s no doubt that something about the iVotronic caused the undervotes. Undervote rates differed so starkly in the same race between iVotronic and non-iVotronic voters that the machines must be involved somehow. (For example, absentee voters had a 2.5% undervote rate in the congressional race, compared to 14% for iVotronic voters.) Several explanations have been proposed, but only two are at all plausible: ballot design and machine malfunction.

The ballot design theory says that the ballot offered to voters on the iVotronic’s screen was misdesigned in a way that caused many voters to miss that race. Looking at screenshots of the ballot, one can see how voters might miss the congressional race at the top of the second page. (Depressingly, some sites show a misleading photo that the photographer angled and lit to make the misdesign look worse than it really was.) It’s very plausible that this kind of problem caused some undervotes; and that is consistent with the reports of many voters that the machine did not show them the congressional race.

It’s one thing to say that ballot design could have caused some undervotes, but it’s another thing entirely to say it was the sole cause of so elevated an undervote rate. Each voter, before finalizing his vote, was shown a clearly designed confirmation screen listing his choices and showing a no-candidate-selected message for the congressional race. Did so many voters miss that too? And what about the many voters who reported choosing a candidate in the congressional race, only to have the no-candidate-selected message show up on the confirmation screen anyway?

The malfunction theory postulates a problem or malfunction with the voting machines that caused votes not to be recorded. There are many types of problems that could have caused lost votes. The best way to evaluate the malfunction theory is to conduct a careful and thorough study of the machines themselves. In the next entry I’ll talk about the efforts that have been made toward that end. For now, suffice it to say that no suitable study is available to us.

If we had a voter-verified paper trail, we could immediately tell which theory is correct, by comparing the paper and electronic records. If the voter-verified paper records show the same high undervote rate, then the ballot design theory is right. If the paper and electronic records show significantly different undervote rates, then something is wrong with the machines. But of course the advocates of paperless voting argued that paper trails were unnecessary – while also arguing that touchscreen systems reduce undervotes.

Several studies have tried to use statistical analyses of undervote patterns in different races, precincts, and machines to evaluate the two theories. Frisina, Herron, Honaker, and Lewis say the data support the ballot design theory; Mebane and Dill say the data point to malfunction as a likely cause of at least some of the undervotes. Reading these studies, I can’t reach a clear conclusion.

What would convince me, one way or the other, is a good study of the machines. I’ll talk next time about the fight over whether and how to look at the machines.

Diebold Shows How to Make Your Own Voting Machine Key

By now it should be clear that Diebold’s AccuVote-TS electronic voting machines have lousy security. Our study last fall showed that malicious software running on the machines can invisibly alter votes, and that this software can be installed in under a minute by inserting a new memory card into the side of the machine. The last line of defense against such attacks is a cheap lock covering the memory card door. Our video shows that the lock can be picked in seconds, and, infamously, it can also be opened with a key that is widely sold for use in hotel minibars and jukeboxes.

(Some polling places cover the memory card with tamper evident seals, but these provide little real security. In practice, the seals are often ignored or accidentally broken. If broken seals are taken seriously and affected machines are taken offline for inspection, an attacker could launch a cheap denial-of-service attack by going around breaking the seals on election day.)

According to published reports, nearly all the machines deployed around the country use the exact same key. Up to this point we’ve been careful not to say precisely which key or show the particular pattern of the cuts. The shape of a key is like a password – it only provides security if you keep it secret from the bad guys. We’ve tried to keep the shape secret so as not to make an attacker’s job even marginally easier, and you would expect a security-conscious vendor to do the same.

Not Diebold. Ross Kinard of SploitCast wrote to me last month to point out that Diebold offers the key for sale on their web site. Of course, they won’t sell it to just anybody – only Diebold account holders can order it online. However, as Ross observed, Diebold’s online store shows a detailed photograph of the key.

Here is a copy of the page. The original showed the entire key, but we have blacked out the compromising part.

Could an attacker create a working key from the photograph? Ross decided to find out. Here’s what he did:

I bought three blank keys from Ace. Then a drill vise and three cabinet locks that used a different type of key from Lowes. I hoped that the spacing and depths on the cabinet locks’ keys would be similar to those on the voting machine key. With some files I had I then made three keys to look like the key in the picture.

Ross sent me his three homemade keys, and, amazingly, two of them can open the locks on the Diebold machine we used in our study!

This video shows one of Ross’s keys opening the lock on the memory card door:

Ross says he has tried repeatedly to bring this to Diebold’s attention over the past month. However, at the time of this posting, the image was still on their site.

Security experts advocate designing systems with “defense in depth,” multiple layers of barriers against attack. The Diebold electronic voting systems, unfortunately, seem to exhibit “weakness in depth.” If one mode of attack is blocked or simply too inconvenient, there always seems to be another waiting to be exposed.

[UPDATE (Jan. 25): As of this morning, the photo of the key is no longer on Diebold’s site.]