It’s a curious problem: how do you compare two completely unrelated voting systems and say that one is more or less secure than the other? How can you meaningfully compare the security of paper ballots tabulated by optical scan systems with DRE systems (with or without VVPAT attachments)?
There’s a clear disconnect on this issue. It shows up, among other places, in a recent blog post by political scientist Thad Hall:
The point here is that, when we think about paper ballots and absentee voting, we do not typically think about or evaluate them “naked” but within an implementation context yet we think nothing of evaluating e-voting “naked” and some almost think it “cheating” to think about e-voting security within the context of implementation. However, if we held both systems to the same standard, the people in California probably would not be voting using any voting system; given its long history, it is inconceivable that paper ballots would fail to meet the standards to which e-voting is held, absent evaluating its implementation context.
Hall then goes on to point to his recent book with Mike Alvarez, Electronic Elections, that beats on this particular issue at some length. What that book never offers, however, is a decent comparison between electronic voting and anything else.
I’ve been thinking about this issue for a while: there must be a decent, quantitative way to compare these things. Turns out, we can leverage a foundational technique from computer science theory: complexity analysis. CS theory is all about analyzing the “big-O” complexity of various algorithms. Can we analyze this same complexity for voting systems’ security flaws?
I took a crack at the problem for a forthcoming journal paper. I classified a wide variety of voting systems according to how much effort an attacker needs to expend to influence all the votes: effort proportional to the total number of voters, effort proportional to the number of precincts, or constant effort; less effort implies less security. I also broke this down by different kinds of attacks: integrity attacks that try to change votes in a stealthy fashion, confidentiality attacks that try to learn how specific voters cast their votes, and denial-of-service attacks that don’t care about stealth but want to smash parts of the election. This was a fun paper to write, and it nicely responds to Hall and Alvarez’s criticisms. Have a look.
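To make the classification concrete, here’s a minimal Python sketch of the idea. The systems and table entries below are hypothetical placeholders made up for illustration, not the paper’s actual findings:

```python
# Hypothetical illustration: attack-effort classes per attack type.
# "1" = constant effort, "P" = proportional to precincts,
# "N" = proportional to total voters. Less effort implies less security.
EFFORT_ORDER = {"1": 0, "P": 1, "N": 2}

# These entries are placeholders, not the paper's actual table.
SYSTEMS = {
    "op-scan paper": {"integrity": "P", "confidentiality": "P", "dos": "P"},
    "DRE":           {"integrity": "1", "confidentiality": "1", "dos": "1"},
    "DRE + VVPAT":   {"integrity": "P", "confidentiality": "1", "dos": "1"},
}

def weakest_class(system: str) -> str:
    """A system is only as strong as its cheapest attack."""
    return min(SYSTEMS[system].values(), key=EFFORT_ORDER.get)

for name in SYSTEMS:
    print(f"{name:14s} cheapest attack class: O({weakest_class(name)})")
```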
(Joe Hall also responded to Thad Hall’s post.)
This is a nice idea, which captures intuitive ideas about voting system security (at least, it matches my intuition).
In the discussion of related work, I think it would be appropriate to mention the “number of informed participants” (sometimes called “attack team size”) metric in the “Machinery of Democracy” study. You’re proposing an asymptotic version of that metric, I think. Of the various ways of measuring the cost of attacking voting systems (and probably a lot of other systems), I think that attack team size is the best, although it still has some problems. I also agree that it is appropriate to consider different classes of attackers.
This is not entirely true. If a machine with a broken paper roll records an exceptionally large number of total votes compared to other machines in the same precinct, then there would be good reason to be suspicious of that machine. Thus, the broken machines can only dent the election’s integrity by a small margin (per machine), and we are looking at an attack of O(M), where M is the number of machines in the system. If all the machines (or a substantial proportion) break their paper rolls, we would hope that some warranty claim might occur and the manufacturer is hit with the cost of rerunning the election (unlikely in real life, but this is all theoretical). It seems reasonable that the buyers of equipment should demand a contract where the vendor is penalized for breakdowns.
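To illustrate the sanity check being described, here is a minimal sketch; the 1.5× cutoff and the sample totals are arbitrary assumptions of mine:

```python
import statistics

def suspicious_machines(totals: dict[str, int], cutoff: float = 1.5) -> list[str]:
    """Flag machines whose vote totals are far above the precinct median.
    A machine recording roughly double its peers' totals (say, after its
    paper roll broke and it kept accepting votes) deserves scrutiny.
    The 1.5x cutoff is an arbitrary choice for illustration."""
    median = statistics.median(totals.values())
    return [m for m, t in totals.items() if t > cutoff * median]

# Example: machine M4 reports about twice what its neighbors do.
print(suspicious_machines({"M1": 410, "M2": 395, "M3": 402, "M4": 815, "M5": 399}))
# -> ['M4']
```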
Obviously, knocking the broken machines out of service improves the overall accuracy. I don’t believe that any election system can guarantee integrity right down to parts per million, but that isn’t necessary for democracy. Even an error of up to 1% still keeps our leaders in a position where they stand to lose if they piss enough people off.
This makes me notice another problem with Dan’s analysis: election integrity is not a binary on/off result, it is a continuous error function. Thus, the analysis of each attack really needs a curve with “attacker effort” on the X axis and “percent error in tally” on the Y axis. For example, a paper-ballot-counting official might be able to sneak in an extra ballot tucked in one sock, and maybe another ballot tucked into the other sock. Sneaking in a hundred ballots is considerably more difficult.
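For instance, here is a toy model of such a curve; the cost numbers and growth rate are invented purely to illustrate the shape of the comparison:

```python
# Toy effort-vs-error curves. The cost model is invented: each successive
# smuggled ballot is assumed to cost 20% more effort than the last, while
# a hypothetical "hack the central tabulator" attack costs a flat amount
# no matter how many votes it alters.

def stuffing_effort(ballots: int, base: float = 1.0, growth: float = 1.2) -> float:
    """Cumulative effort to sneak in `ballots` extra paper ballots."""
    return sum(base * growth**k for k in range(ballots))

def tabulator_effort(ballots: int, fixed_cost: float = 500.0) -> float:
    """Constant effort regardless of votes altered: the O(1) case."""
    return fixed_cost

for n in (1, 2, 10, 100):
    print(f"{n:4d} ballots: stuffing={stuffing_effort(n):14.1f}"
          f"  tabulator={tabulator_effort(n):7.1f}")
```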
Howdy,
There is the issue of the security of the vote, and it is important. Almost as important to me is the perception of the reliability of the vote. I have served as an election judge several times here in Texas. With the traditional process, I can explain to a voter (and some do ask) how their vote is protected. I am a computer programmer, and I understand what is going on in these electronic voting machines. But I can’t really explain to a voter how their vote is protected. Partially, that is because these systems are vulnerable to abuse. Partially, it is because they are too complicated for most people to understand. I think this is all going to undermine people’s faith in the honesty of elections, and that is a bad direction to go.
I’m perusing the table in Dan Wallach’s “big-Oh” paper and feel compelled to note that Electronic + VVPAT is O(1) for confidentiality if the VVPAT is a “toilet-paper roll,” as provided by some of the vendors. DREs with VVPAT are also O(1) for both integrity and denial of service if the paper runs out or jams and voting is allowed to continue electronically, as has been observed in about 10% of precincts where they have been used (according to some research, including my own in Franklin County, Ohio).
Electronic with VVPAT, especially as presently implemented by the existing “certified” products, is absolutely risky and should NOT be used.
Dan,
Speaking as a state official involved in assessing voting systems, your paper and the constructive discussion it has started could not be more welcome or valuable. As you refine this analytical tool, I hope it will be possible to include practical and detailed hypothetical examples. Most election officials, myself included, have limited mastery of statistical analysis. We need to be able to follow this discussion. For myself at least, examples will strengthen the necessary resolve to dig out a textbook and start studying.
Your point about lower-bound analysis versus upper-bound analysis is intriguing. Still, if you say “it’s going to take you at least this much work to flip the election with this attack,” that’s not nearly as valuable as saying “it’s going to take you at most this much work to flip the election with this attack.”
You make an interesting point, namely that the magnitude of fraud necessary to flip an election goes down as an election gets close. My existing upper-bound analysis could still work for this. If it takes O(N) work to corrupt N ballots for whatever voting system, that sort of attack becomes viable when N is small, and therefore you can’t just ignore it, because small N may be sufficient in tight races.
In other words, if the margin of victory is V, you’ve got an O(N) attack, and V << N, then you need to do O(V) work to flip the verdict. If you’ve got an O(P) attack, i.e., one proportional to the number of precincts, and the margin of victory is V, then you would need to do O(V*P/N) work. That sort of logic is more precise than saying that you need to do O(1) work when the margin is a single vote.
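A quick sketch of that arithmetic, with constants dropped (the example numbers are made up, and the outputs are orders of magnitude, not predictions):

```python
def ballots_to_corrupt(margin_v: int) -> int:
    """An O(N)-style attack works ballot by ballot, so flipping a race
    decided by V votes takes work proportional to V."""
    return margin_v  # O(V)

def precincts_to_corrupt(margin_v: int, voters_n: int, precincts_p: int) -> float:
    """An O(P)-style attack corrupts whole precincts. Each precinct holds
    about N/P ballots, so covering a margin of V takes roughly
    V / (N/P) = V*P/N precincts' worth of work."""
    return margin_v * precincts_p / voters_n  # O(V*P/N)

# Example (made-up numbers): 200,000 voters, 100 precincts, 1,000-vote margin.
print(ballots_to_corrupt(1_000))                  # ~1,000 ballots
print(precincts_to_corrupt(1_000, 200_000, 100))  # ~0.5, i.e., one precinct suffices
```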
Clearly, your goal is to be more precise, and one of the ways you can be more precise is to add a new variable (the margin of victory). You could also increase your precision by adding in more specific details of any particular voting system and the peculiar procedures used in a specific county with that system. I’d say that that sort of analysis is entirely appropriate if a county clerk comes to you and says “this paper was pretty neat, but I’m curious whether our local procedures improve the situation or not.” At that point, you don’t need to generalize, as I did in my paper.
In my previous post, above, I alluded to two propositions, without formally stating them.
1) In analyzing the effort required for an attack, we’re usually less concerned with asymptotic upper bounds on the worst-case complexity of the attack. Instead, an important concern might be proving an asymptotic lower bound: Ω(f(n)).
2) The asymptotic lower bound for most classes of attacks on voting systems is a constant function: Ω(1). That is, for any election where the margin of victory is no greater than one vote, the election can be compromised by flipping that one vote.
These two propositions lead me to question the usefulness of the approach set forth in the paper. Nevertheless, I do think this paper makes an important contribution to thinking about the problem of comparing voting systems. It stimulates thinking about what other tools we have in our theoretical toolbox.
Computer scientists emphasize asymptotic analysis. And, indeed, on the practical, engineering side of things, we’ve all heard the horror stories about algorithms and data structures originally designed for ten items. All too often, they work just fine for a hundred items (on systems with a hundred times as much CPU and a hundred times as much memory), then fall over eternally dead the instant some fool inputs 32,768 items. Asymptotic analysis is truly invaluable.
For some strange reason, though, in contrast to computer science, the electrical engineering field tends to emphasize operating-point analysis. Pick a bias point, then make the assumption that everything varies linearly around that quiescent point. The assumption of linearity is often rather unbelievable, but the approach seems to work: EEs just can’t handle the complex math required for better analysis. (Stupid EEs: if they could do advanced math, they’d have become computer scientists!)
A standard model for an election might assume a jurisdiction (a city or county) with 100 precincts, each precinct containing 1,000 actual voters. If we set a standard turnout of 50%, then each precinct only needs to handle a maximum of 2,000 potential voters. Two thousand is perhaps a big number, but, informally, I’d be hard-pressed to characterize it as approaching infinity. Even the two hundred thousand potential voters in this (proposed) standard jurisdiction isn’t that large.
All in all, I don’t mean to suggest that Dan’s paper should abandon its asymptotic approach. But I think it should point out the problems with asymptotic analysis, and stimulate further thinking in the field.
First, let’s all agree on the usual distinction between O(f(n)) and Theta(f(n)). With that out of the way, though, I think that characterizing some attack classes as requiring effort proportional to the number of voters overestimates their difficulty. Instead, many attacks only require effort sufficient to confidently gain a majority of votes.
Consider an election between Alice and Bob, with 1,000 voters. Let’s say that if the election were fair, then Alice would win 550 to 450. But if Bob can coerce a random selection of only 93 voters to all cast their votes for him, then he has a better than even chance of picking up enough votes to win. And the odds get much, much better if Bob can select a group who are more likely than average to prefer Alice.
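For what it’s worth, the arithmetic checks out. The number of Alice supporters among the 93 coerced voters is hypergeometric, and Bob wins when at least 51 of them flip; a short exact computation (standard library only) puts his chances a bit above one half:

```python
from math import comb

def bob_win_probability(n_voters: int = 1000, alice: int = 550,
                        coerced: int = 93) -> float:
    """Chance Bob wins if `coerced` voters, drawn uniformly at random,
    all cast their votes for him. Let X be the number of Alice supporters
    in the coerced group (hypergeometric); each one flipped moves the
    margin by two, so Bob needs X >= 51 to overcome a 100-vote deficit."""
    bob = n_voters - alice
    need = (alice - bob) // 2 + 1                     # 51
    total = comb(n_voters, coerced)
    return sum(comb(alice, x) * comb(bob, coerced - x)
               for x in range(need, coerced + 1)) / total

print(bob_win_probability())  # slightly above 0.5, as claimed
```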
Yeah, you could characterize that as O(n), but I don’t think that’s especially useful as a comparative measure of attack effort required.
Certainly security is an important issue with voting in general and electronic voting in particular, but what concerns me even more than the lurking vulnerabilities are other sorts of bugs.
There seems to be very little thought given to detecting and eliminating bugs in the existing electronic systems. When the tallies don’t match (assuming there’s any sort of cross-check), there’s no forensic record to figure out what went wrong and what the tallies should have been. It could be fraud (an exploited vulnerability), but it could also be a stupid non-malicious software bug or a hardware glitch.
I suspect that most of the discrepancies we read about are from plain old bugs rather than conspiracies to affect the results. Without some real engineering, our democracy may be at the mercy of mediocre programming and commodity electronics. In my mind, that seems a much more likely threat than a concerted effort to rig an election.
Relative to the end-to-end solutions: you can effectively eliminate the O(1) risk with standardized/public hardware and software, plus spot-checks by teams with competing interests before, during, and after the elections. The risk of detection becomes very high, as high as or higher than under any prior process.
A couple notes…
Postal workers do not have to open absentee ballots to change the course of an election. Given information about the probable voting habits of individual voters, a worker could simply discard a ballot, either when it is sent to the voter or when it is sent back by the voter.
A much easier variant is to discard a fraction of the ballots for an area known for voting unfavorably on a cause or candidate you want to promote. Discarding all ballots for an area might be detected, but discarding a fraction is often sufficient to change an election result, while remaining hard to detect.
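A back-of-envelope sketch of why a fraction suffices (the area’s size, lean, and discard rate below are invented for illustration):

```python
def net_margin_shift(ballots: int, lean: float, discard_fraction: float) -> float:
    """Expected net swing from silently discarding a random fraction of
    absentee ballots in an area that leans toward the disfavored side.
    `lean` is the disfavored candidate's vote share there (e.g. 0.70):
    each discarded ballot removes `lean` expected votes from that side
    and (1 - lean) from yours, so the net swing per ballot is 2*lean - 1."""
    return ballots * discard_fraction * (2 * lean - 1)

# Example (invented numbers): lose 10% of 20,000 ballots in a 70/30 area.
print(net_margin_shift(20_000, 0.70, 0.10))  # ~800 net votes, quietly
```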
From experience (I have hosted the local polling place for several years), it seems that a lot of absentee ballots expected by voters are never delivered, and most of those voters do not vote. The entire process is (or at least seems) very error-prone, so the above attack should be effective.
A variant of the same attack applies when the absentee ballots are mailed to voters. A subverted government worker could simply fail to mail a fraction of the ballots for some areas.
Our local Registrar of Voters has representatives of both political parties present to observe the counting of votes, so the risk of detection is high at that point.
As you might guess, I have a very low opinion of vote-by-mail. 🙂
@Tel: Page 13, the header of the table is at the bottom of the page, the rest of the table is on the next page, I really hate that.
Oops. I hate that as well. Normally I write these things in LaTeX, which solves all those sorts of problems for you. The law journal wanted Microsoft Word, so you see what happened. I’ve fixed it now.
Spaghetti sort isn’t really O(1). Cutting the spaghetti in the first place is at least O(n), and for a large enough number of noodles, you’ll have to spend more than constant time looking for the longest noodle to pull out at the end of the game too (then multiply by n noodles to get superlinear time). In fact, it’s quite likely that the spaghetti sort will take O(n^2), worse than computer methods, when the asymptotics are properly accounted for.
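Here’s that accounting, sketched as code: with the physical steps charged honestly, spaghetti sort is just selection sort, and the scan-per-extraction is what drives it to O(n^2). (The implementation below is my own illustration, of course, not anything from the paper.)

```python
def spaghetti_sort(values):
    """Spaghetti sort with its physical steps charged honestly: 'cutting'
    is O(n), and each 'pull out the tallest noodle' is an O(n) scan of the
    bundle, repeated n times -- selection sort in disguise, O(n^2) total."""
    rods = list(values)           # cutting the noodles: O(n)
    out = []
    while rods:                   # n extractions...
        tallest = max(rods)       # ...each one an O(n) scan
        rods.remove(tallest)      # and an O(n) removal
        out.append(tallest)
    return out                    # descending order

print(spaghetti_sort([3.5, 4, 1, 2.25]))  # [4, 3.5, 2.25, 1]
```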
This isn’t a trivial issue because it highlights something that goes on in voting machine security too: you can make something look good by shifting the problem to some other part of the system and then ignoring that part. You can say spaghetti sort is O(1) if you don’t examine how long it takes to get the input in and the output out. You can say vulnerability to physical access isn’t important if you say the poll volunteers will take care of it and don’t examine whether they actually *can*.
Page 13, the header of the table is at the bottom of the page, the rest of the table is on the next page, I really hate that.
I’m not sure why you don’t include plain old hand-counted paper ballots (as used in the majority of elections worldwide). It would at least provide a point of reference.
It’s an interesting idea, and a very valid research topic. However, I think that the “big-Oh” approach hides something (just like it does in computer science). The classic example is the algorithm for O(1) sorting: you cut each input number into a length of spaghetti (e.g., input number 3.5 becomes a 3.5-inch piece of spaghetti, input number 4 becomes a 4-inch piece, etc.), then you bundle up all the spaghetti and bring the bundle down onto the table (with the spaghetti standing on end). The numbers are now sorted; you can remove the largest spaghetti first by pulling out the one that sticks up on top.
Needless to say, this is not a popular algorithm in general use. Although it is O(1) and beats every other known sort algorithm as far as “big-Oh” is concerned, it is still insanely slow because of the requirement to measure and cut physical objects.
Similarly, shooting the president is an O(1) attack on the election integrity. However, it is deliberately made very difficult, precisely because it is such an obvious single point of failure (and a big penalty if you get caught in the attempt).
Getting closer to the actual article in question: you rate bribery of election officials as O(P) or O(1) depending on the system, but again, this is such an obvious attack that election officials are (at least in theory) carefully chosen to resist it. Typically there would be scrutineers from all the major parties, plus a few randomly chosen citizens from a range of sources. This is by design, to make it difficult for anyone to reliably bribe such a diverse group of people (and the one honest member will document and call out the others’ cheating when he sees it).
You might also consider an analysis of what risks the attacker faces if they do get caught out. Do they end up with conclusive evidence pointing to them as the cheat? Can they fob it off onto some system error? Most of the old-fashioned systems were designed to require attackers to put themselves personally in physical danger if they wanted to do something nasty. Things like Internet voting change the risk profile: the attacker can get a virus onto a remote machine and sit in China or Romania while changing an election outcome in the USA.
Finally, you regard widespread vote buying and intimidation as feasible; however, these attacks are detected very rapidly, even before they become widespread. People know who is intimidating them. Look at the recent Zimbabwe election: everyone knew about the government thugs beating up opposition supporters, so the check worked, and the election’s integrity is known worldwide to be zero. (Yes, I understand it is possible to be more subtle, but the threat still does not really measure up in such simple terms.)
Very neat paper, Dan.