September 29, 2022

Is Internet Voting Secure? The Science and the Policy Battles

I will be presenting a similarly titled paper at the 2022 Symposium on Contemporary Issues in Election Law run by the University of New Hampshire Law Review, October 7th in Concord, NH. The paper will be published in the UNH Law Review in 2023 and is available now on SSRN.

I have already serialized parts of this paper on Freedom-to-Tinker: Securing the Vote; unsurprising and surprising insecurities in Democracy Live’s OmniBallot; the New Jersey lawsuit (and settlement); the New York (et al.) lawsuit; lawsuits in VA, NJ, NY, NH, and in NC; inherent insecurity; accommodating voters with disabilities; and Switzerland’s system.

Now here it is in one coherent whole, with footnotes.

Abstract. No known technology can make internet voting secure, according to the clear scientific consensus. In some applications—such as e-pollbooks (voter sign-in), voter registration, and absentee ballot request—it is appropriate to use the internet, as the inherent insecurity can be mitigated by other means. But the insecurity of paperless transmission of a voted ballot through the internet cannot be mitigated.

The law recognizes this in several ways. Courts have enjoined the use of certain paperless or internet-connected voting systems. Federal law requires states to allow voters to use the internet to request absentee ballots, but carefully stops short of internet ballot return (i.e., voting).

But many U.S. states and a few countries go beyond what is safe: they have adopted internet voting, for citizens living abroad and (in some cases) for voters with disabilities.

Most internet voting systems have an essentially common architecture, and they are insecure at least at the same key point, after the voter has reviewed the ballot but before it is transmitted. I review six internet voting systems deployed 2006-2021 that were insecure in practice, just as predicted by theory—and some were also insecure in surprising new ways, “unforced errors”.

We can’t get along without the assistance of computers. U.S. ballots are too long to count entirely by hand unless the special circumstances of a recount require it. So computer-counted paper ballots play a critical role in the security and auditability of our elections. But audits cannot be used to secure internet voting systems, which have no paper ballots that form an auditable paper trail.

So there are policy controversies: trustworthiness versus convenience, security versus accessibility. In 2019-22 there were lawsuits in Virginia, New Jersey, New York, New Hampshire, and North Carolina; legislation enacted in Rhode Island and withdrawn in California. There is a common pattern to these disputes, which have mostly resolved in a way that provides remote accessible vote by mail (RAVBM) but stops short of permitting electronic ballot return (internet voting).

What would it take to thoroughly review a proposed internet voting system to be assured whether it delivers the security it promises? Switzerland provides a case study. In Switzerland, after a few years of internet voting pilot projects, the Federal Chancellery commissioned several extremely thorough expert studies of their deployed system. These reports teach us not only about their internet voting system itself but about how to study those systems before making policy decisions.

Accessibility of election systems to voters with disabilities is a genuine problem. Disability-rights groups have been among those lobbying for internet voting (which is not securable) and other forms of remote accessible vote by mail (which can be adequately securable). I review statistics showing that internet voting is probably not the most effective way to serve voters with disabilities.

The anomaly of cheap complexity

Why are our computer systems so complex and so insecure?  For years I’ve been trying to answer this question. Here’s one explanation, which happens to be in the context of voting computers, but it’s a general phenomenon that applies to all our computers:

There are many layers between the application software that implements an electoral function and the transistors inside the computers that ultimately carry out computations. These layers include the election application itself (e.g., for voter registration or vote tabulation); the user interface; the application runtime system; the operating system (e.g., Linux or Windows); the system bootloader (e.g., BIOS or UEFI); the microprocessor firmware (e.g., Intel Management Engine); disk drive firmware; system-on-chip firmware; and the microprocessor’s microcode. For this reason, it is difficult to know for certain whether a system has been compromised by malware. One might inspect the application-layer software and confirm that it is present on the system’s hard drive, but any one of the layers listed above, if hacked, may substitute a fraudulent application layer (e.g., vote-counting software) at the time that the application is supposed to run. As a result, there is no technical mechanism that can ensure that every layer in the system is unaltered and thus no technical mechanism that can ensure that a computer application will produce accurate results. 

[Securing the Vote, page 89-90]

So, computers are insecure because they have so many complex layers.

But that doesn’t explain why there are so many layers, and why those layers are so complex, even for what “should be a simple thing” like counting up votes.

Recently I came across a really good explanation: a keynote talk by Thomas Dullien entitled “Security, Moore’s law, and the anomaly of cheap complexity” at CyCon 2018, the 10th International Conference on Cyber Conflict, organized by NATO.

Thomas Dullien’s talk video is here, but if you want to just read the slides, they are here.

As Dullien explains,

A modern 2018-vintage CPU contains a thousand times more transistors than a 1989-vintage microprocessor.  Peripherals (GPUs, NICs, etc.) are objectively getting more complicated at a superlinear rate. In his experience as a cybersecurity expert, the only thing that ever yielded real security gains was controlling complexity.  His talk examines the relationship between complexity and failure of security, and discusses the underlying forces that drive both.

Transistors-per-chip is still increasing every year; there are 3 new CPUs per human per year.  Device manufacturers are now developing their software even before the new hardware is released.  Insecurity in computing is growing faster than security is improving.

The anomaly of cheap complexity.  For most of human history, a more complex device was more expensive to build than a simpler device.  This is not the case in modern computing. It is often more cost-effective to take a very complicated device, and make it simulate simplicity, than to make a simpler device.  This is because of economies of scale: complex general-purpose CPUs are cheap.  On the other hand, custom-designed, simpler, application-specific devices, which could in principle be much more secure, are very expensive.  

This is driven by two fundamental principles in computing: universal computation, meaning that any computer can simulate any other; and Moore’s law, the observation that the number of transistors on a chip grows exponentially over time, doubling roughly every two years.  ARM Cortex-M0 CPUs cost pennies, though they are more powerful than some supercomputers of the 20th century.

The same is true in the software layers.  A (huge and complex) general-purpose operating system is free, but a simpler, custom-designed, perhaps more secure OS would be very expensive to build.  Or as Dullien asks, “How did this research code someone wrote in two weeks 20 years ago end up in a billion devices?”

Then he discusses hardware supply-chain issues: “Do I have to trust my CPU vendor?”  He discusses remote-management infrastructures (such as the “Intel Management Engine” referred to above):  “In the real world, ‘possession’ usually implies ‘control’. In IT, ‘possession’ and ‘control’ are decoupled. Can I establish with certainty who is in control of a given device?”

He says, “Single bitflips can make a machine spin out of control, and the attacker can carefully control the escalating error to his advantage.”  (Indeed, I’ve studied that issue myself!)

Dullien quotes the science-fiction author Robert A. Heinlein:

“How does one design an electric motor? Would you attach a bathtub to it, simply because one was available? Would a bouquet of flowers help? A heap of rocks? No, you would use just those elements necessary to its purpose and make it no larger than needed — and you would incorporate safety factors. Function controls design.” 

 Heinlein, The Moon Is A Harsh Mistress

and adds, “Software makes adding bathtubs, bouquets of flowers, and rocks, almost free. So that’s what we get.”

Dullien concludes his talk by saying, “When I showed the first [draft of this talk] to some coworkers they said, ‘you really need to end on a more optimistic note.’”  So Dullien gives optimism a try, discussing possible advances in cybersecurity research; but still he gives us only a 10% chance that society can get this right.


Postscript:  Voting machines are computers of this kind.  Does their inherent insecurity mean that we cannot use them for counting votes?  No. The consensus of election-security experts, as presented in the National Academies study, is: we should use optical-scan voting machines to count paper ballots, because those computers, when they are not hacked, are much more accurate than humans.  But we must protect against bugs, against misconfigurations, against hacking, by always performing risk-limiting audits, by hand, of an appropriate sample of the paper ballots that the voters marked themselves.
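For readers who want a concrete picture of what a risk-limiting audit actually computes, here is a minimal sketch of a BRAVO-style ballot-polling test for a two-candidate race. The function name and parameters are mine, and real RLAs must handle multiple candidates, sampling without replacement, stratification, and escalation to a full hand count; this only illustrates the core statistical idea.

```python
import math

def bravo_audit(sampled_ballots, reported_winner_share, risk_limit=0.05):
    """Simplified BRAVO-style ballot-polling audit (illustrative only).

    sampled_ballots: sequence of True (ballot for the reported winner)
                     or False (ballot for the reported loser),
                     drawn at random from the cast paper ballots.
    reported_winner_share: the winner's reported vote share (> 0.5).
    risk_limit: the maximum chance of confirming a wrong outcome.

    Returns True if the sample confirms the reported outcome at the
    risk limit; False means keep sampling (or hand-count everything).
    """
    assert reported_winner_share > 0.5
    log_t = 0.0                              # log of the test statistic
    threshold = math.log(1.0 / risk_limit)   # stop when evidence is strong
    for for_winner in sampled_ballots:
        if for_winner:
            log_t += math.log(reported_winner_share / 0.5)
        else:
            log_t += math.log((1.0 - reported_winner_share) / 0.5)
        if log_t >= threshold:
            return True    # strong evidence the reported outcome is correct
    return False
```

With a reported 60% winner and a 5% risk limit, a random sample of a few dozen ballots that overwhelmingly favors the reported winner is enough to confirm; a sample that splits evenly never confirms, and the audit escalates.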

Magical thinking about Ballot-Marking-Device contingency plans

The Center for Democracy and Technology recently published a report, “No Simple Answers: A Primer on Ballot Marking Device Security”, by William T. Adler.   Overall, it’s well-informed, clearly presents the problems as of 2022, and it’s definitely worth reading.  After explaining the issues and controversies, the report presents recommendations, most of which make a lot of sense, and indeed the states should act upon them.  But there’s one key recommendation in which Dr. Adler tries to provide a simple answer, and unfortunately his answer invokes a bit of magical thinking.  This seriously compromises the conclusions of his report.  By asking but not answering the question of “what should an election official do if there are reports of BMDs printing wrong votes?”, Dr. Adler avoids having to make the inevitable conclusion that BMDs-for-all-voters is a hopelessly flawed, insecurable method of voting.  Because the answer to that question is, unfortunately, there’s nothing that election officials could usefully do in that case.

BMDs (ballot marking devices) are used now in several states and there is a serious problem with them (as the report explains): “a hacked BMD could corrupt voter selections systematically, such that a candidate favored by the hacker is more likely to win.”  That is, if a state’s BMDs are hacked by someone who wants to change the result of an election, the BMDs can print ballots with votes on them different from what the voters indicated on the touchscreen.  Because most voters won’t inspect the ballot paper carefully enough before casting their ballot, most voters won’t notice that their vote has been changed.  The voters who do notice are (generally) allowed to “spoil” their ballot and cast a new one; but the substantial majority of voters, those who don’t check their ballot paper carefully, are vulnerable to having their votes stolen.

One simple answer is not to use BMDs at all: let voters mark their optical-scan paper ballots with a pen (that is, HMPB: hand-marked paper ballots).  A problem with this simple answer (as the report explains) is that some voters with disabilities cannot mark a paper ballot with a pen.  And (as the report explains) if BMDs are reserved just for the use of voters with disabilities, then those BMDs become “second class”: pollworkers are unfamiliar with how to set them up, rarely used machines may not work in the polling place when turned on, paper ballots cast by the disabled are distinguishable from those filled in with a pen, and so on.

So Dr. Adler seems to accept that BMDs, with their serious vulnerabilities, are inevitably going to be adopted—and so he makes recommendations to mitigate their insecurities.  And most of his recommendations are spot-on:  incorporate the cybersecurity measures required by the VVSG 2.0, avoid the use of bar codes and QR codes, adopt risk-limiting audits (RLAs).  Definitely worth doing those things, if election officials insist on adopting this seriously flawed technology in the first place.

But then he makes a recommendation intended to address the problem that if the BMD is cheating then it can print fraudulent votes that will survive any recount or audit.  The report recommends,

Another way is to depend on voter reports. In an election with compromised BMDs modifying votes in a way visible to voters who actively verify and observe those modifications, it is likely that election officials would receive an elevated number of reported errors. In order to notice a widespread issue, election officials must be monitoring election errors in real-time across a county or state. If serious problems are revealed with the BMDs that cast doubt on whether votes were recorded properly, either via parallel testing or from voter reports, election officials must respond. Accordingly, election officials should have a contingency plan in the event that BMDs appear to be having widespread issues. Such a plan would include, for instance, having the ability to substitute paper ballots for BMDs, decommissioning suspicious BMDs, and investigating whether other machines are also misbehaving. Stark (2019) has warned, however, that because it is likely not possible to know how many or which ballots were affected, the only remedy to this situation may be to hold a new election.

This is the magical thinking:  “election officials should have a contingency plan.”  The problem is, when you try to write down such a plan, there’s nothing that actually works!  Suppose the election officials rely on voter reports (or on the rate of spoiled ballots); suppose the “contingency plan” says (for example) “if x percent of the voters report malfunctioning BMDs, or y percent of voters spoil their ballots, then we will . . .”   Then we will what?  Remove those BMDs from service in the middle of the day?  But then all the votes already cast on those BMDs will have been affected by the hack; that could be thousands of votes.  Or what else?  Discard all the paper ballots that were cast on those BMDs?  Clearly you can’t do that without holding an entirely new election.  And what if those x% or y% of voters were fraudulently reporting BMD malfunction or fraudulently spoiling their ballots to trigger the contingency plan?  There’s no plan that actually works.
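A back-of-the-envelope calculation shows why voter reports are such a weak signal. All the numbers below are hypothetical (the low rate at which voters carefully verify their printouts is, however, in the ballpark of what usability studies have observed):

```python
def expected_reports(votes_cast, fraction_altered, verify_rate, report_rate):
    """How many voter reports would a BMD attack actually generate?

    All parameters are hypothetical, for illustration:
      fraction_altered: share of ballots the hacked BMDs misprint,
      verify_rate:      share of voters who carefully check the printout,
      report_rate:      share of those who notice and actually report it.
    Returns (altered votes, expected reports to officials).
    """
    altered = votes_cast * fraction_altered
    noticed = altered * verify_rate
    reported = noticed * report_rate
    return altered, reported

# A 1% shift among 100,000 votes, with 7% of voters checking carefully
# and half of those who notice bothering to report:
altered, reported = expected_reports(100_000, 0.01, 0.07, 0.5)
# roughly 1,000 altered votes, but only about 35 reports
```

Thirty-five scattered reports are indistinguishable from ordinary voter confusion, or from deliberate false reports, which is exactly the dilemma described above.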

Everything I’ve explained here was already written down in “Ballot-marking devices cannot ensure the will of the voters” (2020 [non-paywall version]) and in “There is no reliable way to detect hacked ballot-marking devices” (2019), both of which Dr. Adler cites.  But an important purpose of magical thinking is to avoid facing difficult facts.

It’s like saying, “to prevent climate change we should just use machines to pull 40 billion tons of CO2 out of the atmosphere each year.”  But there is no known technology that can do this.  All the direct-air-capture facilities deployed to date can capture just 0.00001 billion tons.  Just because we really, really want something to work is not enough.

There is an inherent problem with BMDs: they can change votes in a way that will survive any audit or recount.  Not only is there “no simple solution” to this problem, there’s no solution period.  Perhaps someday a solution will be identified.  Until then, BMDs-for-all-voters is dangerous, even with all known mitigations.

Switzerland’s E-voting: The Threat Model

Part 5 of a 5-part series starting here

Switzerland commissioned independent expert reviews of the E-voting system built by Swiss Post.   One of those experts concluded, “as imperfect as the current system might be when judged against a nonexistent ideal, the current system generally appears to achieve its stated goals, under the corresponding assumptions and the specific threat model around which it was designed.”

I have explained the ingenious idea behind client-side security in the Swiss Post system: because the voter’s computer may be quite insecure (the client-side voting app may be under the control of a hacker), certain secrets are kept on paper, where the computer can’t see them.  Professor Ford, the systems-security expert, points out that part of the threat model is: if the printing contractor that prints the paper and mails it is corrupt, then the system is insecure.

The new threat model in 2022. But I’ll now add something to the threat model that I would not have thought about last year:  Step one of the voter’s workflow is, “type in a 20-character password from the paper into the voting app.”

Start voting key:
ti6v-kvtc-yiju-hh5h-pjsk

In the old days (2020 and before) the voter would do this using a physical or on-screen keyboard.  In the modern era (2022) you might do this using Apple’s “live text”, in which you aim your phone camera at anything with text in it, and then you can copy-paste from the picture.  And, of course, if you do that, then the phone sees all the secrets on the paper.

So the security of the Swiss Post E-voting system relies entirely on a trick (that the voter’s computer can’t know the secret numbers on a piece of paper) that has been made obsolete by advances in consumer technology.

Voter behavior as a component of the threat model.  Experts in voting protocols came to realize, over the past two decades, the importance of dispute resolution in a voting protocol.  That is, suppose a voter comes to realize, while participating in an e-voting system (or at a physical polling-place) that the election system is not properly tallying their vote.  Can the voter prove this to the election officials in a way that appropriate action will be taken, and their vote will be tallied correctly?   If not, then we say the dispute resolution system has failed.

Also, experts have come to understand that voters are only human: they overlook things and they don’t check their work.  So the voting system must have a human-centered design that works for real people.

In my previous post I described the Swiss Post E-voting protocol:

  1. The voter receives a printed sheet in the mail;
  2. The voter copies the start-voting-key into the browser (or voting app);
  3. The voter selects candidates in the app;
  4. The app encodes the ballot using the serial number, and transmits to the servers;
  5. The servers send back “return codes” to the app, which then displays them to the voter;
  6. The voter looks up the return codes on the printed sheet to make sure they match the candidate selections.
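The six steps above can be sketched in a few lines of Python. This is purely illustrative: in the real system the return codes are derived cryptographically by the servers rather than looked up in a table, and all names here are mine.

```python
import secrets

def print_voter_sheet(choices):
    """Simulate the printing authority: one secret 4-digit return code
    per possible choice.  The table exists on paper (and at the servers),
    and is never visible to the voting app."""
    return {choice: f"{secrets.randbelow(10_000):04d}" for choice in choices}

def server_return_codes(sheet, received_selections):
    """The servers look up return codes for the selections they received
    (which, if the app cheated, may not be what the voter chose)."""
    return [sheet[s] for s in received_selections]

def voter_checks(sheet, intended_selections, displayed_codes):
    """Step 6: the voter compares the codes displayed by the app against
    the printed sheet for the candidates they intended to vote for."""
    return displayed_codes == [sheet[s] for s in intended_selections]
```

If a hacked app transmits “NO” when the voter chose “YES”, the servers send back the return code for “NO”, and a voter who performs step 6 sees the mismatch. A voter who skips step 6 sees nothing at all.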

But what if most voters omit step 6, checking the return codes?  Then the voting app could get away with cheating: encode the wrong candidates, the server will send return codes for those wrong candidates, and the voter won’t notice.

To address that problem, Swiss Post added more steps to the protocol:

  7. The voter enters a “ballot casting key” into the app to confirm that they’ve checked the return codes; the app transmits it to the servers;
  8. The servers transmit another return code to confirm;
  9. The voter checks that the “vote cast return code” displayed by the app matches the one on the paper.
Ballot casting key: 8147-1584-8
Vote cast return code: 0742-5185

This protocol is becoming ridiculously complex, not exactly human-centered.  Even so, here’s how the app could cheat: fail to transmit the “ballot casting key” to the servers, and make up a fake “vote cast return code”.  If the voter omits step 9, then the app has gotten away with cheating: it didn’t manage to cast a vote for the wrong candidate, but it did manage to cancel the voter’s ballot.
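This confirmation-phase attack is easy to sketch. The server stub and function names below are hypothetical stand-ins, for illustration only:

```python
class ServerStub:
    """Stand-in for the e-voting servers (hypothetical, for illustration)."""
    def __init__(self, vote_cast_return_code):
        self.code = vote_cast_return_code
        self.ballot_cast = False

    def confirm(self, ballot_casting_key):
        self.ballot_cast = True   # the ballot is cast only if the key arrives
        return self.code          # the genuine vote-cast return code

def honest_app(server, ballot_casting_key):
    """The confirmation steps as intended: forward the casting key,
    relay the real vote-cast return code for the voter to check."""
    return server.confirm(ballot_casting_key)

def cheating_app(server, ballot_casting_key):
    """A hacked app can simply drop the casting key and display a
    fabricated code instead: the ballot is silently never cast."""
    return "0000-0000"   # made up; fools any voter who skips the final check
```

A voter who performs the final check against the paper catches the fabricated code; a voter who skips it believes their vote was cast when it was actually cancelled.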

And what’s the voter supposed to do if the return codes don’t match?  Recall what’s printed in red on the paper:

Choice Return Code:
Question 1: 
YES: 1225
NO: 7092
EMPTY: 2812

Question 2:
YES: 9817
NO: 2111
EMPTY: 6745

Please check that your device displays the correct choice return codes.  If you cannot see the correct codes or in case of doubt, please contact the election authorities (0XX/ XXX XX XX).

And what should the authorities do if voters call that phone number and claim that the return codes don’t match?  This video (found on this page) suggests the answer: the voter is told, “we didn’t count your e-vote, you should vote on paper by physical mail instead.”

A big danger is that voters skip step 6 (diligently check every return code against the paper printout) and proceed directly to step 7 (enter the “casting key” to submit their ballot).  Would voters really do that?  Of course they would: research has shown over and over that voters don’t carefully reconfirm on paper the choices they made on-screen.

You might think, “I’ll check my own result, so it’s OK.”  But if thousands of your fellow voters are careless with step 6, that allows the voting app (if hacked) to change thousands of their votes, which can change the outcome of your election.  For a full analysis, see Ballot-Marking Devices (BMDs) Cannot Assure the Will of the Voters (here’s the non-paywall version).

In conclusion, the protocol has many, many steps for the voter to follow, and even then it’s not robust against typical voter behavior.   These issues were left out of the threat model that the experts examined.


Other threats:    For brevity, I didn’t even describe some other threats that the experts should probably consider in their next-round evaluation.  For example:

  • When the (hacked) app transmits a fake vote and displays a fake return-code, it could also display a (fake) “return code doesn’t match, try again” button.  If the user clicks there, then the app transmits the real vote (the one the voter selected) and gets back the real return code.  In that case, the app hasn’t succeeded in stealing this voter’s vote, but the voter is reassured and doesn’t alert the hotline.
  • That last point is an indication of a more general threat:  the hacked app can change the protocol, at least the part of the protocol that involves interaction with the voter, by giving the voter fraudulent instructions.  There could be a whole class of threats there; I invite the reader to invent some.
  • Back to step 9:  Suppose an attacker hacks thousands of voter computers/phones, so thousands of voters get bad return codes (because their vote has been stolen), and some percentage of them will notice (in step 9) and call the phone number to report.  That is, a couple hundred calls come in to the hotline.  Hundreds of calls to the hotline is evidence either that thousands of votes are being stolen, or that hundreds of voters are falsely reporting.  Should the election be re-run?  The point is, the election protocol is not complete without a written protocol for what the authorities should do in this case.  And unfortunately, there’s nothing good they can do; which is already a serious flaw in the whole system.
  • I discussed, “the (hacked) computer might be able to see what’s on the paper.”  But consider this:  within a few years after deployment of such a system, it’s easy to imagine that voters will pressure election administrators, “can’t you just e-mail my voter-sheet to me (as a PDF) instead of sending paper in physical mail?”  Then it’s trivial for a (hacked) computer or phone to see all the return codes on the voter sheet.  It will be natural for an election administrator to forget that the security of this whole system relies on the fact that the computer can’t see the paper; sending the “paper” as a PDF defeats that crucial security mechanism.
  • The same is true if the voter-sheet is faxed to the voter; fax is internet, these days.

What the Assessments Say About the Swiss E-voting System

(Part 4 of a 5-part series starting here)

In 2021 the Swiss government commissioned several in-depth technical studies of the Swiss Post E-voting system, by independent experts from academia and private consulting firms.  They sought to assess: does the protocol as documented guarantee the security called for by Swiss law (the “ordinance on electronic voting”, OEV)?  Does the system as implemented in software correctly correspond to the protocol as documented?  Are the networks and systems, on which the system is deployed, adequately secure?

Before the reports even answer those questions, they point out: “the engineers who build the system need to do a better job of documenting how the software, line by line, corresponds to the protocol it’s supposed to be implementing.”  That is, this kind of assessment can’t work on an impenetrable black-box system; the Swiss Post developers have made good progress in “showing their work” so that it can be assessed, but they need to keep improving.

And this is a very complex protocol, and system, because it’s attempting to solve a very difficult problem:  conduct an election securely even though some of the servers and most of the client computers may be under the control of an adversary.   The server-side solution is to split the trust among several servers using a cryptographic consensus protocol.  The client-side solution is what I described in the previous post: even if the client computer is hacked, it’s not supposed to be able to succeed in cheating because there are certain secrets that it can’t see, printed on the paper and only visible to the voter.
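To illustrate the trust-splitting idea (and only the idea; the actual Swiss Post protocol uses far more sophisticated cryptography, with verifiability proofs, than this toy), here is an additive secret-sharing sketch in which no proper subset of the control components learns anything about a vote:

```python
import secrets

PRIME = 2**31 - 1   # a small prime field, for illustration only

def split_vote(vote, n_components=4):
    """Additively share a vote among n control components.  Any n-1 of
    the shares are uniformly random, so no proper subset of components
    learns anything about the vote; only all of them together can."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_components - 1)]
    last = (vote - sum(shares)) % PRIME
    return shares + [last]

def combine(shares):
    """All components together reconstruct the value (e.g., into a tally)."""
    return sum(shares) % PRIME
```

The design point this illustrates: compromising one server (or even three of the four control components) reveals nothing about individual votes, which is why the architecture strengthens vote privacy even though, as the expert notes, it does not by itself strengthen integrity or availability.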

Now, does the voting protocol work in principle?  The experts on cryptographic voting protocols say, “The Swiss Post e-voting system protocol documentation, code and security proofs show continuing improvement. The clarity of the protocol and documentation is much improved on earlier versions [which] has exposed many issues that were already present but not visible in the earlier versions of the system; this is progress. … There are, at present, significant gaps in the protocol specification, verification specification, and proofs. … [S]everal of the issues that we found require structural changes …. ”

And, is the system architecture secure?  The expert on system security says, “the SwissPost E-voting system [has] been evolving … for well over a decade. … The current generation of the system under audit takes many important and valuable measures for security and transparency that are to this author’s knowledge unprecedented or nearly-unprecedented among governmental E-voting programs worldwide. At a technical level, these measures include individual and universal verifiability mechanisms, trust-splitting of critical functions across four control components, the incorporation of an independent auditor role in the E-voting process, and the adoption of a reproducible build process for the E-voting software.  [I see] ample evidence overall of both a system and a development process represent[ing] an exemplar that other governments worldwide should examine closely, learn from, and adopt similar state-of-the-art practices where appropriate.”

But on the other hand, he says, “the current system under audit is still far from the ideal system that … perhaps any expert well-versed in this technology domain – would in principle like to see. Some issues [include] the current system’s reliance on a trusted and fully-centralized printing authority, and its exclusion of coercion or vote-buying as a risk to be taken seriously and potentially mitigated.  [And] Explicit documentation of the architecture’s security principles and assumptions, and how the concrete system embodies them, is still incomplete or unclear in many respects … The architecture’s trust-splitting across four control components strengthens vote privacy, but does not currently strengthen either end-to-end election integrity or availability … The architecture critically relies on an independent auditor for universal verifiability, but the measures taken to ensure the auditor’s independence appear incomplete … While the system’s abstract cryptographic protocol is well-specified and rigorously formalized, the security of the lower-level message-based interactions between the critical devices – especially the interactions involving offline devices – do not yet appear to be fully specified or analyzed.”

In conclusion,  the cryptographic-protocol experts recommend, “We encourage the stakeholders in Swiss e-voting to allow adequate time for the system to [be] thoroughly reviewed before restarting the use of e-voting,”  while the system-security expert concludes, “as imperfect as the current system might be when judged against a nonexistent ideal, the current system generally appears to achieve its stated goals, under the corresponding assumptions and the specific threat model around which it was designed.”

In the next part of this series:  Threats that the experts didn’t think of.