April 17, 2021

Juan Gilbert’s Transparent BMD

Princeton’s Center for Information Technology Policy recently hosted a talk by Professor Juan Gilbert of the University of Florida, in which he demonstrated his interesting new invention and presented results from user studies.

What’s the problem with ballot-marking devices?

It’s well known that a voting system must use paper ballots to be trustworthy (at least with any known or foreseeable technology). But how should voters mark their ballots? Hand-marked paper ballots (HMPB) allow voters to fill in ovals with a pen, to be counted by an optical scanner. Ballot-marking devices (BMDs) allow voters to use a touchscreen (or other assistive device) and then print out a ballot card listing the voter’s choices.

The biggest problem with BMDs is that most voters don’t check the ballot card carefully, so if the BMD were hacked and misrepresented votes on the paper, most voters wouldn’t notice; and even if a few voters did notice, the BMD would already have stolen the votes of many other voters.

One scientific study (not in a real election) showed that some process interventions, such as reminding voters to check their ballots, might improve the rate at which voters check their ballots. I am skeptical that those kinds of interventions will be consistently applied in thousands of polling places, or that voters will stay vigilant year after year. And even if the rate of checking can be improved from 6.6% to 50%, there’s still no clear remedy that can protect the outcome of the election as a whole.

The transparent BMD

Instead of reminding voters, Professor Gilbert’s solution forces them to look directly at the printout immediately after each contest is voted. In this video, at 0:36, see how the voter is asked to touch the screen directly in front of the spot on the paper where the vote was just printed.

Voter’s finger confirming a printed-on-paper vote by touching the screen directly in front of where the vote was printed.

He explains more in the CITP seminar he presented at Princeton. He also explains his user studies. When the BMD deliberately printed one vote wrong on the paper ballot (out of 12 contests on the ballot), 36% of voters noticed and said something about it–and another 41% noticed but didn’t say anything until asked. This is a significantly higher rate of detection than when using conventional BMDs. Hypothetically, if those 41% could somehow be prompted to speak up, then there’d be a 77% rate at which voters would detect and correct fraudulent vote-flipping.

Somehow, this physically embodied intervention seems more consistently effective than one that requires sustained cooperation from election administrators, poll workers, and voters–all of whom are only human.

Would this make BMDs safe to use?

Recall what the problem is: if the BMD cheats on a fraction X of the votes in a certain contest, and only a fraction Y of those voters check their ballot carefully, and only a fraction Z of those will actually speak up, then only X·Y·Z of the voters will speak up. In a very close election, X might be 1/100; Y has been measured as 1/15; and Z might be 1/2; so XYZ = 1/3000. Professor Gilbert has demonstrated that (with the right technology) Y can be improved to 77% (roughly 3/4), but Z is still only about 1/2. Suppose further tinkering could improve Z to 3/4; then XYZ would be 1/178. That is, if the hacked BMD attempted to steal 1% of the votes, then 9/16 of those voters would notice and ask the pollworkers for a do-over, so the net rate of theft would be only 7/16 of 1%, or about half a percent.
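Here’s that arithmetic as a quick back-of-the-envelope calculation (a minimal sketch in Python, using only the hypothetical numbers from the paragraph above, not measured constants):

    # Hypothetical values from the discussion above, expressed as fractions.
    X = 1 / 100   # fraction of votes the hacked BMD tries to flip
    Y = 3 / 4     # fraction of affected voters who notice the misprint (roughly Gilbert's 77%)
    Z = 3 / 4     # fraction of those who actually speak up (an optimistic assumption)

    speak_up = X * Y * Z          # fraction of all voters who report a flipped vote
    net_theft = X * (1 - Y * Z)   # fraction of all votes successfully stolen

    print(f"about 1 in {1 / speak_up:.0f} voters would speak up")                  # ~1 in 178
    print(f"net theft rate: {net_theft:.2%} of all votes")                         # ~0.44%, about half a percent
    print(f"reports in a 3,000,000-voter state: {3_000_000 * speak_up:,.0f}")      # ~16,875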

And in that hypothetical scenario, one voter out of every 178 would have asked for a do-over, saying “what printed on the paper isn’t what I selected on the touchscreen.” That’s (perhaps) two or three in every medium-size polling place–or, in a statewide election with 3 million voters, that’s more than 16,000 voters speaking up. If that happened, and if the margin of victory is less than half-a-percent, then what should the Secretary of State do?

The answer is still not clear. You can read this to see the difficulty.

So, the Transparent BMD is a really interesting research advance; it is a really good design idea; and Professor Gilbert’s user studies are professionally done. But further research is needed to figure out how such machines could (safely) be used in real elections.

And there’s still no excuse for using conventional BMDs, with their abysmal rate at which voters check their ballot papers, as the default mode for all voters in a public election.


Further caveats. These are considerations, worth further study, for evaluating the practical security of “transparent BMDs” in elections.

  1. If a voter speaks up and says “the machine changed my vote”, will the local pollworkers respond appropriately? Suppose there have been many elections in a row where the voting machines haven’t been hacked (which we certainly hope is the case!); then whatever training the pollworkers are supposed to have may have been omitted or forgotten.
  2. When analyzing whether a new physical design is more secure, one must assume that the hacker can install software that behaves in any way the hardware is capable of. Just to take one example, suppose the hacked BMD software is designed to behave like a conventional BMD: first accept all the voter’s choices, then print at the end (without forcing the voter to touch the screen in front of each just-printed vote). This gives the attacker the opportunity to deliberately misprint in a way that we know voters don’t detect very well. But would voters know that the BMD is not supposed to behave this way? I pose this just as an example of how to think about the “threat model” of voting machines.
  3. Those voters who noticed the machine cheating but didn’t speak up in the study claimed that, if it were a real polling place, they would have spoken up. Really? In real life, there are many accounts of voters seeing something they think is wrong at the polling place, but waiting until they get home before calling someone to talk about it. Many people feel a bit intimidated in situations like this. So it’s difficult to translate what people say they will do into what they will really do.
  4. Professor Gilbert suggests (in his talk) that he’ll change the prompt from “Please review your selection below. Touch your selection to continue.” to something like “Please review your selection below. If it is correct, touch it. If it is wrong, please notify a pollworker.” This does seem like it would improve the rate at which voters would report errors. It will be interesting to see.

Expert analysis of Antrim County, Michigan

Preliminary unofficial election results, posted at 4am after the November 3rd, 2020 election by election administrators in Antrim County, Michigan, were incorrect by thousands of votes, both in the Presidential race and in local races. Within days, Antrim County election administrators corrected the error, as confirmed by a full hand recount of the ballots, but everyone wondered: what went wrong? Were the voting machines hacked?

The Michigan Secretary of State and the Michigan Attorney General commissioned an expert to conduct a forensic examination. Fortunately for Michigan, one of the world’s leading experts on voting machines and election cybersecurity is a professor at the University of Michigan: J. Alex Halderman. Professor Halderman submitted his report to the State on March 26, 2021 and the State has released the report.

Analysis of the Antrim County, Michigan November 2020 Election Incident
J. Alex Halderman, March 26, 2021

And here’s what Professor Halderman found: “In October, Antrim changed three ballot designs to correct local contests after the initial designs had already been loaded onto the memory cards that configure the ballot scanners. … [A]ll memory cards should have been updated following the changes. Antrim used the new designs in its election management system and updated the memory cards for one affected township, but it did not update the memory cards for any other scanners.”

Here’s what that means: optical-scan voting machines don’t (generally) read the text of the candidates’ names; they look for an oval filled in at a specific position on the page. The Ballot Definition File tells the voting machine which candidate name corresponds to which position. It also tells the election-management system (EMS) software, which runs on the county’s election-management computers, how to interpret the memory cards that transfer results from the voting machines to the central computers.

Shown here at left is the original ballot layout, and at right is the new ballot layout. I have added the blue rectangles to explain Professor Halderman’s report.

Original (at left) and updated-October-2020 (at right) ballot layout for Village of Central Lake, MI. From Halderman’s report, Figure 1, page 12.

Now, if the voting machine is loaded with a memory card with the ballot definition at left, but fed ballots in the format at right, what will happen?

A voter’s mark next to the name “Melanie Eckhart” will be interpreted as a vote for “Mark Edward Groenink”. That is, in the first blue rectangle, you can see that the oval at that same ballot position is interpreted differently, in the two different ballot layouts.

A voter’s mark next to “Yes” in Proposal 20-1 will be interpreted as “No” (as you can see by looking at the second blue rectangle).
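To make the mechanism concrete, here is a minimal sketch of a ballot definition as a position-to-name mapping (the oval positions and the flat-dictionary layout are invented for illustration; Dominion’s actual ballot-definition format is more complex):

    # Hypothetical, simplified ballot definitions: each maps an oval's position
    # on the page to what a mark in that oval means.  Positions are invented.
    old_definition = {   # what the scanner's stale memory card believes
        ("column 2", "row 7"): "Mark Edward Groenink",
        ("column 2", "row 8"): "Melanie Eckhart",
        ("column 3", "row 4"): "Proposal 20-1: Yes",
        ("column 3", "row 5"): "Proposal 20-1: No",
    }
    new_layout = {       # what is actually printed next to each oval on the paper
        ("column 2", "row 7"): "Melanie Eckhart",
        ("column 2", "row 8"): "(write-in)",
        ("column 3", "row 4"): "Proposal 20-1: No",
        ("column 3", "row 5"): "Proposal 20-1: Yes",
    }

    def scanner_records(choice_on_paper):
        """The voter marks the oval printed next to choice_on_paper, but the
        scanner interprets that oval's position using the old definition."""
        position = next(p for p, name in new_layout.items() if name == choice_on_paper)
        return old_definition[position]

    print(scanner_records("Melanie Eckhart"))      # -> Mark Edward Groenink
    print(scanner_records("Proposal 20-1: Yes"))   # -> Proposal 20-1: No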

We’d expect that problem with any bubble-ballot voting system (though there are ways of preventing it, see below). But Dominion’s results-file format makes the problem far worse.

In Dominion’s file format for storing the results, every oval on the paper is given a sequential ID number, cumulative across all ballot styles used in the county. Now look at the figure above, just below the first blue rectangle. You’ll see that in the original “Local School District” race (at left) there are two write-in bubbles, but in the revised “Local School District” race (at right) there are three write-in bubbles. That means the ID number of every subsequent bubble, on this ballot and in all the ballot styles that come after it in this county, is off by one. Figure 2 of the report illustrates:

Figure 2 from Halderman report: D-Suite automatically assigns sequential ID numbers to voting targets across every ballot style. Correcting the ballot design for Central Lake Village required adding a write-in blank, which increased the ID number of every subsequent voting target by 1, including all targets in alphabetically later townships. Scanners in most precincts used the initial election definition (from before the change) and recorded votes under the old ID numbers. The EMS interpreted these ID numbers using the revised election definition, causing it to assign the votes to the wrong candidates.
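Here is a minimal sketch of that numbering scheme (the contests, candidate names, and ID values below are invented; only the county-wide sequential numbering, and the effect of inserting one extra write-in blank, follow the description above):

    # Hypothetical illustration of county-wide sequential target IDs.
    def assign_ids(ballot_styles):
        """Give every voting target one sequential ID, cumulative across styles."""
        ids, n = {}, 0
        for style, targets in ballot_styles:
            for target in targets:
                n += 1
                ids[n] = (style, target)
        return ids

    original = assign_ids([
        ("Central Lake Village", ["School Bd: write-in 1", "School Bd: write-in 2",
                                  "Proposal 20-1: Yes", "Proposal 20-1: No"]),
        ("Later Township", ["Trustee: Jones", "Trustee: Brown", "Trustee: write-in"]),
    ])
    revised = assign_ids([
        ("Central Lake Village", ["School Bd: write-in 1", "School Bd: write-in 2",
                                  "School Bd: write-in 3",   # the added write-in blank
                                  "Proposal 20-1: Yes", "Proposal 20-1: No"]),
        ("Later Township", ["Trustee: Jones", "Trustee: Brown", "Trustee: write-in"]),
    ])

    # A scanner using the ORIGINAL definition records a vote for Brown under ID 6,
    # but an EMS using the REVISED definition believes ID 6 means Jones:
    brown_id = next(i for i, t in original.items() if t[1] == "Trustee: Brown")
    print(brown_id, original[brown_id], "->", revised[brown_id])
    # 6 ('Later Township', 'Trustee: Brown') -> ('Later Township', 'Trustee: Jones')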

Within three days, Antrim County officials had basically figured out what went wrong, and corrected most of the errors before publishing and certifying official election results on November 6th. By November 21, Antrim County had corrected almost all of the errors in its official restatement of its election results.

How do we know that the original results were wrong and the new results are right? That is, how do we know that the “corrected” results are true, and not fraudulent? We have two ways of knowing:

  • Hand-marked paper ballots speak for themselves. The contest for President of the U.S. was recounted by hand in Antrim County. Those results–from what bipartisan workers and witnesses could see with their own eyes–matched the results from scanning the paper ballots using the ballot-definition file that matches the layout of the paper ballot.
  • A careful forensic examination by a qualified expert can explain what happened, and that is why Professor Halderman’s report is so valuable–it explains things step by step.

But not every contest was recounted by hand. The expert analysis finds a few contests where the reported vote totals are still incorrect; and in one of those contests (a marijuana ballot question) the outcome of the election was affected.

In the court case of Bailey v. Antrim, plaintiffs had submitted a report (December 13, 2020) from one Russell J. Ramsland making many claims about the Dominion voting machines and their use in Antrim County: adjudication, error rates, log entries, software updates, Venezuela. Section 5 of Professor Halderman’s report addresses all of these claims and finds them unsupported by the evidence.

What can we learn from all of this?

  • Although the unofficial reports posted at 4am on November 4th showed Joseph R. Biden getting more votes in Antrim County than Donald J. Trump, the results posted November 6th show correctly that, in Antrim County, Mr. Trump got more votes.
  • Regarding the presidential contest, election administrators figured this out for themselves without needing any experts.
  • In other contests, where no recount was done, most of the errors got corrected, but not all.
  • There is no evidence that Dominion voting systems used in Antrim County were hacked.

And what can we learn about election administration in general?

  • Hand-marked paper ballots are extremely useful as a source of “ground truth”.
  • If the ballot definition doesn’t match the paper ballot, results reported by the optical-scan voting machine can be nonsense. This has happened before–see my report describing a 2011 incident in New Jersey.
  • “Unforced error”: Dominion’s election-management system (EMS) software doesn’t check candidate names. The EMS computer has a file mapping ballot-position numbers to candidate names; and the memory card uploaded from the voting machine has its own file mapping ballot-position numbers to candidate names. If only the EMS software had checked that these files agreed (a check like the one sketched just after this list), then the problem would have been detected on election night, during the upload process.
  • Even without that built-in checking, to catch mistakes like this before the election, officials should do the kind of end-to-end pre-election logic-and-accuracy testing described in Professor Halderman’s report.
  • Risk-Limiting Audits (RLAs) could have detected and corrected this error, if they had been used systematically in the State of Michigan. RLAs are good protection not only against hacking, but also against mistakes and bugs.
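Here is a minimal sketch of the kind of agreement check described in the “unforced error” item above (the dictionary layout, ID values, and names are invented for illustration; Dominion’s actual file formats are different):

    # Hypothetical agreement check between the EMS's ballot definition and the
    # definition carried on an uploaded memory card.  A mismatch is cheap to
    # detect at upload time, before any results are tabulated.
    def definition_mismatches(ems_map, card_map):
        """Compare two {ballot-position: candidate-name} maps; return discrepancies."""
        mismatches = []
        for position in sorted(set(ems_map) | set(card_map)):
            if ems_map.get(position) != card_map.get(position):
                mismatches.append(
                    f"position {position}: EMS says {ems_map.get(position)!r}, "
                    f"card says {card_map.get(position)!r}")
        return mismatches

    # Example: the memory card was programmed before the ballot change, the EMS after it.
    ems_map  = {3: "School Bd: write-in 3", 4: "Proposal 20-1: Yes"}   # revised definition
    card_map = {3: "Proposal 20-1: Yes",    4: "Proposal 20-1: No"}    # stale definition

    for problem in definition_mismatches(ems_map, card_map):
        print("REFUSE UPLOAD:", problem)   # flag the mismatch instead of silently mis-tabulating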

How lever-action voting machines really worked

Over the years I have written many articles about direct-recording electronic (DRE) voting machines, precinct-count optical-scan (PCOS) voting machines, ballot-marking devices (BMDs), and other 21st-century voting technology. But I haven’t written much about 20th-century lever machines; these machines were banned by the U.S. Congress in the Help America Vote Act and have not been used since 2012.

Photo credit: Paul Buckowski / Times Union

Recently, upon a midnight dreary, while I pondered, weak and weary, over many a quaint and curious volume of forgotten technology, I came across the excellent 1993 book, The Way Things Really Work, by Henry Beard and Ron Barrett. This book has a clear explanation of the inner workings of mechanical lever voting machines, as follows.

I think it should now be clear why Congress banned this technology.

The book also has explanations of “How candy machines eat your quarters,” “How airlines lose your luggage,” “How elevators know to close their doors when you come running,” and so on.