August 18, 2019

How to do a Risk-Limiting Audit

In the U.S. we use voting machines to count the votes. Most of the time they’re very accurate indeed, but they can make big mistakes if there’s a bug in the software, or if a hacker installs fraudulent vote-counting software, or if there’s a misconfigured ballot-definition file, or if the scanner is miscalibrated. Therefore we need a Risk-Limiting Audit (RLA) of every election to assure, independently of the voting machines, that they got the correct outcome. If your election official picks a risk limit of 5%, that means that if the voting system got the wrong outcome, there’s at least a 95% chance that the RLA will correct it (and there’s a 0% chance the RLA will mess up an already-correct outcome).

But how does one conduct an RLA? The statistics are not trivial, and the administrative procedures are not obvious: how do you handle all those batches of paper ballots? And every state has different election procedures, so there’s no one-size-fits-all RLA method.
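To give a flavor of those statistics, here is a toy sketch (not any state’s official procedure) of one RLA method, the BRAVO ballot-polling test, for a simple two-candidate contest; all the numbers below are made up for illustration:

```python
import random

def bravo_audit(reported_share, true_share, risk_limit, max_draws=20000):
    """Toy BRAVO ballot-polling test for a two-candidate contest.

    Null hypothesis: the reported winner really got at most 50% of the
    votes.  Each randomly sampled ballot updates a likelihood ratio T;
    if T ever reaches 1/risk_limit the audit confirms the reported
    outcome, otherwise it escalates to a full hand count.
    """
    t = 1.0
    for _ in range(max_draws):
        if random.random() < true_share:    # sampled ballot shows the reported winner
            t *= reported_share / 0.5
        else:                               # sampled ballot shows the loser
            t *= (1 - reported_share) / 0.5
        if t >= 1 / risk_limit:
            return "confirm"
        if t <= 0.01:                       # hopeless; escalate early
            return "full hand count"
    return "full hand count"

random.seed(1)
# Reported winner share 55%, but suppose the machines were wrong and the
# "winner" really got only 48% of the votes.  With a 5% risk limit, the
# audit must wrongly confirm no more than 5% of the time.
trials = 2000
wrong_confirms = sum(bravo_audit(0.55, 0.48, 0.05) == "confirm"
                     for _ in range(trials))
print(wrong_confirms / trials)  # empirically at or below the 0.05 risk limit
```

A ballot-comparison audit, the kind covered in the implementation workbook described below, uses a different test that compares each sampled ballot to the machine’s interpretation of that very ballot, and typically needs far smaller samples.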

Two good ways to learn something are to read a book or find an experienced teacher. But until recently, most (though not all) papers about RLAs were difficult for the election-administrator audience to understand, and practically no one had experience running RLAs because they’re so new.

That’s changing for the better. More states are conducting RLA pilots, which means more people have experience designing and implementing RLAs, and some of those people do us the public service of writing it down in a handbook for election administrators.

Jennifer Morrell has just published the first two parts of a guide to the practical aspects of RLAs: what are they, why do them, how to do them.

Knowing It’s Right, Part One: A Practical Guide to Risk-Limiting Audits. A high level overview for state and local stakeholders who want to know more about RLAs before moving on to the implementation phase.

Knowing It’s Right, Part Two: Risk-Limiting Audit Implementation Workbook. Soup-to-nuts information on how election officials can conduct a ballot-comparison audit.

I really like these manuals. And if you’re looking for experts with real experience in RLAs, in addition to Ms. Morrell there are the authors of these experience reports on RLA pilots:

Orange County, CA Pilot Risk-Limiting Audit, by Stephanie Singer and Neal McBurnett, Verified Voting Foundation, December 2018.

City of Fairfax, VA Pilot Risk-Limiting Audit, by Mark Lindeman, Verified Voting Foundation, December 2018.

And stay tuned for reports from Indiana, Rhode Island, Michigan, and perhaps even New Jersey.

Choosing Between Content Moderation Interventions

How can we design remedies for content “violations” online?

Speaking today at CITP is Eric Goldman (@ericgoldman), a professor of law and co-director of the High Tech Law Institute at Santa Clara University School of Law. Before he became a full-time academic in 2002, Eric practiced Internet law for eight years in Silicon Valley. His research and teaching focus on Internet, IP, and advertising law topics, and he blogs on these topics at the Technology & Marketing Law Blog.

Eric reminds us that content moderation questions are front page stories every week. Lawmakers and tech companies are wondering how to create a world where everyone can have their say, people have a chance to hear from them, and people are protected from harms.

Decisions about content moderation depend on a set of questions, says Eric:

“What rules govern online content?” “Who creates those rules?” “Who adjudicates rule violations?” Eric is most interested in a final question: “What consequences are imposed for rule violations?”

So what should we do once a content violation has been observed? The traditional view is binary: delete the content or account, or keep the content and account. For example, under the Digital Millennium Copyright Act, platforms are required to “remove or disable access to” infringing material; it allows no option less than removing the material from visibility. The DMCA also specifies two other remedies: terminating “repeat infringers” and issuing subpoenas to identify/unmask alleged infringers. Overall, however, the primary intervention is to remove things, and there’s no lesser action.

Next, Eric tells us about civil society principles that adopt a similar idea of removal as the primary remedy. For example, the Manila Principles on Intermediary Liability assume that removal is the one available intervention, but say that it should be necessary, proportionate, and use “the least restrictive technical means.” Similarly, the Santa Clara Principles assume that removal is the one available option.

Eric reminds us that there are many remedies between removal and keeping content. Why should we pay attention to them? With a wider range of options, we can (a) avoid collateral damage from overbroad remedies and (b) develop a broader remedy toolkit to match the needs of different communities. With a wider palette of options, we would also need principles for choosing among those remedies. Eric wants to be able to suggest options that regulators or platforms have at their disposal when making policy decisions.

To illustrate the value of being able to differentiate between remedies, Eric talks about communities that have rich sets of rules with a range of consequences other than full approval or removal, such as churches, fraternities, and sports leagues.

Eric then offers us a taxonomy of remedies, drawn from examples in use online: (a) content restrictions, (b) account restrictions, (c) visibility reductions, (d) financial levers, and (e) other.

Eric asks: once we have listed remedies, how could we possibly choose among them? Eric talks about different theories for choosing, but he doesn’t think those models are useful for this conversation. Furthermore, conversations about government-imposed remedies are different from conversations about remedies for internet content violations.

Unlike internet content policies, says Eric, government remedies:

  • are determined by elected officials
  • are funded by taxes
  • are enforced by police power when people don’t comply
  • include some remedies available only to the government (like jail or death)
  • are subject to constitutional limits

Finally, Eric shares some early thoughts about how to choose among possible remedies:

  • Remedy selection manifests a service’s normative priorities, which differ
  • Possible questions to ask when choosing among remedies:
    • How bad is the rule violation?
    • How confident is the service that the rule was actually violated?
    • How open is the community?
    • How will the remedy affect other community members?
    • How to balance behavior conformance against user engagement?
  • Site design can prevent violations
    • Educate and socialize contributors (for example)
  • Services with only binary remedies aren’t well-positioned to solve problems, and maybe other actors are in a better position
  • Typically, private remedies are better than judicially imposed remedies, but at cost of due process
  • Remedies should be necessary & proportionate
  • Remedies should empower users to choose for themselves what to do

ImageCast Evolution voting machine: Mitigations, misleadings, and misunderstandings

Two months ago I wrote that the New York State Board of Elections was going to request a reexamination of the Dominion ImageCast Evolution voting machine, in light of a design flaw that I had previously described. The Dominion ICE is an optical-scan voting machine. Most voters are expected to feed in a hand-marked optical-scan ballot; but the ICE also has an integrated ballot-marking device for use by those voters who wish to mark their ballot by machine. The problem is, if the ICE’s software were hacked, the hacked software could make the machine print additional (fraudulent) votes onto hand-marked paper ballots. This would defeat the purpose of voter-verifiable paper ballots, which are meant to serve as a safeguard against buggy or fraudulent software.

The Board of Elections commissioned an additional report from SLI Compliance, which had done the first certification of this machine back in April 2018. SLI’s new report dated March 14, 2019 is quite naive: they ran tests on the machine and “at no point was the machine observed making unauthorized additions to the ballots.” Well indeed, if you test a machine that hasn’t (yet) been hacked, it won’t misbehave. (SLI’s report is pages 7-9 of the combined document.)

The Board of Elections then commissioned NYSTEC, a technology consulting company, to analyze SLI’s report. NYSTEC seems less naive: they summarized the issue under examination as follows:

NYSTEC, NYS State Board of Elections and computer science experts have long agreed that when an adversary has the ability to modify or replace the software/firmware that controls a voting machine then significant and damaging impacts to an election are possible. What makes this type of attack [the one described by Prof. Appel] different however is that the voted paper ballots from a compromised combination BMD/scanner machine could not be easily used to audit the scanner results because they have been compromised. If the software/firmware was compromised to alter election results, on a regular scanner (without BMD capabilities) one still has the voted ballots to ensure the election can be properly decided. This would not be the case with the BMD/scanner attack and if such an attack were to occur, then a forensic analysis would be needed on all ballots in question to determine if a human or machine made the mark. Such a process is unlikely to be trusted by the public.

[page 12 of the combined document]

NYSTEC’s report (and not just this paragraph) agrees that (1) the hardware is physically capable of marking additional votes onto a voted ballot and (2) this is a very serious problem. SLI seems more confused: they say the source code they reviewed will not (ask the hardware to) mark additional votes onto a voted ballot.

Mitigations (practical or not?)

NYSTEC suggests that the problem could be mitigated by physically preventing the hardware from printing votes onto any ballot except when the machine is deliberately being used in BMD mode (e.g., to accommodate a voter with a disability). Their suggested physical mitigations are:

* Leave the printer access panel open as this will prevent an unauthorized ballot from being marked without detection.

* Remove the printer ink and only insert it when the system is being used in BMD mode.

* Insert a foam block inside the printer carriage, as this will prevent the system from ever printing on an already voted ballot.

[page 73 of the combined document]

Then they explain why some of these physical mitigations “may not be feasible.”

Without the mitigations, NYSTEC rates the “Impact” of this Threat Scenario as “Very High”, and with the mitigations they rate the impact as “Low”.


Based on the reports from SLI and NYSTEC, the operations staff (Thomas Connolly, Director of Operations) of the Board of Elections prepared a 3-page recommendation [pages 2-4 of the combined document]. The staff’s key statement is a mischaracterization of NYSTEC’s conclusion: they write, “NYSTEC believes that SLI security testing of the Dominion source code provided reasonable assurance that malicious code that could be triggered to enable the machine to print additional marks on an already marked ballot, is not present in the version tested.”

Yes, NYSTEC remarks in passing that Dominion’s source code submitted for review does not already contain malicious code, but that’s not the conclusion of NYSTEC’s own report! NYSTEC’s actual recommendation is that this is a real threat, and election officials who use this machine should perform mitigations.

The staff’s recommendation is to mitigate by (1) leaving the printer access panel open, which prevents printed-on ballots from proceeding automatically to the ballot box (a “preventative control”), (2) checking the printer’s “hardware counter” at the close of polls to see if more pages were printed on than the number of voters who used BMD-mode (a “detective control”), and (3) instructing pollworkers to be aware of the “printer running when it should not be” (a “detective control”). (I wonder whether the so-called “hardware counter” is really under the control of software.)
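Detective control (2) amounts to a simple reconciliation at the close of polls. A minimal sketch of that arithmetic, with hypothetical function names and numbers (and subject to my caveat about whether the counter is really hardware):

```python
def printer_counter_check(pages_printed, bmd_voters):
    """Hypothetical close-of-polls reconciliation for detective control (2).

    The printer should have printed exactly one ballot per voter who used
    BMD mode; any excess pages are a red flag that the machine printed on
    ballots it shouldn't have.  (This only detects fraud if the hardware
    counter itself is beyond the reach of hacked software.)
    """
    excess = pages_printed - bmd_voters
    if excess > 0:
        return "ALERT: {} more page(s) printed than BMD voters".format(excess)
    return "OK: no excess pages printed"

print(printer_counter_check(12, 12))  # OK: no excess pages printed
print(printer_counter_check(15, 12))  # ALERT: 3 more page(s) printed than BMD voters
```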

The NY State Board of Elections, at its meeting of April 29, 2019, accepted the recommendations of the Board staff. (This video, from 37:30 to 44:20). Commissioner Kellner did point out that, indeed, it is a misunderstanding of computer security to say that because the malicious code is not already present in the source code, there is no threat from malicious code.

Misunderstandings (deliberate or not?)

The Board of Elections also directed Dominion to revise its “Threat Register”, that is, the security threats that should be considered when assessing the robustness of their voting machines. In response to the SLI and NYSTEC reports, Dominion added this:

Tampering with installed software
Description – The software installed on the PCOS devices is reviewed, built and tested by a Voting System Test Lab (VSTL). These Trusted Builds are installed on the PCOS devices and control their operation. A special set of credentials is required to install the software and integrity checks are performed during installation to ensure a valid build is being installed. Hash values are generated by the VSTL for both the installation files and the files on the PCOS device after installation. The hash values are recorded in a System ID Guide for jurisdictions to use to verify the integrity of the PCOS software.
Threat – A malicious actor obtains unauthorized physical access to the PCOS devices after pre-election “logic and accuracy” testing but before Election Day, successfully defeating the physical controls that Election Administrators have in place. The installation software is counterfeited and fraudulent software is installed. The malicious actor also defeats the controls in place related to the hash codes which are verified on Election Day. Then, this malicious actor once again obtains unauthorized physical access to the PCOS devices after the Election, again defeating physical security practices in place, and installs the certified software after Election Day.
Impact – By changing the software, the malicious actor makes the voting system inaccurate or inoperable.
Impacted security pillars – Integrity and availability.
Risk rating – Low.
Mitigation – Implement proper processes (access control) for memory card handling and device storage. Verify the integrity of the installation software prior to and after installation. During points where the physical chain of custody of a device is unknown, verify the integrity of the installed software. Cryptographic and digital signing controls mitigate tampering with installation software. Tampering is evident to operators when verifying the software installed on the device. For more information, refer to Sections 4 and 5.5 of this document. Also, refer to the VSTL generated hash values.

[Page 76 of the combined document]

There are two things to note here. First, this wasn’t already in their Threat Register by 2018? Really? Computer scientists have been explaining for 20 years that the main threat to a voting machine is that someone might install fraudulent vote-stealing software, and Dominion Voting Systems didn’t notice that?

Second, Dominion has written the Threat description in a very limited way: someone has physical access to the machine. But the threat is much broader than that. For example:

(1) Someone anywhere in the world hacks into the computer systems of Dominion Voting Systems and alters the firmware-update image to be installed on new or field-upgraded voting machines. [Notice how they use the passive voice, “These Trusted Builds are installed on the PCOS devices,” to avoid thinking about who installs them, and how they are installed, and what threats there might be to that process!] Now the installed firmware doesn’t correspond to the source code that was inspected and certified. The hacker doesn’t need physical access to the voting machines at all! And the “hash codes” are not much help, because the fraudulent software can report the nonfraudulent hash codes.

Or, (2) Someone steals the cryptographic keys, thus defeating the “cryptographic and digital signing controls.”

Or (3) Don’t do it just before the election; do it once and let it stay in effect for 10 elections in a row.

Or (4) Bypass all the “cryptographic and digital signing controls” by hacking into the lower levels of the computer, through the BIOS, or through the OS, or the USB drivers, etc.

Or (5), (6), (7) that I don’t have room to describe or haven’t even thought of. The point is, there are many ways into a computer system, and Dominion paints a false, rosy picture by limiting the threat to the same physical-access attack that was already demonstrated on their previous generation of machines.
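The hash-code point in (1) is worth making concrete: an integrity check is only as good as the equipment that performs it. The hash must be computed by trusted, independent equipment reading the installed image, because a hacked machine asked to report its own hash can simply print the expected value. A minimal sketch (the file contents and workflow here are hypothetical, not Dominion’s actual procedure):

```python
import hashlib
import os
import tempfile

def sha256_of_file(path):
    """Compute SHA-256 of a firmware image, reading in chunks so that
    large images don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Demo with a stand-in "firmware image".  The crucial point: this check
# only means something when a trusted, separate machine reads the image
# off the device and computes the hash itself.  Asking the (possibly
# hacked) voting machine to report its own hash proves nothing, because
# fraudulent software can simply print the expected value.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"certified trusted build")
    image_path = f.name

vstl_published = hashlib.sha256(b"certified trusted build").hexdigest()
print(sha256_of_file(image_path) == vstl_published)  # True: image matches
os.unlink(image_path)
```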


No one is asking companies like Dominion to do the impossible, that is, build a perfectly secure voting machine. (Well, actually, some people are asking, but please let’s recognize that it’s impossible.) Instead, we just want two things:

  1. Make them as secure as you can. Those “cryptographic and digital signing controls” are better than nothing (and weren’t present on voting machines built 15 years ago).
  2. Recognize that there’s no way to absolutely prevent them from being hacked, and that’s why we need Risk-Limiting Audits of the paper ballots. But those RLAs won’t be effective if the hardware of the machine is designed so that (under the control of hacked software) it can mark more votes on the ballot after the last time the voter saw the paper.

And I ask New York State: If some county actually buys these machines, will the county be required to adopt the mitigation procedures approved at the April 29th Board meeting?