April 19, 2019

Voting machines I recommend

I’ve written several articles critical of specific voting machines, and you might wonder, are there any voting machines I like?

For in-person voting (whether on election day or in early vote centers), I recommend Precinct-Count Optical Scan (PCOS) voting machines, with a ballot-marking device (BMD) available for those voters unable to mark a ballot by hand. For vote centers that must handle a wide variety of ballot styles (covering many different election districts), it may be appropriate to use ballot-on-demand printers to produce ballots for voters to fill in with a pen.

Five different U.S. companies make acceptable PCOS and BMD equipment:

Company       PCOS          BMD (acceptable for use by voters unable to mark ballots with a pen)
ClearBallot   ClearCast     ClearAccess
Dominion      ICP, ICP320   ICX BMD
ES&S          DS200         ExpressVote (BMD mode only), AutoMARK (autocast disabled)
Hart          Verity Scan   Verity TouchWriter
Unisyn        OVO           OVI, FVT

I do not recommend all-in-one voting machines that combine ballot marking and ballot tabulation in the same paper path, such as the ES&S ExpressVote (in all-in-one mode) or the Dominion ICE.

For mail-in ballots, I recommend Central Count Optical Scan (CCOS) voting machines with ballot-serial-number imprinters.

All five companies listed above make CCOS equipment, and at least three of them make CCOS scanners with serial-number imprinters: ClearBallot, ES&S, and Dominion. CCOS scanners from Hart (and perhaps Unisyn) do not imprint serial numbers; they can still be used in ballot-level comparison audits, but less efficiently.
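To illustrate why the imprinted serial number matters, here is a minimal sketch of a ballot-level comparison audit in Python. The data structures are hypothetical, not any vendor's format; the point is that a serial number lets the audit team retrieve exactly the sampled ballot and compare it to the machine's cast-vote record.

```python
import random

# Hypothetical cast-vote records (CVRs), keyed by the serial number the
# CCOS scanner imprinted on each ballot as it was scanned.  Real CVR
# formats differ by vendor; this is only an illustration.
cvrs = {
    1001: {"Mayor": "Alice"},
    1002: {"Mayor": "Bob"},
    1003: {"Mayor": "Alice"},
    # ... one record per scanned ballot
}

def audit_sample(cvrs, sample_size, human_read_ballot, seed=20190419):
    """Ballot-level comparison audit: sample serial numbers, pull those
    physical ballots, and compare the human reading to the CVR."""
    rng = random.Random(seed)  # in practice, a public dice-rolling ceremony
    sample = rng.sample(sorted(cvrs), min(sample_size, len(cvrs)))
    discrepancies = []
    for serial in sample:
        # With an imprinted serial number, staff can retrieve exactly this
        # ballot; without one, they must count into a stack to find it.
        human = human_read_ballot(serial)
        if human != cvrs[serial]:
            discrepancies.append((serial, cvrs[serial], human))
    return discrepancies
```

Without an imprinted serial number the same comparison is still possible, but auditors must locate each sampled ballot by counting through the stacks, which is slower and more error-prone.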

I make these recommendations mainly on the basis of security: let’s have election results we can trust, even though the computers can be hacked.  But PCOS or CCOS voting is also less expensive to equip than touchscreen voting.

Now I will explain the basis for these recommendations.


Reexamination of an all-in-one voting machine

The co-chair of the New York State Board of Elections has formally requested that the Election Operations Unit of the State Board re-examine the State’s certification of the Dominion ImageCast Evolution voting machine.

The Dominion ImageCast Evolution (also called Dominion ICE) is an “all-in-one” voting machine that combines in the same paper path an optical scanner (for hand-marked bubble ballots) with a printer (for machine-marked ballots via a touchscreen or audio interface).

Last October, I explained why this is such a bad idea that it should be considered a design flaw: if a hacker were able to install fraudulent software into the ICE, that software could print additional votes onto a voter’s ballot after the last time the voter sees the ballot. I’ll just give one example of what the hacker’s vote-stealing software could do: in any race where the voter undervotes (does not mark a choice), the hacked software could print a vote into the bubble for the candidate that the hacker wants to win.
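To make the flaw concrete, here is the gist of that vote-stealing logic in a few lines of Python. This is purely an illustrative sketch of the attack described above, with hypothetical device interfaces; it is not anyone’s actual firmware.

```python
# Illustrative sketch only -- hypothetical interfaces, not vendor firmware.
# An all-in-one machine controls both the scanner and the printer on the
# same paper path, after the voter has last seen the ballot.
def tabulate_and_cheat(marks, candidates_by_race, favored, print_bubble):
    """marks: {race: voter's choice or None}, as read by the scanner.
    print_bubble(race, candidate): fills a bubble on the paper ballot."""
    for race, choice in marks.items():
        favorite = favored.get(race)
        if choice is None and favorite in candidates_by_race.get(race, ()):
            print_bubble(race, favorite)   # forge a mark the voter never sees
            marks[race] = favorite         # and tabulate it as a vote
    return marks
```

Because the forged mark is printed with the same kind of ink, in the same bubble, as a legitimate machine-marked vote, no later recount of the paper can distinguish it from the voter’s intent.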

The manufacturer may argue that “our software doesn’t do that;” true enough, the factory-installed software doesn’t do that–unless hackers hack into the manufacturer’s network.  They may argue that “our voting machines are not hackable;” well, it’s admirable that they are using modern-day authentication methods for the installation of new software, but in the current state of the art, it’s still the case that practically any computer is hackable.

And therefore, we rely on recounts and risk-limiting audits of the paper ballot as marked by the voter as our ultimate protection against computer hacking. An all-in-one voting machine that combines printing and scanning in the same paper path seriously compromises that protection.
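To give a rough sense of how efficient that protection is when the paper trail is trustworthy: a ballot-level comparison audit needs to inspect surprisingly few ballots. The sketch below uses a simplification of the “super-simple” approximation from Philip Stark’s risk-limiting-audit work (n ≈ 2γ·ln(1/α)/m, with γ ≈ 1.039), assuming no discrepancies are found; real audits use vetted tools and escalate when discrepancies appear.

```python
import math

def comparison_audit_sample_size(margin, risk_limit, gamma=1.03905):
    """Rough initial sample size for a ballot-level comparison RLA,
    assuming no discrepancies are found (a simplification of Stark's
    "super-simple" formula: n ~ 2*gamma*ln(1/alpha)/margin).
    margin: diluted margin as a fraction of ballots cast."""
    return math.ceil(2 * gamma * math.log(1 / risk_limit) / margin)

# A contest with a 5% margin, audited to a 5% risk limit, needs only
# about 125 ballots -- provided each sampled ballot can be found and
# trusted as the voter's own marking.
print(comparison_audit_sample_size(margin=0.05, risk_limit=0.05))  # -> 125
```

That efficiency is exactly what is lost if the machine itself may have altered the paper: auditing the paper trail cannot catch fraud that is already printed onto the ballots.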

Douglas A. Kellner, co-chair of the New York State Board of Elections, wrote on March 7, 2019 to his fellow Board commissioners,

Two respected professors of computer science have provided reports that the Dominion ImageCast Evolution voting machine has a “design flaw.” … “after you mark your ballot, after you review your ballot, the voting machine can print more votes on it!” …

[New York State] Election Law § 7-201 requires that the State Board of Elections examine and approve each type of voting machine or voting system before it can be used in New York State…. The examination criteria for certification of voting equipment … requires … “the vendor shall identify each potential point of attack.” …

I have carefully reviewed Dominion’s [submission].  I do not see anything in the submission that addressed the point of attack or threats identified by Professors Appel and DeMillo. …

If there is a serious possibility that an insider could install malware that could program the printer to add marks to a ballot without the possibility of verification by the voter, then the entire audit process is compromised and circumvented. If it was possible for the machine to add a voting mark to the ballot without verification by the voter, the audit is not meaningful because it cannot confirm that the ballot was counted in the manner intended by the voter. …

Election Law § 7-201(3) provides that:  “If at any time after any machine or system has been approved,…the state board of elections has any reason to believe that such machine or system does not meet all the requirements for voting machines or systems set forth in this article, it shall forthwith cause such machine or system to be examined again.” …

In view of the omission of the security threats identified by Professors Appel and DeMillo in the submission by Dominion in support of its application for certification of the ImageCast Evolution, and in view of the absence of any analysis of this issue in the SLI and NYSTEC reports, I request that the Election Operations Unit of the State Board examine again the ImageCast Evolution to consider the vulnerability of the voting system because the printer could be programmed to add marks to ballots without verification by the voter, and that SLI and NYSTEC supplement their reports with respect to these issues.

Princeton Students: Learn the Design & Ethics of Large-Scale Experimentation

Online platforms, which monitor and intervene in the lives of billions of people, routinely host thousands of experiments to evaluate policies, test products, and contribute to theory in the social sciences. These experiments are also powerful tools to monitor injustice and govern human and algorithm behavior. How can we do field experiments at scale, reliably, and ethically?

This spring I’m teaching the undergraduate/graduate class SOC 412: Designing Field Experiments at Scale for the second year. In this hands-on class for students in the social sciences, computer science, and HCI, you will start experimenting right away, learn best practices in experiments in real-world settings, and learn to think critically about the knowledge and power of experimentation. The final project is a group project to design or analyze a large-scale experiment in a novel way. I approach the class with an expectation that each project could become a publishable academic research project.

Project Opportunities for 2019

This year, students will have opportunities to develop the following final projects:

  • Working with WikiLovesAfrica to test ideas for broadening global contributions to Wikipedia and for broadening how media made by Africans is used and understood by the rest of the world
  • Data-mining a dataset from roughly a thousand experiments conducted on Wikipedia to make new discoveries about participation online
  • Developing new experiments together with moderators on reddit
  • Your own experiment, including your senior project, if your department approves
  • … additional opportunities TBD

Unsolicited Student Reviews from 2018

“I  recently accepted an associate product manager position at [company]. They essentially work to AB test commercial initiatives by running campaigns like field experiments.  In fact, it seems like a lot of what was covered in SOC 412 will be relevant there!  As a result, I felt that working as a product manager there would give me a lot of influence over not only statistical modeling approaches, but also user privacy and ethical use of data.”

“From my experience, very few professors take the time to provide such personalized feedback and I really appreciate it.”

“Take! This! Class! Even if you’ll never do an online experiment in your line of work, it’s important to know the logistical and ethical issues because such experiments are going on in your daily life, whether you know it or not.”

“Instructions were always very clear. Grading guidelines were also very clear and Nathan’s feedback was always super helpful!”

“I appreciate the feedback you gave throughout the course, and I also value the conversations we had outside of class. As somebody that’s still trying to find myself as a researcher, it was very helpful to get your perspective.”

Sample Class Projects from 2018

Here are examples of projects that students did last year:

Promoting Inclusion and Participation in an Online Gender-Related Discussion Community

Many users join gender-related discussions online to discuss current events and their personal experiences. However, people sometimes feel unwelcome in those communities for two reasons. First, they may be interested in participating in constructive discussions, but their opinions differ from the community’s vocal majority. Accordingly, they feel uncomfortable voicing these opinions due to fear of an overwhelmingly negative reaction. Second, as we discovered in a survey, many participants in online gender conversations wish to make the experience uncomfortable for commenters.

In this ongoing study, two undergraduate students worked with moderators of an online community to test how interventions that give first-time participants more accurate information about the values of the community and its organizers affect newcomer participation.
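To give a flavor of the analysis such a project involves, here is a minimal sketch, with entirely made-up numbers, of comparing newcomer participation rates between a treatment and a control arm. Real class projects use pre-registered analysis plans and more robust estimators.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_ztest(x_treat, n_treat, x_ctrl, n_ctrl):
    """z-test comparing participation rates between two experiment arms."""
    p1, p2 = x_treat / n_treat, x_ctrl / n_ctrl
    pooled = (x_treat + x_ctrl) / (n_treat + n_ctrl)
    se = sqrt(pooled * (1 - pooled) * (1 / n_treat + 1 / n_ctrl))
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p1 - p2, z, p_value

# Hypothetical counts: 120 of 800 treated newcomers commented again,
# versus 90 of 800 in the control group.
print(two_proportion_ztest(120, 800, 90, 800))
```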


🗳 Auditing Facebook and Google Election Ad Policies 🇺🇸

Austin Hounsel developed software to generate advertisements and direct volunteers to test and compare the boundaries of Google’s and Facebook’s election advertising policies. In the class, Austin chose statistical methods and developed an experiment plan. Our findings were published in The Atlantic and will also be submitted as a computer science conference paper (full code, data, and details are available on GitHub).

In this study, we asked how common these mistaken prohibitions are and what kinds of ads are mistakenly prohibited by Facebook and Google. Over 23 days, 7 U.S. citizens living inside and outside the United States attempted to publish 477 non-election advertisements that varied in the type of ad, its potentially-mistaken political leaning, and its geographic targeting to a federal or state election voter population. Google did not prohibit any of the ads posted. Facebook prohibited 4.2% of the submitted ads.
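For readers who want to sanity-check those figures: 4.2% of 477 ads is about 20 prohibited ads. Here is a quick sketch (counts reconstructed from the reported percentage, so treat them as approximate) that also attaches a confidence interval to that rate:

```python
from math import sqrt

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Facebook prohibited 4.2% of 477 submitted ads -- roughly 20 ads.
lo, hi = wilson_interval(20, 477)
print(f"{20/477:.1%} prohibited, 95% CI roughly {lo:.1%} to {hi:.1%}")
```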

Improvements in 2019

Last year was the very first prototype of this class. Thanks to helpful feedback from students, I have adjusted the course to:

  • Provide more space for student discussion, via a dedicated precept period
  • Speed up the research design process, with more refined software examples
  • Improve team selection so students can spend more time focused on projects and less on choosing projects
  • Improve the course readings and materials
  • Move class discussions from Piazza to Slack
  • Streamline the essay grading process for me and for students

I’ve written more about what I learned from the class in a series of posts here at Freedom to Tinker (part 1) (part 2).