April 22, 2019

BMDs are not meaningfully auditable

This paper has just been released on SSRN. In it we ask: if a BMD were hacked to cheat by printing rigged votes onto the paper ballot, and even supposing voters carefully inspected their ballots (which most voters don’t do), and even supposing a voter noticed that the wrong vote was printed, what then? To assess this question, we characterize the circumstances under which a voting system is “contestable” or “defensible.” Voting systems must be contestable and defensible in order to be meaningfully audited, and unfortunately BMDs are neither. Hand-marked paper ballots, counted by an optical-scan voting machine, are both contestable and defensible.

Ballot-marking devices (BMDs) cannot assure the will of the voters

by Andrew W. Appel, Richard A. DeMillo, and Philip B. Stark

Abstract:

Computers, including all modern voting systems, can be hacked and misprogrammed. The scale and complexity of U.S. elections may require the use of computers to count ballots, but election integrity requires a paper-ballot voting system in which, regardless of how they are initially counted, ballots can be recounted by hand to check whether election outcomes have been altered by buggy or hacked software. Furthermore, secure voting systems must be able to recover from any errors that might have occurred.

However, paper ballots provide no assurance unless they accurately record the vote as the voter expresses it. Voters can express their intent by hand-marking a ballot with a pen, or using a computer called a ballot-marking device (BMD), which generally has a touchscreen and assistive interfaces. Voters can make mistakes in expressing their intent in either technology, but only the BMD is also subject to systematic error from computer hacking or bugs in the process of recording the vote on paper, after the voter has expressed it. A hacked BMD can print a vote on the paper ballot that differs from what the voter expressed, or can omit a vote that the voter expressed.

It is not easy to check whether BMD output accurately reflects how one voted in every contest. Research shows that most voters do not review paper ballots printed by BMDs, even when clearly instructed to check for errors. Furthermore, most voters who do review their ballots do not check carefully enough to notice errors that would change how their votes were counted. Finally, voters who detect BMD errors before casting their ballots can correct only their own ballots, not systematic errors, bugs, or hacking. There is no action that a voter can take to demonstrate to election officials that a BMD altered their expressed votes, and thus no way voters can help deter, detect, contain, and correct computer hacking in elections. That is, not only is it inappropriate to rely on voters to check whether BMDs alter expressed votes, it doesn’t work.

Risk-limiting audits of a trustworthy paper trail can check whether errors in tabulating the votes as recorded altered election outcomes, but there is no way to check whether errors in how BMDs record expressed votes altered election outcomes. The outcomes of elections conducted on current BMDs therefore cannot be confirmed by audits. This paper identifies two properties of voting systems, contestability and defensibility, that are necessary conditions for any audit to confirm election outcomes. No commercially available EAC-certified BMD is contestable or defensible.

To reduce the risk that computers undetectably alter election results by printing erroneous votes on the official paper audit trail, the use of BMDs should be limited to voters who require assistive technology to vote independently.

CITP’s OpenWPM privacy measurement tool moves to Mozilla

As part of my PhD at Princeton’s Center for Information Technology Policy (CITP), I led the development of OpenWPM, a tool for web privacy measurement, with the help of many contributors. My co-authors and I first released OpenWPM in 2014 with the goal of lowering the technical costs of large-scale web privacy measurement. The tool’s success exceeded our expectations; it has been used by over 30 academic studies since its release, in research areas ranging from computer science to law.

OpenWPM has a new home at Mozilla. After graduating in 2018, I joined Mozilla’s security engineering team to work on strengthening Firefox’s tracking protection. We’re committed to ensuring users are protected from tracking by default. To that end, we’ve migrated OpenWPM to Mozilla, where it will remain open source to ensure researchers have the tools required to discover privacy-infringing practices on the web. We are also using it ourselves to understand the implications of our new anti-tracking features, to discover fingerprinting scripts and add them to our tracking protection lists, as well as to collect data for a number of ongoing privacy research projects.

Over the past six months we’ve started a number of efforts to significantly improve OpenWPM:

1. Cloud-friendly data storage. OpenWPM has long used SQLite to store crawl data. This makes it easy for anyone to install the tool, run a small measurement, and inspect the dataset locally. However, this is very limiting for large-scale measurements. OpenWPM can now save data directly to Amazon S3 in Parquet format, making it possible to launch crawls on a cluster of machines.

2. Support for modern versions of Firefox. We are in the process of migrating all of OpenWPM’s instrumentation to WebExtensions, which is necessary to run measurements with Firefox 57+.

3. Modular instrumentation. OpenWPM’s instrumentation was previously deeply embedded in the crawler, making it difficult to use outside of a crawling context. We’ve now refactored the instrumentation into a separate npm package that can easily be imported by any Firefox WebExtension. In fact, we’ve already used the module to collect data in one of our user studies.

4. A standard set of analysis utilities. To further ease analyses on OpenWPM datasets, we’ve bundled the many small utility functions we’ve developed over the years into a single utilities package available on PyPI.
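As a flavor of what such helpers look like, here is a hypothetical example of the kind of small function an analysis-utilities package might bundle: classifying a request as first- or third-party by comparing hostnames. The function names are illustrative, not the package’s actual API.

```python
# Hypothetical analysis helper: first- vs. third-party classification.
# Names are illustrative; this is not the real utilities package API.
from urllib.parse import urlparse


def hostname(url: str) -> str:
    """Return the hostname portion of a URL (empty string if absent)."""
    return urlparse(url).hostname or ""


def is_third_party(request_url: str, top_url: str) -> bool:
    """A request is third-party when its hostname differs from the page's.

    A production helper would compare registrable domains (eTLD+1)
    rather than raw hostnames, so that cdn.news.example is not
    misclassified as third-party on news.example pages.
    """
    return hostname(request_url) != hostname(top_url)


print(is_third_party("https://tracker.example/px.gif", "https://news.example/"))
print(is_third_party("https://news.example/app.js", "https://news.example/"))
```

Collecting such functions in one PyPI package means every study that analyzes OpenWPM data starts from the same, tested definitions instead of re-deriving them.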

5. Data collection and release. Since 2015, CITP has collected monthly 1-million-site web measurements using OpenWPM. All of this data is available for download, but once Gunes Acar moves on from CITP in a few months, the CITP measurements will end. At Mozilla, we are exploring options to regularly collect and release new measurements.

All of these efforts are still underway, and we welcome community involvement as we continue to build upon them. You can find us hanging out in #openwpm on irc.mozilla.org.

The Trust Architecture of Blockchain: Kevin Werbach at CITP

In 2009, bitcoin inventor Satoshi Nakamoto argued that bitcoin was “a system for electronic transactions without relying on trust.”

That’s not true, according to today’s CITP speaker Kevin Werbach (@kwerb), a professor of Legal Studies and Business Ethics at the Wharton School at UPenn. Kevin is author of a new book with MIT Press, The Blockchain and the New Architecture of Trust.

A world-renowned expert on emerging technology, Kevin examines business and policy implications of developments such as broadband, big data, gamification, and blockchain. Kevin served on the Obama Administration’s Presidential Transition Team, founded the Supernova Group (a technology conference and consulting firm) and helped develop the U.S. approach to internet policy during the Clinton Administration.

Blockchain does actually rely on trust, says Kevin. He tells us the story of the cryptocurrency exchange QuadrigaCX, which claimed that millions of dollars in cryptocurrency were lost when its CEO passed away. While the whole story was more complex, Kevin says, it reveals how much bitcoin transactions rely on many kinds of trust.

Rather than removing the need for trust, blockchain offers a new architecture of trust compared to previous models. Peer-to-peer trust is based on personal relationships. Leviathan trust, described by Hobbes, is a social contract with the state, which then has the power to enforce private agreements between people. The power of the state makes us more trusting in private relationships, if you trust the state and if the legal system works. Intermediary trust involves a central entity that manages transactions between people.

Blockchain is a new kind of trust, says Kevin. With blockchain trust, you can trust the ledger without (so it seems) trusting any actor to validate it. For this to work, transactions need to be very hard to change without central control – if anyone had the power to make changes, you would have to trust them.

Why would anyone value the blockchain? Blockchain minimizes the need for certain kinds of trust: it removes single points of failure, reduces the risk of monopoly, and reduces friction from intermediation. Blockchain also expands trust by minimizing reconciliation, automating execution, and increasing the auditability of records.

What could possibly go wrong? Even if the blockchain ledger is auditable and trustworthy, the transaction record isn’t the whole system. Kevin points out that 80% of all bitcoin users rely on centralized key storage. He also cited estimates that 20-80% of all Initial Coin Offerings were fraudulent.

Kevin tells us about “Vlad’s conundrum”: there’s a direct conflict between the design of the blockchain system and any regulatory model. The blockchain doesn’t know the difference between transactions, and there’s no entity that can say “no, that’s not okay.” Kevin tells us about the use of the blockchain for money laundering and financing terrorism. He also tells us about the challenge of moderating child pornography data that has been distributed across the blockchain, exposing every bitcoin node to legal risks.

None of these risks are as simple as they seem. Legal enforcement is carried out by humans who often consider intent. Simply possessing digital bits that represent child pornography data will not doom bitcoin. Furthermore, systems are less decentralized or anonymous than they appear. Regulations about parts of the system at the edges and endpoints of the blockchain can promote trust and innovation. Regulators have often been able to pull systems apart, find the involved parties, and hold actors accountable.

Kevin argues that designers of blockchain systems have to manage three trade-offs: trust, freedom of action, and convenience. Any designer of a system will have to make hard choices about the tradeoffs among these factors.

Citing Vili Lehdonvirta’s blockchain paradox, Kevin tells us several stories about ways that centralized governance processes managed serious problems and fraud in blockchain systems, problems that could not have been resolved had governance been purely decentralized. Kevin also describes technical mechanisms for governance: voting systems, special kinds of contracts, arbitration schemes, and dispute resolution processes.

Overall, Kevin tells us that blockchain governance comes back to trust, which shapes how we act with confidence in circumstances of uncertainty and vulnerability.