December 3, 2024

Tinkering with Disclosed Source Voting Systems

As Ed pointed out in October, Sequoia Voting Systems, Inc. (“Sequoia”) announced that it intended to publish the source code of its voting system software, called “Frontier”, which is currently under development. (Also see EKR’s post: “Contrarianism on Sequoia’s Disclosed Source Voting System”.)

Yesterday, Sequoia made good on this promise and you can now pull the source code they’ve made available from their Subversion repository here:
http://sequoiadev.svn.beanstalkapp.com/projects/
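
If you’d like to pull down a copy to tinker with, something like the following should work. This is just a minimal sketch that drives the svn command-line client from Python; it assumes you have Subversion installed and on your PATH, and that the repository allows anonymous read access.

    # A minimal sketch: check out the Sequoia "Frontier" code using the
    # svn command-line client. This assumes Subversion is installed and
    # on your PATH and that the repository allows anonymous read access.
    import subprocess

    REPO_URL = "http://sequoiadev.svn.beanstalkapp.com/projects/"

    subprocess.run(
        ["svn", "checkout", REPO_URL, "sequoia-frontier"],
        check=True,  # raise an error if the checkout fails
    )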

Sequoia refers to this move in its release as “the first public disclosure of source code from a voting systems manufacturer”. Carefully parsed, that’s probably correct: there have been unintentional disclosures of source code (e.g., Diebold in 2003), and I know of two other voting industry companies that have disclosed source code (VoteHere, now out of business, and Everyone Counts), but these were either not “voting systems manufacturers” or the disclosures were not available publicly. Of course, almost all of the research systems (like VoteBox and Helios) have been truly open source. Groups like OSDV and OVC have released or will soon release voting system source code under open source licenses.

I wrote a paper ages ago (2006) on the use of open and disclosed source code for voting systems, and I’m surprised at how well that analysis and set of recommendations have held up (the original paper is here; an updated version appears on pages 11–41 of my PhD thesis).

The purpose of my post here is to highlight one point of that paper in a bit of detail: disclosed source software licenses need to have a few specific features to be useful to potential voting system evaluators. I’ll start by describing three examples of disclosed source software licenses and then talk about what I’d like to see, as a tinkerer, in these agreements.

Sunlight on NASED ITA Reports

Short version: we now have gobs of voting system ITA reports, publicly available and hosted by the NSF ACCURATE e-voting center. As I explain below, ITAs were the Independent Testing Authority laboratories that tested voting systems for many years.

Long version: Before the Election Assistance Commission (EAC) took over the testing and certification of voting systems under the Help America Vote Act (HAVA), this critical function was performed by volunteers. The National Association of State Election Directors (NASED) recognized a need for voting system testing and partnered with the Federal Election Commission (FEC) to establish a qualification program that would test systems as having met or exceeded the requirements of the 1990 and 2002 Voting System Standards.*

However, as I’ve lamented many, many times over the years, the input, output and intermediate work product of the NASED testing regime were completely secret, due to proprietary concerns on the part of the manufacturers. Once a system completed testing, members of the public could see that an entry was made in a publicly-available spreadsheet listing the tested components and a NASED qualification number for the system. But the public was permitted no other insight into the NASED qualification regime.

Researchers were convinced, from what evidence was available, that the testing was highly inadequate and that neither the testing laboratories (called Independent Testing Authorities, or ITAs) nor the NASED technical committee had the expertise to perform adequate testing or to competently review the laboratories’ ultimate test reports. Naturally, when reports of problems started to crop up, like the various Hursti vulnerabilities with Diebold memory cards, the NASED system scrambled to figure out what went wrong.

I now have more moderate views with respect to the NASED regime: sure, it was pretty bad and a lot of serious vulnerabilities slipped through the cracks, but I’m not yet convinced that just having the right people or a different process in place would have resulted in fewer problems in the field. To have fixed the NASED system would have required improvements on all fronts: the technology, the testing paradigms, the people involved and the testing and certification process.

The EAC has since taken over testing and certification. Their process is notable for its much higher level of openness and accountability: the test plans are published (previously claimed as proprietary by the testing labs), the test reports are published (previously claimed as proprietary by the vendors) and the process is specified in detail with a program manual, a laboratory manual, notices of clarification, etc.

This is all great and it helps to increase the transparency of the EAC certification program. But, what about the past? What about the testing that NASED did? Well, we don’t know much about it for a number of reasons, chief among them that we never saw any of the materials mentioned above that are now available in the new EAC system.

Through a fortunate FOIA request made of the EAC on behalf of election sleuth Susan Greenhalgh, we now have available a slew of ITA reports from one of the ITAs, Ciber.

The reports are available at the following location (hosted by our NSF ACCURATE e-voting center):

http://accurate-voting.org/docs/ita-reports/

These reports cover the Software ITA testing performed by the ITA Ciber for the following voting systems:

  • Automark AIMS 1.0.9
  • Diebold GEMS 1.18.19
  • Diebold GEMS 1.18.22
  • Diebold GEMS 1.18.24
  • Diebold AccuVote-TSx Model D
  • Diebold AccuVote-TSx Model D w/ AccuView Printer
  • Diebold Assure 1.0
  • Diebold Assure 1.1
  • Diebold Election Media Processor 4.6.2
  • Diebold Optical Scan Accumulator Adapter
  • Hart System 4.0
  • Hart System 4.1
  • Hart System 6.0
  • Hart System 6.2
  • Hart System 6.2.1

I’ll be looking at these at my leisure over the coming weeks and pointing out interesting features of these reports and the associated correspondence included in the FOIA production.

*The distinction between certification and qualification, although vague, appears to be that under the NASED system, states did the ultimate certification of a voting system as fit for use in future elections.

Obama's CTO: two positions?

Paul Blumenthal over at the Sunlight Foundation Blog points to a new report from the Congressional Research Service: “A Federal Chief Technology Officer in the Obama Administration: Options and Issues for Consideration”.

This report does a good job of analyzing both the existing positions in the federal government whose roles overlap with some of the potential responsibilities of an “Obama CTO” and the questions that Congress would want to consider if such a position were established by statute rather than by executive order.

The crux of the current issue, for me, is summed up well by this quote from the CRS report’s conclusion:

Although the campaign position paper and transition website provide explicit information on at least some of the duties of a CTO, they do not provide information on a CTO’s organizational placement, structure, or relationship to existing offices. In addition, neither the paper nor website states whether the president intends to establish this position/office by executive order or whether he would seek legislation to create a statutory foundation for its duties and authorities.

The various issues in the mix here lead me to one conclusion: an “Obama CTO” position will be very different from that of a traditional chief technology officer. There seem to be at least two positions involved: one visionary and one fixer. That is, one person to push the envelope, in a grounded-but-futurist style, on what is possible, and one person to negotiate the myriad agencies and bureaucratic constraints to get things done.

As for the first position, I’d like to say a futurist would be a good idea. However, futurists don’t like to be tethered so much to current reality. A better idea, I think, is a senior academic with broad connections and a deep interest in and understanding of emerging technologies. The culture of academia, when it works well, can produce individuals who make connections quickly, know how to evaluate complex ideas and are good at filling the gaps between what is known and not known for a particular proposal. I’m thinking of a Felten, a Lessig, etc. here.

As for the fixer, this desperately needs to be someone with experience negotiating complex endeavors between conflicting government fiefdoms. Vivek Kundra, the CTO for the District of Columbia, struck me as exactly this kind of person when he came to visit last semester here at Princeton’s CITP. When Kundra’s name came up as one of two shortlisted candidates for “Obama CTO”, I was a bit skeptical as I wasn’t convinced he had the appropriate visionary qualities. However, as part of a team, I think he’d be invaluable.

It’s possible that the other shortlisted candidate, Cisco’s Padmasree Warrior, would have enough of the visionary element to make up the other side of the team, but I doubt she has (what I consider to be) the requisite governmental fixer qualities.

So, why not two positions? Does anyone have both these qualities? Do people agree that these are the right qualities?

As to how it would be structured, it’s almost as if it should be a libero position, a reference to the role in soccer that isn’t tethered to a fixed position on the field. That is, they should be free from some of the encumbrances that make government information technology innovation so difficult.

CA SoS Bowen sends proposals to EAC

California Secretary of State Debra Bowen has sent a letter to Chair Gineen Beach of the US Election Assistance Commission (EAC) outlining three proposals that she thinks will markedly improve the integrity of voting systems in the country.

I’ve put a copy of Bowen’s letter here (87kB PDF).

Bowen’s three proposals are:

  • Vulnerability Reporting — The EAC should require that vendors disclose vulnerabilities, flaws, problems, etc. to the EAC as the system certification authority and to all the state election directors that use the affected equipment.
  • Uniform Incident Reporting — The EAC should create and adopt procedures that jurisdictions can follow to collect and report data about incidents they experience with their voting systems.
  • Voting System Performance Measurement — As part of the Election Day Survey, the EAC should systematically collect data from election officials about how voting systems perform during general elections.

In my opinion, each of these would be a welcome move for the EAC.

These proposals would put into place a number of essential missing elements in the administration of computerized election equipment. First, for the users of these systems, election officials, it can be extremely frustrating and debilitating to suspect that some voting system flaw is responsible for problems they’re experiencing. Often, when errors arise, contingency planning requires detailed knowledge of the specific voting system flaw. Without knowing as much as possible about the problem they’re facing, election officials can exacerbate it. At best, not knowing about a potential flaw can do what Bowen describes: doom the election official, and others with the same equipment, to repeatedly encounter the flaw in subsequent elections. Of course, vendors are the most likely to have useful information on a given flaw, and they should be required to report this information to both the EAC and election officials.

Often, the most information we have about voting system incidents comes from reports by local journalists. These reporters don’t tend to cover high technology very often; their reports are frequently incomplete and in many cases simply and obviously incorrect. Having a standardized set of elements that an election official can collect and report about voting system incidents will help to ensure that the data comes directly from those experiencing a given problem. The EAC should design such procedures and then a system for collecting and reporting these issues to other election officials and the public.
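
To make the idea of standardized incident reporting a bit more concrete, here is a rough sketch of the kind of record I have in mind. The specific fields below are purely my own illustration of elements that could be standardized, not anything the EAC has proposed:

    # Illustrative only: a hypothetical incident record showing the sorts
    # of fields a uniform reporting procedure might standardize. These
    # field names are my own sketch, not an EAC specification.
    from dataclasses import dataclass
    from datetime import datetime
    from typing import Optional

    @dataclass
    class VotingSystemIncident:
        jurisdiction: str                # e.g., county and state
        polling_place: str
        observed_at: datetime
        vendor: str                      # e.g., "Diebold", "Hart"
        system_model: str                # e.g., "AccuVote-TSx"
        firmware_version: Optional[str]
        description: str                 # what the official or poll worker observed
        voters_affected: Optional[int]
        resolution: Optional[str]        # how (or whether) the problem was resolved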

Finally, many of us were disappointed to learn that the 2008 Election Day survey would not include questions about voting system performance. Election Day is a unique and hard-to-replicate event where very little systematic data is collected about voting machine performance. The OurVoteLive and MyVote1 efforts go a long way towards providing actionable, qualitative data that can help to increase enfranchisement. However, self-reported data from the operators of the machinery of our democracy would be a gold mine for identifying and examining trends, both good and bad, in how this machinery performs.

I know a number of people, including Susannah Goodman at Common Cause as well as John Gideon and Ellen Theisen of VotersUnite!, who have been championing one or another of these proposals in their advocacy. The fact that Debra Bowen has penned this letter is a testament to the reason behind their efforts.

Total Election Awareness

Ed recently made a number of predictions about election day (“Election 2008: What Might Go Wrong”). In terms of long lines and voting machine problems, his predictions were pretty spot on.

On election day, I was one of a number of volunteers for the Election Protection Coalition at one of 25 call centers around the nation. Kim Zetter describes the OurVoteLive project, which involved 100 non-profit organizations and ten thousand volunteers who answered 86,000 calls through a 750-line call-center operation (“U.S. Elections — It Takes a Village”):

The Election Protection Coalition, a network of more than 100 legal, voting rights and civil liberties groups was the force behind the 1-866-OUR-VOTE hotline, which provided legal experts to answer nearly 87,000 calls that came in over 750 phone lines on Election Day and dispatched experts to address problems in the field as they arose.

Pam Smith of the Verified Voting Foundation made sure each call center had a voting technologist responsible for responding to voting machine reports and advising mobile legal volunteers on how to respond on the ground. It was simply a massive operation. Matt Zimmerman and Tim Jones of the Electronic Frontier Foundation, along with their team, get serious props as the developers and designers of the Total Election Awareness (TEA) software behind OurVoteLive.

As Kim describes in the Wired article, the call data is all available as CSV files, maps, tables, etc.: http://www.ourvotelive.org/. I just completed a preliminary qualitative analysis of the 1,800 or so voting equipment incident reports: “A Preliminary Analysis of OVL Voting Equipment Reports”. There is quite a bit of data in there with which to inform future efforts.
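
For anyone who wants to poke at the OVL data themselves, a few lines of Python are enough to get started. Note that the file name and column names below are my guesses about the export format rather than the actual OVL schema, so adjust them to match the CSV you download:

    # A rough sketch for exploring the OurVoteLive CSV export. The file
    # name and the "category" and "state" column names are assumptions
    # about the export format; adjust them to match the actual download.
    import csv
    from collections import Counter

    counts = Counter()
    with open("ovl_reports.csv", newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # Tally reports whose category mentions voting machines, by state.
            if "machine" in (row.get("category") or "").lower():
                counts[row.get("state") or "unknown"] += 1

    for state, n in counts.most_common(10):
        print(f"{state}: {n} voting equipment reports")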