December 21, 2004

Rubin and Rescorla on E-Voting

There are two interesting new posts on e-voting over on ATAC.

In one post, Avi Rubin suggests a “hacking challenge” for e-voting technology: let experts tweak an existing e-voting system to rig it for one candidate, and then inject the tweaked system quietly into the certification pipeline and see if it passes. (All of this would be done with official approval and oversight, of course.)

In the other post (also at Educated Guesswork, with better comments), Eric Rescorla responds to Clive Thompson’s New York Times Magazine piece calling for open e-voting software. Thompson invoked the many-eyeballs phenomenon, saying that open software gets the benefit of inspection by many people, so that opening e-voting software would help to find any security flaws in it.

Eric counters by making two points. First, opening software just creates the opportunity to audit, but it doesn’t actually motivate skilled people to spend a lot of their scarce time doing a disciplined audit. Second, bugs can lurk in software for a long time, even in code that experts look at routinely. So, Eric argues, instituting a formal audit process that has real teeth will do more good than opening the code.

While I agree with Eric that open source software isn’t automatically more secure than closed source, I suspect that voting software may be the exceptional case where smart people will volunteer their time, or philanthropists will volunteer their money, to see that a serious audit actually happens. It’s true, in principle, that the same audit can happen if the software stays closed. But I think it’s much less likely to happen in practice with closed software – in a closed-source world, too many people have to approve the auditors or the audit procedures, and not all of those people will want to see a truly fair and comprehensive audit.

Eric also notes, correctly, the main purpose of auditing, which is not to find all of the security flaws (a hopeless task) but to figure out how trustworthy the software is. To me, the main benefit of opening the code is that the trustworthiness of the code can become a matter of public debate; and the debate will be better if its participants can refer directly to the evidence.

E-Voting Testing Labs Not Independent

E-voting vendors often argue that their systems must be secure because they have been tested by “independent” labs. Elise Ackerman’s story in Sunday’s San Jose Mercury News explains the depressing truth about how the testing process works.

There are only three labs, and they are overseen by a private body that is supported financially by the vendors. There is no government oversight. The labs have refused to release test results to state election officials, saying the results are proprietary and will be given only to the vendor whose product was tested:

Dan Reeder, a spokesman for Wyle, which functioned as the nation’s sole testing lab from 1994 to 1997, said the company’s policy is to provide information to the manufacturers who are its customers.

It’s worth noting, too, that the labs do not test the security of the e-voting systems; they only test the systems’ compliance with standards.

SysTest Labs President Brian Phillips said the security risks identified by the outside scientists were not covered by standards published by the Federal Election Commission. “So long as a system does not violate the requirements of the standards, it is OK,” Phillips said.

A few states do their own testing, or hire their own independent labs. It seems to me that state election officials should be able to get together and establish a truly independent testing procedure that has some teeth.

Florida Voting Machines Misrecorded Votes

In Miami-Dade County, Florida, an internal county memo has come to light, documenting misrecording of votes by ES&S e-voting machines in a May 2003 election, according to a Matthew Haggman story in the Miami Daily Business Review.

The memo, written by Orlando Suarez, head of the county’s Enterprise Technology Services Department, describes Mr. Suarez’s examination of the electronic record of the May 2003 election in one precinct. The ES&S machines in question produce two reports at the end of an election. One, the “vote image report”, gives each machine’s vote tabulation (i.e., the number of votes cast for each candidate); the other gives each machine’s audit log of significant events, such as initialization of the machine and the casting of a vote (but not which candidate the vote was for).

Mr. Suarez’s examination found that the two records were inconsistent with each other, and that both were inconsistent with reality.

In his memo, Suarez analyzed a precinct where just nine electronic voting machines were used. He first examined the audit logs for all nine machines, which were compiled into one combined audit log. He found that the audit log made no mention of two of the machines used in the precinct.

In addition, he found that the audit log reported the serial number of a machine that was not used in that precinct. This phantom machine showed a count of ballots cast equal to the combined count of the two missing machines.

Then he looked at the vote image report, which aggregated the results from all nine voting machines. He discovered that three of the machines were not reported in the vote image report, while a serial number for a machine not used in the precinct did appear there. That phantom machine showed a vote count equal to the combined vote count of two of the missing machines; the third missing machine showed no activity.

Further examination revealed 38 votes that appeared in the vote image report but not in the audit log.
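To make the nature of these inconsistencies concrete, here is a minimal Python sketch of the kind of cross-check involved: comparing the machine serial numbers and ballot counts in the two reports against the machines actually deployed in the precinct. The data layout, function name, and toy numbers are my own assumptions for illustration, not the actual ES&S report formats.

    # Hypothetical sketch of the kind of cross-check Suarez performed:
    # comparing a precinct's vote image report and audit log against the
    # machines actually deployed. Names and data layout are illustrative
    # assumptions, not the real ES&S report formats.

    def cross_check(vote_image: dict[str, int], audit_log: dict[str, int],
                    deployed: set[str]) -> list[str]:
        """Return a list of inconsistencies between the two reports.

        vote_image -- serial number -> ballots tabulated in the vote image report
        audit_log  -- serial number -> ballot-cast events in the audit log
        deployed   -- serial numbers of machines actually used in the precinct
        """
        problems = []

        # Machines used in the precinct but absent from a report.
        for serial in sorted(deployed - audit_log.keys()):
            problems.append(f"machine {serial} missing from audit log")
        for serial in sorted(deployed - vote_image.keys()):
            problems.append(f"machine {serial} missing from vote image report")

        # "Phantom" machines: serial numbers reported but never deployed.
        for serial in sorted(audit_log.keys() - deployed):
            problems.append(f"phantom machine {serial} in audit log")
        for serial in sorted(vote_image.keys() - deployed):
            problems.append(f"phantom machine {serial} in vote image report")

        # Per-machine counts that disagree between the two reports.
        for serial in sorted(vote_image.keys() & audit_log.keys()):
            if vote_image[serial] != audit_log[serial]:
                problems.append(
                    f"machine {serial}: vote image shows {vote_image[serial]} "
                    f"ballots, audit log records {audit_log[serial]}")
        return problems

    # Toy data loosely mirroring the memo: machines M1 and M2 are absent
    # from the audit log, and a phantom serial PX appears in their place,
    # carrying their combined ballot count.
    deployed = {f"M{i}" for i in range(1, 10)}        # nine machines used
    audit_log = {f"M{i}": 50 for i in range(3, 10)}   # M1, M2 missing
    audit_log["PX"] = 100                             # phantom machine
    vote_image = {f"M{i}": 50 for i in range(1, 10)}
    for issue in cross_check(vote_image, audit_log, deployed):
        print(issue)

Note that a check like this catches the phantom-machine pattern only because we know which serial numbers were actually deployed; since the phantom absorbed the missing machines’ counts, the totals alone would look fine.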

There is some evidence that the software used in this election was uncertified.

County officials don’t see much of a problem here:

Nevertheless, [county elections supervisor Constance] Kaplan insisted that Suarez’s analysis did not demonstrate any basic problems with the accuracy of the vote counts produced by the county’s iVotronic system. “The Suarez memo has nothing to do with the tabulation process,” she said. “It is very annoying that the coalition keeps equating the tabulation function with the audit function.”

Maybe I’m being overly picky here, but isn’t the vote tabulation supposed to match the audit trail? And isn’t the vote tabulation report supposed to match reality?

Very annoying, indeed.

California Decertifies Touch-Screen Voting

Looks like I missed the significance of this story last week (by Kim Zetter at Wired News). California Secretary of State Kevin Shelley decertified all touch-screen voting machines, not just the Diebold systems whose decertification had been recommended by the state’s voting-systems panel.

Some counties may be able to get their machines recertified if they can meet a set of security requirements: the machines must be certified by the Federal government, provide a voter-verified paper trail, have a security plan that meets certain criteria, have their source code disclosed to the Secretary of State and his designees (subject to reasonable confidentiality provisions), have a documented development process, not be modified at the last minute, have no network connections (including Internet, wireless, or phone connections), and meet a few other requirements.

Shelley condemned Diebold’s actions in California, calling them “despicable” and denouncing the company’s “deceitful tactics”. He referred evidence of possible fraud by Diebold to the state Attorney General’s office.

In a related story, Ireland recently decided not to use e-voting in its next election due to security concerns.

California Panel Recommends Decertifying One Diebold System

The State of California’s Voting Systems Panel has voted to recommend the decertification of Diebold’s TSx e-voting system, according to a release from verifiedvoting.org. The final decision will be made by Secretary of State Kevin Shelley, but he is expected to approve the recommendation within the next week.

The TSx is only one of the Diebold e-voting systems used in California, but this is still an important step.