Where are the California E-Voting Reports?

I wrote Monday about the California Secretary of State’s partial release of reports from the state’s e-voting study. Four subteams submitted reports to the Secretary, but so far only the “red team” and accessibility teams’ reports have been released. The other two sets of reports, from the source code review and documentation review teams, are still being withheld.

The Secretary even held a public hearing on Monday about the study, without having released all of the reports. This has led to a certain amount of confusion, as many press reports and editorials (e.g. the Mercury News editorial) about the study seem to assume that the full evaluation results have been reported. The vendors and some county election officials have encouraged this misimpression – some have even criticized the study for failing to consider issues that are almost certainly addressed in the missing reports.

With the Secretary having until Friday to decide whether to decertify any e-voting systems for the February 2008 primary election, the obvious question arises: Why is the Secretary withholding the other reports?

Here’s the official explanation, from the Secretary’s site:

The document review teams and source code review teams submitted their reports on schedule. Their reports will be posted as soon as the Secretary of State ensures the reports do not inadvertently disclose security-sensitive information.

This explanation is hard to credit. The study teams were already tasked to separate their reports into a public body and a private appendix, with sensitive exploit-oriented details put in the private appendix that would go only to the Secretary and the affected vendor. Surely the study teams are much better qualified to determine the security implications of releasing a particular detail than the lawyers in the Secretary’s office are.

More likely, the Secretary is worried about the political implications of releasing the reports. Given this, it seems likely that the withheld reports are even more damning than the ones released so far.

If the red team reports, which found multiple vulnerabilities of the most serious kind, are the good news, how bad must the bad news be?

UPDATE (2:45 PM EDT, August 2): The source code review reports are now up on the Secretary of State’s site. They’re voluminous so I won’t be commenting on them immediately. I’ll post my reactions tomorrow.

California Study: Voting Machines Vulnerable; Worse to Come?

A major study of three e-voting systems, commissioned by the California Secretary of State’s office, reported Friday that all three had multiple serious vulnerabilities.

The study examined systems from Diebold, Hart InterCivic, and Sequoia; each system included a touch-screen machine, an optical-scan machine, and the associated backend control and tabulation machine. Each system was studied by three teams: a “red team” did a hands-on study of the machines, a “source code team” examined the software source code for the system, and a “documentation team” examined documents associated with the system and its certification. (An additional team studied the accessibility of the three systems – an important topic but beyond the scope of this post.)

(I did not participate in the study. An early press release from the state listed me as a participant but that was premature. I ultimately had to withdraw before the study began, due to a scheduling issue.)

So far only the red team (and accessibility) reports have been released, which makes one wonder what is in the remaining reports.

The bottom-line paragraph from the red team overview says this (section 6.4):

The red teams demonstrated that the security mechanisms provided for all systems analyzed were inadequate to ensure accuracy and integrity of the election results and of the systems that provide those results.

The red teams all reported having inadequate time to fully plumb the systems’ vulnerabilities (section 4.0):

The short time allocated to this study has several implications. The key one is that the results presented in this study should be seen as a “lower bound”; all team members felt that they lacked sufficient time to conduct a thorough examination, and consequently may have missed other serious vulnerabilities. In particular, Abbott’s team [which studied the Diebold and Hart systems] reported that it believed it was close to finding several other problems, but stopped in order to prepare and deliver the required reports on time. These unexplored avenues are presented in the reports, so that others may pursue them. Vigna’s and Kemmerer’s team [which studied the Sequoia system] also reported that they were confident further testing would reveal additional security issues.

Despite the limited time, the teams found ways to breach the physical security of all three systems using only “ordinary objects” (presumably paper clips, coins, pencil erasers, and the like); they found ways to modify or overwrite the basic control software in all three voting machines; and they were able to penetrate the backend tabulator system and manipulate election records.

The source code and documentation reports have not yet been released. To my knowledge, the state has not given a reason for the delay in releasing them.

The California Secretary of State reportedly has until Friday to decide whether to allow these systems to be used in the state’s February 2008 primary election.

[UPDATE: A public hearing on the study is being webcast live at 10:00 AM Pacific today.]

Email Protected by 4th Amendment, Court Says

The Sixth Circuit Court of Appeals ruled yesterday, in Warshak v. U.S., that people have a reasonable expectation of privacy in their email, so that the government needs a search warrant or similar process to access it. The Court’s decision was swayed by amicus briefs submitted by EFF and a group of law professors.

When Alice sends an email to Bob, the email will be stored, for a while at least, on an email server run by Bob’s email provider. Depending on how Bob uses email, the message may sit on the server just until Bob’s computer picks up mail (which happens every few minutes when Bob is online), or Bob may store his long-term email archive on the server. Either way the server, which is typically run by Bob’s ISP, will have a copy of the email and will have the ability to access its contents.

The key question in Warshak was whether, notwithstanding the ISP’s ability to read his mail, Bob still has a reasonable expectation of privacy in the email. This matters because certain Fourth Amendment protections apply where there is a reasonable expectation of privacy. The government had used a certain kind of order authorized by the Stored Communications Act to compel Warshak’s ISP to turn over Warshak’s email without notifying Warshak. Warshak argued that that was improper and the government should have been required to get a search warrant.

The key to the Court’s ruling is an analogy, offered by the amici, between email and phone calls. The phone company has the ability to listen to your calls, but courts ruled long ago that there is a reasonable expectation of privacy in the content of phone calls, so that the government cannot eavesdrop on the content of calls without a warrant. The Court accepted that email is like a phone call, for privacy purposes at least, and the ruling essentially followed from this analogy.

This is not a general ruling that warrants are required to access electronic records held by third parties. The Court’s reasoning depended on the particular attributes of email, and even on the way these particular ISPs handled email. If the ISP’s employees regularly looked at customer email in the ordinary course of business, or if there was a written agreement giving the ISP broad latitude to look at email, the Court might have found differently. Warshak had a reasonable expectation of privacy in his email, but you might not. (Randy Picker has an interesting commentary on Warshak in relation to online records held by third parties.)

Interestingly, the Court drew a line between inspection of email by computer programs, such as virus or spam checkers, and inspection by a person. The Court found that automated analysis of email did not erode the reasonable expectation of privacy, but routine manual inspection of email would erode it.

Pragmatically, a ruling like this is only possible because email has become a routine part of life for so many people. The analogy to phone calls, and the unquestioned assumption that people value the privacy of email, are both easy for judges who have gotten used to the idea of email. Ten years ago this could not have happened. Ten years from now it will seem obvious.

Orin Kerr, who is an expert in this area of the law, thinks this ruling is at higher than usual risk of being overturned on appeal. That may be the case. But it seems to me that the long-term trend is toward treating email like phone calls, because that is how people think of it. The government may win this battle on appeal, but it is likely to lose this point in the long run.

How Computers Can Make Voting More Secure

By now there is overwhelming evidence that today’s paperless computer-based voting technologies have such serious security and reliability problems that we should not be using them. Computers can’t do the job by themselves; but what role should they play in voting?

It’s tempting to eliminate computers entirely, returning to old-fashioned paper voting, but I think this is a mistake. Paper has an important role, as I’ll describe below, but paper systems are subject to well-known problems such as ballot-box stuffing and chain voting, as well as other user-interface and logistical challenges.

Security does require some role for paper. Each vote must be recorded in a manner that is directly verified by the voter. And the system must be software-independent, meaning that its accuracy cannot rely on the correct functioning of any software system. Today’s paperless e-voting systems satisfy neither requirement, and the only practical way to meet the requirements is to use paper.

The proper role for computers, then, is to backstop the paper system and improve it. What we want is not a computerized voting system, but a computer-augmented one.

This mindset changes how we think about the role of computers. Instead of trying to make computers do everything, we will look instead for weaknesses and gaps in the paper system, and ask how computers can plug them.

There are two main ways computers can help. The first is in helping voters cast their votes. Computers can check for errors in ballots, for example by detecting an invalid ballot while the voter is still in a position to fix it. Computers can present the ballot in audio format for the blind or illiterate, or in multiple languages. (Of course, badly designed computer interfaces can do harm, so we have to be careful.) There must be a voter-verified paper record at the end of the vote-casting process, but computers, used correctly, can help voters create and validate that record, by acting as ballot-marking devices or as scanners to help voters spot mismarked ballots.
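To make the error-checking idea concrete, here is a minimal sketch in C of the kind of validity check a precinct scanner or ballot-marking device might run before accepting a ballot. The contest structure and limits here are hypothetical; real ballot definitions are far more elaborate.

```c
/* Sketch: precinct-level ballot validation, flagging overvotes and
   undervotes while the voter can still correct the ballot.
   The data layout is hypothetical, for illustration only. */
#include <stdio.h>

struct contest {
    const char *name;
    int max_choices;   /* "vote for at most N" */
    int marks;         /* marks detected on this ballot */
};

/* Returns the number of problems found, printing the warnings the
   machine would show the voter. */
int check_ballot(const struct contest *c, int n) {
    int problems = 0;
    for (int i = 0; i < n; i++) {
        if (c[i].marks > c[i].max_choices) {
            printf("OVERVOTE in %s: %d marks, at most %d allowed\n",
                   c[i].name, c[i].marks, c[i].max_choices);
            problems++;
        } else if (c[i].marks == 0) {
            /* An intentional undervote is legal, so this is a warning,
               not a rejection; the voter can confirm or fix it. */
            printf("UNDERVOTE in %s: no selection detected\n", c[i].name);
            problems++;
        }
    }
    return problems;
}

int main(void) {
    struct contest ballot[] = {
        { "U.S. House, District 13", 1, 0 },   /* undervote */
        { "County Commission",       2, 3 },   /* overvote  */
    };
    if (check_ballot(ballot, 2) > 0)
        printf("Ballot returned to the voter for review.\n");
    return 0;
}
```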

The second way computers can help is by improving security. Usually the e-voting security debate is about how to keep computers from making security too much worse than it was before. Given the design of today’s e-voting systems, this is appropriate – just bringing these systems up to the level of security and reliability in (say) the Xbox and Wii game consoles would be nice. Even in a computer-augmented system, we’ll need to do a better job of vetting the computers’ design – if a job is worth doing with a computer, it’s worth doing correctly.

But once we adopt the mindset of augmenting a paper-based system, security looks less like a problem and more like an opportunity. We can look for the security weaknesses of paper-based systems, and ask how computers can help to address them. For example, paper-based systems are subject to ballot-box stuffing – how can computers reduce this risk?

Surprisingly, the designs of current e-voting technologies, even the ones with paper trails, don’t do all they can to compensate for the weaknesses of paper. For example, the current systems I’ve seen keep electronic records that are subject to straightforward post-election tampering. Researchers have studied approaches to this problem, but as far as I know none are used in practice.
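To give a flavor of the approaches researchers have studied, here is a toy sketch of a hash-chained event log, in which each entry’s hash covers the previous entry’s hash, so quietly rewriting an old record breaks every subsequent link. This is my illustration, not a description of any deployed system; the FNV-1a hash keeps the example self-contained, but a real design would use a cryptographic hash such as SHA-256, along with signatures and write-once media.

```c
/* Toy tamper-evident event log: each entry's chain value covers the
   previous entry's chain value, so editing an old record invalidates
   everything after it. FNV-1a is used only to keep the example
   self-contained; a real system would use SHA-256 or similar. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define FNV_BASIS 14695981039346656037ULL
#define FNV_PRIME 1099511628211ULL
#define MAX_EVENTS 64

static uint64_t fnv1a(uint64_t h, const char *data) {
    for (size_t i = 0; data[i] != '\0'; i++) {
        h ^= (unsigned char)data[i];
        h *= FNV_PRIME;
    }
    return h;
}

struct entry {
    char event[64];
    uint64_t chain;   /* hash of this event, chained to the previous one */
};

static struct entry log_[MAX_EVENTS];
static int n_entries = 0;

static void append_event(const char *event) {
    if (n_entries == MAX_EVENTS)
        return;
    uint64_t prev = n_entries ? log_[n_entries - 1].chain : FNV_BASIS;
    struct entry *e = &log_[n_entries++];
    snprintf(e->event, sizeof e->event, "%s", event);
    e->chain = fnv1a(prev, e->event);
}

/* Recompute the whole chain; returns the index of the first corrupted
   entry, or -1 if the log is intact. */
static int verify_log(void) {
    uint64_t h = FNV_BASIS;
    for (int i = 0; i < n_entries; i++) {
        h = fnv1a(h, log_[i].event);
        if (h != log_[i].chain)
            return i;
    }
    return -1;
}

int main(void) {
    append_event("polls opened");
    append_event("ballot cast: style 4");
    strcpy(log_[1].event, "ballot cast: style 9");   /* simulated tampering */
    int bad = verify_log();
    if (bad < 0)
        printf("log intact\n");
    else
        printf("tampering detected at entry %d\n", bad);
    return 0;
}
```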

In future posts, we’ll discuss design ideas for computer-augmented voting.

Sarasota Voting Machines Insecure

The technical team commissioned by the State of Florida to study the technology used in the ill-fated Sarasota election has released its report. (Background: on the Sarasota election problems; on the study.)

One revelation from the study is that the iVotronic touch-screen voting machines are terribly insecure. The machines are apparently susceptible to viruses, and there are many bugs a virus could exploit to gain entry or spread:

We found many instances of [exploitable buffer overflow bugs]. Misplaced trust in the election definition file can be found throughout the iVotronic software. We found a number of buffer overruns of this type. The software also contains array out-of-bounds errors, integer overflow vulnerabilities, and other security holes. [page 57]

The equation is simple: sloppy software + removable storage = virus vulnerability. We saw the same thing with the Diebold touchscreen voting system.
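The pattern the reviewers describe is easy to picture. The fragment below is a hypothetical reconstruction of the bug class, not actual iVotronic code: a fixed-size buffer is filled using a length field read from the election definition file on removable media, with no bounds check.

```c
/* Hypothetical reconstruction of the bug class described in the
   report: the code trusts a length field from the election definition
   file, which arrives on removable media and may be attacker-crafted. */
#include <stdio.h>
#include <string.h>

struct record_header {
    unsigned short name_len;   /* length field read straight from the file */
};

static void load_candidate(FILE *f) {
    struct record_header hdr;
    char name[32];

    memset(name, 0, sizeof name);
    if (fread(&hdr, sizeof hdr, 1, f) != 1)
        return;

    /* BUG: name_len is attacker-controlled and can be up to 65535,
       but 'name' holds only 32 bytes. An oversized value makes fread
       write past the buffer, corrupting adjacent memory; crafted
       carefully, that lets the file's author hijack execution.
       The fix: if (hdr.name_len >= sizeof name) reject the file. */
    fread(name, 1, hdr.name_len, f);

    printf("loaded candidate: %.31s\n", name);
}

int main(int argc, char **argv) {
    /* "election.def" is a placeholder filename for the ballot
       definition that would arrive on a memory cartridge. */
    FILE *f = fopen(argc > 1 ? argv[1] : "election.def", "rb");
    if (!f)
        return 1;
    load_candidate(f);
    fclose(f);
    return 0;
}
```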

Another example of poor security is in the passwords that protect crucial operations such as configuring the voting machine and modifying its software. There are separate passwords for different operations, but the system has a single backdoor that allows all of the passwords to be bypassed by an adversary who knows a one-byte secret; since there are only 256 possibilities, that secret is easily guessed. (p. 67) For example, an attacker who gets private access to the machine for just a few minutes can apparently use the backdoor to install malicious software onto it.
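To see just how little protection a one-byte secret provides, consider the sketch below. The try_secret() function is a stand-in for whatever check the machine actually performs (that mechanism is not public, so this is purely illustrative); the attack is simply to walk the entire 8-bit keyspace, which takes at most 256 attempts.

```c
/* Why a one-byte backdoor secret is no protection at all: an attacker
   simply tries every possible value. try_secret() is a stand-in for
   the machine's real check, which is not public. */
#include <stdio.h>

static int try_secret(unsigned char guess) {
    const unsigned char actual = 0x5A;   /* placeholder secret */
    return guess == actual;
}

int main(void) {
    for (int g = 0; g < 256; g++) {
        if (try_secret((unsigned char)g)) {
            printf("secret recovered: 0x%02X after %d tries\n", g, g + 1);
            return 0;
        }
    }
    printf("no secret found\n");
    return 1;
}
```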

Though the machines’ security is poor and needs to be fixed before they are used in another election, I agree with the study team that the undervotes were almost certainly not caused by a security attack. The reason is simple: only a brainless attacker would cause undervotes. An attack that switched votes from one candidate to another would be more effective and much harder to detect.

So if it wasn’t a security attack, what was the cause of the undervotes?

Experience teaches that systems that are insecure tend to be unreliable as well – they tend to go wrong on their own even if nobody is attacking them. Code that is laced with buffer overruns, array out-of-bounds errors, integer overflow errors, and the like tends to be flaky. Sporadic undervotes are the kind of behavior you would expect to see from a flaky voting technology.

The study claims to have ruled out reliability problems as a cause of the undervotes, but its evidence on this point is weak, and I think the jury is still out on whether voting machine malfunctions could be a significant cause. I’ll explain why, in more detail, in the next post.