
Where are the California E-Voting Reports?

I wrote Monday about the California Secretary of State’s partial release of reports from the state’s e-voting study. Four subteams submitted reports to the Secretary, but as yet only the “red team” and accessibility teams’ reports have been released. The other two sets of reports, from the source code review and documentation review teams, are still being withheld.

The Secretary even held a public hearing on Monday about the study, without having released all of the reports. This has led to a certain amount of confusion, as many press reports and editorials (e.g. the Mercury News editorial) about the study seem to assume that the full evaluation results have been reported. The vendors and some county election officials have encouraged this misimpression – some have even criticized the study for failing to consider issues that are almost certainly addressed in the missing reports.

With the Secretary having until Friday to decide whether to decertify any e-voting systems for the February 2008 primary election, the obvious question arises: Why is the Secretary withholding the other reports?

Here’s the official explanation, from the Secretary’s site:

The document review teams and source code review teams submitted their reports on schedule. Their reports will be posted as soon as the Secretary of State ensures the reports do not inadvertently disclose security-sensitive information.

This explanation is hard to credit. The study teams were already tasked to separate their reports into a public body and a private appendix, with sensitive exploit-oriented details put in the private appendix that would go only to the Secretary and the affected vendor. Surely the study teams are much better qualified to determine the security implications of releasing a particular detail than the lawyers in the Secretary’s office are.

More likely, the Secretary is worried about the political implications of releasing the reports. Given this, it seems likely that the withheld reports are even more damning than the ones released so far.

If the red team reports, which found multiple vulnerabilities of the most serious kind, are the good news, how bad must the bad news be?

UPDATE (2:45 PM EDT, August 2): The source code review reports are now up on the Secretary of State’s site. They’re voluminous, so I won’t be commenting on them immediately. I’ll post my reactions tomorrow.

California Study: Voting Machines Vulnerable; Worse to Come?

A major study of three e-voting systems, commissioned by the California Secretary of State’s office, reported Friday that all three had multiple serious vulnerabilities.

The study examined systems from Diebold, Hart InterCivic, and Sequoia; each system included a touch-screen machine, an optical-scan machine, and the associated backend control and tabulation machine. Each system was studied by three teams: a “red team” did a hands-on study of the machines, a “source code team” examined the software source code for the system, and a “documentation team” examined documents associated with the system and its certification. (An additional team studied the accessibility of the three systems – an important topic but beyond the scope of this post.)

(I did not participate in the study. An early press release from the state listed me as a participant but that was premature. I ultimately had to withdraw before the study began, due to a scheduling issue.)

So far only the red team (and accessibility) reports have been released, which makes one wonder what is in the remaining reports.

The bottom-line paragraph from the red team overview says this (section 6.4):

The red teams demonstrated that the security mechanisms provided for all systems analyzed were inadequate to ensure accuracy and integrity of the election results and of the systems that provide those results.

The red teams all reported having inadequate time to fully plumb the systems’ vulnerabilities (section 4.0):

The short time allocated to this study has several implications. The key one is that the results presented in this study should be seen as a “lower bound”; all team members felt that they lacked sufficient time to conduct a thorough examination, and consequently may have missed other serious vulnerabilities. In particular, Abbott’s team [which studied the Diebold and Hart systems] reported that it believed it was close to finding several other problems, but stopped in order to prepare and deliver the required reports on time. These unexplored avenues are presented in the reports, so that others may pursue them. Vigna’s and Kemmerer’s team [which studied the Sequoia system] also reported that they were confident further testing would reveal additional security issues.

Despite the limited time, the teams found ways to breach the physical security of all three systems using only “ordinary objects” (presumably paper clips, coins, pencil erasers, and the like); they found ways to modify or overwrite the basic control software in all three voting machines; and they were able to penetrate the backend tabulator system and manipulate election records.

The source code and documentation studies have not yet been released. To my knowledge, the state has not given a reason for the delay in releasing these reports.

The California Secretary of State reportedly has until Friday to decide whether to allow these systems to be used in the state’s February 2008 primary election.

[UPDATE: A public hearing on the study is being webcast live at 10:00 AM Pacific today.]

Woman Registers Dog to Vote, Demonstrates Ease of Fraud

A woman in Seattle registered her dog to vote, and submitted absentee ballots in three elections on the dog’s behalf, according to an AP story.

The woman, Jane Balogh, said she did this to demonstrate how easy it would be for a noncitizen to vote. She put her phone bill in her dog’s name (“Duncan M. MacDonald”) and then used the phone bill as evidence of residency. She submitted absentee ballots in Duncan’s name three times, each ballot “signed” with a paw print. She says the ballots did not designate any candidates and only had “void” written on them, so the elections were not affected.

Nevertheless, she broke the law and now faces charges.

This relates to an issue every applied security researcher has faced: how to demonstrate that a security problem is real. People take a problem more seriously when they have seen a real, working demonstration – otherwise it will be dismissed as theoretical. Often there is a lawful way to demonstrate a problem, for example by “breaking in” to your own computer. But sometimes there is no way to demonstrate a problem without breaking the law. Careful researchers will stop and assess the legality of what they’re planning to do, and will hold back if the demo they’re considering breaks the law.

Ms. Balogh went ahead and broke the law. Beyond that (serious) misstep, she did everything right: admitting what she did, avoiding any side-effect on the elections by filing blank ballots, and leaving obvious clues like the paw prints.

Fortunately for her, the prosecutor decided not to charge her with a felony but instead offered to let her plead guilty to a misdemeanor, pay a $250 fine, and do ten hours of community service. She was lucky to get this and will apparently accept the deal.

Any readers considering such a stunt should think again. The next prosecutor may not be so forgiving.

Botnet Briefing

Yesterday I spoke at a Washington briefing on botnets. The event was hosted by the Senate Science and Technology Caucus, and sponsored by ACM and Microsoft. Along with opening remarks by Senators Pryor and Bennett, there were short briefings by me, Phil Reitinger of Microsoft, and Scott O’Neal of the FBI.

(Botnets are coordinated computer intrusions, where the attacker installs a long-lived software agent or “bot” on many end-user computers. After being installed, the bots receive commands from the attacker through a command-and-control mechanism. You can think of bots as a more advanced form of the viruses and worms we saw previously.)
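
To make the command-and-control pattern concrete, here is a deliberately harmless sketch in Python. The control URL is a placeholder and the “bot” does nothing but print whatever text it fetches; the point is only to illustrate the long-lived poll-and-obey structure described above, not how real bots are built.

    # Illustrative sketch only: a long-lived agent that periodically asks a
    # (hypothetical) control server for instructions. It merely prints what it
    # receives; a real bot would carry out the attacker's commands instead.
    import time
    import urllib.request

    CONTROL_URL = "http://command-server.example/commands"  # placeholder C&C endpoint

    def poll_for_commands(interval_seconds: int = 60) -> None:
        """Periodically ask the control server what to do next."""
        while True:
            try:
                with urllib.request.urlopen(CONTROL_URL, timeout=10) as response:
                    command = response.read().decode("utf-8").strip()
                print("received command:", command)
            except OSError:
                pass  # control server unreachable; try again later
            time.sleep(interval_seconds)

    if __name__ == "__main__":
        poll_for_commands()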

Botnets are a serious threat, but as usual in cybersecurity there is no obvious silver bullet against them. I gave a laundry list of possible anti-bot tactics, including a mix of technical, law enforcement, and policy approaches.

Phil Reitinger talked about Microsoft’s anti-botnet activities. These range from general efforts to improve software security, to distribution of patches and malicious code removal tools, to investigation of specific bot attacks. I was glad to hear him call out the need for basic research on computer security.

Scott O’Neal talked about the FBI’s fight against botnets, which he said followed the Bureau’s historical pattern in dealing with new types of crime. At first, they responded to specific attacks by investigating and trying to identify the perpetrators. Over time they have adopted new tactics, such as infiltrating the markets and fora where botmasters meet. Though he didn’t explicitly prioritize the different types of botnet (mis)use, it was clear that commercially motivated denial-of-service attacks were prominent in his mind.

Much of the audience consisted of Senate and House staffers, who are naturally interested in possible legislative approaches to the botnet problem. Beyond seeing that law enforcement has adequate resources, there isn’t much that needs to be done. Current laws such as the Computer Fraud and Abuse Act, and anti-fraud and anti-spam laws, already cover botnet attacks. The hard part is catching the bad guys in the first place.

The one legislative suggestion we heard was to reduce the threshold for criminal violation in the Computer Fraud and Abuse Act. Using computers without authorization is a crime, but there are threshold requirements to make sure that trivial offenses can’t bring down the big hammer of felony prosecution.

The concern is that a bad guy who breaks into a large number of computers and installs bots, but hasn’t yet used the bots to do harm, might be able to escape prosecution. He could still be prosecuted if certain types of bad intent can be proved, but where that is not possible he arguably might not meet the $5000 damage threshold. The law might be changed to allow prosecution when some designated number of computers are affected.

Paul Ohm has expressed skepticism about this kind of proposal. He points to a tendency to base cybersecurity policy on anecdote and worst-case predictions, even though a great deal of preventable harm is caused by simpler, more mundane attacks.

I’d like to see more data on how big a problem the current CFAA thresholds are. How many real bad guys have escaped CFAA prosecution? Of those who did, how many could be prosecuted for other, equally serious violations? With data in hand, the cost-benefit tradeoffs in amending the CFAA will be easier to weigh.

Senator Bennett, in his remarks, characterized cybersecurity as a long-term fight. “You guys have permanent job security…. You’re working on a problem that will never be solved.”

Why So Many False Positives on the No-Fly List?

Yesterday I argued that Walter Murphy’s much-discussed encounter with airport security was probably just a false positive in the no-fly list matching algorithm. Today I want to talk about why false positives (ordinary citizens triggering mistaken “matches” with the list) are so common.

First, a preliminary. It’s often argued that the high false positive rate proves the system is poorly run or even useless. This is not necessarily the case. In running a system like this, we necessarily trade off false positives against false negatives. We can lower either kind of error, but doing so will increase the other kind. The optimal policy will balance the harm from false positives against the harm from false negatives, to minimize total harm. If the consequences of a false positive are relatively minor (brief inconvenience for one traveler), but the consequences of a false negative are much worse (non-negligible probability of multiple deaths), then the optimal choice is to accept many false positives in order to drive the false negative rate way down. In other words, a high false positive rate is not by itself a sign of bad policy or bad management. You can argue that the consequences of error are not really so unbalanced, or that the tradeoff is being made poorly, but your argument can’t rely only on the false positive rate.
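
To see how the arithmetic works, here is a toy calculation; the harm figures and error rates below are entirely made-up illustrative assumptions, not estimates of the real system.

    # Toy expected-harm comparison with invented numbers (arbitrary "harm" units).
    # The point: when a missed match (false negative) is vastly more costly than a
    # mistaken match (false positive), minimizing total expected harm favors the
    # policy that tolerates many false positives.

    def expected_harm(false_positives, harm_per_fp, p_false_negative, harm_per_fn):
        """Total expected harm of one screening policy."""
        return false_positives * harm_per_fp + p_false_negative * harm_per_fn

    # Strict matching: many travelers flagged by mistake, listed people rarely missed.
    strict = expected_harm(false_positives=50_000, harm_per_fp=1,
                           p_false_negative=0.001, harm_per_fn=100_000_000)

    # Lenient matching: few travelers inconvenienced, listed people missed more often.
    lenient = expected_harm(false_positives=500, harm_per_fp=1,
                            p_false_negative=0.05, harm_per_fn=100_000_000)

    print(f"strict policy:  {strict:,.0f}")   # 150,000
    print(f"lenient policy: {lenient:,.0f}")  # 5,000,500

With these invented numbers the strict policy wins by a wide margin even though it inconveniences a hundred times as many travelers, which is exactly the shape of the argument above.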

Having said that, the system’s high false positive rate still needs explaining.

The fundamental reason for the false positives is that the system matches names, and names are a poor vehicle for identifying people, especially in the context of air travel. Names are not as unique as most people think, and names are frequently misspelled, especially in airline records. Because of the misspellings, you’ll have to do approximate matching, which will make the nonuniqueness problem even worse. The result is many false positives.
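
A small sketch shows the mechanism. The watch-list entries and passenger names below are invented, and difflib’s generic similarity ratio merely stands in for whatever matching rule the real system uses; the point is that a threshold loose enough to catch misspellings of a listed name also catches unrelated travelers who simply have similar names.

    # Why approximate matching inflates false positives: a similarity threshold
    # loose enough to tolerate misspellings also matches people who merely have
    # similar names. All names here are invented for illustration.
    from difflib import SequenceMatcher

    watch_list = ["james morrison", "jon smyth"]        # hypothetical list entries
    passengers = ["James Morison", "James T. Morrison", # misspellings, middle initials,
                  "John Smith", "Joan Smithe"]          # and merely similar names

    def similarity(a: str, b: str) -> float:
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    THRESHOLD = 0.8  # loose enough to catch misspellings of listed names

    for passenger in passengers:
        for listed in watch_list:
            score = similarity(passenger, listed)
            if score >= THRESHOLD:
                print(f"flagged: {passenger!r} ~ {listed!r} (score {score:.2f})")

Tightening the threshold trims matches like these, but it also starts missing genuine misspellings of listed names, which is the false negative side of the tradeoff discussed above.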

Why not use more information to reduce false positives? Why not, for example, use the fact that the Walter Murphy who served in the Marine Corps and used to live near Princeton is not a threat?

The reason is that using that information would have unwanted consequences. First, the airlines would have to gather much more private information about passengers, and they would probably have to verify that information by demanding documentary proof of some kind.

Second, checking that private information against the name on the no-fly list would require bringing together the passenger’s private information with the government’s secret information about the person on the no-fly list. Either the airline can tell the government what it knows about the passenger’s private life, or the government can tell the airline what it knows about the person on the no-fly list. Both options are unattractive.

A clumsy compromise – which the government is apparently making – is to provide a way for people who often trigger false positives to supply more private information, and if that information distinguishes the person from the no-fly list entry, to give the person some kind of “I’m not really on the no-fly list” certificate. This imposes a privacy cost, but only on people who often trigger false positives.

Once you’ve decided to have a no-fly list, a significant false positive rate is nearly inevitable. The bigger policy question is whether, given all of its drawbacks, we should have a no-fly list at all.