There are two interesting new posts on e-voting over on ATAC.
In one post, Avi Rubin suggests a “hacking challenge” for e-voting technology: let experts tweak an existing e-voting system to rig it for one candidate, and then inject the tweaked system quietly into the certification pipeline and see if it passes. (All of this would be done with official approval and oversight, of course.)
In the other post (also at Educated Guesswork, with better comments), Eric Rescorla responds to Clive Thompson’s New York Times Magazine piece calling for open e-voting software. Thompson invoked the many-eyeballs phenomenon, saying that open software gets the benefit of inspection by many people, so that opening e-voting software would help to find any security flaws in it.
Eric counters by making two points. First, opening software just creates the opportunity to audit, but it doesn’t actually motivate skilled people to spend a lot of their scarce time doing a disciplined audit. Second, bugs can lurk in software for a long time, even in code that experts look at routinely. So, Eric argues, instituting a formal audit process that has real teeth will do more good than opening the code.
While I agree with Eric that open source software isn’t automatically more secure than closed source, I suspect that voting software may be the exceptional case where smart people will volunteer their time, or philanthropists will volunteer their money, to see that a serious audit actually happens. It’s true, in principle, that the same audit can happen if the software stays closed. But I think it’s much less likely to happen in practice with closed software – in a closed-source world, too many people have to approve the auditors or the audit procedures, and not all of those people will want to see a truly fair and comprehensive audit.
Eric also notes, correctly, the main purpose of auditing, which is not to find all of the security flaws (a hopeless task) but to figure out how trustworthy the software is. To me, the main benefit of opening the code is that the trustworthiness of the code can become a matter of public debate; and the debate will be better if its participants can refer directly to the evidence.
I think you’re right: this is about increasing the opportunity to have software that does the right thing for the public. A great way to do that is to let anyone inspect, change, and republish the software, and that is the aspect of this that the “open source” movement can address. It’s unfortunate that there’s no way we can verify that ostensibly trusted software is actually running in the voting machines, but this is not anyone’s fault; it’s a side effect of limiting access to voting machines.
But there is an aspect of this that is better addressed by the “free software” movement: making the software available under a license that irrevocably grants everyone the freedoms of free software would let every county be more self-sufficient when bugs are found or new features are needed. (Illinois, my home state, might someday switch to an instant run-off voting system, and we would need voting machines that can count votes under an IRV scheme.) The free software movement is much more adamant about preventing proprietarization through software patents and non-copylefted derivatives; it is, after all, the movement famous for inventing “copyleft.” That’s why, when it comes to licensing, I think we’re all better off with a strong copylefted free software license for our electronic voting machines than with a merely open source one.
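For readers unfamiliar with how an IRV count works, here is a minimal sketch. This is purely illustrative (the function name and data layout are my own, not any certified counting scheme): each ballot is a ranked list of candidates, and the count repeatedly eliminates the candidate with the fewest first-choice votes until someone holds a majority.

```python
from collections import Counter

def irv_winner(ballots):
    """Instant run-off count: each ballot is a list of candidates in
    preference order. Repeatedly eliminate the candidate with the fewest
    first-choice votes until one candidate has a majority of the
    remaining (non-exhausted) ballots."""
    ballots = [list(b) for b in ballots]
    while True:
        # Tally the current first choice on every non-exhausted ballot.
        tally = Counter(b[0] for b in ballots if b)
        total = sum(tally.values())
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > total:          # strict majority
            return leader
        # Eliminate the weakest candidate (ties broken arbitrarily here;
        # a real scheme would need an explicit tie-breaking rule).
        loser = min(tally, key=lambda c: tally[c])
        ballots = [[c for c in b if c != loser] for b in ballots]

# Example: A leads on first choices, but once C is eliminated,
# C's supporters transfer to B, who then wins with a majority.
example = [["A"]] * 4 + [["B"]] * 3 + [["C", "B"]] * 2
print(irv_winner(example))  # → B
```

Note how the winner can differ from a simple plurality count, which is exactly why a county adopting IRV would need new counting software.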
In the closed-source case, shouldn’t we be asking not just who has audited the software, but to what degree, how the software was constructed (formal design methods?), and how it was tested?
It’s always struck me that the (IIRC) Gartner survey that found the number of bugs/vulnerabilities in closed source and open source to be about the same assumed that the closed source was properly designed, developed, debugged and documented (4D). Is that the case here?
A few years back I worked at a British aircraft manufacturer (though not on flight-critical software), and the documentation and testing in what we did there was appalling. I cannot believe that all software is produced the 4D way. It certainly isn’t where I work.
As I argued in my original post, I don’t buy this “fox guarding the henhouse” argument. Sure, in PRINCIPLE people could be auditing the code, but in PRACTICE they’re not. So nearly all the information you have is the result of formal audits in any case.
I agree that the validity of the running code (as opposed to the source code that was audited) is an important issue, but it seems to me that this is a result of the voting machines being delivered as black boxes by vendors, and isn’t really affected by whether the code is in principle open. The election authorities don’t have the technical sophistication to install some source-code version even if there were one.
Jvance: It’s not at all clear that this is true. As I argued in my original post, there are a fair number of simple bugs in commonly used programs that have survived for years. There’s no good evidence that auditing can remove enough bugs to make them very much harder to find.
I think one important point is that bugs can still lurk in there, but if expert programmers cannot find them, it would take someone of exceptional skill to exploit them. And if a bug did get exploited, a fix would probably come out much more quickly than with closed software.
“To me, the main benefit of opening the code is that the trustworthiness of the code can become a matter of public debate; and the debate will be better if its participants can refer directly to the evidence.”
Absolutely correct. With closed source, it’s the fox guarding the henhouse, regardless of the number of auditors who have looked at what is *supposed* to be the production code.
Besides “open audit” opportunities, we need some way to ensure that the published code is the code actually executing. How can poll workers know that a machine is running the officially approved code? We need an “open distribution” method that is secure, too.
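One common building block for this kind of “open distribution” is publishing a cryptographic digest of the approved build, so anyone can check that a binary matches it. A minimal sketch (the function names and workflow are hypothetical, and note the caveat that a compromised machine could lie about its own files, so a real procedure would need independent read-out of the machine’s storage):

```python
import hashlib

def file_sha256(path, chunk_size=65536):
    """Compute the SHA-256 digest of a file, reading in chunks so
    large binaries don't have to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_published(path, published_digest):
    """Check a machine's binary against the officially published digest
    (hypothetical workflow: the digest would be posted publicly as part
    of certification)."""
    return file_sha256(path) == published_digest.lower()
```

Checking the digest on an independent, trusted computer, rather than asking the voting machine to report it, is the essential part; the hash by itself proves nothing if the machine doing the hashing is the one under suspicion.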
Fred