This summer, the California Secretary of State commissioned a first-ever “Top to Bottom Review” of all the electronic voting systems used in the state. In August, the results of the first round of review were published, finding significant security vulnerabilities and a variety of other problems with the three vendors reviewed at the time. (See the Freedom to Tinker coverage for additional details.) The ES&S InkaVote Plus system, used in Los Angeles County, wasn’t included in this particular review. (The InkaVote is apparently unrelated to the ES&S iVotronic systems used elsewhere in the U.S.) The reports on InkaVote are now public.
(Disclosure: I was a co-author of the Hart InterCivic source code report, released by the California Secretary of State in August. I was uninvolved in the current round of investigation and have no inside information about this work.)
First, it’s worth a moment to describe what InkaVote actually is. It’s essentially a precinct-based optical-scan paper ballot system, with a template-like ballot holder comparable to the old Votomatic punch-card systems. As such, even if the tabulation computers are completely compromised, the paper ballots remain behind and can be retabulated, whether mechanically or by hand.
The InkaVote reports represent work done by a commercial firm, atsec, whose primary business is performing security evaluations against a variety of standards, such as FIPS 140 or the ISO Common Criteria. The InkaVote reports are quite short (or at least the public versions are short); in effect, we only get to see the high-level bullet points rather than detailed explanations of what the reviewers found. Furthermore, their analysis was apparently compressed into an impossible two-week period, meaning there are likely additional issues that exist but went undiscovered simply for lack of time. Despite this, we still get a strong sense of how vulnerable these systems are.
From the source code report:
The documentation provided by the vendor does not contain any test procedure description; rather, it provides only a very abstract description of areas to be tested. The document mentions test cases and test tools, but these have not been submitted as part of the TDP and could not be considered for this review. The provided documentation does not show evidence of “conducting of tests at every level of the software structure”. The TDP and source code did not contain unit tests, or any evidence that the modules were developed in such a way that program components were tested in isolation. The vendor documentation contains a description of cryptographic algorithms that is inconsistent with standard practices and represented a serious vulnerability. No vulnerability assessment was made as part of the documentation review because the attack approach could not be identified based on the documentation alone. (The source review identified additional specific vulnerabilities related to encryption).
This is consistent, for better or for worse, with what we’ve seen from the other vendors. Given that, security vulnerabilities are practically a given. So, what kinds of vulnerabilities were found?
In the area of cryptography and key management, multiple potential and actual vulnerabilities were identified, including inappropriate use of symmetric cryptography for authenticity checking (A.8), use of a very weak homebrewed cipher for the master key algorithm (A.7), and key generation with artificially low entropy which facilitates brute force attacks (A.6). In addition, the code and comments indicated that a hash (checksum) method that is suitable only for detecting accidental corruption is used inappropriately with the claimed intent of detecting malicious tampering. The Red Team has demonstrated that due to the flawed encryption mechanisms a fake election definition CD can be produced that appears genuine, see Red Team report, section A.15.
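To make the report’s distinction concrete, here is a minimal sketch (the data, key handling, and class name are hypothetical, not taken from the InkaVote code) contrasting a CRC32 checksum, which catches only accidental corruption, with a keyed MAC generated from a proper entropy source, which can catch malicious tampering:

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.zip.CRC32;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class IntegrityDemo {
    public static void main(String[] args) throws Exception {
        byte[] electionDefinition =
                "hypothetical election definition".getBytes(StandardCharsets.UTF_8);

        // A CRC32 checksum detects accidental corruption only. Anyone who can
        // modify the data can recompute the checksum, so it proves nothing
        // about authenticity.
        CRC32 crc = new CRC32();
        crc.update(electionDefinition);
        System.out.printf("CRC32:       %08x%n", crc.getValue());

        // A keyed MAC such as HMAC-SHA256 detects malicious tampering,
        // provided the key comes from a real entropy source (not, say, a
        // time-seeded PRNG) and is kept secret. Note that merely encrypting
        // the data with a symmetric cipher does not, by itself, authenticate it.
        byte[] key = new byte[32];
        new SecureRandom().nextBytes(key);
        Mac hmac = Mac.getInstance("HmacSHA256");
        hmac.init(new SecretKeySpec(key, "HmacSHA256"));
        byte[] tag = hmac.doFinal(electionDefinition);
        System.out.printf("HMAC-SHA256: %064x%n", new java.math.BigInteger(1, tag));
    }
}
```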
106 instances were identified of SQL statements embedded in the code with no evidence of sanitation of the data before it is added to the SQL statement. It is considered a bad practice to build the SQL statements at runtime; the preferred method is to use predefined SQL statements using bound variables. A specific potential vulnerability was found and documented in A.10, SQL Injection.
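For the curious, the distinction the reviewers draw looks roughly like this in Java (the table and column names here are hypothetical, not the vendor’s schema):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class PrecinctLookup {
    // Vulnerable: attacker-controlled input is concatenated into the
    // statement text, so input like "x' OR '1'='1" changes the query itself.
    static ResultSet lookupUnsafe(Connection conn, String precinctId) throws SQLException {
        return conn.createStatement().executeQuery(
                "SELECT * FROM ballots WHERE precinct = '" + precinctId + "'");
    }

    // Preferred: a predefined statement with a bound variable. The driver
    // transmits the value strictly as data, never as SQL text.
    static ResultSet lookupSafe(Connection conn, String precinctId) throws SQLException {
        PreparedStatement ps = conn.prepareStatement(
                "SELECT * FROM ballots WHERE precinct = ?");
        ps.setString(1, precinctId);
        return ps.executeQuery();
    }
}
```

With the bound variable, no input string can change the shape of the query, which is exactly why the reviewers call prepared statements the preferred method.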
Ahh, lovely (or, I should say, oy gevaldik). Curiously, the InkaVote tabulation application appears to have been written in Java – a good thing, because it eliminates the possibility of buffer overflows. Nonetheless, writing this software in a “safe” language is insufficient to yield a secure system.
The reviewer noted the following items as impediments to an effective security analysis of the system:
- Lack of design documentation at appropriate levels of detail.
- Design does not use privilege separation, so all code in the entire application is potentially security critical.
- Unhelpful or misleading comments in the code.
- Potentially complex data flow due to exception handling.
- Subjectively, large amount of source code compared to the functionality implemented.
The code constructs used were generally straightforward and easy to follow on a local level. However, the lack of design documentation made it difficult to globally analyze the system.
It’s clear that none of the voting system vendors that have been reviewed so far have had the engineering mandate (or the engineering talent) to build secure software systems that are suitably designed to resist threats that are reasonable to expect in an election setting. Instead, these vendors have produced systems that are “good enough” to sell, relying on external tamper-resistance mechanisms and human procedures. The Red Team report offers some insight into the value of these kinds of mitigations:
In the physical security testing, the wire and tamper proof paper seals were easily removed without damage to the seals using simple household chemicals and tools and could be replaced without detection (Ref item A.1 in the Summary Table). The tamper proof paper seals were designed to show evidence of removal and did so if simply peeled off but simple household solvents could be used to remove the seal unharmed to be replaced later with no evidence that it had been removed. Once the seals are bypassed, simple tools or easy modifications to simple tools could be used to access the computer and its components (Ref A.2 in summary). The key lock for the Transfer Device was unlocked using a common office item without the special ‘key’ and the seal removed. The USB port may then be used to attach a USB memory device which can be used as part of other attacks to gain control of the system. The keyboard connector for the Audio Ballot unit was used to attach a standard keyboard which was then used to get access to the operating system (Ref A.10 in Summary) without reopening the computer.
The seal used to secure the PBC head to the ballot box provided some protection but the InkaVote Plus Manual (UDEL) provides instructions for installing the seal that, if followed, will allow the seal to be opened without breaking it (Ref A.3 in the Summary Table). However, even if the seals are attached correctly, there was enough play and movement in the housing that it was possible to lift the PBC head unit out of the way and insert or remove ballots (removal was more difficult but possible). [Note that best practices in the polling place which were not considered in the security test include steps that significantly reduce the risk of this attack succeeding but this weakness still needs to be rectified.]
I’ll leave it as an exercise to the reader to determine what the “household solvents” or “common office item” must be.
Tel> In either case, an unhandled or badly handled exception can easily have security implications (usually not as bad as running arbitrary code, but not to be ignored either).
An absurd understatement. All badly handled exceptions can have security implications, not merely those involving buffer sizes. And it’s difficult to imagine an arbitrary code exec vulnerability in Java resulting from a buffer-related exception. Trusting a deserialized object, sure. Overlong string, no, not without a major flaw in the VM.
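To sketch the deserialization case antibozo concedes (hypothetical method, not from any voting system’s code): the danger is that ObjectInputStream instantiates whatever serializable classes the byte stream names before any cast can be checked.

```java
import java.io.InputStream;
import java.io.ObjectInputStream;

public class DeserializeDemo {
    // Dangerous when the stream is attacker-controlled: readObject()
    // instantiates whatever serializable classes the byte stream names and
    // runs their readObject()/readResolve() hooks before the caller gets a
    // chance to check the result. With suitable "gadget" classes on the
    // classpath, this can be escalated to arbitrary code execution.
    static Object readUntrusted(InputStream untrusted) throws Exception {
        ObjectInputStream in = new ObjectInputStream(untrusted);
        return in.readObject();
    }
}
```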
Tel> There are plenty of Java bytecode obfuscators out there, also plenty of bytecode reverse compilers.
I’ve audited a fair amount of Java bytecode in the past, and I’ve seen obfuscation only once, in a crypto library from RSA embedded in what I was auditing. On the other hand, I’ve heard a number of Java coders express the mistaken belief that vulnerabilities I found in their source code would be hard for an attacker to find because they were delivering only Java bytecode–this, even after I had found the vulnerabilities by decompiling that same bytecode.
Tel> However, reverse engineering i386 binaries is a very common practice,
Yes, that’s true. But it is trivially easy to decompile unobfuscated Java, and doing so usually yields the original line numbers and variable and method names to go along with the complete code. This is not remotely the case with x86 (or any other) machine code.
Tel> the protection provided by delivering a binary without source code is only minimal.
The protection is not minimal–ultimately it’s non-existent because even if your compiler output is completely obfuscated, *someone* has access to the source. The difference is the ease and speed with which the reverse engineer can take a deliverable and start finding vulnerabilities instead of crawling through assembly code reconstructing stack frames.
The fact that I’m basically having to repeat my earlier comment here in greater detail implies that you didn’t read my earlier comment very carefully.
Java (and similar languages) don’t eliminate the possibility of buffer overflows; they merely change the effect of the overflow. True, Java will not execute arbitrary code, and that can generally be seen as a good thing. However, Java will either generate an exception or consume all available memory growing the buffer (and then generate an exception). In either case, an unhandled or badly handled exception can easily have security implications (usually not as bad as running arbitrary code, but not to be ignored either).
In summary, don’t trust the language to save you from buggy code.
There are plenty of Java bytecode obfuscators out there, also plenty of bytecode reverse compilers. However, reverse engineering i386 binaries is a very common practice, so the protection provided by delivering a binary without source code is only minimal.
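A minimal sketch of Tel’s point, assuming a hypothetical tally array (this is not the InkaVote code): the out-of-bounds write that would silently corrupt memory in C instead throws in Java, and what matters for security is what the catch block then does.

```java
public class BoundsDemo {
    public static void main(String[] args) {
        int[] tally = new int[10];
        try {
            // In C this out-of-bounds write could silently corrupt adjacent
            // memory; the JVM instead throws ArrayIndexOutOfBoundsException.
            tally[10] = 1;
        } catch (ArrayIndexOutOfBoundsException e) {
            // A catch-all that merely logs and continues turns the failure
            // into silent misbehavior (for example, a vote that is never
            // counted), which is exactly the kind of badly handled exception
            // Tel describes.
            System.err.println("tally update failed: " + e);
        }
    }
}
```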
antibozo> Of course it should depend on that.
Shouldn’t! Shouldn’t! Terrible typo…
Spudz> Your security model should not depend on attackers not having the source code
Of course it should depend on that. I did say “amateurs”. These folks were using a home-grown cipher, so we know they’re already in that camp.
As for jar-signing, ultimately the VM is under your control.
Your security model should not depend on attackers not having the source code though, and Java provides security mechanisms you can use to prevent substitution of a malicious version of a class in a running system (jar-signing etc.)
Public algorithms, secret keys.
Another side effect of using Java is that source is generally easy to reconstruct from Java bytecode; typically this includes all original variable and method names, line numbers, etc., since by default Java bytecode carries lots of good metadata–a necessity for linking against .jar files. It is possible to strip some metadata and obfuscate the bytecode, but this is relatively rare. A decent decompiler such as jad usually produces very complete, readable source code from .class files. Usually the resulting source is pristine enough that it can be recompiled to a new class file, which makes experimentation especially easy–modify the reconstructed source, recompile it, and put the new .class file in CLASSPATH ahead of the original to override the class’s behavior.
The security implication here is that if the .jar or .class files are ever exposed, there’s a decent chance that the source is then trivially reconstructable and then usable for attack analysis. This is much harder in, say, C/C++ (though not impossible). It’s worth noting because a lot of amateurs think that providing only compiled Java protects against source analysis, which is dead wrong.
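To make the classpath trick concrete, here is a hypothetical example (the class name BallotCounter and the jar names are invented, not taken from any real system): a class reconstructed with jad, given a one-line edit, recompiled, and placed ahead of the vendor’s jar on the classpath.

```java
// Hypothetically reconstructed with "jad BallotCounter.class", edited, and
// recompiled. Launching with the patched directory first on the classpath,
//   java -cp ./patched:vendor.jar TabulatorMain   (Unix classpath syntax)
// loads this version instead of the original, absent jar-signing checks.
public class BallotCounter {
    public int count(int[] votes) {
        int total = 0;
        for (int v : votes) {
            total += v;
        }
        return total + 100; // the attacker's one-line edit: pad the total
    }
}
```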
Here’s another hack, this one on an OCR voting system that was about to be used in the upcoming state parliament elections in Hamburg, Germany. In this system, you mark your ballot with an electronic pen that records the vote. The paper ballot was to be discarded afterwards. Only random samples from some polling stations would be hand-counted and compared to the electronic results. No procedures were defined for the case where the electronic result differed from the paper one, since the electronic count was considered superior to manual counting.
The article is in German, but the video shows pretty well what they did:
http://chaosradio.ccc.de/ctv099.html
The main difficulty with the system is that the voter can’t see what vote the pen really recorded. So another, easier attack on that system would be to smear something on the camera lens of the pen while you are in the booth. Voters after you will think they voted, although the pen didn’t record anything.
Happy Ending: Recently a court ruled that the system was not to be used in the upcoming elections.