February 4, 2008

Internet Voting

(or, how I learned to stop worrying and love having the whole world know exactly how I voted)

Tomorrow is “Super Tuesday” in the United States. Roughly half of the delegates to the Democratic and Republican conventions will be decided tomorrow, and the votes will be cast either in a polling place or through the mail. Except for the votes cast online. Yes, over the Internet.

The Libertarian Party of Arizona is conducting its entire primary election online. Arizona’s Libertarian voters who wish to participate in its primary election have no choice but to vote online. Also, the Democratic Party is experimenting with online voting for overseas voters.

Abridged history: The U.S. military has been pushing hard to get something like this in place, most famously commissioning a system called “SERVE”. To their credit, they hired several smart security people to evaluate the system’s security. Four of those experts published an independent report that was strongly critical of the system, notably pointing out the obvious problem with any such scheme: home computers are notoriously insecure. It’s easy to imagine viruses and whatnot engineered to watch for attempts to use the computer to vote and to tamper with those votes, transparently shifting the outcome of the election. The military killed the program, later replacing it with a vote-by-fax scheme. It’s unclear whether that represents a security improvement, but it probably makes it easier to deal with the diversity of ballot styles.

Internet voting has also been used in a variety of other places, including Estonia. An Estonian colleague of mine demonstrated the system for me. He inserted his national ID card (a smartcard) into a PCMCIA card reader in his laptop. This allowed him to authenticate to an official government web site where he could then cast his vote. He was perfectly comfortable letting me watch the whole process because he said that he could go back and cast his vote again later, in private, overriding the vote that I saw him cast. This scheme partly addresses the risk of voter coercion and bribery (see sidebar), but it doesn’t do anything for the insecurity of the client platform.
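
To make the override rule concrete, here is a minimal sketch (my own illustration, written for this post; it is not Estonia’s actual software) of a tally in which only each voter’s most recent ballot counts:

    from collections import Counter

    def tally_with_revoting(cast_votes):
        """Tally votes where a voter may vote repeatedly and only the most
        recent ballot counts (a rough model of the Estonian override rule).

        cast_votes: list of (timestamp, voter_id, candidate) tuples.
        """
        latest = {}  # voter_id -> (timestamp, candidate)
        for ts, voter, candidate in cast_votes:
            if voter not in latest or ts > latest[voter][0]:
                latest[voter] = (ts, candidate)
        return Counter(candidate for _, candidate in latest.values())

    votes = [(12, "alice", "candidate_A"),   # cast at noon, with a coercer watching
             (21, "alice", "candidate_B"),   # re-cast at 9pm, in private
             (13, "bob",   "candidate_A")]
    print(tally_with_revoting(votes))        # each voter counted once, final choice only

All of the coercion resistance comes from that replacement step: the coerced noon vote is quietly superseded by the private evening one, which is why a coercer’s best move is to take away the ID card that would allow the re-vote.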

Okay, then, how does the Arizona Libertarian party do it? You can visit their web site and click here to vote. I got as far as a web page, hosted by fairvotelections.com, which asked me for my name, birth year, house address number (i.e., for “600 Main Street”, I would enter “600”), and zip code. Both this web page and the page to which it “posts” its response are “http” pages. No cryptography is used, but then the information you’re sending isn’t terribly secret, either. Do they support Estonian-style vote overriding? Unclear. None of the links or accompanying information says a single word about security. The lack of SSL is strongly indicative of a lack of sophistication (although they did set a tracking cookie to an opaque value of some sort).

How about Democrats Abroad? If you go to their web site, you end up at VoteFromAbroad.org, which gives you two choices. You can download a PDF of the ballot, print it and mail or fax it in. Or, you can vote online via the Internet, which helpfully tells you:

Is it safe to vote by Internet? Secure Internet voting is powered by Everyone Counts, a leading expert in high-integrity online elections. We are using the same system the Michigan Democratic Party has used since 2004. Alternatively, you will have the option to vote by post, fax or in-person at Voting Centers in 34 countries around the world.

The registration system, unlike the Arizona one, at least operates over SSL. Regardless, it would seem to have all the same problems. In a public radio interview with Weekend America, Meredith Gowan LeGoff, vice chairman of Democrats Abroad, responded to a question about security issues:

Where I grew up, the dead still vote in Louisiana. There are lots of things that could potentially go wrong in any election. This might be a big challenge to a hacker somewhere. We’re hoping a hacker might care more about democracy than hacking. But we’re not depending on that. We have a lot of processes, and we’ve also chosen an outside vendor, Everyone Counts, to run the online voting.

The best we can do is the same as New Hampshire or Michigan or anywhere else, and that’s to have the members of our list and correspond that to who actually voted. Another important thing to remember is that our ballots are actually public. So you have to give your name and your address, so it’s not secret and it’s not anonymous. It’s probably easier to catch than someone in Mississippi going across to Alabama and trying to vote again.

Ahh, now there’s an interesting choice of security mechanisms. Every vote is public! For starters, this would be completely unacceptable in a general election. It’s debatable what value it has in a party election. Review time: there are two broadly different ways that U.S. political parties select their candidates, and it tends to vary from state to state. Caucuses, most famously used in Iowa, are a very public affair. In the Iowa Democratic caucuses, people stand up, speak their mind, and literally vote with their feet by where they sit or stand in the room. The Iowa Republicans, by contrast, cast their votes secretly. (Wikipedia has all the details.) Primary elections may or may not be anonymous, depending on the state. Regardless, for elections in areas dominated by a single political party, the primary election might as well be the final election, so it’s not hard to argue in favor of anonymous voting in primaries.

On the flip side, maybe we shouldn’t care about voter anonymity. Publish everybody’s name and how they voted in the newspaper. Needless to say, that would certainly simplify the security problem. Whether it would be good for democracy or not, however, is a completely different question.

[Sidebar: bribery and coercion. You don’t have to be a scholar of election history or a crazy conspiracy nut to believe that bribery and coercion are real and pressing issues in elections. Let’s examine the Estonian scheme, described above, for its resistance to bribery and coercion. The fundamental security mechanism used for voter privacy is the ability to vote anew, overriding an earlier vote. Thus, in order to successfully coerce a vote, the coercer must defeat the voter’s ability to vote again. Given that voting requires voters to have their national ID cards, the simplest answer would be to “help” voters vote “correctly”, then collect their ID cards, returning them after the election is over. You could minimize the voter’s inconvenience by doing this on the last possible day to cast a vote.

It’s important to point out that voting in a polling place may still be subject to bribery or coercion. For example, camera-phones with a video mode can record the act of casting a vote on an electronic voting system. Traditional secret-ballot paper systems are vulnerable to a chain-voting attack, where the voter is handed a pre-completed ballot before entering the polls, casts it, and returns to the attacker with a fresh, unvoted ballot to be filled in for the next voter in the chain. Even sophisticated end-to-end voting schemes like ThreeBallot or Punchscan may be subject to equally sophisticated attacks (see these slides from John Kelsey).]

New York Times Magazine on e-voting

This Sunday’s New York Times Magazine has an article by Clive Thompson on electronic voting machines. Freedom to Tinker’s Ed Felten is briefly quoted, as are a small handful of other experts. The article is a reasonable summary of where we are today, with paperless electronic voting systems on a downswing and optical scan paper ballots gaining in popularity. The article even conveys the importance of open source and the broader importance of transparency, i.e., convincing the loser that he or she legitimately lost the election.

A few points in the article are worth clarifying. For starters, Pennsylvania is cited as the “next Florida” – a swing state using paperless electronic voting systems whose electoral votes could well be decisive in the 2008 presidential election. In other words, Pennsylvania has the perfect recipe to cause electoral chaos this November. Pennsylvania presently bans paper-trail attachments to voting systems. While it’s not necessarily too late to reverse this decision, Pennsylvania’s examiner for electronic voting systems, Michael Shamos, has often (and rightly) criticized these continuous paper-tape systems for their ability to compromise voters’ anonymity. Furthermore, the article cites evidence from Ohio, where a claimed 20 percent of these printers jammed, presumably without voters noticing and complaining. This is also consistent with a recent PhD thesis by Sarah Everett, in which she used a homemade electronic voting system that inserted deliberate errors into the summary screen. About two-thirds of her test subjects never noticed the errors and, amazingly enough, gave the system extremely high subjective marks. If voters don’t notice errors on a summary screen, then it’s reasonable to suppose that voters would be similarly unlikely to notice errors on a printout.

Rather than adding a bad paper-tape printer, the article explains that hand-marked, optically tabulated paper ballots are presently seen as the best available voting technology. For technologies presently on the market and certified for use, this is definitely the case. A variety of assistive devices exist to help voters with low vision, no vision, and other accessibility needs, although there’s plenty of room for improvement on that score.

Unfortunately, optical scanners themselves have their own security problems. For example, the Hart InterCivic eScan (a precinct-based optical scanner) has an Ethernet port on the back, and you can pretty much just jack in and send it arbitrary commands that can extract or rewrite the firmware and/or recorded votes. This year’s studies from California and Ohio found a variety of related issues. [I was part of the source code review team for the California study of Hart InterCivic.] The only short-term way to compensate for these design flaws is to manually audit the results. This is probably the biggest issue glossed over in the article: when you have an electronic tabulation system, you must also have a non-electronic auditing procedure to verify the correctness of the electronic tabulation. This is best done by randomly sampling the ballots by hand and statistically comparing them to the official totals. In tight races, you sample more ballots to increase your confidence. Rep. Rush Holt’s bill, which has yet to come up for a vote, would require this nationwide, but it’s something that any state or county could and should institute on its own.
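
To get a feel for the scale of such an audit, here is a back-of-the-envelope sketch of my own (it is not the procedure in Holt’s bill, and real audit designs are considerably more careful): assume an outcome-changing miscount must corrupt at least half a margin’s worth of ballots, and keep sampling until the chance of missing every corrupted ballot drops below 5 percent.

    import math

    def ballots_to_sample(margin, alpha=0.05):
        """Rough audit sample size under a simplifying assumption: an
        outcome-changing miscount corrupts at least margin/2 of the ballots,
        and we want at most probability alpha of seeing none of them."""
        bad_fraction = margin / 2.0
        return math.ceil(math.log(alpha) / math.log(1.0 - bad_fraction))

    for m in (0.10, 0.02, 0.005):   # 10%, 2%, and 0.5% victory margins
        print(f"margin {m:.1%}: sample roughly {ballots_to_sample(m)} ballots")

The numbers that fall out (a few dozen ballots for a blowout, a thousand or more for a squeaker) illustrate the point above: tight races demand much larger samples.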

Lastly, the article has a fair amount of discussion of the Sarasota fiasco in November 2006, where roughly one in seven ballots cast electronically recorded no vote (an “undervote”) in the Congressional race, while far fewer undervotes appeared in other races on the same ballot. If you do any sort of statistical projection that fills in even a fraction of those undervotes at the observed ratios of the votes that were cast, then the Congressional race would have had a different winner. [I worked as an expert for the Jennings campaign in the Sarasota case. David Dill and I wrote a detailed report on the Sarasota undervote issue. It is our opinion that there is not presently any definitive explanation for the causes of Sarasota’s undervote rate and a lot of analysis still needs to be performed.]
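
The projection argument is nothing more than arithmetic. Here is a sketch using round, purely illustrative numbers (a 500-vote margin, 15,000 undervotes, a 55/45 split among the votes that were recorded); these are stand-ins for the idea, not the official figures from the race:

    def undervote_swing(margin, undervotes, loser_share, recovered_fraction):
        """Net swing toward the trailing candidate if some fraction of the
        undervotes is filled in at the observed ratio of recorded votes.
        A positive result means the projected outcome flips."""
        restored = undervotes * recovered_fraction
        net_gain_for_loser = restored * (loser_share - (1 - loser_share))
        return net_gain_for_loser - margin

    # Restoring even half the undervotes at a 55/45 split overturns a
    # 500-vote official margin.
    swing = undervote_swing(margin=500, undervotes=15_000,
                            loser_share=0.55, recovered_fraction=0.5)
    print(f"net swing beyond the official margin: {swing:+.0f} votes")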

There are three theories raised in the article to explain Sarasota’s undervote anomaly: deliberate abstention (voters deliberately choosing to leave the race blank), human factors (voters being confused by the layout of the page), and malfunctioning machines. The article offers no support for the abstention theory beyond the assertions of Kathy Dent, the Sarasota County election supervisor, and ES&S, Sarasota’s equipment vendor (neither of whom has ever offered any evidence for those assertions). Dan Rather Reports covered many of the issues that could lead to machine malfunction, including poor quality control in manufacturing. To support the human factors theory, the article only refers to “early results from a separate test by an MIT professor”, but the professor in question, Ted Selker, has never published these results. The only details I’ve ever been able to find about his experiments are in this quote from a Sarasota Herald-Tribune article:

On Tuesday [November 14, 2006], Selker set up a computer with a dummy version of the Sarasota ballot at the Boston Museum of Science to test the extent of the ballot design problems.

Twenty people cast fake ballots and two people missed the District 13 race. But the experiment was hastily designed and had too few participants to draw any conclusion, Selker said.
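
Two out of twenty is far too small a sample to pin anything down. As a quick sanity check (my own arithmetic, not the article’s or Selker’s), a 95% Wilson score interval around that observation runs roughly from 3 percent to 30 percent:

    import math

    def wilson_interval(successes, n, z=1.96):
        """95% Wilson score confidence interval for a binomial proportion."""
        p = successes / n
        denom = 1 + z**2 / n
        center = (p + z**2 / (2 * n)) / denom
        half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
        return center - half, center + half

    lo, hi = wilson_interval(2, 20)
    print(f"2 undervotes out of 20 ballots: 95% CI roughly {lo:.1%} to {hi:.1%}")

That range spans everything from a negligible ballot-design effect to something even larger than Sarasota’s roughly one-in-seven undervote rate.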

Needless to say, that’s not enough experimental evidence to support a usefully quantitative conclusion. The article also quotes Michael Shamos with some very specific numbers:

It’s difficult to say how often votes have genuinely gone astray. Michael Shamos, a computer scientist at Carnegie Mellon University who has examined voting-machine systems for more than 25 years, estimates that about 10 percent of the touch-screen machines “fail” in each election. “In general, those failures result in the loss of zero or one vote,” he told me. “But they’re very disturbing to the public.”

I would love to know where he got those numbers, since many real elections, such as the Sarasota election, seem to have yielded far larger problem rates.

For the record, it’s worth pointing out that Jennings has never conceded the election. Instead, after Florida’s courts decided to deny her motions for expert discovery (i.e., she asked the court to let her experts have a closer look at the voting machines and the court said “no”), Jennings has instead moved her complaint to the Committee on House Administration. Technically, Congress is responsible for seating its own members and can overturn a local election result. The committee has asked the Government Accountability Office to investigate further. They’re still working on it. Meanwhile, Jennings is preparing to run again in 2008.

In summary, the NYT Magazine article did a reasonable job of conveying the high points of the electronic voting controversy. There will be no surprises for anybody who follows the issue closely, and there are only a few places where the article conveys “facts” that are “truthy” without necessarily being true. If you want to get somebody up to speed on the electronic voting issue, this article makes a fine starting place.

[Irony sidebar: in the same election where Jennings lost due to the undervote anomaly, a voter initiative appeared on the ballot that would require the county to replace its touchscreen voting systems with paper ballots. That initiative passed.]

Latest voting system analysis from California

This summer, the California Secretary of State commissioned a first-ever “Top to Bottom Review” of all the electronic voting systems used in the state. In August, the results of the first round of review were published, finding significant security vulnerabilities and a variety of other problems with the three vendors reviewed at the time. (See the Freedom to Tinker coverage for additional details.) The ES&S InkaVote Plus system, used in Los Angeles County, wasn’t included in this particular review. (The InkaVote is apparently unrelated to the ES&S iVotronic systems used elsewhere in the U.S.) The reports on InkaVote are now public.

(Disclosure: I was a co-author of the Hart InterCivic source code report, released by the California Secretary of State in August. I was uninvolved in the current round of investigation and have no inside information about this work.)

First, it’s worth a moment to describe what InkaVote is actually all about.  It’s essentially a precinct-based optical-scan paper ballot system, with a template-like device, comparable to the Votomatic punch-card systems.  As such, even if the tabulation computers are completely compromised, the paper ballots remain behind with the potential for being retabulated, whether mechanically or by hand.

The InkaVote reports represent work done by a commercial firm, atsec, whose primary business is performing security evaluations against a variety of standards, such as FIPS 140 or the ISO Common Criteria. The reports are quite short (or, at least, the public versions are short); in effect, we only get to see the high-level bullet points rather than detailed explanations of what the reviewers found. Furthermore, their analysis was apparently compressed into an impossibly short two-week period, meaning there are likely additional issues that simply went undiscovered for lack of time. Despite this, we still get a strong sense of how vulnerable these systems are.

From the source code report:

The documentation provided by the vendor does not contain any test procedure description; rather, it provides only a very abstract description of areas to be tested. The document mentions test cases and test tools, but these have not been submitted as part of the TDP and could not be considered for this review. The provided documentation does not show evidence of “conducting of tests at every level of the software structure”. The TDP and source code did not contain unit tests, or any evidence that the modules were developed in such a way that program components were tested in isolation. The vendor documentation contains a description of cryptographic algorithms that is inconsistent with standard practices and represented a serious vulnerability. No vulnerability assessment was made as part of the documentation review because the attack approach could not be identified based on the documentation alone. (The source review identified additional specific vulnerabilities related to encryption).

This is consistent, for better or for worse, with what we’ve seen from the other vendors. With that track record, security vulnerabilities are practically a given. So, what kinds of vulnerabilities were found?

In the area of cryptography and key management, multiple potential and actual vulnerabilities were identified, including inappropriate use of symmetric cryptography for authenticity checking (A.8), use of a very weak homebrewed cipher for the master key algorithm (A.7), and key generation with artificially low entropy which facilitates brute force attacks (A.6). In addition, the code and comments indicated that a hash (checksum) method that is suitable only for detecting accidental corruption is used inappropriately with the claimed intent of detecting malicious tampering. The Red Team has demonstrated that due to the flawed encryption mechanisms a fake election definition CD can be produced that appears genuine, see Red Team report, section A.15.

106 instances were identified of SQL statements embedded in the code with no evidence of sanitation of the data before it is added to the SQL statement. It is considered a bad practice to build the SQL statements at runtime; the preferred method is to use predefined SQL statements using bound variables. A specific potential vulnerability was found and documented in A.10, SQL Injection.

Ahh, lovely (or, I should say, oy gevaldik). Curiously, the InkaVote tabulation application appears to have been written in Java – a good thing, because it eliminates the possibility of buffer overflows. Nonetheless, writing this software in a “safe” language is insufficient to yield a secure system.
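
On the SQL front, the fix the reviewers allude to (predefined statements with bound variables) is not exotic. Here is a minimal sketch of the difference; I am using Python and sqlite3 for brevity, with an invented table, even though the InkaVote code itself is Java, because the idea is language-independent:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE precinct_totals (precinct TEXT, candidate TEXT, votes INTEGER)")
    conn.execute("INSERT INTO precinct_totals VALUES ('001', 'A', 120)")

    precinct = "001' OR '1'='1"   # hostile input

    # Bad: building the SQL statement at runtime; the input becomes part of the query.
    unsafe = f"SELECT * FROM precinct_totals WHERE precinct = '{precinct}'"
    print(conn.execute(unsafe).fetchall())             # returns every row in the table

    # Better: a predefined statement with a bound variable; the input stays data.
    safe = "SELECT * FROM precinct_totals WHERE precinct = ?"
    print(conn.execute(safe, (precinct,)).fetchall())  # returns nothing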

The reviewer noted the following items as impediments to an effective security analysis of the system:

  • Lack of design documentation at appropriate levels of detail.
  • Design does not use privilege separation, so all code in the entire application is potentially security critical.
  • Unhelpful or misleading comments in the code.
  • Potentially complex data flow due to exception handling.
  • Subjectively, large amount of source code compared to the functionality implemented.

The code constructs used were generally straightforward and easy to follow on a local level. However, the lack of design documentation made it difficult to globally analyze the system.

It’s clear that none of the voting system vendors that have been reviewed so far have had the engineering mandate (or the engineering talent) to build secure software systems that are suitably designed to resist threats that are reasonable to expect in an election setting. Instead, these vendors have produced systems that are “good enough” to sell, relying on external tamper-resistance mechanisms and human procedures. The Red Team report offers some insight into the value of these kinds of mitigations:

In the physical security testing, the wire and tamper proof paper seals were easily removed without damage to the seals using simple household chemicals and tools and could be replaced without detection (Ref item A.1 in the Summary Table). The tamper proof paper seals were designed to show evidence of removal and did so if simply peeled off but simple household solvents could be used to remove the seal unharmed to be replaced later with no evidence that it had been removed. Once the seals are bypassed, simple tools or easy modifications to simple tools could be used to access the computer and its components (Ref A.2 in summary). The key lock for the Transfer Device was unlocked using a common office item without the special ‘key’ and the seal removed. The USB port may then be used to attach a USB memory device which can be used as part of other attacks to gain control of the system. The keyboard connector for the Audio Ballot unit was used to attach a standard keyboard which was then used to get access to the operating system (Ref A.10 in Summary) without reopening the computer.

The seal used to secure the PBC head to the ballot box provided some protection but the InkaVote Plus Manual (UDEL) provides instructions for installing the seal that, if followed, will allow the seal to be opened without breaking it (Ref A.3 in the Summary Table). However, even if the seals are attached correctly, there was enough play and movement in the housing that it was possible to lift the PBC head unit out of the way and insert or remove ballots (removal was more difficult but possible). [Note that best practices in the polling place which were not considered in the security test include steps that significantly reduce the risk of this attack succeeding but this weakness still needs to be rectified.]

I’ll leave it as an exercise to the reader to determine what the “household solvents” or “common office item” must be.

Further adventures in personal credit

In our last installment, I described how one of the mortgage vendors I was considering for the loan on my new home failed to trigger the credit alerting mechanism (Debix) to which I was signed up. Since then, I’ve learned several interesting facts. First, the way that Debix operates is that they insert a line into your credit reports which says, in effect, “you, the reader of this line, are required to call this 1-800 telephone number, prior to granting credit based on what you see in this report.” That 800-number finds its way to Debix, where a robot answers the phone and asks the human who called it for their name, organization, and the purpose of the request. Then the Debix robot calls its customer and asks for permission to authorize the request, playing back the recording made earlier.
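
In pseudocode terms, the flow looks something like the sketch below; the structure and names are my own reconstruction from the customer’s side of the phone calls, not anything Debix has published:

    def handle_credit_inquiry(record_caller, call_customer):
        """Toy model of the alert workflow described above.

        record_caller(): prompts the inquiring loan officer and returns a
                         recording of their name, organization, and purpose.
        call_customer(recording): plays that recording back to the consumer
                                  and returns True only on explicit approval.
        """
        recording = record_caller()
        approved = call_customer(recording)
        return "authorize the credit pull" if approved else "refuse the request"

    # Example wiring with canned callbacks:
    print(handle_credit_inquiry(
        record_caller=lambda: "Jane Doe, Example Bank, mortgage application",
        call_customer=lambda recording: True))   # pretend the consumer said yes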

The only thing that makes this “mandatory” is a recent law (sorry, I don’t have the citation handy) which specifies how lenders and such are required to act when they see one of these alerts in a credit report. The mechanism, aside from legal requirements, is otherwise used at the discretion of a human loan officer. This leads me to wonder whether or not the mechanism works when there isn’t a human loan officer involved. I may just need to head over to some big box store and purchase myself something with an in-store instant-approval credit card, just to see what happens. (With my new house will inevitably come a number of non-trivial expenses, and oh what great savings I can get with those insta-credit cards!)

So does the mechanism work? Yesterday morning, as I was getting into the car to go to work, my cell phone rang with an 800-number as the caller-ID. “Hello?” It was the Debix robot, asking for my approval. Debix played a recording of an apparently puzzled loan officer who identified herself as being from the bank that, indeed, I’m using for my loan. Well, that’s good. Could the loan officer have been lying? Unlikely. An identity thief isn’t really the one who gets to see the 800-number; it’s the loan officer of the bank that the identity thief is trying to defraud who makes the call. That means our prospective thief would have to pick exactly the right bank to fool me into giving my okay. Given the number of choices, the odds of the thief nailing it on the first try are pretty low. (Unless our prospective thief is clever enough to have identified a bank that’s too lazy to follow the proper procedure and call the 800-number; more on this below).

A side-effect of my last post was that it got noticed by some people inside Debix and I ended up spending some quality time with one of their people on the telephone.  They were quite interested in my experiences.  They also told me, assuming everything is working right, that there will be some additional authentication hoops that the lender is (legally) mandated to jump through between now and when they actually write out the big check. Our closing date is next week, Friday, so I should have one more post when it’s all over to describe how all of that worked in the end.

Further reading: The New York Times recently had an article (“In ID Theft, Some Victims See an Opportunity”, November 16, 2007) discussing Debix and several other companies competing in the same market. Here’s an interesting quote:

Among its peers, LifeLock has attracted the most attention — much of it negative. In radio and television ads, Todd Davis, chief executive of LifeLock, gives out his Social Security number to demonstrate his faith in the service. As a result, he has been hit with repeated identity theft attacks, including one successful effort this summer in which a check-cashing firm gave out a $500 loan to a Texas fraudster without ever checking Mr. Davis’s credit report.

Sure enough, if you go to LifeLock’s home page, you see Mr. Davis’s social security number, right up front. And, unsurprisingly, he fell victim because, indeed, fraudsters identified a loan organization that didn’t follow the (legally) mandated protocol.

How do we solve the problem? Legally mandated protocols need to become technically mandatory protocols. The sort of credit alerts placed by Debix, LifeLock, and others need to be more than just a line in the consumer’s credit file. Instead, the big-3 credit bureaus need to be (legally) required not to divulge anything beyond the credit-protection vendor’s 800-number without the explicit (technical) permission of the vendor (on behalf of the user). Doing this properly would require the credit bureaus to standardize and implement a suitable Internet-based API with all the right sorts of crypto authentication and so forth – nothing technically difficult about that. Legally, I’d imagine they’d put up more of a fight, since they may not like these startups getting in the way of their business.
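
For concreteness, here is a sketch (entirely my own invention, with made-up names and a shared-key HMAC standing in for whatever credential scheme the parties would actually negotiate) of what a technically mandatory check might look like: the protection vendor issues a short-lived, authenticated authorization only after the consumer approves, and the bureau’s software refuses to release a report without one.

    import hmac, hashlib, json, time

    SHARED_KEY = b"bureau-and-vendor-shared-secret"   # placeholder; a real design would
                                                      # use per-vendor keys or public-key
                                                      # signatures, not one shared secret

    def vendor_issue_authorization(consumer_id, requester, ttl_seconds=300):
        """Issued by the protection vendor only after the consumer says yes."""
        claim = {"consumer": consumer_id, "requester": requester,
                 "expires": int(time.time()) + ttl_seconds}
        payload = json.dumps(claim, sort_keys=True).encode()
        tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
        return payload, tag

    def bureau_release_report(payload, tag):
        """The bureau divulges nothing without a valid, unexpired authorization."""
        expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(tag, expected):
            return "refuse: bad authorization"
        claim = json.loads(payload)
        if claim["expires"] < time.time():
            return "refuse: authorization expired"
        return f"release report on {claim['consumer']} to {claim['requester']}"

    payload, tag = vendor_issue_authorization("consumer-123", "Example Bank")
    print(bureau_release_report(payload, tag))

A production design would add per-vendor key management, revocation, and audit logging, but the point is only that the bureau’s software, rather than a loan officer’s diligence, is what enforces the check.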

The place where the technical difficulty would ramp up is that the instant-credit-offering big-box stores would want to automate their side of the phone robot conversation. That would then require all these little startups to standardize their own APIs, which seems difficult when they’re all still busily inventing their own business models.

(Sidebar: I set up this Debix thing months ago. Then I got a phone call, out of the blue, asking me to remember my PIN. Momentary panic: what PIN did I use? Same as the four-digit one I use for my bank ATM? Same as the six-digit one I use for my investment broker? Same as the four-digit one used by my preferred airline’s frequent-flyer web site, which I can’t seem to change? Anyway, I guessed right. I’d love to know how many people forget.)

The ease of applying for a home loan

I’m currently in the process of purchasing a new house. I called up a well-known national bank and said I wanted a mortgage. In the space of 30 minutes, I was pre-approved, had my rates locked in, and so forth. Pretty much the only identifying information I had to provide was the employer, salary, and social security number for myself and my wife, as well as some basic stats on our investment portfolio. Interestingly, the agent said that for people in my situation (sterling credit, putting more than 20% down out of our own pocket), they believe I’m highly unlikely to ever default on the loan. As a result, they do not need me to go to the trouble of documenting my income or assets beyond what I told them over the phone. They’ll take my word for it.

(In an earlier post, I discussed my name and social security number having been stolen from where they had been kept in Ohio. Ohio gave me a free subscription to Debix, which claims to be able to intercept requests to read my credit report, calling my cell phone to ask for my permission. Why not? I signed up. Well, my cell phone never buzzed with any sort of call from Debix. Their service, whatever it does, had no effect here.)

Obviously, there’s a lot more to finalizing a loan and completing the purchase of a home than there is to getting approved for a loan and locking a rate. Nonetheless, it’s striking how little personal information I had to divulge to get this far into the game. Could somebody who knew my social security number use this mechanism to borrow money against my good credit and run away to a Caribbean island with the proceeds? I would have to hope that there’s some kind of mechanism further down the pipeline to catch such fraud, but it’s not too hard to imagine ways to game this system, given what I’ve observed so far.

Needless to say, once this home purchase is complete, I’ll be freezing my credit report. Let’s just hope the freezing mechanism is more useful than Debix’s notification system.

(Sidebar: an $18 charge appeared on my credit card last month for a car rental agency that I’ve never used, claiming to have a “swipe” of my credit card. I challenged it, so now the anti-fraud division is allegedly attempting to recover the signed charge slip from the car rental agency. The mortgage agent, mentioned above, saw a note in my credit report on this and asked me if I had “challenged my bank”. I explained the circumstances and all was well. However, it’s interesting to note that the “challenge”, as it apparently appears in my credit report, doesn’t have any indication as to what’s being challenged or how significant it might be. Again, the agent basically took my word for it.)