
Archives for October 2010

Hacking the D.C. Internet Voting Pilot

The District of Columbia is conducting a pilot project to allow overseas and military voters to download and return absentee ballots over the Internet. Before opening the system to real voters, D.C. has been holding a test period in which they've invited the public to evaluate the system's security and usability.

This is exactly the kind of open, public testing that many of us in the e-voting security community — including me — have been encouraging vendors and municipalities to conduct. So I was glad to participate, even though the test was launched with only three days' notice. I assembled a team from the University of Michigan, including my PhD students, Eric Wustrow and Scott Wolchok, and Dawn Isabel, a member of the University of Michigan technical staff.

Within 36 hours of the system going live, our team had found and exploited a vulnerability that gave us almost total control of the server software, including the ability to change votes and reveal voters’ secret ballots. In this post, I’ll describe what we did, how we did it, and what it means for Internet voting.

D.C.'s pilot system

The D.C. system is built around an open source server-side application developed in partnership with the TrustTheVote project. Under the hood, it looks like a typical web application. It's written using the popular Ruby on Rails framework and runs on top of the Apache web server and MySQL database.

Absentee overseas voters receive a physical letter in the mail instructing them to visit a D.C. web site, http://www.dcboee.us/DVM/, and log in with a unique 16-character PIN. The system gives voters two options: they can download a PDF ballot and return it by mail, or they can download a PDF ballot, fill it out electronically, and then upload the completed ballot as a PDF file to the server. The server encrypts uploaded ballots and saves them in encrypted form, and, after the election, officials transfer them to a non-networked PC, where they decrypt and print them. The printed ballots are counted using the same procedures used for mail-in paper ballots.

A small vulnerability, big consequences

We found a vulnerability in the way the system processes uploaded ballots. We confirmed the problem using our own test installation of the web application, and found that we could gain the same access privileges as the server application program itself, including read and write access to the encrypted ballots and database.

The problem, which geeks classify as a “shell-injection vulnerability,” has to do with the ballot upload procedure. When a voter follows the instructions and uploads a completed ballot as a PDF file, the server saves it as a temporary file and encrypts it using a command-line tool called GnuPG. Internally, the server executes the command gpg with the name of this temporary file as a parameter: gpg […] /tmp/stream,28957,0.pdf.

We realized that although the server replaces the filename with an automatically generated name (“stream,28957,0” in this example), it keeps whatever file extension the voter provided. Instead of a file ending in “.pdf,” we could upload a file with a name that ended in almost any string we wanted, and this string would become part of the command the server executed. By formatting the string in a particular way, we could cause the server to execute commands on our behalf. For example, the filename “ballot.$(sleep 10)pdf” would cause the server to pause for ten seconds (executing the “sleep 10” command) before responding. In effect, this vulnerability allowed us to remotely log in to the server as a privileged user.
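To make this class of bug concrete, here is a minimal sketch, written in Python rather than the Ruby on Rails code the system actually uses, of the vulnerable pattern and of the kind of change that avoids it. The file names, the recipient key name ("election-key"), and the gpg arguments are illustrative placeholders, not the system's real values.

    # Minimal sketch of a shell-injection bug: user-controlled text is pasted into a
    # command line that is handed to /bin/sh. Placeholders only; not the D.C. code.
    import subprocess

    def encrypt_upload_vulnerable(uploaded_name):
        # Keep whatever "extension" the voter supplied, as the D.C. server did.
        extension = uploaded_name.split(".", 1)[1] if "." in uploaded_name else "pdf"
        tmp_path = "/tmp/stream,28957,0." + extension
        # shell=True means "$(...)" in the extension is executed as a command, so a
        # file named "ballot.$(sleep 10)pdf" makes the server pause for ten seconds.
        subprocess.run("gpg --encrypt --recipient election-key " + tmp_path, shell=True)

    def encrypt_upload_safer(tmp_path):
        # Passing an argument list and skipping the shell treats the file name purely
        # as data, never as part of a command (validating the name is still advisable).
        subprocess.run(["gpg", "--encrypt", "--recipient", "election-key", tmp_path])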

Our demonstration attacks

D.C. launched the public testbed server on Tuesday, September 28. On Wednesday afternoon, we began to exploit the problem we found to demonstrate a number of attacks:

  • We collected crucial secret data stored on the server, including the database username and password as well as the public key used to encrypt the ballots.
  • We modified all the ballots that had already been cast to contain write-in votes for candidates we selected. (Although the system encrypts voted ballots, we simply discarded the encrypted files and replaced them with different ones that we encrypted using the same key; see the sketch after this list.) We also rigged the system to replace future votes in the same way.
  • We installed a back door that let us view any ballots that voters cast after our attack. This modification recorded the votes, in unencrypted form, together with the names of the voters who cast them, violating ballot secrecy.
  • To show that we had control of the server, we left a “calling card” on the system's confirmation screen, which voters see after voting. After 15 seconds, the page plays the University of Michigan fight song. Here's a demonstration.
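The ballot-replacement step deserves a word of explanation, because encrypting ballots with a public key provides confidentiality but not integrity: anyone who obtains the public key can produce a ciphertext that decrypts to whatever ballot they like. Here is a rough sketch of the idea, in Python with placeholder paths and key name rather than our actual attack code:

    # Sketch of ballot replacement: discard the stored ciphertext and substitute a
    # forged ballot encrypted under the same public key the server uses. Paths and
    # the recipient key name are illustrative placeholders.
    import os
    import subprocess

    def replace_ballot(stored_ballot_path, forged_ballot_pdf):
        # Encrypt the forged PDF with the election public key taken from the server.
        subprocess.run(
            ["gpg", "--encrypt", "--recipient", "election-key",
             "--output", stored_ballot_path + ".new", forged_ballot_pdf],
            check=True,
        )
        # Overwrite the stored ciphertext; officials decrypting after the election
        # would see only the forged ballot.
        os.replace(stored_ballot_path + ".new", stored_ballot_path)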

Stealthiness wasn't our main objective, and our demonstration had a much greater footprint inside the system than a real attack would need. Nevertheless, we did not immediately announce what we had done, because we wanted to give the administrators an opportunity to exercise their intrusion detection and recovery processes — an essential part of any online voting system. Our attack remained active for two business days, until Friday afternoon, when D.C. officials took down the testbed server after several testers pointed out the fight song.

Based on this experience and other results from the public tests, the D.C. Board of Elections and Ethics has announced that they will not proceed with a live deployment of electronic ballot return at this time, though they plan to continue to develop the system. Voters will still be able to download and print ballots to return by mail, which seems a lot less risky.

D.C. officials brought the testbed server back up today (Tuesday) with the electronic ballot return mechanism disabled. The public test period will continue until Friday, October 8.

What this means for Internet voting

The specific vulnerability that we exploited is simple to fix, but it will be vastly more difficult to make the system secure. We've found a number of other problems in the system, and everything we've seen suggests that the design is brittle: one small mistake can completely compromise its security. I described above how a small error in file-extension handling left the system open to exploitation. If this particular problem had not existed, I'm confident that we would have found another way to attack the system.

None of this will come as a surprise to Internet security experts, who are familiar with the many kinds of attacks that major web sites suffer from on a daily basis. It may someday be possible to build a secure method for submitting ballots over the Internet, but in the meantime, such systems should be presumed to be vulnerable based on the limitations of today's security technology.

We plan to write more about the problems we found and their implications for Internet voting in a forthcoming paper.


Professor J. Alex Halderman is a computer scientist at the University of Michigan.

NPR Gets it Wrong on the Rutgers Tragedy: Cyberbullying is Unique

On Saturday, NPR’s Weekend All Things Considered ran a story by Elizabeth Blair called “Public Humiliation: It’s Not The Web, It’s Us” [transcript]. The story purported to examine the phenomenon of internet-mediated public humiliation in the context of last week’s tragic suicide of Tyler Clementi, a Rutgers student who was secretly filmed having a sexual encounter in his dorm room. The video was then redistributed online by the classmates who created it. The tragedy is heartbreaking to many locals who have friends or family at Rutgers, especially to those of us in the technology policy community who are again reminded that so-called “cyberbullying” can be a life-or-death policy issue.

Thus, I was disappointed that the All Things Considered piece decided to view the issue through the lens of “public humiliation,” opening with a sampling of reality TV clips and the claim that they are significantly parallel to this past week’s tragedy. This is just not the case, for reasons that are widely known to people who study online bullying. Reality TV is about participants voluntarily choosing to expose themselves in an artificial environment, whereas cyberbullying is about victims being attacked against their will in the real world, in ways that reverberate even longer and more deeply than traditional bullying. If Elizabeth Blair or her editors had done the most basic survey of the literature or of the experts, this would have been clear.

The oddest choice of interviewees was Tavia Nyong’o, a professor of performance studies at New York University. I disagree with his claim that the TV show Glee has something significant to say about the topic, but more disturbing is his statement about what we should conclude from the event:

“[My students and I] were talking about the misleading perception, because there’s been so much advances in visibility, there’s no cost to coming out anymore. There’s a kind of equal opportunity for giving offense and for public hazing and for humiliating. We should all be able to deal with this now because we’re all equally comfortable in our own skins. Tragically, what Rutgers reveals is that we’re not all equally comfortable in our own skins.”

I’m not sure if it’s as obvious to everyone else why this is absolutely backward, but I was shocked. What Rutgers reveals is, yet again, that new technologies can facilitate new and more creative ways of being cruel to each other. What Rutgers reveals is that although television may give us ways to examine the dynamics of privacy and humiliation, we have a zone of personal privacy that still matters deeply. What Rutgers tells us is that cyberbullying has introduced new dynamics into the way that young people develop their identities and deal with hateful antagonism. Nothing about Glee or reality TV tells us that we shouldn’t be horrified when someone secretly records and distributes video of our sexual encounters. I’m “comfortable in my own skin” but I would be mortified if my sexual exploits were broadcast online. Giving Nyong’o the benefit of the doubt, perhaps his quote was taken out of context, or perhaps he’s just coming from a culture at NYU that differs radically from the experience of somewhere like middle America, but I don’t see how Blair or her editors thought that this way of constructing the piece was justifiable.

The name of the All Things Considered piece was, “It’s Not The Web, It’s Us.” The reality is that it’s both. Humiliation and bullying would of course exist regardless of the technology, but new communications technologies change the balance. For instance, the Pew Internet & American Life Project has observed how digital technologies are uniquely invasive, persistent, and distributable. Pew has also pointed out (as have many other experts) that computer-mediated communications can often have the effect of disinhibition — making attackers comfortable with doing what they would otherwise never do in direct person-to-person contact. The solution may have more to do with us than the technology, but our solutions need to be informed by an understanding of how new technologies alter the dynamic.

General Counsel's Role in Shoring Up Authentication Practices Used in Secure Communications

Business conducted over the Internet has benefited hugely from web-based encryption. Retail sales, banking transactions, and secure enterprise applications have all flourished because of the end-to-end protection offered by encrypted Internet communications. An encrypted communication, however, is only as secure as the process used to authenticate the parties doing the communicating. The major Internet browsers all currently use the Certificate Authority Trust Model to verify the identity of websites on behalf of end-users. (The Model involves third parties, known as certificate authorities or “CAs,” issuing digital certificates to website operators; browsers ship with the root certificates of the CAs they trust, which lets the end-user’s computer cryptographically verify that a website’s certificate was issued by one of those trusted CAs.) The CA Trust Model has recently come under fire from the information security community because of technical and institutional defects. Steve Schultze and Ed Felten, in previous posts here, have outlined the Model’s shortcomings and examined potential fixes. The vulnerabilities are a big deal because of the potential for man-in-the-middle wiretap exploits as well as imposter website scams.
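To make the Model concrete, here is a minimal sketch, in Python rather than any browser’s actual code, of a TLS client relying on its platform’s list of trusted CA roots; the host name is just an illustrative placeholder. The handshake succeeds only if the site presents a certificate chaining up to one of those roots, and any one of the many trusted CAs can vouch for any site.

    # Sketch of the CA Trust Model from the client's side: trust whatever the default
    # CA bundle trusts. "www.example.com" is only an illustrative host name.
    import socket
    import ssl

    host = "www.example.com"
    context = ssl.create_default_context()  # loads the platform's trusted CA roots

    with socket.create_connection((host, 443)) as sock:
        # wrap_socket() performs the TLS handshake and raises an error if the server's
        # certificate does not chain to a trusted CA or does not match the host name.
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
            print("Issuer: ", cert["issuer"])   # which CA vouched for this site
            print("Subject:", cert["subject"])  # whom the certificate identifies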

One of the core problems with the CA Trust Model is that there are just too many CAs. Although organizations can configure their browser platforms to trust fewer CAs, the problem of separating the trustworthy CAs from the untrustworthy ones remains. A good review of trustworthiness would start with:

  • examining the civil and criminal track record of CAs and their principals;
  • identifying the geographic locations where CAs are resident;
  • determining in which legal jurisdictions the CAs operate;
  • determining which governmental actors may be able to coerce a CA to issue bogus certificates, behind the scenes, for the purpose of carrying out surveillance;
  • analyzing the loss limitation and indemnity provisions found in each CA’s Certification Practice Statement, or CPS; and
  • nailing down which CAs engage in cross-certification.

These are just a few of the factors that need to be weighed from the standpoint of an organization as an end-user. There is an entirely separate legal analysis to be done from the standpoint of an organization as a website operator and purchaser of SSL certificates (which will be the subject of a future post).
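How the trusted list gets trimmed varies by browser and operating system, but the underlying idea can be sketched in a few lines of Python: an application (or a managed browser deployment) trusts only a CA bundle the organization has vetted. The bundle file name and host below are hypothetical.

    # Sketch: trust only an organization-curated CA bundle instead of the full default
    # list. "vetted-cas.pem" is a hypothetical file holding only the vetted CA roots.
    import socket
    import ssl

    context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)  # verification and hostname checks on
    context.load_verify_locations(cafile="vetted-cas.pem")

    host = "portal.example.com"  # illustrative host name
    with socket.create_connection((host, 443)) as sock:
        # The handshake now fails for any site whose certificate does not chain to one
        # of the vetted CAs, however many CAs the operating system would have trusted.
        with context.wrap_socket(sock, server_hostname=host) as tls:
            print("Connected; issuer:", tls.getpeercert()["issuer"])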

The bottom line is that the tasks involved in evaluating CAs are not ones that IT departments, acting alone, have sufficient resources to perform. I recently posted on my law firm’s blog a short analysis of why it’s time for General Counsel to weigh in on the authentication practices associated with secure communications. The post resonated in the legal blogosphere and was featured in write-ups on Law.Com’s web-magazine “Corporate Counsel” and 3 Geeks and a Law Blog. The sentiment seems to be that this is an area ripe for remedial measures, and that a collaborative approach that leverages the resources and expertise of General Counsel is in order. Could it be that the deployment of the CA Trust Model is about to get a long overdue shakeup?

Did a denial-of-service attack cause the flash crash? Probably not.

Last June I wrote about an analysis from Nanex.com claiming that a kind of spam called “quote stuffing” on the NYSE network may have caused the “flash crash” of shares on the New York Stock Exchange, May 6, 2010. I wrote that this claim was “interesting if true, and interesting anyway”.

It turns out that “A Single Sale Worth $4.1 Billion Led to the ‘Flash Crash’”, according to a report by the SEC and the CFTC.

The SEC’s report says that no, quote-stuffing did not cause the crash. The report says,

It has been hypothesized that these delays are due to a manipulative practice called “quote-stuffing” in which high volumes of quotes are purposely sent to exchanges in order to create data delays that would afford the firm sending these quotes a trading advantage.

Our investigation to date reveals that the largest and most erratic price moves observed on May 6 were caused by withdrawals of liquidity and the subsequent execution of trades at stub quotes. We have interviewed many of the participants who withdrew their liquidity, including those who were party to significant numbers of buys and sells that occurred at stub quote prices. …[E]ach market participant had many and varied reasons for its specific actions and decisions on May 6. … [T]he evidence does not support the hypothesis that delays in the CTS and CQS feeds triggered or otherwise caused the extreme volatility in security prices observed that day.

Nevertheless … the SEC staff will be working with the market centers in exploring their members’ trading practices to identify any unintentional or potentially abusive or manipulative conduct that may cause such system delays that inhibit the ability of market participants to engage in a fair and orderly process of price discovery.

Given this evidence, I guess we can simplify “interesting if true, and interesting anyway” to just “interesting anyway”.