March 19, 2024

Security Analysis of the Dominion ImageCast X

Today, the Federal District Court for the Northern District of Georgia permitted the public release of Security Analysis of Georgia’s ImageCast X Ballot Marking Devices, a 96-page report that describes numerous security problems affecting Dominion voting equipment used in Georgia and other states.

Let’s Encrypt: Bringing HTTPS to Every Web Site

HTTPS, the cryptographic protocol used to secure web traffic as it travels across the Internet, has been in the news a lot recently. We’ve heard about security problems like Goto Fail, Heartbleed, and POODLE — vulnerabilities in the protocol itself or in specific implementations — that resulted in major security headaches. Yet the single biggest […]

Anticensorship in the Internet's Infrastructure

I’m pleased to announce a research result that Eric Wustrow, Scott Wolchok, Ian Goldberg, and I have been working on for the past 18 months: Telex, a new approach to circumventing state-level Internet censorship. Telex is markedly different from past anticensorship efforts, and we believe it has the potential to shift the balance of power in the censorship arms race.

What makes Telex different from previous approaches:

  • Telex operates in the network infrastructure — at any ISP between the censor’s network and non-blocked portions of the Internet — rather than at network end points. This approach, which we call “end-to-middle” proxying, can make the system robust against countermeasures (such as blocking) by the censor.
  • Telex focuses on avoiding detection by the censor. That is, it allows a user to circumvent a censor without alerting the censor to the act of circumvention. It complements anonymizing services like Tor (which focus on hiding with whom the user is communicating, rather than concealing the fact that the user is attempting to have an anonymous conversation) rather than replacing them.
  • Telex employs a form of deep-packet inspection — a technology sometimes used to censor communication — and repurposes it to circumvent censorship.
  • Other systems require distributing secrets, such as encryption keys or IP addresses, to individual users. If the censor discovers these secrets, it can block the system. With Telex, there are no secrets that need to be communicated to users in advance, only the publicly available client software.
  • Telex can provide a state-level response to state-level censorship. We envision that friendly countries would create incentives for ISPs to deploy Telex.

For more information, keep reading, or visit the Telex website.

The Problem

Government Internet censors generally use firewalls in their network to block traffic bound for certain destinations, or containing particular content. For Telex, we assume that the censoring government generally wants to allow Internet access (for economic or political reasons) while still preventing access to specifically blacklisted content and sites. That means Telex doesn’t help in cases where a government pulls the plug on the Internet entirely. We further assume that the censor allows access to at least some secure HTTPS websites. This is a safe assumption, since blocking all HTTPS traffic would cut off practically every site that uses password logins.


Many anticensorship systems work by making an encrypted connection (called a “tunnel”) from the user’s computer to a trusted proxy server located outside the censor’s network. This server relays requests to censored websites and returns the responses to the user over the encrypted tunnel. This approach leads to a cat-and-mouse game, where the censor attempts to discover and block the proxy servers. Users need to learn the address and login information for a proxy server somehow, and it’s very difficult to broadcast this information to a large number of users without the censor also learning it.

How Telex Works

Telex turns this approach on its head to create what is essentially a proxy server without an IP address. In fact, users don’t need to know any secrets to connect. The user installs a Telex client app (perhaps by downloading it from an intermittently available website or by making a copy from a friend). When the user wants to visit a blacklisted site, the client establishes an encrypted HTTPS connection to a non-blacklisted web server outside the censor’s network, which could be a normal site that the user regularly visits. Since the connection looks normal, the censor allows it, but this connection is only a decoy.

The client secretly marks the connection as a Telex request by inserting a cryptographic tag into the headers. We construct this tag using a mechanism called public-key steganography. This means anyone can tag a connection using only publicly available information, but only the Telex service (using a private key) can recognize that a connection has been tagged.
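To make the idea concrete, here is a minimal sketch, not the actual Telex tag construction, of how public-key steganographic tagging can work: the client performs an ephemeral Diffie-Hellman exchange against the station’s published public key and embeds its key share plus a short MAC in a connection field, and only the holder of the station’s private key can recompute the secret and recognize the tag. A real scheme must also make the key share itself indistinguishable from random bytes, which this sketch does not attempt; the field sizes and labels below are assumptions made for illustration.

```python
# Minimal sketch of public-key steganographic tagging (illustrative,
# not the actual Telex construction). Field layout, labels, and sizes
# are assumptions made for this example.
from cryptography.hazmat.primitives import constant_time, hashes, hmac, serialization
from cryptography.hazmat.primitives.asymmetric.x25519 import (
    X25519PrivateKey,
    X25519PublicKey,
)

CONTEXT = b"telex-tag-sketch"  # hypothetical domain-separation label


def client_make_tag(station_pub: X25519PublicKey) -> bytes:
    """Client: build a 48-byte tag from public information only."""
    eph = X25519PrivateKey.generate()            # fresh ephemeral key pair
    shared = eph.exchange(station_pub)           # DH against the station's public key
    mac = hmac.HMAC(shared, hashes.SHA256())
    mac.update(CONTEXT)
    key_share = eph.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw
    )
    return key_share + mac.finalize()[:16]       # 32-byte key share + 16-byte MAC


def station_check_tag(station_priv: X25519PrivateKey, field: bytes) -> bool:
    """Station: only the private-key holder can tell a tag from random bytes."""
    if len(field) != 48:
        return False
    eph_pub = X25519PublicKey.from_public_bytes(field[:32])
    shared = station_priv.exchange(eph_pub)
    mac = hmac.HMAC(shared, hashes.SHA256())
    mac.update(CONTEXT)
    return constant_time.bytes_eq(mac.finalize()[:16], field[32:])


# Usage: the client would place client_make_tag(station_pub) into an
# innocuous-looking protocol field (e.g. a handshake nonce).
if __name__ == "__main__":
    station_priv = X25519PrivateKey.generate()
    tag = client_make_tag(station_priv.public_key())
    assert station_check_tag(station_priv, tag)

    other_priv = X25519PrivateKey.generate()     # a different station's key
    assert not station_check_tag(other_priv, tag)
```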

As the connection travels over the Internet en route to the non-blacklisted site, it passes through routers at various ISPs in the core of the network. We envision that some of these ISPs would deploy equipment we call Telex stations. These devices hold a private key that lets them recognize tagged connections from Telex clients and decrypt these HTTPS connections. The stations then divert the connections to anticensorship services, such as proxy servers or Tor entry points, which clients can use to access blocked sites. This creates an encrypted tunnel between the Telex user and Telex station at the ISP, redirecting connections to any site on the Internet.
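The station’s per-connection role can be summarized as a simple dispatch on whether the tag verifies. The sketch below is illustrative only (not the real Telex station software); it assumes the station_check_tag function from the previous sketch and uses placeholder helpers for the ISP’s flow-handling machinery.

```python
# Illustrative dispatch logic for an inline Telex station (not the real
# station software). Assumes station_check_tag() from the sketch above;
# divert_to_proxy() and forward_unchanged() are hypothetical placeholders.
def handle_new_tls_connection(station_priv, flow, tag_field: bytes) -> None:
    if station_check_tag(station_priv, tag_field):
        # Tagged connection: the station takes over the session and tunnels
        # the client's traffic to an anticensorship service (a proxy or a
        # Tor entry point).
        divert_to_proxy(flow)
    else:
        # Ordinary traffic is forwarded untouched, so the station is
        # invisible to everyone who isn't using Telex.
        forward_unchanged(flow)


def divert_to_proxy(flow) -> None:         # placeholder for illustration
    print("diverting flow to proxy:", flow)


def forward_unchanged(flow) -> None:       # placeholder for illustration
    print("forwarding flow unchanged:", flow)
```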


Telex doesn’t require active participation from the censored websites, or from the non-censored sites that serve as the apparent connection destinations. However, it does rely on ISPs to deploy Telex stations on network paths between the censor’s network and many popular Internet destinations. Widespread ISP deployment might require incentives from governments.

Development so Far

At this point, Telex is a concept rather than a production system. It’s far from ready for real users, but we have developed proof-of-concept software for researchers to experiment with. So far, there’s only one Telex station, on a mock ISP that we’re operating in our lab. Nevertheless, we have been using Telex for our daily web browsing for the past four months, and we’re pleased with the performance and stability. We’ve even tested it using a client in Beijing and streamed HD YouTube videos, in spite of YouTube being censored there.

Telex illustrates how it is possible to shift the balance of power in the censorship arms race, by thinking big about the problem. We hope our work will inspire discussion and further research about the future of anticensorship technology.

You can find more information and prototype software at the Telex website, or read our technical paper, which will appear at Usenix Security 2011 in August.

Hacking the D.C. Internet Voting Pilot

The District of Columbia is conducting a pilot project to allow overseas and military voters to download and return absentee ballots over the Internet. Before opening the system to real voters, D.C. has been holding a test period in which they've invited the public to evaluate the system's security and usability.

This is exactly the kind of open, public testing that many of us in the e-voting security community — including me — have been encouraging vendors and municipalities to conduct. So I was glad to participate, even though the test was launched with only three days' notice. I assembled a team from the University of Michigan, including my PhD students, Eric Wustrow and Scott Wolchok, and Dawn Isabel, a member of the University of Michigan technical staff.

Within 36 hours of the system going live, our team had found and exploited a vulnerability that gave us almost total control of the server software, including the ability to change votes and reveal voters’ secret ballots. In this post, I’ll describe what we did, how we did it, and what it means for Internet voting.

D.C.'s pilot system

The D.C. system is built around an open source server-side application developed in partnership with the TrustTheVote project. Under the hood, it looks like a typical web application. It's written using the popular Ruby on Rails framework and runs on top of the Apache web server and MySQL database.

Absentee overseas voters receive a physical letter in the mail instructing them to visit a D.C. web site, http://www.dcboee.us/DVM/, and log in with a unique 16-character PIN. The system gives voters two options: they can download a PDF ballot and return it by mail, or they can download a PDF ballot, fill it out electronically, and then upload the completed ballot as a PDF file to the server. The server encrypts uploaded ballots and saves them in encrypted form, and, after the election, officials transfer them to a non-networked PC, where they decrypt and print them. The printed ballots are counted using the same procedures used for mail-in paper ballots.

A small vulnerability, big consequences

We found a vulnerability in the way the system processes uploaded ballots. We confirmed the problem using our own test installation of the web application, and found that we could gain the same access privileges as the server application program itself, including read and write access to the encrypted ballots and database.

The problem, which geeks classify as a “shell-injection vulnerability,” has to do with the ballot upload procedure. When a voter follows the instructions and uploads a completed ballot as a PDF file, the server saves it as a temporary file and encrypts it using a command-line tool called GnuPG. Internally, the server executes the command gpg with the name of this temporary file as a parameter: gpg […] /tmp/stream,28957,0.pdf.

We realized that although the server replaces the filename with an automatically generated name (“stream,28957,0” in this example), it keeps whatever file extension the voter provided. Instead of a file ending in “.pdf,” we could upload a file with a name that ended in almost any string we wanted, and this string would become part of the command the server executed. By formatting the string in a particular way, we could cause the server to execute commands on our behalf. For example, the filename “ballot.$(sleep 10)pdf” would cause the server to pause for ten seconds (executing the “sleep 10” command) before responding. In effect, this vulnerability allowed us to remotely log in to the server as a privileged user.
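For readers who want to see this bug class concretely, here is a minimal sketch in Python of the unsafe pattern and the standard fix. The D.C. server was a Ruby on Rails application, so this is not its actual code, and the recipient name and function names below are hypothetical.

```python
# Sketch of a shell-injection bug of the kind described above; not the
# D.C. system's actual code. "election_key" and the function names are
# hypothetical.
import subprocess


def encrypt_upload_unsafe(tmp_path: str) -> None:
    # VULNERABLE: the voter-controlled extension survives into tmp_path,
    # and the whole path is spliced into a shell command line. A file
    # uploaded as "ballot.$(sleep 10)pdf" makes the shell run `sleep 10`
    # before gpg even starts.
    cmd = f"gpg --trust-model always -r election_key -o {tmp_path}.gpg -e {tmp_path}"
    subprocess.run(cmd, shell=True, check=True)


def encrypt_upload_safer(tmp_path: str) -> None:
    # Safer: no shell at all, arguments passed as a list, so shell
    # metacharacters in the filename are never interpreted. Discarding or
    # whitelisting the user-supplied extension entirely is also prudent.
    subprocess.run(
        ["gpg", "--trust-model", "always", "-r", "election_key",
         "-o", tmp_path + ".gpg", "-e", tmp_path],
        check=True,
    )
```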

Our demonstration attacks

D.C. launched the public testbed server on Tuesday, September 28. On Wednesday afternoon, we began to exploit the problem we found to demonstrate a number of attacks:

  • We collected crucial secret data stored on the server, including the database username and password as well as the public key used to encrypt the ballots.
  • We modified all the ballots that had already been cast to contain write-in votes for candidates we selected. (Although the system encrypts voted ballots, we simply discarded the encrypted files and replaced them with different ones that we encrypted using the same key.) We also rigged the system to replace future votes in the same way.
  • We installed a back door that let us view any ballots that voters cast after our attack. This modification recorded the votes, in unencrypted form, together with the names of the voters who cast them, violating ballot secrecy.
  • To show that we had control of the server, we left a “calling card” on the system's confirmation screen, which voters see after voting. After 15 seconds, the page plays the University of Michigan fight song. Here's a demonstration.

Stealthiness wasn't our main objective, and our demonstration had a much greater footprint inside the system than a real attack would need. Nevertheless, we did not immediately announce what we had done, because we wanted to give the administrators an opportunity to exercise their intrusion detection and recovery processes — an essential part of any online voting system. Our attack remained active for two business days, until Friday afternoon, when D.C. officials took down the testbed server after several testers pointed out the fight song.

Based on this experience and other results from the public tests, the D.C. Board of Elections and Ethics has announced that they will not proceed with a live deployment of electronic ballot return at this time, though they plan to continue to develop the system. Voters will still be able to download and print ballots to return by mail, which seems a lot less risky.

D.C. officials brought the testbed server back up today (Tuesday) with the electronic ballot return mechanism disabled. The public test period will continue until Friday, October 8.

What this means for Internet voting

The specific vulnerability that we exploited is simple to fix, but it will be vastly more difficult to make the system secure. We've found a number of other problems in the system, and everything we've seen suggests that the design is brittle: one small mistake can completely compromise its security. I described above how a small error in file-extension handling left the system open to exploitation. If this particular problem had not existed, I'm confident that we would have found another way to attack the system.

None of this will come as a surprise to Internet security experts, who are familiar with the many kinds of attacks that major web sites suffer from on a daily basis. It may someday be possible to build a secure method for submitting ballots over the Internet, but in the meantime, such systems should be presumed to be vulnerable based on the limitations of today's security technology.

We plan to write more about the problems we found and their implications for Internet voting in a forthcoming paper.


Professor J. Alex Halderman is a computer scientist at the University of Michigan.

Indian E-Voting Researcher Freed After Seven Days in Police Custody

FLASH: 4:47 a.m. EDT August 28 — Indian e-voting researcher Hari Prasad was released on bail an hour ago, after seven days in police custody. Magistrate D. H. Sharma reportedly praised Hari and made strong comments against the police, saying Hari has done service to his country. Full post later today.