October 12, 2010

Court permits release of unredacted report on AVC Advantage

In the summer of 2008 I led a team of computer scientists in examining the hardware and software of the Sequoia AVC Advantage voting machine. I did this as a pro-bono expert witness for the Plaintiffs in the New Jersey voting-machine lawsuit. We were subject to a Protective Order that, in essence, permitted publication of our findings but prohibited us from revealing any of Sequoia’s trade secrets.

At the end of August 2008, I delivered my expert report to the court, and prepared it for public release as a technical report with the rest of my team as coauthors. Before we could release that report, Sequoia intervened with the Court, claiming that we were revealing trade secrets. We had been very careful not to reveal trade secrets, so we disputed Sequoia’s claim. In October 2008 the Court ruled mostly in our favor on this issue, permitting us to release the report with some redactions, and reserving a decision on those redacted sections until later.

The hearing on those sections has finally taken place, and the result completely vindicates our claim that the original report was within the parameters of the Protective Order. On October 5, 2010, Judge Linda Feinberg signed an order permitting me to release the original, unredacted expert report, which is now available here.

If you’re curious, you can look at paragraphs 19.8, 19.9, 21.3, and 21.5, as well as Appendices B through G, all of which were blacked out in our previously released report.

HTC Willfully Violates the GPL in T-Mobile's New G2 Android Phone

[UPDATE (Oct 14, 2010): HTC has released the source code. Evidently 90-120 days was not in fact necessary, given that they managed to do it 7 days after the phone’s official release. It is possible that the considerable pressure from the media, modders, kernel copyright holders, and other kernel hackers contributed to the apparently accelerated release.]

[UPDATE (Nov 10, 2010): The phone has been permanently rooted.]

Last week, the hottest new Android-based phone arrived on the doorstep of thousands of expectant T-Mobile customers. What didn’t arrive with the G2 was the source code that runs the heart of the device — a customized Linux kernel. Android has been hailed as an open platform in the midst of other highly locked-down systems, but as it makes its way out of the Google source repository and into devices this vision has repeatedly hit speedbumps. Last year, I blogged about one such issue, and to their credit Google sorted out a solution. This has ultimately been to everyone’s benefit, because the modified versions of the OS have routinely enabled software applications that the stock versions haven’t supported (not to mention improved reliability and speed).

When the G2 arrived, modders were eager to get to work. First, they had to overcome one of the common hurdles to getting anything installed — the “jailbreak”. Although the core operating system is open source, phone manufacturers and carriers have placed artificial restrictions on the ability to modify the basic system files. The motivations for doing so are mixed, but the effect is that hackers have to “jailbreak” or “root” the phone — essentially obtain super-user permissions. In 2009, the Copyright Office explicitly permitted such efforts when they are done for the purpose of enabling third-party programs to run on a phone.

G2 owners were excited when it appeared that an existing rooting technique worked on the G2, but were dismayed when their efforts were reversed every time the phone rebooted. T-Mobile passed the buck to HTC, the phone manufacturer:

The HTC software implementation on the G2 stores some components in read-only memory as a security measure to prevent key operating system software from becoming corrupted and rendering the device inoperable. There is a small subset of highly technical users who may want to modify and re-engineer their devices at the code level, known as “rooting,” but a side effect of HTC’s security measure is that these modifications are temporary and cannot be saved to permanent memory. As a result the original code is restored.

As it turned out, the internal memory chip included an option to make certain portions of memory read-only, which had the effect of silently discarding all changes upon reboot. However, it appears that this can be changed by sending the right series of commands to the chip. This effectively moved the rooting efforts into the complex domain of hardware hacking, with modders trying to figure out how to send these commands. Doing so involves writing some very challenging code that interacts with the open-source Linux kernel. The hackers haven’t yet succeeded (although they still could), largely because they are working in the dark. The relevant details about how the Linux kernel has been modified by HTC have not been disclosed. Reportedly, the company is replying to email queries with the following:

Thank you for contacting HTC Technical Assistance Center. HTC will typically publish on developer.htc.com the Kernel open source code for recently released devices as soon as possible. HTC will normally publish this within 90 to 120 days. This time frame is within the requirements of the open source community.

Perhaps HTC (and T-Mobile, distributor of the phone) should review the actual contents of the GNU General Public License, version 2, which stipulates the legal requirements for modifying and redistributing Linux. It states that you may only distribute derivative code if you “[a]ccompany it with the complete corresponding machine-readable source code.” Notably, there is no mention of a “grace period” or the like.

The importance of redistributing source code in a timely fashion goes beyond enabling phone rooting. It is the foundation of the “copyleft” regime of software licensing that has led to the flourishing of the open source software ecosystem. If every useful modification required waiting 90 to 120 days to be built upon, it would have taken eons to get to where we are today. It’s one thing for a company to choose to pursue the closed-source model and to start from scratch, but it’s another thing for it to profit from the goodwill of the open source community while imposing arbitrary and illegal restrictions on the code.

Hacking the D.C. Internet Voting Pilot

The District of Columbia is conducting a pilot project to allow overseas and military voters to download and return absentee ballots over the Internet. Before opening the system to real voters, D.C. has been holding a test period in which they've invited the public to evaluate the system's security and usability.

This is exactly the kind of open, public testing that many of us in the e-voting security community — including me — have been encouraging vendors and municipalities to conduct. So I was glad to participate, even though the test was launched with only three days' notice. I assembled a team from the University of Michigan, including my PhD students, Eric Wustrow and Scott Wolchok, and Dawn Isabel, a member of the University of Michigan technical staff.

Within 36 hours of the system going live, our team had found and exploited a vulnerability that gave us almost total control of the server software, including the ability to change votes and reveal voters’ secret ballots. In this post, I’ll describe what we did, how we did it, and what it means for Internet voting.

D.C.'s pilot system

The D.C. system is built around an open source server-side application developed in partnership with the TrustTheVote project. Under the hood, it looks like a typical web application. It's written using the popular Ruby on Rails framework and runs on top of the Apache web server and MySQL database.

Absentee overseas voters receive a physical letter in the mail instructing them to visit a D.C. web site, http://www.dcboee.us/DVM/, and log in with a unique 16-character PIN. The system gives voters two options: they can download a PDF ballot and return it by mail, or they can download a PDF ballot, fill it out electronically, and then upload the completed ballot as a PDF file to the server. The server encrypts uploaded ballots and saves them in encrypted form, and, after the election, officials transfer them to a non-networked PC, where they decrypt and print them. The printed ballots are counted using the same procedures used for mail-in paper ballots.

A small vulnerability, big consequences

We found a vulnerability in the way the system processes uploaded ballots. We confirmed the problem using our own test installation of the web application, and found that we could gain the same access privileges as the server application program itself, including read and write access to the encrypted ballots and database.

The problem, which geeks classify as a “shell-injection vulnerability,” has to do with the ballot upload procedure. When a voter follows the instructions and uploads a completed ballot as a PDF file, the server saves it as a temporary file and encrypts it using a command-line tool called GnuPG. Internally, the server executes the command gpg with the name of this temporary file as a parameter: gpg […] /tmp/stream,28957,0.pdf.

We realized that although the server replaces the filename with an automatically generated name (“stream,28957,0” in this example), it keeps whatever file extension the voter provided. Instead of a file ending in “.pdf,” we could upload a file with a name that ended in almost any string we wanted, and this string would become part of the command the server executed. By formatting the string in a particular way, we could cause the server to execute commands on our behalf. For example, the filename “ballot.$(sleep 10)pdf” would cause the server to pause for ten seconds (executing the “sleep 10” command) before responding. In effect, this vulnerability allowed us to remotely log in to the server as a privileged user.
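The bug pattern described above can be sketched in a few lines of Python (a simplified illustration, not the actual Rails code; the filenames are stand-ins and a harmless `echo` substitutes for the real `gpg` invocation). Interpolating an attacker-supplied filename into a shell command string lets shell metacharacters execute; passing the arguments as a vector, which never goes through a shell, does not.

```python
import subprocess

def encrypt_unsafe(upload_name):
    # VULNERABLE: splicing the voter-supplied name into a shell command
    # string means metacharacters like $(...) are executed by the shell.
    # ("echo" stands in for the real gpg invocation.)
    cmd = "echo encrypting /tmp/stream,28957,0.%s" % upload_name
    return subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout

def encrypt_safe(upload_name):
    # SAFER: an argument vector bypasses the shell entirely, so "$(...)"
    # is treated as literal characters in the filename.
    argv = ["echo", "encrypting", "/tmp/stream,28957,0." + upload_name]
    return subprocess.run(argv, capture_output=True, text=True).stdout

# A malicious "extension" in the spirit of the attack described above:
evil = "$(echo INJECTED)pdf"
print(encrypt_unsafe(evil))  # the embedded command runs before gpg ever sees the name
print(encrypt_safe(evil))    # the $(...) characters are printed literally
```

A complete fix would also validate the extension server-side (e.g., accept only “.pdf”), since the root cause here was trusting whatever extension the voter supplied.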

Our demonstration attacks

D.C. launched the public testbed server on Tuesday, September 28. On Wednesday afternoon, we began to exploit the problem we found to demonstrate a number of attacks:

  • We collected crucial secret data stored on the server, including the database username and password as well as the public key used to encrypt the ballots.
  • We modified all the ballots that had already been cast to contain write-in votes for candidates we selected. (Although the system encrypts voted ballots, we simply discarded the encrypted files and replaced them with different ones that we encrypted using the same key.) We also rigged the system to replace future votes in the same way.
  • We installed a back door that let us view any ballots that voters cast after our attack. This modification recorded the votes, in unencrypted form, together with the names of the voters who cast them, violating ballot secrecy.
  • To show that we had control of the server, we left a “calling card” on the system's confirmation screen, which voters see after voting. After 15 seconds, the page plays the University of Michigan fight song. Here's a demonstration.

Stealthiness wasn't our main objective, and our demonstration had a much greater footprint inside the system than a real attack would need. Nevertheless, we did not immediately announce what we had done, because we wanted to give the administrators an opportunity to exercise their intrusion detection and recovery processes — an essential part of any online voting system. Our attack remained active for two business days, until Friday afternoon, when D.C. officials took down the testbed server after several testers pointed out the fight song.

Based on this experience and other results from the public tests, the D.C. Board of Elections and Ethics has announced that they will not proceed with a live deployment of electronic ballot return at this time, though they plan to continue to develop the system. Voters will still be able to download and print ballots to return by mail, which seems a lot less risky.

D.C. officials brought the testbed server back up today (Tuesday) with the electronic ballot return mechanism disabled. The public test period will continue until Friday, October 8.

What this means for Internet voting

The specific vulnerability that we exploited is simple to fix, but it will be vastly more difficult to make the system secure. We've found a number of other problems in the system, and everything we've seen suggests that the design is brittle: one small mistake can completely compromise its security. I described above how a small error in file-extension handling left the system open to exploitation. If this particular problem had not existed, I'm confident that we would have found another way to attack the system.

None of this will come as a surprise to Internet security experts, who are familiar with the many kinds of attacks that major web sites suffer from on a daily basis. It may someday be possible to build a secure method for submitting ballots over the Internet, but in the meantime, such systems should be presumed to be vulnerable based on the limitations of today's security technology.

We plan to write more about the problems we found and their implications for Internet voting in a forthcoming paper.


Professor J. Alex Halderman is a computer scientist at the University of Michigan.

NPR Gets it Wrong on the Rutgers Tragedy: Cyberbullying is Unique

On Saturday, NPR’s Weekend All Things Considered ran a story by Elizabeth Blair called “Public Humiliation: It’s Not The Web, It’s Us” [transcript]. The story purported to examine the phenomenon of internet-mediated public humiliation in the context of last week’s tragic suicide of Tyler Clementi, a Rutgers student who was secretly filmed having a sexual encounter in his dorm room. The video was created and redistributed online by his classmates. The story is heartbreaking to many locals who have friends or family at Rutgers, especially to those of us in the technology policy community who are again reminded that so-called “cyberbullying” can be a life-or-death policy issue.

Thus, I was disappointed that the All Things Considered piece decided to view the issue through the lens of “public humiliation,” opening with a sampling of reality TV clips and the claim that they are significantly parallel to this past week’s tragedy. This is just not the case, for reasons that are widely known to people who study online bullying. Reality TV is about participants voluntarily choosing to expose themselves in an artificial environment, and cyberbullying is about victims being attacked against their will in the real world and in ways that reverberate even longer and more deeply than traditional bullying. If Elizabeth Blair or her editors had done the most basic survey of the literature or experts, this would have been clear.

The oddest choice of interviewees was Tavia Nyong’o, a professor of performance studies at New York University. I disagree with his claim that the TV show Glee has something significant to say about the topic, but more disturbing is his statement about what we should conclude from the event:

“[My students and I] were talking about the misleading perception, because there’s been so much advances in visibility, there’s no cost to coming out anymore. There’s a kind of equal opportunity for giving offense and for public hazing and for humiliating. We should all be able to deal with this now because we’re all equally comfortable in our own skins. Tragically, what Rutgers reveals is that we’re not all equally comfortable in our own skins.”

I’m not sure if it’s as obvious to everyone else why this is absolutely backward, but I was shocked. What Rutgers reveals is, yet again, that new technologies can facilitate new and more creative ways of being cruel to each other. What Rutgers reveals is that although television may give us ways to examine the dynamics of privacy and humiliation, we have a zone of personal privacy that still matters deeply. What Rutgers tells us is that cyberbullying has introduced new dynamics into the way that young people develop their identities and deal with hateful antagonism. Nothing about Glee or reality TV tells us that we shouldn’t be horrified when someone secretly records and distributes video of our sexual encounters. I’m “comfortable in my own skin” but I would be mortified if my sexual exploits were broadcast online. Giving Nyong’o the benefit of the doubt, perhaps his quote was taken out of context, or perhaps he’s just coming from a culture at NYU that differs radically from the experience of somewhere like middle America, but I don’t see how Blair or her editors thought that this way of constructing the piece was justifiable.

The name of the All Things Considered piece was, “It’s Not The Web, It’s Us.” The reality is that it’s both. Humiliation and bullying would of course exist regardless of the technology, but new communications technologies change the balance. For instance, the Pew Internet & American Life Project has observed how digital technologies are uniquely invasive, persistent, and distributable. Pew has also pointed out (as have many other experts) that computer-mediated communications can often have the effect of disinhibition — making attackers comfortable with doing what they would otherwise never do in direct person-to-person contact. The solution may have more to do with us than the technology, but our solutions need to be informed by an understanding of how new technologies alter the dynamic.

General Counsel's Role in Shoring Up Authentication Practices Used in Secure Communications

Business conducted over the Internet has benefited hugely from web-based encryption. Retail sales, banking transactions, and secure enterprise applications have all flourished because of the end-to-end protection offered by encrypted Internet communications. An encrypted communication, however, is only as secure as the process used to authenticate the parties doing the communicating. The major Internet browsers all currently use the Certificate Authority Trust Model to verify the identity of websites on behalf of end-users. (In this model, third parties known as certificate authorities, or “CAs,” issue digital certificates to website operators; browsers ship with a list of trusted CA certificates, and the end-user’s computer cryptographically verifies that a website’s certificate was issued by one of those trusted CAs.) The CA Trust Model has recently come under fire from the information security community because of technical and institutional defects. Steve Schultze and Ed Felten, in previous posts here, have outlined the Model’s shortcomings and examined potential fixes. The vulnerabilities are a big deal because of the potential for man-in-the-middle wiretap exploits as well as imposter website scams.

One of the core problems with the CA Trust Model is that there are just too many CAs. Although organizations can configure their browser platforms to trust fewer CAs, the problem of how to identify trustworthy (and untrustworthy) CAs remains. A good review of trustworthiness would start with:

  • examining the civil and criminal track record of CAs and their principals;
  • identifying the geographic locations where CAs are resident;
  • determining in which legal jurisdictions the CAs operate;
  • determining which governmental actors may be able to coerce a CA to issue bogus certificates, behind the scenes, for the purpose of carrying out surveillance;
  • analyzing the loss limitation and indemnity provisions found in each CA’s Certification Practice Statement, or CPS; and
  • nailing down which CAs engage in cross-certification.

These are just a few of the factors an organization must weigh as an end-user. There is an entirely separate legal analysis that must be done from the standpoint of an organization as a website operator and purchaser of SSL certificates (which will be the subject of a future post).
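To make the “trust fewer CAs” step concrete, here is a minimal Python sketch (my illustration, not drawn from any of the posts discussed above) of a TLS client context that starts from an empty trust store and accepts only an explicitly vetted CA bundle. The bundle path is hypothetical.

```python
import ssl

# Build a TLS client context that trusts nothing by default, rather than
# inheriting the platform's full list of certificate authorities.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.verify_mode = ssl.CERT_REQUIRED   # reject servers without a valid certificate
ctx.check_hostname = True             # and require the certificate name to match

# Load only the CA roots the organization has actually vetted.
# (The bundle path below is hypothetical.)
# ctx.load_verify_locations(cafile="/etc/pki/vetted-ca-bundle.pem")

# Any connection made through ctx now fails unless the server's certificate
# chains to one of the explicitly loaded roots.
```

The same idea applies at larger scale: enterprises can distribute a trimmed root store to managed browsers, which narrows the set of CAs whose compromise or coercion could enable a man-in-the-middle attack.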

The bottom line is that the tasks involved with evaluating CAs are not ones that IT departments, acting alone, have sufficient resources to perform. I recently posted on my law firm’s blog a short analysis regarding why it’s time for General Counsel to weigh in on the authentication practices associated with secure communications. The post resonated in the legal blogosphere and was featured in write-ups on Law.Com’s web-magazine “Corporate Counsel” and 3 Geeks and a Law Blog. The sentiment seems to be that this is an area ripe for remedial measures and that a collaborative approach is in order which leverages the resources and expertise of General Counsel. Could it be that the deployment of the CA Trust Model is about to get a long overdue shakeup?