April 20, 2014

NJ court permits release of post-trial briefs in voting case

In 2009 the Superior Court of New Jersey, Law Division, held a trial on the legality of using paperless direct-recording electronic (DRE) voting machines. Plaintiffs in the suit argued that because it’s so easy to replace the software in a DRE with fraudulent software that cheats in elections, DRE voting systems do not guarantee the substantive right to vote (and to have one’s vote counted) required by the New Jersey constitution and New Jersey statutory law.

I described this trial in three articles last year: trial update, summary of plaintiffs’ witnesses’ testimony, and summary of defense witnesses’ testimony.

Normally in a lawsuit, the courtroom is open. The public can attend all legal proceedings, and plaintiffs are permitted to explain their case to the public by releasing their post-trial briefs (“proposed findings of fact” and “proposed conclusions of law”). But in this suit the Attorney General of New Jersey, representing the defendants, argued that the courtroom should be closed for parts of the proceedings and asked the Court to keep all post-trial documents from the public indefinitely.

More than a year after the trial ended, the Court finally held a hearing to determine whether post-trial documents should be kept from the public. The Attorney General’s office failed to even articulate a legal argument for keeping the briefs secret.

So, according to a Court Order of October 15, 2010, counsel for the plaintiffs (Professor Penny Venetis of Rutgers Law School aided by litigators from Patton Boggs LLP) are now free to show you the details of their legal argument.

The briefs are available here:
Plaintiffs’ Proposed Findings of Fact
Plaintiffs’ Proposed Conclusions of Law

I am now free to tell you all sorts of interesting things about my hands-on experiences with (supposedly) tamper-evident security seals. I published some preliminary findings in 2008. Over the next few weeks I’ll post a series of articles about the limitations of tamper-evident seals in securing elections.

Join CITP in DC this Friday for "Emerging Threats to Online Trust"

Update – you can watch the video here.

Please join CITP this Friday from 9AM to 11AM for an event entitled “Emerging Threats to Online Trust: The Role of Public Policy and Browser Certificates.” The event will focus on the trustworthiness of the technical and policy structures that govern certificate-based browser security, and will bring together representatives from government, browser vendors, and certificate authorities, as well as academics and hackers. For more information see:

http://citp.princeton.edu/events/emerging-threats-to-online-trust/

Several Freedom-to-Tinker posts have explored this set of issues:

On Facebook Apps Leaking User Identities

The Wall Street Journal today reports that many Facebook applications are handing over user information—specifically, Facebook IDs—to online advertisers. Since a Facebook ID can easily be linked to a user’s real name, third party advertisers and their downstream partners can learn the names of people who load their advertisement from those leaky apps. This reportedly happens on all ten of Facebook’s most popular apps and many others.

The Journal article provides few technical details about what its reporters found, so here’s a bit more about what I think they’re reporting.

The content of a Facebook application, for example FarmVille, is loaded within an iframe on the Facebook page. An iframe essentially embeds one webpage (FarmVille) inside another (Facebook). This means that as you play FarmVille, your browser location bar will show http://apps.facebook.com/onthefarm, but the iframe content is actually controlled by the application developer, in this case by farmville.com.

The content loaded by farmville.com in the iframe contains the game alongside third-party advertisements. When your browser fetches an advertisement, it automatically forwards “referer” information to the third-party advertiser: the URL of the current page that’s loading the ad. For FarmVille, the referer URL that’s sent will look something like:

http://fb-tc-2.farmville.com/flash.php?…fb_sig_user=[User’s Facebook ID]…

And there’s the issue. Because of the way Zynga (the makers of FarmVille) crafts some of its URLs to include the user’s Facebook ID, the browser will forward this identifying information on to third parties. I confirmed yesterday evening that using FarmVille does indeed transmit my Facebook ID to a few third parties, including Doubleclick, Interclick and socialvi.be.

Facebook policy prohibits application developers from passing this information to advertising networks and other third parties. In addition, Zynga’s privacy policy promises that “Zynga does not provide any Personally Identifiable Information to third-party advertising companies.”

But evidence clearly indicates otherwise.

What can be done about this? First, application developers like Zynga can simply stop including the user’s Facebook ID in the HTTP GET arguments, or they can place a “#” mark before the sensitive information in the URL so browsers don’t transmit this information automatically to third parties.
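To see why the “#” trick works, here is a minimal sketch (using a hypothetical `example-app.com` URL, not Zynga’s actual code): browsers strip the fragment portion of a URL before making any request, so a value placed after “#” never appears in the request line or the Referer header sent to third parties.

```python
from urllib.parse import urlsplit

# A hypothetical app URL that embeds the user's Facebook ID as a GET argument.
leaky = "http://example-app.com/flash.php?fb_sig_user=123456789"

# What the browser puts on the wire (and into the Referer header) includes
# the query string, so the ID leaks to any third party that sees the URL:
parts = urlsplit(leaky)
print(parts.path + "?" + parts.query)  # /flash.php?fb_sig_user=123456789

# Moving the sensitive value behind '#' keeps it client-side only, because
# the fragment is stripped before any request is made:
safer = "http://example-app.com/flash.php#fb_sig_user=123456789"
parts = urlsplit(safer)
print(repr(parts.query))     # '' -- nothing sensitive in the request
print(parts.fragment)        # fb_sig_user=123456789 -- visible only to
                             # client-side code, never sent to servers
```

The fragment remains readable by JavaScript running in the page, so the application itself keeps working while third parties see nothing.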

Second, Facebook can implement a proxy scheme, as proposed by Adrienne Felt more than two years ago, in which applications would receive not real Facebook IDs but random placeholder IDs that are unique to each application. Application developers would then be free to do whatever they want with the placeholder IDs, since they could no longer be linked back to real user names.
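One simple way to derive such per-application placeholder IDs (a sketch of the general idea, not Facebook’s or Felt’s actual design; the key and app names here are hypothetical) is a keyed hash: the same user gets a stable pseudonym within one app, but pseudonyms across apps cannot be correlated without the platform’s secret key.

```python
import hashlib
import hmac

# Hypothetical platform-side secret; never shared with application developers.
SECRET_KEY = b"platform-side secret key"

def placeholder_id(real_user_id: str, app_id: str) -> str:
    """Stable per-app pseudonym: the same (user, app) pair always maps to
    the same ID, but IDs for different apps are unlinkable without the key."""
    msg = f"{app_id}:{real_user_id}".encode()
    return hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()[:16]

# The same user looks different to each application...
print(placeholder_id("123456789", "farmville"))
print(placeholder_id("123456789", "mafiawars"))
# ...but is stable within one application:
assert placeholder_id("123456789", "farmville") == placeholder_id("123456789", "farmville")
```

Because HMAC is a one-way function keyed by a secret, even an advertiser that collects placeholder IDs from many apps cannot join them together or map them back to real accounts.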

Third, browser vendors can give users easier and better control over when HTTP referer information is sent. As Chris Soghoian recently pointed out, browser vendors currently don’t make these controls very accessible to users, if at all. This isn’t a direct solution to the problem but it could help. You could imagine a privacy-enhancing opt-in browser feature that turns off the referer header in all cross-domain situations.

Some may argue that this leak, whether inadvertent or not, is relatively innocuous. But allowing advertisers and other third parties to easily and definitively correlate a real name with an otherwise “anonymous” IP address, cookie, or profile is a dangerous path forward for privacy. At the very least, Facebook and app developers need to be clear with users about their privacy rights and comply with their own stated policies.

Court permits release of unredacted report on AVC Advantage

In the summer of 2008 I led a team of computer scientists in examining the hardware and software of the Sequoia AVC Advantage voting machine. I did this as a pro-bono expert witness for the Plaintiffs in the New Jersey voting-machine lawsuit. We were subject to a Protective Order that, in essence, permitted publication of our findings but prohibited us from revealing any of Sequoia’s trade secrets.

At the end of August 2008, I delivered my expert report to the court, and prepared it for public release as a technical report with the rest of my team as coauthors. Before we could release that report, Sequoia intervened with the Court, claiming that we were revealing trade secrets. We had been very careful not to reveal trade secrets, so we disputed Sequoia’s claim. In October 2008 the Court ruled mostly in our favor on this issue, permitting us to release the report with some redactions, and reserving a decision on those redacted sections until later.

The hearing on those sections has finally arrived, completely vindicating our claim that the original report was within the parameters of the Protective Order. On October 5, 2010 Judge Linda Feinberg signed an order permitting me to release the original, unredacted expert report, which is now available here.

If you’re curious, you can look at paragraphs 19.8, 19.9, 21.3, and 21.5, as well as Appendices B through G, all of which were blacked out in our previously released report.

HTC Willfully Violates the GPL in T-Mobile's New G2 Android Phone

[UPDATE (Oct 14, 2010): HTC has released the source code. Evidently 90-120 days was not in fact necessary, given that they managed to do it 7 days after the phone's official release. It is possible that the considerable pressure from the media, modders, kernel copyright holders, and other kernel hackers contributed to the apparently accelerated release.]

[UPDATE (Nov 10, 2010): The phone has been permanently rooted.]

Last week, the hottest new Android-based phone arrived on the doorstep of thousands of expectant T-Mobile customers. What didn’t arrive with the G2 was the source code that runs the heart of the device — a customized Linux kernel. Android has been hailed as an open platform in the midst of other highly locked-down systems, but as it makes its way out of the Google source repository and into devices this vision has repeatedly hit speedbumps. Last year, I blogged about one such issue, and to their credit Google sorted out a solution. This has ultimately been to everyone’s benefit, because the modified versions of the OS have routinely enabled software applications that the stock versions haven’t supported (not to mention improved reliability and speed).

When the G2 arrived, modders were eager to get to work. First, they had to overcome one of the common hurdles to getting anything installed — the “jailbreak”. Although the core operating system is open source, phone manufacturers and carriers have placed artificial restrictions on the ability to modify the basic system files. The motivations for doing so are mixed, but the effect is that hackers have to “jailbreak” or “root” the phone — essentially obtain super-user permissions. In 2009, the Copyright Office explicitly permitted such efforts when they are done for the purpose of enabling third-party programs to run on a phone.

G2 owners were excited when it appeared that an existing rooting technique worked on the G2, but were dismayed when their efforts were reversed every time the phone rebooted. T-Mobile passed the buck to HTC, the phone manufacturer:

The HTC software implementation on the G2 stores some components in read-only memory as a security measure to prevent key operating system software from becoming corrupted and rendering the device inoperable. There is a small subset of highly technical users who may want to modify and re-engineer their devices at the code level, known as “rooting,” but a side effect of HTC’s security measure is that these modifications are temporary and cannot be saved to permanent memory. As a result the original code is restored.

As it turned out, the internal memory chip included an option to make certain portions of memory read-only, which had the effect of silently discarding all changes upon reboot. However, it appears that this can be changed by sending the right series of commands to the chip. This effectively moved the rooting efforts into the complex domain of hardware hacking, with modders trying to figure out how to send these commands. Doing so involves writing some very challenging code that interacts with the open-source Linux kernel. The hackers haven’t yet succeeded (although they still could), largely because they are working in the dark. The relevant details about how the Linux kernel has been modified by HTC have not been disclosed. Reportedly, the company is replying to email queries with the following:

Thank you for contacting HTC Technical Assistance Center. HTC will typically publish on developer.htc.com the Kernel open source code for recently released devices as soon as possible. HTC will normally publish this within 90 to 120 days. This time frame is within the requirements of the open source community.

Perhaps HTC (and T-Mobile, distributor of the phone) should review the actual contents of the GNU General Public License (v2), which stipulates the legal requirements for modifying and redistributing Linux. It states that you may distribute derivative code only if you “[a]ccompany it with the complete corresponding machine-readable source code.” Notably, there is no mention of a “grace period” or the like.

The importance of redistributing source code in a timely fashion goes beyond enabling phone rooting. It is the foundation of the “copyleft” regime of software licensing that has led to the flourishing of the open source software ecosystem. If every useful modification required waiting 90 to 120 days to be built upon, it would have taken eons to get to where we are today. It’s one thing for a company to choose to pursue the closed-source model and to start from scratch, but it’s another thing for it to profit from the goodwill of the open source community while imposing arbitrary and illegal restrictions on the code.

Hacking the D.C. Internet Voting Pilot

Editor’s Note: On November 1, 2012, Alex Halderman and several other experts participated in a live streaming symposium on the state of electronic voting in Election 2012: E-Voting: Risk and Opportunity (archived video now available). The event was hosted by the Center for Information Technology Policy at Princeton.

The current U.S. e-voting system is a patchwork of locally implemented technologies and procedures — with varying degrees of reliability, usability, and security. Different groups have advocated for improved systems, better standards, and new approaches like internet-based voting. Panelists will discuss these issues and more, with a keynote by Professor Ron Rivest, one of the pioneers of modern cryptography.

[Update (6/2012): We've published a detailed technical paper about the D.C. hack.]

The District of Columbia is conducting a pilot project to allow overseas and military voters to download and return absentee ballots over the Internet. Before opening the system to real voters, D.C. has been holding a test period in which they’ve invited the public to evaluate the system’s security and usability.

This is exactly the kind of open, public testing that many of us in the e-voting security community — including me — have been encouraging vendors and municipalities to conduct. So I was glad to participate, even though the test was launched with only three days’ notice. I assembled a team from the University of Michigan, including my PhD students, Eric Wustrow and Scott Wolchok, and Dawn Isabel, a member of the University of Michigan technical staff.

Within 36 hours of the system going live, our team had found and exploited a vulnerability that gave us almost total control of the server software, including the ability to change votes and reveal voters’ secret ballots. In this post, I’ll describe what we did, how we did it, and what it means for Internet voting.

D.C.’s pilot system

The D.C. system is built around an open source server-side application developed in partnership with the TrustTheVote project. Under the hood, it looks like a typical web application. It’s written using the popular Ruby on Rails framework and runs on top of the Apache web server and MySQL database.

Absentee overseas voters receive a physical letter in the mail instructing them to visit a D.C. web site, http://www.dcboee.us/DVM/, and log in with a unique 16-character PIN. The system gives voters two options: they can download a PDF ballot and return it by mail, or they can download a PDF ballot, fill it out electronically, and then upload the completed ballot as a PDF file to the server. The server encrypts uploaded ballots and saves them in encrypted form, and, after the election, officials transfer them to a non-networked PC, where they decrypt and print them. The printed ballots are counted using the same procedures used for mail-in paper ballots.

A small vulnerability, big consequences

We found a vulnerability in the way the system processes uploaded ballots. We confirmed the problem using our own test installation of the web application, and found that we could gain the same access privileges as the server application program itself, including read and write access to the encrypted ballots and database.

The problem, which geeks classify as a “shell-injection vulnerability,” has to do with the ballot upload procedure. When a voter follows the instructions and uploads a completed ballot as a PDF file, the server saves it as a temporary file and encrypts it using a command-line tool called GnuPG. Internally, the server executes the command gpg with the name of this temporary file as a parameter: gpg […] /tmp/stream,28957,0.pdf.

We realized that although the server replaces the filename with an automatically generated name (“stream,28957,0” in this example), it keeps whatever file extension the voter provided. Instead of a file ending in “.pdf,” we could upload a file with a name that ended in almost any string we wanted, and this string would become part of the command the server executed. By formatting the string in a particular way, we could cause the server to execute commands on our behalf. For example, the filename “ballot.$(sleep 10)pdf” would cause the server to pause for ten seconds (executing the “sleep 10” command) before responding. In effect, this vulnerability allowed us to remotely log in to the server as a privileged user.
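The pattern is easy to reproduce, and to avoid. Here is a hypothetical sketch in Python (the actual system was Ruby on Rails; the function and key names are illustrative), contrasting the vulnerable approach of interpolating the voter-supplied extension into a shell string with a safe approach that whitelists the extension and builds an argument list that no shell ever parses:

```python
import re

def build_gpg_command(uploaded_name: str, tmp_path: str = "/tmp/stream,28957,0"):
    """Return a safe argv list for encrypting an uploaded ballot file.

    The vulnerable version built a single shell string like
        "gpg [...] /tmp/stream,28957,0.$(sleep 10)pdf"
    so the shell expanded $(...) and ran the attacker's command. Here we
    (1) whitelist the extension and (2) return an argv *list*, which would
    be passed to subprocess.run() without shell=True, so nothing in the
    filename is ever interpreted as shell syntax.
    """
    ext = uploaded_name.rsplit(".", 1)[-1].lower()
    if not re.fullmatch(r"[a-z0-9]{1,4}", ext):  # rejects $( ), quotes, spaces...
        raise ValueError(f"unexpected file extension: {ext!r}")
    # Each list element is one argument; no shell parsing occurs.
    return ["gpg", "--encrypt", "-r", "election-key", f"{tmp_path}.{ext}"]

# A benign upload produces a normal command:
print(build_gpg_command("ballot.pdf"))
# An attack filename is rejected instead of executed:
try:
    build_gpg_command("ballot.$(sleep 10)pdf")
except ValueError as e:
    print("rejected:", e)
```

Either defense alone would have blocked our attack; defense in depth argues for both, since a single missed code path is all an attacker needs.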

Our demonstration attacks

D.C. launched the public testbed server on Tuesday, September 28. On Wednesday afternoon, we began to exploit the problem we found to demonstrate a number of attacks:

  • We collected crucial secret data stored on the server, including the database username and password as well as the public key used to encrypt the ballots.
  • We modified all the ballots that had already been cast to contain write-in votes for candidates we selected. (Although the system encrypts voted ballots, we simply discarded the encrypted files and replaced them with different ones that we encrypted using the same key.) We also rigged the system to replace future votes in the same way.
  • We installed a back door that let us view any ballots that voters cast after our attack. This modification recorded the votes, in unencrypted form, together with the names of the voters who cast them, violating ballot secrecy.
  • To show that we had control of the server, we left a “calling card” on the system’s confirmation screen, which voters see after voting. After 15 seconds, the page plays the University of Michigan fight song. Here’s a demonstration.

Stealthiness wasn’t our main objective, and our demonstration had a much greater footprint inside the system than a real attack would need. Nevertheless, we did not immediately announce what we had done, because we wanted to give the administrators an opportunity to exercise their intrusion detection and recovery processes — an essential part of any online voting system. Our attack remained active for two business days, until Friday afternoon, when D.C. officials took down the testbed server after several testers pointed out the fight song.

Based on this experience and other results from the public tests, the D.C. Board of Elections and Ethics has announced that they will not proceed with a live deployment of electronic ballot return at this time, though they plan to continue to develop the system. Voters will still be able to download and print ballots to return by mail, which seems a lot less risky.

D.C. officials brought the testbed server back up today (Tuesday) with the electronic ballot return mechanism disabled. The public test period will continue until Friday, October 8.

What this means for Internet voting

The specific vulnerability that we exploited is simple to fix, but it will be vastly more difficult to make the system secure. We’ve found a number of other problems in the system, and everything we’ve seen suggests that the design is brittle: one small mistake can completely compromise its security. I described above how a small error in file-extension handling left the system open to exploitation. If this particular problem had not existed, I’m confident that we would have found another way to attack the system.

None of this will come as a surprise to Internet security experts, who are familiar with the many kinds of attacks that major web sites suffer from on a daily basis. It may someday be possible to build a secure method for submitting ballots over the Internet, but in the meantime, such systems should be presumed to be vulnerable based on the limitations of today’s security technology.

We plan to write more about the problems we found and their implications for Internet voting in a forthcoming paper.


Professor J. Alex Halderman is a computer scientist at the University of Michigan.

NPR Gets it Wrong on the Rutgers Tragedy: Cyberbullying is Unique

On Saturday, NPR’s Weekend All Things Considered ran a story by Elizabeth Blair called “Public Humiliation: It’s Not The Web, It’s Us” [transcript]. The story purported to examine the phenomenon of internet-mediated public humiliation in the context of last week’s tragic suicide of Tyler Clementi, a Rutgers student who was secretly filmed having a sexual encounter in his dorm room. The video was redistributed online by the classmates who created it. The story is heartbreaking to many locals who have friends or family at Rutgers, especially to those of us in the technology policy community who are again reminded that so-called “cyberbullying” can be a life-or-death policy issue.

Thus, I was disappointed that the All Things Considered piece decided to view the issue through the lens of “public humiliation,” opening with a sampling of reality TV clips and the claim that they are significantly parallel to this past week’s tragedy. This is just not the case, for reasons that are widely known to people who study online bullying. Reality TV is about participants voluntarily choosing to expose themselves in an artificial environment, and cyberbullying is about victims being attacked against their will in the real world and in ways that reverberate even longer and more deeply than traditional bullying. If Elizabeth Blair or her editors had done the most basic survey of the literature or experts, this would have been clear.

The oddest choice of interviewees was Tavia Nyong’o, a professor of performance studies at New York University. I disagree with his claim that the TV show Glee has something significant to say about the topic, but more disturbing is his statement about what we should conclude from the event:

“[My students and I] were talking about the misleading perception, because there’s been so much advances in visibility, there’s no cost to coming out anymore. There’s a kind of equal opportunity for giving offense and for public hazing and for humiliating. We should all be able to deal with this now because we’re all equally comfortable in our own skins. Tragically, what Rutgers reveals is that we’re not all equally comfortable in our own skins.”

I’m not sure if it’s as obvious to everyone else why this is absolutely backward, but I was shocked. What Rutgers reveals is, yet again, that new technologies can facilitate new and more creative ways of being cruel to each other. What Rutgers reveals is that although television may give us ways to examine the dynamics of privacy and humiliation, we have a zone of personal privacy that still matters deeply. What Rutgers tells us is that cyberbullying has introduced new dynamics into the way that young people develop their identities and deal with hateful antagonism. Nothing about Glee or reality TV tells us that we shouldn’t be horrified when someone secretly records and distributes video of our sexual encounters. I’m “comfortable in my own skin” but I would be mortified if my sexual exploits were broadcast online. Giving Nyong’o the benefit of the doubt, perhaps his quote was taken out of context, or perhaps he’s just coming from a culture at NYU that differs radically from the experience of somewhere like middle America, but I don’t see how Blair or her editors thought that this way of constructing the piece was justifiable.

The name of the All Things Considered piece was, “It’s Not The Web, It’s Us.” The reality is that it’s both. Humiliation and bullying would of course exist regardless of the technology, but new communications technologies change the balance. For instance, the Pew Internet & American Life Project has observed how digital technologies are uniquely invasive, persistent, and distributable. Pew has also pointed out (as have many other experts) that computer-mediated communications can often have the effect of disinhibition — making attackers comfortable with doing what they would otherwise never do in direct person-to-person contact. The solution may have more to do with us than the technology, but our solutions need to be informed by an understanding of how new technologies alter the dynamic.

General Counsel's Role in Shoring Up Authentication Practices Used in Secure Communications

Business conducted over the Internet has benefited hugely from web-based encryption. Retail sales, banking transactions, and secure enterprise applications have all flourished because of the end-to-end protection offered by encrypted Internet communications. An encrypted communication, however, is only as secure as the process used to authenticate the parties doing the communicating. The major Internet browsers all currently use the Certificate Authority Trust Model to verify the identity of websites on behalf of end-users. (The Model involves third parties known as certificate authorities or “CAs” issuing digital certificates to browsers and website operators that enable the end-user’s computer to cryptographically verify that the same CA that issued a certificate to the browser also issued a certificate to the website.) The CA Trust Model has recently come under fire from the information security community because of technical and institutional defects. Steve Schultze and Ed Felten, in previous posts here, have outlined the Model’s shortcomings and examined potential fixes. The vulnerabilities are a big deal because of the potential for man-in-the-middle wiretap exploits as well as imposter website scams.

One of the core problems with the CA Trust Model is that there are just too many CAs. Although organizations can configure their browser platforms to trust fewer CAs, the problem of how to isolate trustworthy (and untrustworthy) CAs remains. A good review of trustworthiness would start with examining the civil and criminal track record of CAs and their principals; identifying the geographic locations where CAs are resident; determining in which legal jurisdictions the CAs operate; determining which governmental actors may be able to coerce the CA to issue bogus certificates, behind-the-scenes, for the purpose of carrying out surveillance; analyzing the loss limitation and indemnity provisions found in each CA’s Certification Practice Statement or CPS; and nailing down which CAs engage in cross-certification. These are just a few of the factors an organization must weigh as an end-user. There is an entirely separate legal analysis that must be done from the standpoint of an organization as a website operator and purchaser of SSL certificates (which will be the subject of a future post).

The bottom line is that the tasks involved with evaluating CAs are not ones that IT departments, acting alone, have sufficient resources to perform. I recently posted on my law firm’s blog a short analysis regarding why it’s time for General Counsel to weigh in on the authentication practices associated with secure communications. The post resonated in the legal blogosphere and was featured in write-ups on Law.Com’s web-magazine “Corporate Counsel” and 3 Geeks and a Law Blog. The sentiment seems to be that this is an area ripe for remedial measures and that a collaborative approach is in order which leverages the resources and expertise of General Counsel. Could it be that the deployment of the CA Trust Model is about to get a long overdue shakeup?

Did a denial-of-service attack cause the flash crash? Probably not.

Last June I wrote about an analysis from Nanex.com claiming that a kind of spam called “quote stuffing” on the NYSE network may have caused the “flash crash” of shares on the New York Stock Exchange, May 6, 2010. I wrote that this claim was “interesting if true, and interesting anyway”.

It turns out that “A Single Sale Worth $4.1 Billion Led to the ‘Flash Crash’”, according to a report by the SEC and the CFTC.

The SEC’s report concludes that no, quote-stuffing did not cause the crash. In the report’s words:

It has been hypothesized that these delays are due to a manipulative practice called “quote-stuffing” in which high volumes of quotes are purposely sent to exchanges in order to create data delays that would afford the firm sending these quotes a trading advantage.

Our investigation to date reveals that the largest and most erratic price moves observed on May 6 were caused by withdrawals of liquidity and the subsequent execution of trades at stub quotes. We have interviewed many of the participants who withdrew their liquidity, including those who were party to significant numbers of buys and sells that occurred at stub quote prices. …[E]ach market participant had many and varied reasons for its specific actions and decisions on May 6. … [T]he evidence does not support the hypothesis that delays in the CTS and CQS feeds triggered or otherwise caused the extreme volatility in security prices observed that day.

Nevertheless … the SEC staff will be working with the market centers in exploring their members’ trading practices to identify any unintentional or potentially abusive or manipulative conduct that may cause such system delays that inhibit the ability of market participants to engage in a fair and orderly process of price discovery.

Given this evidence, I guess we can simplify “interesting if true, and interesting anyway” to just “interesting anyway”.