October 11, 2024

Seals on NJ voting machines, 2004-2008

I have just released a new paper entitled "Security Seals on Voting Machines: A Case Study," and here I'll explain how I came to write it.

Like many computer scientists, I became interested in the technology of vote-counting after the technological failure of hanging chads and butterfly ballots in 2000. In 2004 I visited my local polling place to watch the procedures for closing the polls, and I noticed that ballot cartridges were sealed by plastic strap seals like this one:

[Image: a plastic strap seal]

The pollworkers are supposed to write down the serial numbers on the official precinct report, but (as I later found when Ed Felten obtained dozens of these reports through an open-records request) about 50% of the time they forget to do this.

In 2008 when (as the expert witness in a lawsuit) I examined the hardware and software of New Jersey’s voting machines, I found that there were no security seals present that would impede opening the circuit-board cover to replace the vote-counting software. The vote-cartridge seal looks like it would prevent the cover from being opened, but it doesn’t.

There was a place to put a seal on the circuit-board cover, through the hole labeled "DO NOT REMOVE", but there was no seal there.

Somebody had removed a seal, probably a voting-machine repairman who had to open the cover to replace the batteries, and nobody bothered to install a new one.

The problem with paperless electronic voting machines is that if a crooked political operative has access to install fraudulent software, that software can switch votes from one candidate to another. So, in my report to the Court during the lawsuit, I wrote,


10.6. For a system of tamper-evident seals to provide effective protection, the seals must be consistently installed, they must be truly tamper-evident, and they must be consistently inspected. With respect to the Sequoia AVC Advantage, this means that all five of the following would have to be true. But in fact, not a single one of these is true in practice, as I will explain.

  1. The seals would have to be routinely in place at all times when an attacker might wish to access the Z80 Program ROM; but they are not.
  2. The cartridge should not be removable without leaving evidence of tampering with the seal; but plastic seals can be quickly defeated, as I will explain.
  3. The panel covering the main circuit board should not be removable without removing the [vote-cartridge] seal; but in fact it is removable without disturbing the seal.
  4. If a seal with a different serial number is substituted, written records would have to reliably catch this substitution; but I have found major gaps in these records in New Jersey.
  5. Identical replacement seals (with duplicate serial numbers) should not exist; but the evidence shows that no serious attempt is made to avoid duplication.

Those five criteria are just common sense about what would be required in any effective system for protecting something using tamper-indicating seals. What I found was that (1) the seals aren't always there; (2) even if they were, you can remove the cartridge without visible evidence of tampering with the seal; (3) you can remove the circuit-board cover without even disturbing the plastic-strap seal; (4) even if that hadn't been true, the seal-inspection records are quite lackadaisical and incomplete; and (5) even if that weren't true, since the counties tend to re-use the same serial numbers, the attacker could just obtain fresh seals with the same number!

Since the time I wrote that, I’ve learned from the seal experts that there’s a lot more to a seal use protocol than these five observations. I’ll write about that in the near future.

But first, I’ll write about the State of New Jersey’s slapdash response to my first examination of their seals. Stay tuned.

If Wikileaks Scraped P2P Networks for "Leaks," Did It Break Federal Criminal Law?

On Bloomberg.com today, Michael Riley reports that some of the documents hosted at Wikileaks may not be “leaks” at all, at least not in the traditional sense of the word. Instead, according to a computer security firm called Tiversa, “computers in Sweden” have been searching the files shared on p2p networks like Limewire for sensitive and confidential information, and the firm supposedly has proof that some of the documents found in this way have ended up on the Wikileaks site. These charges are denied as “completely false in every regard” by Wikileaks lawyer Mark Stephens.

I have no idea whether these accusations are true, but I am interested to learn from the story that, if they are true, they might provide "an alternate path for prosecuting WikiLeaks," not least because the reporter attributes this claim to me. Although I wasn't misquoted in the article, I think what I said to the reporter is a few shades away from what he reported, so I wanted to clarify what I think about this.

In the interview and in the article, I focus only on the Computer Fraud and Abuse Act (“CFAA”), the primary federal law prohibiting computer hacking. The CFAA defines a number of federal crimes, most of which turn on whether an action on a computer or network was done “without authorization” or in a way that “exceeds authorized access.”

The question presented by the reporter to me (though not in these words) was: is it a violation of the CFAA to systematically crawl a p2p network like Limewire searching for and downloading files that might be mistakenly shared, like spreadsheets or word processing documents full of secrets?

I don’t think so. With everything I know about the text of this statute, the legislative history surrounding its enactment, and the cases that have interpreted it, this kind of searching and downloading won’t “exceed the authorized access” of the p2p network. This simply isn’t a crime under the CFAA.

But although I don't think this is a viable theory, I can't unequivocally dismiss it for a few reasons, all of which I tried to convey in the interview. First, some courts have interpreted "exceeds authorized access" broadly, especially in civil lawsuits arising under the CFAA. For example, back in 2001, one court held that it violated the CFAA for a competitor to use a spider to collect prices from a travel website, where the spider was built by taking advantage of "proprietary information" obtained from a former employee of the plaintiff. (For much more on this, see this article by Orin Kerr.)

Second, it seems self-evident that these confidential files are being shared by accident. The users "leaking" these files are either misunderstanding or misconfiguring their p2p clients in ways that would horrify them, if only they knew the truth. While this doesn't translate directly into "exceeds authorized access," it might weigh heavily in court, especially if the government can show that a reasonable searcher/downloader would immediately and unambiguously understand that the files were shared by accident.

Third, let’s be realistic: there may be judges who are so troubled by what they see as the harm caused by Wikileaks that they might be willing to read the open-textured and mostly undefined terms of the CFAA broadly if it might help throw a hurdle in Wikileaks’ way. I’m not saying that judges will bend the law to the facts, but I think that with a law as vague as the CFAA, multiple interpretations are defensible.

But I restate my conclusion: I think a prosecution under the CFAA against someone for searching a p2p network should fail. The text and caselaw of the CFAA don’t support such a prosecution. Maybe it’s “not a slam dunk either way,” as I am quoted saying in the story, but for the lawyers defending against such a theory, it’s at worst an easy layup.

Web Browser Security User Interfaces: Hard to Get Right and Increasingly Inconsistent

A great deal of online commerce, speech, and socializing supposedly happens over encrypted protocols. When using these protocols, users supposedly know what remote web site they are communicating with, and they know that nobody else can listen in. In the past, this blog has detailed how the technical protocols and legal framework are lacking. Today I’d like to talk about how secure communications are represented in the browser user interface (UI), and what users should be expected to believe based on those indicators.

The most ubiquitous indicator of a "secure" connection on the web is the "padlock icon." For years, banks, commerce sites, and geek grandchildren have been telling people to "look for the lock." However, the padlock has problems. First, user studies have shown that, despite all of the imploring, many people just don't pay attention. Second, when they do pay attention, the padlock often gives them the impression that the site they are connecting to is the real-world person or company that the site claims to be (in reality, it usually just means that the connection is encrypted to "somebody"). Even more generally, many people think that the padlock means that they are "safe" to do whatever they wish on the site without risk. Finally, there are some tricky hacker moves that can make it appear that a padlock is present when it actually is not.

A few years ago, a group of engineers invented "Extended Validation" (EV) certificates. As opposed to "Domain Validation" (DV) certs that simply verify that you are talking to "somebody" who owns the domain, EV certificates actually do verify real-world identities. They also typically cause some prominent part of the browser to turn green and show the real-world entity's name and location (e.g., "Bank of America Corporation (US)"). Separately, the World Wide Web Consortium (W3C) recently issued a final draft of a document entitled "Web Security Context: User Interface Guidelines." The document describes web site "identity signals," saying that the browser must "make information about the identity of the Web site that a user interacts with available." These developments highlight a shift in browser security UI from simply showing a binary padlock/no-padlock icon to showing richer information about identity (when it exists).
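
To make the DV/EV distinction concrete, here is a minimal sketch, assuming Python's standard ssl module and a placeholder hostname, that connects to a site and prints the subject fields of its certificate. On an EV-certified site the subject typically includes the organization's legal name and country (roughly what browsers display in the green indicator); a DV certificate usually lists little more than the domain name.

    # Sketch: connect to a site over TLS and print the certificate's subject fields.
    # For an EV certificate the subject typically carries the organization's legal
    # name and country; a DV certificate usually shows only the domain (commonName).
    import socket
    import ssl

    def print_cert_subject(hostname, port=443):
        context = ssl.create_default_context()  # validates the chain and hostname
        with socket.create_connection((hostname, port)) as sock:
            with context.wrap_socket(sock, server_hostname=hostname) as tls:
                cert = tls.getpeercert()  # parsed certificate as a dict
        for rdn in cert["subject"]:
            for name, value in rdn:
                print(f"{name}: {value}")

    if __name__ == "__main__":
        print_cert_subject("www.example.com")  # placeholder hostname

Running this against an EV-certified banking site would typically print organizationName and country fields in addition to commonName; against a DV-only site, those extra fields are usually absent.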

In the course of trying to understand all of these changes, I made a disturbing discovery: different browser vendors are changing their security UIs in different ways. Here are snapshots from some of the major browsers:

As you can see, all of the browsers other than Firefox still have a padlock icon (albeit in different places). Chrome now makes "https" and the padlock icon green regardless of whether it is DV or EV (see the debate here), whereas the other browsers reserve the green color for EV only. The confusion is made worse by the fact that Chrome appears to contain a bug in which the organization name/location (the only indication of EV validation) sometimes does not appear. Firefox chose to use the color blue for DV even though one of their user experience guys noted, "The color blue unfortunately carries no meaning or really any form of positive/negative connotation (this was intentional and the rational[e] is rather complex)". The name/location from EV certificates appears in different places, and the method of coloring elements also varies (Safari in particular colors only the text, and does so in dark shades that can sometimes be hard to discern from black). Some browsers also make (different) portions of the URL a shade of gray in an attempt to emphasize the domain you are visiting.

Almost all of the browsers have made changes to these elements in recent versions. Mozilla has been changing Firefox's user interface particularly aggressively, with the most dramatic change being the removal of the padlock icon entirely as of Firefox 4. Here is the progression of changes to the UI when visiting DV-certified sites:

By stepping back to Firefox 2.0, we can see a much more prominent padlock icon in both the URL bar and in the bottom-right "status bar," along with an indication of what domain is being validated. Firefox 3.0 toned down the color scheme of the lock icon, making it less attention-grabbing and removing it from the URL bar. It also removed the yellow background that the URL bar would show for encrypted sites, and introduced a blue glow around the site icon ("favicon") if the site provided a DV cert. This area was named the "site identification button," and is either grey, blue, or green depending on the level of security offered. Users can click on the button to get more information about the certificate, presuming they know to do so. At some point between Firefox 3.0 and 3.6, the domain name was moved from the status bar (and away from the padlock icon) to the "site identification button".

In the soon-to-be-released Firefox 4, the padlock icon is removed altogether. Mozilla actually removed the "status bar" at the bottom of the screen completely, and the padlock icon went with it. This has caused consternation among some users, and generated about 35k downloads of an add-on that restores some of the functionality of the status bar (but not the padlock).

Are these changes a good thing? On the one hand, movement toward a more accurately descriptive system is generally laudable. On the other, I’m not sure whether there has been any study about how users interpret the color-only system — especially in the context of varying browser implementations. Anecdotally, I was unaware of the Firefox changes, and I had a moment of panic when I had just finished a banking transaction using a Firefox 4 beta and realized that there was no lock icon. I am not the only one. Perhaps I’m an outlier, and perhaps it’s worth the confusion in order to move to a better system. However, at the very least I would expect Mozilla to do more to proactively inform users about the changes.

It seems disturbing that the browsers are diverging in their visual language of security. I have heard people argue that competition in security UI could be a good thing, but I am not convinced that any benefits would outweigh the cost of confusing users. I'm also not sure that users are aware enough of the differences to consider them when selecting a browser, which limits the positive effects of any competition. What's more, the problem is only set to get worse as more and more browsing takes place on mobile devices that are inherently constrained in what they can cram onto the screen. Just take a look at iOS vs. Android:

To begin with, Mobile Safari behaves differently from desktop Safari. The green color is even harder to see here, and one wonders whether the eye will notice any of these changes when they appear in the browser title bar (this is particularly evident when browsing on an iPad). Android's browser displays a lock icon that is identical for DV and EV sites. Windows Phone 7 behaves similarly, but only when the URL bar is present, and the URL bar is automatically hidden when you rotate your phone into landscape mode. Blackberry shows a padlock icon inconspicuously in the top status bar of the phone (the same area as your signal strength and battery status). Blackberry uniquely shows an unlocked padlock icon on non-encrypted sites, something I don't remember seeing in desktop browsers since Netscape Navigator (although maybe it's a good idea to re-introduce some positive indication of "not encrypted").

Some of my more cynical (read: realistic) colleagues have said that, given the research showing that most users don't pay attention to this stuff anyway, trying to fix it is pointless. I am sympathetic to that view, and I think that making more sites default to HTTPS, encouraging adoption of standards like HSTS, and working on standards that make it easier to encrypt web communications are probably lower-hanging fruit. There nevertheless seems to be an opportunity here for some standardization amongst the browser vendors, with a foundation in actual usability testing.
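
For readers unfamiliar with HSTS, here is a minimal sketch of how a site opts in: it sends the Strict-Transport-Security response header, after which compliant browsers will insist on HTTPS for that site for the stated period. The Flask framework and the one-year max-age below are arbitrary choices for the illustration, not anything prescribed by the standard.

    # Rough sketch: a server opting into HSTS by adding the
    # Strict-Transport-Security header to every response, telling compliant
    # browsers to use only HTTPS for this site for the next year.
    # Flask and the max-age value are arbitrary choices for this illustration.
    from flask import Flask

    app = Flask(__name__)

    @app.after_request
    def add_hsts_header(response):
        response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
        return response

    @app.route("/")
    def index():
        return "Hello over HTTPS"

    if __name__ == "__main__":
        # In practice this would sit behind TLS termination; app.run() here is
        # just to make the sketch runnable.
        app.run()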

Some Technical Clarifications About Do Not Track

When I last wrote here about Do Not Track in August, there were just a few rumblings about the possibility of a Do Not Track mechanism for online privacy. Fast forward four months, and Do Not Track has shot to the top of the privacy agenda among regulators in Washington. The FTC staff privacy report released in December endorsed the idea, and Congress was quick to hold a hearing on the issue earlier this month. Now, odds are quite good that some kind of Do Not Track legislation will be introduced early in this new congressional session.

While there isn’t yet a concrete proposal for Do Not Track on the table, much has already been written both in support of and against the idea in general, and it’s terrific to see the issue debated so widely. As I’ve been following along, I’ve noticed some technical confusion on a few points related to Do Not Track, and I’d like to address three of them here.

1. Do Not Track will most likely be based on an HTTP header.

I’ve read some people still suggesting that Do Not Track will be some form of a government-operated list or registry—perhaps of consumer names, device identifiers, tracking domains, or something else. This type of solution has been suggested before in an earlier conception of Do Not Track, and given its rhetorical likeness to the Do Not Call Registry, it’s a natural connection to make. But as I discussed in my earlier post—the details of which I won’t rehash here—a list mechanism is a relatively clumsy solution to this problem for a number of reasons.

A more elegant solution—and the one that many technologists seem to have coalesced around—is the use of a special HTTP header that simply tells the server whether the user is opting out of tracking for that Web request, i.e. the header can be set to either “on” or “off” for each request. If the header is “on,” the server would be responsible for honoring the user’s choice to not be tracked. Users would be able to control this choice through the preferences panel of the browser or the mobile platform.
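
To make the mechanism concrete, here is a minimal client-side sketch of attaching such an opt-out header to an ordinary Web request. The header name and value ("DNT: 1") are placeholders, since no name had been standardized at the time, and the URL is hypothetical.

    # Minimal client-side sketch: attach an opt-out header to an outgoing request.
    # The header name and value ("DNT: 1") are placeholders, not a settled standard.
    import urllib.request

    def fetch_with_do_not_track(url):
        request = urllib.request.Request(url, headers={"DNT": "1"})
        with urllib.request.urlopen(request) as response:
            return response.read()

    if __name__ == "__main__":
        page = fetch_with_do_not_track("http://www.example.com/")  # placeholder URL
        print(len(page), "bytes fetched with the opt-out header set")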

2. Do Not Track won’t require us to “re-engineer the Internet.”

It’s also been suggested that implementing Do Not Track in this way will require a substantial amount of additional work, possibly even rising to the level of “re-engineering the Internet.” This is decidedly false. The HTTP standard is an extensible one, and it “allows an open-ended set of… headers” to be defined for it. Indeed, custom HTTP headers are used in many Web applications today.

How much work will it take to implement Do Not Track using the header? Generally speaking, not too much. On the client-side, adding the ability to send the Do Not Track header is a relatively simple undertaking. For instance, it only took about 30 minutes of programming to add this functionality to a popular extension for the Firefox Web browser. Other plug-ins already exist. Implementing this functionality directly into the browser might take a little bit longer, but much of the work will be in designing a clear and easily understandable user interface for the option.

On the server-side, adding code to detect the header is also a reasonably easy task—it takes just a few extra lines of code in most popular Web frameworks. It could take more substantial work to program how the server behaves when the header is “on,” but this work is often already necessary even in the absence of Do Not Track. With industry self-regulation, compliant ad servers supposedly already handle the case where a user opts out of their behavioral advertising programs, the difference now being that the opt-out signal comes from a header rather than a cookie. (Of course, the FTC could require stricter standards for what opting-out means.)
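
As a rough sketch of the server side, detecting the header and branching on it really is only a few lines. The Flask framework, the "DNT" header name, and the ad-picking stubs here are all illustrative assumptions, not part of any specific proposal.

    # Server-side sketch: read the opt-out header on each request and branch.
    # Flask, the "DNT" header name, and the ad-picking stubs are illustrative only.
    from flask import Flask, request

    app = Flask(__name__)

    def pick_untargeted_ad():
        return "generic, contextual ad"      # stand-in for a non-tracking ad

    def pick_behavioral_ad():
        return "behaviorally targeted ad"    # stand-in for a tracking-based ad

    @app.route("/ad")
    def serve_ad():
        # Treat "DNT: 1" as an explicit opt-out of tracking.
        if request.headers.get("DNT") == "1":
            return pick_untargeted_ad()
        return pick_behavioral_ad()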

Note also that contrary to some suggestions, the header mechanism doesn’t require consumers to identify who they are or otherwise authenticate to servers in order to gain tracking protection. Since the header is a simple on/off flag sent with every Web request, the server doesn’t need to maintain any persistent state about users or their devices’ opt-out preferences.

3. Microsoft’s new Tracking Protection feature isn’t the same as Do Not Track.

Last month, Microsoft announced that its next release of Internet Explorer will include a privacy feature called Tracking Protection. Mozilla is also reportedly considering a similar browser-based solution (although a later report makes it unclear whether they actually will). Browser vendors should be given credit for doing what they can from within their products to protect user privacy, but their efforts are distinct from the Do Not Track header proposal. Let me explain the major difference.

Browser-based features like Tracking Protection basically amount to blocking Web connections from known tracking domains that are compiled on a list. They don’t protect users from tracking by new domains (at least until they’re noticed and added to the tracking list) nor from “allowed” domains that are tracking users surreptitiously.
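
To illustrate the limitation, here is a toy sketch of list-based blocking (the domain names are made up, and this is not Microsoft's actual implementation): a blocker like this can only refuse requests to domains already on its list, so a tracker at a new, unlisted domain slips through.

    # Toy sketch of list-based blocking: only domains already on the list are
    # refused, so a brand-new tracking domain is not caught.  Domain names are
    # made up for illustration.
    from urllib.parse import urlparse

    BLOCKLIST = {"known-tracker.example", "ads.example"}

    def should_block(request_url):
        host = urlparse(request_url).hostname or ""
        return host in BLOCKLIST or any(host.endswith("." + d) for d in BLOCKLIST)

    print(should_block("http://known-tracker.example/pixel.gif"))      # True
    print(should_block("http://brand-new-tracker.example/pixel.gif"))  # False: not on the list yet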

In contrast, the Do Not Track header compels servers to cooperate, to proactively refrain from any attempts to track the user. The header could be sent to all third-party domains, regardless of whether the domain is already known or whether it actually engages in tracking. With the header, users wouldn’t need to guess whether a domain should be blocked or not, and they wouldn’t have to risk either allowing tracking accidentally or blocking a useful feature.

Tracking Protection and other similar browser-based defenses like Adblock Plus and NoScript are reasonable, but incomplete, interim solutions. They should be viewed as complementary with Do Not Track. For entities under FTC jurisdiction, Do Not Track could put an effective end to the tracking arms race between those entities and browser-based defenses—a race that browsers (and thus consumers) are losing now and will be losing in the foreseeable future. For those entities outside FTC jurisdiction, blocking unwanted third parties is still a useful though leaky defense that maintains the status quo.

Information security experts like to preach “defense in depth” and it’s certainly vital in this case. Neither solution fully protects the user, so users really need both solutions to be available in order to gain more comprehensive protection. As such, the upcoming features in IE and Firefox should not be seen as a technical substitute for Do Not Track.

——

To reiterate: if the technology that implements Do Not Track ends up being an HTTP header, which I think it should be, it would be both technically feasible and relatively simple. It’s also distinct from recent browser announcements about privacy in that Do Not Track forces server cooperation, while browser-based defenses work alone to fend off tracking.

What other technical issues related to Do Not Track remain murky to readers? Feel free to leave comments here, or if you prefer on Twitter using the #dntrack tag and @harlanyu.

CITP Visitors Application Deadline Extended to Feb 1st

The deadline for applications to CITP’s Visitors Program has been extended to February 1st. If you or someone you know is interested but has questions, feel free to contact me at

The Center has secured limited resources from a range of sources to support visiting faculty, scholars or policy experts for up to one-year appointments during the 2011-2012 academic year. We are interested in applications from academic faculty and researchers as well as from individuals who have practical experience in the policy arena. The rank and status of the successful applicant(s) will be determined on a case-by-case basis. We are particularly interested in hearing from faculty members at other universities and from individuals who have first-hand experience in public service in the technology policy area.

For more details and instructions about how to apply, see the full description here.