November 23, 2024

Seals on NJ voting machines, October-December 2008

In my examination of New Jersey’s voting machines, I found that there were no tamper-indicating seals that prevented fiddling with the vote-counting software—just a plastic strap seal on the vote cartridge. And I was rather skeptical that slapping seals on the machine would really secure the ROMs containing the software. I remembered Avi Rubin’s observations from a couple of years earlier, which I described in a previous post.

A bit of googling turned up this interesting 1996 article:


Vulnerability Assessment of Security Seals
Roger G. Johnston, Ph.D. and Anthony R.E. Garcia
Los Alamos National Laboratory

… We studied 94 different security seals, both passive and electronic, developed either commercially or by the United States Government. Most of these seals are in wide-spread use, including for critical applications. We learned how to defeat all 94 seals using rapid, inexpensive, low-tech methods.

In my expert report, I cited this scientific article to explain that seals would not be a panacea for the problems with the voting machine.

Soon after I delivered this report to the Court, the judge held a hearing in which she asked the defendants (the State of New Jersey) how they intended to secure these voting machines against tampering. A few weeks later, the State explained their new system: more seals.

For the November 2008 election, they slapped on three pieces of tape, a wire seal, and a “security screw cap”, in addition to the plastic strap seal that had already been in use. All these seals are in the general categories described by Johnston and Garcia as easy to defeat using “rapid, inexpensive, low-tech methods”.

Up to this point I knew in theory (by reading Avi Rubin and Roger Johnston) that tamper-indicating seals aren’t very secure, but I hadn’t really tried anything myself.

Here’s what is not so obvious: If you want to study how to lift and replace a seal without breaking it, or how to counterfeit a seal, you can’t practice on the actual voting machine (or other device) in the polling place! You need a few dozen samples of the seal, so that you can try different approaches, to see what works and what doesn’t. Then you need to practice these approaches over and over. So step 1 is to get a big bag of seals.

What I’ve discovered, by whipping out a credit card and trying it, is that the seal vendors are happy to sell you 100 seals, or 1000, or however many you need. They cost about 50 cents apiece, or more, depending on the seal. So I bought some seals. In addition, under Court order we got some samples from the State, but that wasn’t really necessary as all those seals are commercially available, as I found by a few minutes of googling.

The next step was to go down to my basement workshop and start experimenting. After about a day of thinking about the seals and trying things out, I cracked them all.

As I wrote in December 2008, all those seals are easily defeated.

  • The tamper-indicating tape can be lifted using a heat gun and a razor blade, then replaced with no indication of tampering.
  • The security screw cap can be removed using a screwdriver, then the serial-numbered top can be replaced (undamaged) onto a fresh (unnumbered) base.
  • The wire seal can be defeated using a #4 wood screw.
  • The plastic strap seal can be picked using a jeweler’s screwdriver.

For details and pictures, see “Seal Regime #2” in this paper.

Brazilian Communications Agency Moves Towards Surveillance Superpowers

January is the month when the Brazilian version of the popular TV show Big Brother returns to the air. For three months, a bunch of people are locked inside a house and their lives are broadcast 24/7. A TV show premised on nonstop surveillance might sound like fun to some people, but it is disturbing when governments engage in similar practices. The Brazilian national communications agency (aka Anatel) announced a few days ago a plan to implement 24/7 surveillance over the more than 203 million cell phones in the country.

As reported by Folha de Sao Paulo, the largest newspaper in the country, Anatel has invested about $500,000 in building three central switches that connect directly with the private carriers’ networks. The switches are not for eavesdropping, but will provide the agency with direct access to information such as the numbers dialed, date, time, amount paid, and duration of all phone calls. It will also provide access to personal information such as the name, address, and taxpayer number of every mobile customer.

The agency claims that the system will help “modernize” its capability to regulate phone companies, leading to a better quality of service. Currently, the data is privately kept by each phone company. The agency can ask for that information, but has to rely on what is provided. It claims that its technicians “are not prepared to deal with the systems used by the phone carriers and obtain the necessary original information”. So it has decided to collect the information directly, creating its own database in order to “validate” the information.

Lawyers and civil rights advocates are worried about this intention to turn Anatel into a “Big Brother” entity. Floriano Marques, an administrative law attorney, claims that the new measure is a “pathology”. He says “it reflects a trend of weakening privacy rights that can be found in various efforts of the public administration in Brazil”. And he is right. Recent events indicate that some public authorities in Brazil have been holding privacy in low regard. In the presidential campaign of 2010, Brazilian tax officials were caught disclosing confidential tax information of members of the political party opposing the government.

Also, a Brazilian Senator named Eduardo Azeredo introduced a bill requiring every citizen to establish his or her identity through a digital certificate before connecting to the Internet. After causing considerable uproar, the bill was amended to remove the mandatory identification provision, but it still includes disconcerting surveillance measures, such as the obligation imposed on websites and service providers to keep records of users’ online activities for five years.

Lawyers and civil rights activists fear that Anatel’s surveillance superpowers will open the path for all sorts of misuse. They claim the project violates the Brazilian Constitution, which protects privacy as a fundamental right, as well as due process. The agency would gain access to sensitive information without prior permission of users, or any scrutiny by the courts.

Arguably, the implementation of these new provisions by Anatel puts Brazil one step closer to initiatives such as China’s practice of scanning SMS messages for “illegal or unhealthy” content, India’s demands for monitoring communications sent via BlackBerry smartphones, or other countries’ investments in technical infrastructure to surveil citizens. For a country that once pledged allegiance to the Penguin, in reference to its support for online freedom, free software, and free culture policies, these recent developments show an unexpected Orwellian touch.

Seals on NJ voting machines, 2004-2008

I have just released a new paper entitled Security seals on voting machines: a case study and here I’ll explain how I came to write it.

Like many computer scientists, I became interested in the technology of vote-counting after the technological failure of hanging chads and butterfly ballots in 2000. In 2004 I visited my local polling place to watch the procedures for closing the polls, and I noticed that ballot cartridges were sealed by plastic strap seals like this one:

plastic strap seal

The pollworkers are supposed to write down the serial numbers on the official precinct report, but (as I later found when Ed Felten obtained dozens of these reports through an open-records request) about 50% of the time they forget to do this.

In 2008 when (as the expert witness in a lawsuit) I examined the hardware and software of New Jersey’s voting machines, I found that there were no security seals present that would impede opening the circuit-board cover to replace the vote-counting software. The vote-cartridge seal looks like it would prevent the cover from being opened, but it doesn’t.

There was a place to put a seal on the circuit-board cover, through the hole labeled “DO NOT REMOVE”, but there was no seal there.

Somebody had removed a seal, probably a voting-machine repairman who had to open the cover to replace the batteries, and nobody bothered to install a new one.

The problem with paperless electronic voting machines is that if a crooked political operative has access to install fraudulent software, that software can switch votes from one candidate to another. So, in my report to the Court during the lawsuit, I wrote,


10.6. For a system of tamper-evident seals to provide effective protection, the seals must be consistently installed, they must be truly tamper-evident, and they must be consistently inspected. With respect to the Sequoia AVC Advantage, this means that all five of the following would have to be true. But in fact, not a single one of these is true in practice, as I will explain.

  1. The seals would have to be routinely in place at all times when an attacker might wish to access the Z80 Program ROM; but they are not.
  2. The cartridge should not be removable without leaving evidence of tampering with the seal; but plastic seals can be quickly defeated, as I will explain.
  3. The panel covering the main circuit board should not be removable without removing the [vote-cartridge] seal; but in fact it is removable without disturbing the seal.
  4. If a seal with a different serial number is substituted, written records would have to reliably catch this substitution; but I have found major gaps in these records in New Jersey.
  5. Identical replacement seals (with duplicate serial numbers) should not exist; but the evidence shows that no serious attempt is made to avoid duplication.

Those five criteria are just common sense about what would be required in any effective system for protecting something using tamper-indicating seals. What I found was that (1) the seals aren’t always there; (2) even if they were, you can remove the cartridge without visible evidence of tampering with the seal; (3) you can remove the circuit-board cover without even disturbing the plastic-strap seal; (4) even if that hadn’t been true, the seal-inspection records are quite lackadaisical and incomplete; and (5) even if that weren’t true, since the counties tend to re-use the same serial numbers, the attacker could just obtain fresh seals with the same number!

Since the time I wrote that, I’ve learned from the seal experts that there’s a lot more to a seal use protocol than these five observations. I’ll write about that in the near future.

But first, I’ll write about the State of New Jersey’s slapdash response to my first examination of their seals. Stay tuned.

If Wikileaks Scraped P2P Networks for "Leaks," Did it Break Federal Criminal Law?

On Bloomberg.com today, Michael Riley reports that some of the documents hosted at Wikileaks may not be “leaks” at all, at least not in the traditional sense of the word. Instead, according to a computer security firm called Tiversa, “computers in Sweden” have been searching the files shared on p2p networks like Limewire for sensitive and confidential information, and the firm supposedly has proof that some of the documents found in this way have ended up on the Wikileaks site. These charges are denied as “completely false in every regard” by Wikileaks lawyer Mark Stephens.

I have no idea whether these accusations are true, but I am interested to learn from the story that if they are true they might provide “an alternate path for prosecuting WikiLeaks,” most importantly because the reporter attributes this claim to me. Although I wasn’t misquoted in the article, I think what I said to the reporter is a few shades away from what he reported, so I wanted to clarify what I think about this.

In the interview and in the article, I focus only on the Computer Fraud and Abuse Act (“CFAA”), the primary federal law prohibiting computer hacking. The CFAA defines a number of federal crimes, most of which turn on whether an action on a computer or network was done “without authorization” or in a way that “exceeds authorized access.”

The question presented by the reporter to me (though not in these words) was: is it a violation of the CFAA to systematically crawl a p2p network like Limewire searching for and downloading files that might be mistakenly shared, like spreadsheets or word processing documents full of secrets?

I don’t think so. With everything I know about the text of this statute, the legislative history surrounding its enactment, and the cases that have interpreted it, this kind of searching and downloading won’t “exceed the authorized access” of the p2p network. This simply isn’t a crime under the CFAA.

But although I don’t think this is a viable theory, I can’t unequivocally dismiss it, for a few reasons, all of which I tried to convey in the interview. First, some courts have interpreted “exceeds authorized access” broadly, especially in civil lawsuits arising under the CFAA. For example, back in 2001, one court held that a competitor violated the CFAA by using a spider to collect prices from a travel website, because the defendant had built the spider by taking advantage of “proprietary information” obtained from a former employee of the plaintiff. (For much more on this, see this article by Orin Kerr.)

Second, it seems self-evident that these confidential files are being shared by accident. The users “leaking” these files are either misunderstanding or misconfiguring their p2p clients in ways that would horrify them, if only they knew the truth. While this doesn’t translate directly into “exceeds authorized access,” it might weigh heavily in court, especially if the government can show that a reasonable searcher/downloader would immediately and unambiguously understand that the files were shared by accident.

Third, let’s be realistic: there may be judges who are so troubled by what they see as the harm caused by Wikileaks that they might be willing to read the open-textured and mostly undefined terms of the CFAA broadly if it might help throw a hurdle in Wikileaks’ way. I’m not saying that judges will bend the law to the facts, but I think that with a law as vague as the CFAA, multiple interpretations are defensible.

But I restate my conclusion: I think a prosecution under the CFAA against someone for searching a p2p network should fail. The text and caselaw of the CFAA don’t support such a prosecution. Maybe it’s “not a slam dunk either way,” as I am quoted saying in the story, but for the lawyers defending against such a theory, it’s at worst an easy layup.

Some Technical Clarifications About Do Not Track

When I last wrote here about Do Not Track in August, there were just a few rumblings about the possibility of a Do Not Track mechanism for online privacy. Fast forward four months, and Do Not Track has shot to the top of the privacy agenda among regulators in Washington. The FTC staff privacy report released in December endorsed the idea, and Congress was quick to hold a hearing on the issue earlier this month. Now, odds are quite good that some kind of Do Not Track legislation will be introduced early in this new congressional session.

While there isn’t yet a concrete proposal for Do Not Track on the table, much has already been written both in support of and against the idea in general, and it’s terrific to see the issue debated so widely. As I’ve been following along, I’ve noticed some technical confusion on a few points related to Do Not Track, and I’d like to address three of them here.

1. Do Not Track will most likely be based on an HTTP header.

I’ve read some people still suggesting that Do Not Track will be some form of a government-operated list or registry—perhaps of consumer names, device identifiers, tracking domains, or something else. This type of solution has been suggested before in an earlier conception of Do Not Track, and given its rhetorical likeness to the Do Not Call Registry, it’s a natural connection to make. But as I discussed in my earlier post—the details of which I won’t rehash here—a list mechanism is a relatively clumsy solution to this problem for a number of reasons.

A more elegant solution—and the one that many technologists seem to have coalesced around—is the use of a special HTTP header that simply tells the server whether the user is opting out of tracking for that Web request, i.e. the header can be set to either “on” or “off” for each request. If the header is “on,” the server would be responsible for honoring the user’s choice to not be tracked. Users would be able to control this choice through the preferences panel of the browser or the mobile platform.

2. Do Not Track won’t require us to “re-engineer the Internet.”

It’s also been suggested that implementing Do Not Track in this way will require a substantial amount of additional work, possibly even rising to the level of “re-engineering the Internet.” This is decidedly false. The HTTP standard is an extensible one, and it “allows an open-ended set of… headers” to be defined for it. Indeed, custom HTTP headers are used in many Web applications today.

How much work will it take to implement Do Not Track using the header? Generally speaking, not too much. On the client-side, adding the ability to send the Do Not Track header is a relatively simple undertaking. For instance, it only took about 30 minutes of programming to add this functionality to a popular extension for the Firefox Web browser. Other plug-ins already exist. Implementing this functionality directly into the browser might take a little bit longer, but much of the work will be in designing a clear and easily understandable user interface for the option.

On the server-side, adding code to detect the header is also a reasonably easy task—it takes just a few extra lines of code in most popular Web frameworks. It could take more substantial work to program how the server behaves when the header is “on,” but this work is often already necessary even in the absence of Do Not Track. With industry self-regulation, compliant ad servers supposedly already handle the case where a user opts out of their behavioral advertising programs, the difference now being that the opt-out signal comes from a header rather than a cookie. (Of course, the FTC could require stricter standards for what opting-out means.)
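To make those “few extra lines” concrete, here is a framework-agnostic sketch in Python of what server-side detection might look like. The header name “X-Do-Not-Track” and the cookie are invented for illustration:

```python
def tracking_allowed(headers):
    """Return False when the request carries an opt-out flag.

    `headers` is a plain dict of request-header names to values; the
    header name "X-Do-Not-Track" is hypothetical, not a standard.
    """
    # HTTP header names are case-insensitive, so normalize first.
    normalized = {name.lower(): value for name, value in headers.items()}
    return normalized.get("x-do-not-track", "off") != "on"

def handle_request(request_headers):
    """Toy handler: only set a tracking cookie when permitted."""
    response_headers = {"Content-Type": "text/html"}
    if tracking_allowed(request_headers):
        # "tracking_id" is a made-up cookie name for illustration.
        response_headers["Set-Cookie"] = "tracking_id=abc123"
    return response_headers
```

The detection itself is a one-line dictionary lookup; the real engineering effort, as noted above, lies in what the server does (or refrains from doing) once it sees the flag.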

Note also that contrary to some suggestions, the header mechanism doesn’t require consumers to identify who they are or otherwise authenticate to servers in order to gain tracking protection. Since the header is a simple on/off flag sent with every Web request, the server doesn’t need to maintain any persistent state about users or their devices’ opt-out preferences.

3. Microsoft’s new Tracking Protection feature isn’t the same as Do Not Track.

Last month, Microsoft announced that its next release of Internet Explorer will include a privacy feature called Tracking Protection. Mozilla is also reportedly considering a similar browser-based solution (although a later report makes it unclear whether they actually will). Browser vendors should be given credit for doing what they can from within their products to protect user privacy, but their efforts are distinct from the Do Not Track header proposal. Let me explain the major difference.

Browser-based features like Tracking Protection basically amount to blocking Web connections from known tracking domains that are compiled on a list. They don’t protect users from tracking by new domains (at least until they’re noticed and added to the tracking list) nor from “allowed” domains that are tracking users surreptitiously.
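The limitation is easy to see in code: a list-based defense boils down to a membership check against a curated blocklist. A minimal Python sketch, with invented domain names standing in for a real Tracking Protection list:

```python
# An invented blocklist, standing in for a curated tracking-domain list.
BLOCKLIST = {"tracker.example", "ads.example"}

def is_blocked(domain):
    """Return True if the domain, or any parent domain, is blocklisted."""
    labels = domain.lower().split(".")
    # Check "a.b.c", then "b.c", then "c", so subdomains of a
    # blocklisted domain are caught too.
    for i in range(len(labels)):
        if ".".join(labels[i:]) in BLOCKLIST:
            return True
    return False
```

A brand-new tracking domain such as “newtracker.example” fails the membership check and sails through until someone notices it and updates the list.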

In contrast, the Do Not Track header compels servers to cooperate, to proactively refrain from any attempts to track the user. The header could be sent to all third-party domains, regardless of whether the domain is already known or whether it actually engages in tracking. With the header, users wouldn’t need to guess whether a domain should be blocked or not, and they wouldn’t have to risk either allowing tracking accidentally or blocking a useful feature.

Tracking Protection and other similar browser-based defenses like Adblock Plus and NoScript are reasonable, but incomplete, interim solutions. They should be viewed as complementary with Do Not Track. For entities under FTC jurisdiction, Do Not Track could put an effective end to the tracking arms race between those entities and browser-based defenses—a race that browsers (and thus consumers) are losing now and will be losing in the foreseeable future. For those entities outside FTC jurisdiction, blocking unwanted third parties is still a useful though leaky defense that maintains the status quo.

Information security experts like to preach “defense in depth” and it’s certainly vital in this case. Neither solution fully protects the user, so users really need both solutions to be available in order to gain more comprehensive protection. As such, the upcoming features in IE and Firefox should not be seen as a technical substitute for Do Not Track.

——

To reiterate: if the technology that implements Do Not Track ends up being an HTTP header, which I think it should be, it would be both technically feasible and relatively simple. It’s also distinct from recent browser announcements about privacy in that Do Not Track forces server cooperation, while browser-based defenses work alone to fend off tracking.

What other technical issues related to Do Not Track remain murky to readers? Feel free to leave comments here, or if you prefer on Twitter using the #dntrack tag and @harlanyu.