November 25, 2024

The trick to defeating tamper-indicating seals

In this post I’ll tell you the trick to defeating physical tamper-evident seals.

When I signed on as an expert witness in the New Jersey voting-machines lawsuit, voting machines in New Jersey used hardly any security seals. The primary issues were in my main areas of expertise: computer science and computer security.

Even so, when the state stuck a bunch of security seals on their voting machines in October 2008, I found that I could easily defeat them. I sent in a supplemental expert report to the Court, explaining how.

Soon after I sent in that report, in January 2009 the State told the Court that it was abandoning all those seals and that it had new seals for the voting machines. As before, I obtained samples of these new seals and went down to my basement to work on them.

In a day or two, I figured out how to defeat all those new seals.

  • The vinyl tamper-indicating tape can be defeated using packing tape, a razor blade, and (optionally) a heat gun.
  • The blue padlock seal can be defeated with a portable drill and a couple of jigs that I made from scrap metal.
  • The security screw cap can be defeated with a $5 cold chisel and a pair of $10 long-nose pliers, each custom-ground on my bench grinder.

For details and pictures, see “Seal Regime #3” in this paper.

The main trick is simply knowing that physical seals are, in general, easy to defeat. Once you know that, it's just a matter of thinking about how to do it, and having a pile of samples on which to experiment. In fact, the techniques I describe in my paper are not the only way to defeat these seals, nor the best way; not even close. These techniques are what an amateur could come up with. But these seal defeats worked just fine when I demonstrated them in the courtroom during my testimony, and they would almost certainly not be detected by the kinds of seal-inspection protocols that most states (including New Jersey) use for election equipment.

(In addition, the commenters on my previous post describe a very simple denial-of-service attack on elections: brazenly cut or peel all the seals in sight. Then what will the election officials do? In principle they should throw out the ballots or data covered by those seals. But then what? “Do-overs” of elections are rare and messy. I suspect the most common action in this case is not even to notice anything wrong; and the second most common is to notice it but say nothing. Nobody wants to rock the boat.)

Seals on NJ voting machines, October-December 2008

In my examination of New Jersey’s voting machines, I found that there were no tamper-indicating seals that prevented fiddling with the vote-counting software—just a plastic strap seal on the vote cartridge. And I was rather skeptical that slapping seals on the machine would really secure the ROMs containing the software. I remembered Avi Rubin’s observations from a couple of years earlier, which I described in a previous post.

A bit of googling turned up this interesting 1996 article:


Vulnerability Assessment of Security Seals
Roger G. Johnston, Ph.D. and Anthony R.E. Garcia
Los Alamos National Laboratory

… We studied 94 different security seals, both passive and electronic, developed either commercially or by the United States Government. Most of these seals are in wide-spread use, including for critical applications. We learned how to defeat all 94 seals using rapid, inexpensive, low-tech methods.

In my expert report, I cited this scientific article to explain that seals would not be a panacea for the problems with the voting machines.

Soon after I delivered this report to the Court, the judge held a hearing in which she asked the defendants (the State of New Jersey) how they intended to secure these voting machines against tampering. A few weeks later, the State explained their new system: more seals.

For the November 2008 election, they slapped on three pieces of tape, a wire seal, and a “security screw cap”, in addition to the plastic strap seal that had already been in use. All these seals are in the general categories described by Johnston and Garcia as easy to defeat using “rapid, inexpensive, low-tech methods”.

Up to this point I knew in theory (by reading Avi Rubin and Roger Johnston) that tamper-indicating seals aren’t very secure, but I hadn’t really tried anything myself.

Here’s what is not so obvious: If you want to study how to lift and replace a seal without breaking it, or how to counterfeit a seal, you can’t practice on the actual voting machine (or other device) in the polling place! You need a few dozen samples of the seal, so that you can try different approaches, to see what works and what doesn’t. Then you need to practice these approaches over and over. So step 1 is to get a big bag of seals.

What I’ve discovered, by whipping out a credit card and trying it, is that the seal vendors are happy to sell you 100 seals, or 1000, or however many you need. They cost about 50 cents apiece, or more, depending on the seal. So I bought some seals. In addition, under Court order we got some samples from the State, but that wasn’t really necessary as all those seals are commercially available, as I found by a few minutes of googling.

The next step was to go down to my basement workshop and start experimenting. After about a day of thinking about the seals and trying things out, I cracked them all.

As I wrote in December 2008, all those seals are easily defeated.

  • The tamper-indicating tape can be lifted using a heat gun and a razor blade, then replaced with no indication of tampering.
  • The security screw cap can be removed using a screwdriver, then the serial-numbered top can be replaced (undamaged) onto a fresh (unnumbered) base.
  • The wire seal can be defeated using a #4 wood screw.
  • The plastic strap seal can be picked using a jeweler’s screwdriver.

For details and pictures, see “Seal Regime #2” in this paper.

Seals on NJ voting machines, 2004-2008

I have just released a new paper entitled “Security seals on voting machines: a case study,” and here I’ll explain how I came to write it.

Like many computer scientists, I became interested in the technology of vote-counting after the technological failure of hanging chads and butterfly ballots in 2000. In 2004 I visited my local polling place to watch the procedures for closing the polls, and I noticed that ballot cartridges were sealed by plastic strap seals like this one:

plastic strap seal

The pollworkers are supposed to write down the serial numbers on the official precinct report, but (as I later found when Ed Felten obtained dozens of these reports through an open-records request) about 50% of the time they forget to do this.

In 2008 when (as the expert witness in a lawsuit) I examined the hardware and software of New Jersey’s voting machines, I found that there were no security seals present that would impede opening the circuit-board cover to replace the vote-counting software. The vote-cartridge seal looks like it would prevent the cover from being opened, but it doesn’t.

There was a place to put a seal on the circuit-board cover, through the hole labeled “DO NOT REMOVE”, but there was no seal there.

Somebody had removed a seal, probably a voting-machine repairman who had to open the cover to replace the batteries, and nobody bothered to install a new one.

The problem with paperless electronic voting machines is that if a crooked political operative has access to install fraudulent software, that software can switch votes from one candidate to another. So, in my report to the Court during the lawsuit, I wrote,


10.6. For a system of tamper-evident seals to provide effective protection, the seals must be consistently installed, they must be truly tamper-evident, and they must be consistently inspected. With respect to the Sequoia AVC Advantage, this means that all five of the following would have to be true. But in fact, not a single one of these is true in practice, as I will explain.

  1. The seals would have to be routinely in place at all times when an attacker might wish to access the Z80 Program ROM; but they are not.
  2. The cartridge should not be removable without leaving evidence of tampering with the seal; but plastic seals can be quickly defeated, as I will explain.
  3. The panel covering the main circuit board should not be removable without removing the [vote-cartridge] seal; but in fact it is removable without disturbing the seal.
  4. If a seal with a different serial number is substituted, written records would have to reliably catch this substitution; but I have found major gaps in these records in New Jersey.
  5. Identical replacement seals (with duplicate serial numbers) should not exist; but the evidence shows that no serious attempt is made to avoid duplication.

Those five criteria are just common sense about what would be required in any effective system for protecting something using tamper-indicating seals. What I found was that (1) the seals aren’t always there; (2) even if they were, you can remove the cartridge without visible evidence of tampering with the seal; (3) you can remove the circuit-board cover without even disturbing the plastic-strap seal; (4) even if that hadn’t been true, the seal-inspection records are quite lackadaisical and incomplete; and (5) even if that weren’t true, since the counties tend to re-use the same serial numbers, the attacker could just obtain fresh seals with the same number!

Since the time I wrote that, I’ve learned from the seal experts that there’s a lot more to a seal use protocol than these five observations. I’ll write about that in the near future.

But first, I’ll write about the State of New Jersey’s slapdash response to my first examination of their seals. Stay tuned.

Web Browser Security User Interfaces: Hard to Get Right and Increasingly Inconsistent

A great deal of online commerce, speech, and socializing supposedly happens over encrypted protocols. When using these protocols, users supposedly know what remote web site they are communicating with, and they know that nobody else can listen in. In the past, this blog has detailed how the technical protocols and legal framework are lacking. Today I’d like to talk about how secure communications are represented in the browser user interface (UI), and what users should be expected to believe based on those indicators.

The most ubiquitous indicator of a “secure” connection on the web is the “padlock icon.” For years, banks, commerce sites, and geek grandchildren have been telling people to “look for the lock.” However, the padlock has problems. First, user studies have shown that despite all of the imploring, many people just don’t pay attention. Second, when they do pay attention, the padlock often gives them the impression that the site they are connecting to is the real-world person or company that the site claims to be (in reality, it usually just means that the connection is encrypted to “somebody”). Even more generally, many people think that the padlock means that they are “safe” to do whatever they wish on the site without risk. Finally, there are some tricky hacker moves that can make it appear that a padlock is present when it actually is not.

A few years ago, a group of engineers invented “Extended Validation” (EV) certificates. As opposed to “Domain Validation” (DV) certs that simply verify that you are talking to “somebody” who owns the domain, EV certificates actually do verify real-world identities. They also typically cause some prominent part of the browser to turn green and show the real-world entity’s name and location (e.g., “Bank of America Corporation (US)”). Separately, the W3 Consortium recently issued a final draft of a document entitled “Web Security Context: User Interface Guidelines.” The document describes web site “identity signals,” saying that the browser must “make information about the identity of the Web site that a user interacts with available.” These developments highlight a shift in browser security UI from simply showing a binary padlock/no-padlock icon to showing richer information about identity (when it exists).
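
The DV/EV difference is visible in the certificate itself. Here is a rough sketch (the host name is just an illustration, and browsers formally detect EV via CA policy identifiers rather than subject fields): a DV certificate's subject typically carries only a commonName, while an EV certificate also carries the verified organization name and jurisdiction.

    import socket
    import ssl

    def peer_cert_subject(host, port=443):
        # Fetch the server's certificate and flatten its subject fields.
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        return {k: v for rdn in cert["subject"] for (k, v) in rdn}

    # An EV subject typically includes organizationName (a legal entity
    # verified by the CA); a DV subject typically shows only commonName.
    print(peer_cert_subject("www.bankofamerica.com"))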

In the course of trying to understand all of these changes, I made a disturbing discovery: different browser vendors are changing their security UIs in different ways. Here are snapshots from some of the major browsers:

As you can see, all of the browsers other than Firefox still have a padlock icon (albeit in different places). Chrome now makes “https” and the padlock icon green regardless of whether it is DV or EV (see the debate here), whereas the other browsers reserve the green color for EV only. The confusion is made worse by the fact that Chrome appears to contain a bug in which the organization name/location (the only indication of EV validation) sometimes does not appear. Firefox chose to use the color blue for DV even though one of their user experience guys noted, “The color blue unfortunately carries no meaning or really any form of positive/negative connotation (this was intentional and the rational[e] is rather complex)”. The name/location from EV certificates appears in different places, and the method of coloring elements also varies (Safari in particular colors only the text, and does so in dark shades that can sometimes be hard to discern from black). Some browsers also make (different) portions of the URL a shade of gray in an attempt to emphasize the domain you are visiting.

Almost all of the browsers have made changes to these elements in recent versions. Mozilla has been particularly aggressive in changing Firefox’s user interface, with the most dramatic change being the removal of the padlock icon entirely as of Firefox 4. Here is the progression of changes to the UI when visiting DV-certified sites:

By stepping back to Firefox 2.0, we can see a much more prominent padlock icon in both the URL bar and in the bottom-right “status bar” along with an indication of what domain is being validated. Firefox 3.0 toned down the color scheme of the lock icon, making it less attention grabbing and removing it from the URL bar. It also removed the yellow background that the URL bar would show for encrypted sites, and introduced a blue glow around the site icon (“favicon”) if the site provided a DV cert. This area was named the “site identification button,” and is either grey, blue, or green depending on the level of security offered. Users can click on the button to get more information about the certificate, presuming they know to do so. At some point between Firefox 3.0 and 3.6, the domain name was moved from the status bar (and away from the padlock icon) to the “site identification button”.

In the soon-to-be-released Firefox 4, the padlock icon is removed altogether. Mozilla actually removed the “status bar” at the bottom of the screen completely, and the padlock icon with it. This has caused consternation among some users, and generated about 35k downloads of an addon that restores some of the functionality of the status bar (but not the padlock).

Are these changes a good thing? On the one hand, movement toward a more accurately descriptive system is generally laudable. On the other, I’m not sure whether there has been any study about how users interpret the color-only system — especially in the context of varying browser implementations. Anecdotally, I was unaware of the Firefox changes, and I had a moment of panic when I had just finished a banking transaction using a Firefox 4 beta and realized that there was no lock icon. I am not the only one. Perhaps I’m an outlier, and perhaps it’s worth the confusion in order to move to a better system. However, at the very least I would expect Mozilla to do more to proactively inform users about the changes.

It seems disturbing that the browsers are diverging in their visual language of security. I have heard people argue that competition in security UI could be a good thing, but I am not convinced that any benefits would outweigh the cost of confusing users. I’m also not sure that users are aware enough of the differences that they will consider it when selecting a browser… limiting the positive effects of any competition. What’s more, the problem is only set to get worse as more and more browsing takes place on mobile devices that are inherently constrained in what they can cram on the screen. Just take a look at iOS vs. Android:

To begin with, Mobile Safari behaves differently from desktop Safari. The green color is even harder to see here, and one wonders whether the eye will notice any of these changes when they appear in the browser title bar (this is particularly evident when browsing on an iPad). Android’s browser displays a lock icon that is identical for DV and EV sites. Windows Phone 7 behaves similarly, but only when the URL bar is present, and the URL bar is automatically hidden when you rotate your phone into landscape mode. BlackBerry shows a padlock icon inconspicuously in the top status bar of the phone (the same area as your signal strength and battery status). BlackBerry uniquely shows an unlocked padlock icon when on non-encrypted sites, something I don’t remember seeing in desktop browsers since Netscape Navigator (although maybe it’s a good idea to re-introduce some positive indication of “not encrypted”).

Some of my more cynical (or realistic) colleagues have said that given the research showing that most users don’t pay attention to this stuff anyway, trying to fix it is pointless. I am sympathetic to that view, and I think that making more sites default to HTTPS, encouraging adoption of standards like HSTS, and working on standards to make it easier to encrypt web communications are probably lower-hanging fruit. There nevertheless seems to be an opportunity here for some standardization amongst the browser vendors, with a foundation in actual usability testing.
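
To make the HSTS idea concrete, here is a minimal sketch (the URL is a placeholder) that checks whether a site sends the Strict-Transport-Security header, which instructs the browser to insist on HTTPS for all future visits; it prints None for sites that don't send it:

    import urllib.request

    # Fetch a page over HTTPS and look for the HSTS header, which tells
    # browsers to use HTTPS for every future visit to this host.
    resp = urllib.request.urlopen("https://example.com/")
    print(resp.headers.get("Strict-Transport-Security"))
    # A typical value looks like: "max-age=31536000; includeSubDomains"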

Burn Notice, season 4, and the abuse of the MacGuffin

One of my favorite TV shows is Burn Notice. It’s something of a spy show, with a certain amount of gadgets but generally no James Bond-esque Q to supply equipment that’s certainly beyond the reach of real-world spycraft. Burn Notice instead focuses on the value of teamwork, advance planning, and clever subterfuge to pull off its various operations, combined with a certain amount of humor and romance to keep the story compelling and engaging. You can generally watch along and agree with the feasibility of what they’re doing. Still, when they get closer to technology I actually know something about, I start to wonder.

One thing they recently got right, at least in some broad sense, was the ability to set up a femtocell (cell phone base station) as a way of doing a man-in-the-middle attack against a target’s cell phone. A friend of mine has one of these things, and he was able to set it up to service my old iPhone without anything more than my phone number. Of course, it changed the service name (from “AT&T” to “AT&T Microcell” or something along those lines), but it’s easy to imagine, in a spy-vs-spy scenario, that that would be easy to fix. Burn Notice didn’t show the longer-range antenna or amplifier that would be needed to reach their target, who was inside a building while our wiretapping heroes were out on the street, but I’m almost willing to let them get away with that, never mind having to worry about GSM versus CDMA. Too much detail would detract from the story.

(Real world analogy: Rop Gonggrijp, a Dutch computer scientist who had some tangential involvement with WikiLeaks, recently tweeted: “Foreign intel attention is nice: I finally have decent T-Mobile coverage in my office in the basement. Thanks guys…”)

What’s really bothered me about this season’s Burn Notice, though, was the central plot MacGuffin. Quoting Wikipedia: “the defining aspect of a MacGuffin is that the major players in the story are (at least initially) willing to do and sacrifice almost anything to obtain it, regardless of what the MacGuffin actually is.” MacGuffins are essential to many great works of drama, yet it seems that Hollywood fiction writers haven’t yet adapted the ideas of MacGuffins to dealing with data, and it really bugs me.

Without spoiling too much, Burn Notice’s MacGuffin for the second half of season 4 was a USB memory stick which happened to have some particularly salacious information on it (a list of employee ID numbers corresponding to members of a government conspiracy), and which lots of people would (and did) kill to get their hands on. Initially we had the MacGuffin riding around on the back of a motorcycle courier; our heroes had to locate and intercept it. Our heroes then had to decide whether to use the information themselves or pass it on to a trusted insider in the government. Later, after various hijinks wherein our heroes lost the MacGuffin, the bad guy locked it in a fancy safe, which our heroes had to physically find, remove from a cinderblock wall, and later open with an industrial drill press.

When the MacGuffin was connected to a computer, our heroes could read it, but due to some sort of unspecified “cryptography” they were unable to make copies. Had that essential element been more realistic, the entire story would have changed. Never mind that there’s no such “encryption” technology out there. For a show that has our erstwhile heroes regularly use pocket digital cameras to photograph computer screens or other sensitive documents, you’d think they would do something similar here. Nope. The problem is that any realistic attempt to model how easy it is to copy data like this would have blown apart the MacGuffin-centric nature of the plot. Our protagonists could have copied the data, early on, and handed the memory card over. They could have then handed over bogus data written to the same memory stick. They could have created thousands of webmail accounts, each holding copies of the data. They could have anonymously sent the incriminating data to any of a variety of third parties, perhaps borrowing some plot elements from the whole WikiLeaks fiasco. In short, there could still have been a compelling story, but it wouldn’t have followed the standard MacGuffin structure, and it would almost certainly have reached a very different conclusion.
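
The underlying point is that readable data is copyable data, bit for bit. A minimal sketch, with hypothetical paths standing in for the memory stick's actual mount point:

    import shutil

    # Anything a computer can read, it can duplicate verbatim: there is
    # no way to make data readable yet impossible to copy.
    # (Paths are hypothetical; substitute the stick's real mount point.)
    shutil.copyfile("/media/usb/employee_ids.dat", "/tmp/employee_ids_copy.dat")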

All in all, it’s probably a good thing I don’t know too much about combat tactics, explosives, or actual spycraft, or I’d be completely unable to enjoy a show like this. I expect James Bond to do impossible things, but I appreciate Burn Notice for its ostensible realism. I can almost imagine it actually happening.