Ohio Study: Scariest E-Voting Security Report Yet

The State of Ohio released the report of a team of computer scientists it commissioned to study the state’s e-voting systems. Though the competition is stiff, this may qualify as the scariest e-voting study report yet.

This was the most detailed study yet of the ES&S iVotronic system, and it confirms the results of the earlier Florida State study. The study found many ways to subvert ES&S systems.

The ES&S system, like its competitors, is subject to viral attacks that can spread from one voting machine to others, and to the central vote tabulation systems.

Anyone with access to a machine can re-calibrate the touchscreen to affect how the machine records votes (page 50):

A terminal can be maliciously re-calibrated (by a voter or poll worker) to prevent voting for certain candidates or to cause voter input for one candidate to be recorded for another.

Worse yet, the system’s access control can be defeated by a poll worker or an ordinary voter, using only a small magnet and a PDA or cell phone (page 50).

Some administrative functions require entry of a password, but there is an undocumented backdoor function that lets a poll worker or voter with a magnet and PDA bypass the password requirements (page 51).

The list of problems goes on and on. It’s inconceivable that the iVotronic could have undergone any kind of serious security review before being put on the market. It’s also unclear how the machine managed to get certified.

Even if you don’t think anyone would try to steal an election, this should still scare you. A machine with so many design errors must also be susceptible to misrecording or miscounting votes due to the ordinary glitches and errors that always plague computer systems. Even if all poll workers and voters were angels, this machine would be too risky to use.

This is yet more evidence that today’s paperless e-voting machines can’t be trusted.

[Correction (December 18): I originally wrote that this was the first independent study of the iVotronic. In fact, the Florida State team studied the iVotronic first and reported many problems. The new report confirms the Florida State report, and provides some new details. My apologies to the Florida State team for omitting their work.]

Economics of Eavesdropping For Pay

Following up on Andrew’s post about eavesdropping as a profit center for telecom companies, let’s take a quick look at the economics of eavesdropping for money. We’ll assume for the sake of argument that (1) telecom (i.e. transporting bits) is a commodity so competition forces providers to sell it essentially at cost, (2) the government wants to engage in certain eavesdropping and/or data mining that requires cooperation from telecom providers, (3) cooperation is optional for each provider, and (4) the government is willing to pay providers to cooperate.

A few caveats are in order. First, we’re not talking about situations, such as traditional law enforcement eavesdropping pursuant to a warrant, where the provider is compelled to cooperate. Providers will cooperate in those situations, as they should. We’re only talking about additional eavesdropping where the providers can choose whether to cooperate. Second, we don’t care whether the government pays for cooperation or threatens retaliation for non-cooperation – either way the provider ends up with more money if it cooperates. Finally, we’re assuming that the hypothetical surveillance or data mining program, and the providers’ participation in it, is lawful; otherwise the law will (eventually) stop it. With those caveats out of the way, let the analysis begin.

Suppose a provider charges each customer an amount P for telecom service. The provider makes minimal profit at price P, because by assumption telecom is a commodity. The government offers to pay the provider an amount E per customer if the provider allows surveillance. The provider has two choices: accept the payment and offer service with surveillance at a price of P-E, or refuse the payment and offer reduced-surveillance service at price P. A rational provider will do whatever it thinks its customers prefer: Would typical customers rather save E, or would they rather avoid surveillance?
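
To make the provider’s choice concrete, here’s a minimal sketch in Python. All the numbers are made up for illustration; nothing here comes from an actual tariff or government program.

```python
# A minimal sketch of the provider's choice. P and E are made-up
# numbers; a customer whose dollar value for avoiding surveillance
# exceeds the discount E prefers the full-price service.

P = 40.00  # monthly price of commodity telecom service (hypothetical)
E = 5.00   # government payment per customer for surveillance (hypothetical)

def preferred_service(privacy_value: float) -> str:
    """Which offer does a rational customer pick?"""
    if privacy_value > E:
        return f"unsurveilled service at ${P:.2f}"
    return f"surveilled service at ${P - E:.2f}"

for value in (1.00, 5.00, 20.00):
    print(f"privacy worth ${value:.2f}/month -> {preferred_service(value)}")
```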

In this scenario, surveillance isn’t actually a profit center for the provider – the payment, if accepted, gets passed on to customers as a price discount. The provider is just an intermediary; the customers are actually deciding.

But of course the government won’t allow each customer to make an individual decision whether to allow surveillance – then the bad guys could pay extra to avoid being watched. If enough customers prefer for whatever reason to avoid surveillance (at a cost of E), then some provider will emerge to serve them. So the government will have to set E large enough that the number of customers who would refuse the payment is not large enough to support even one provider. This implies a decent-sized value for E.
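
Here’s a back-of-the-envelope sketch of that threshold, assuming (purely for illustration) that customers’ privacy valuations follow an exponential distribution and that a provider needs some minimum customer base to survive:

```python
import math

# Back-of-the-envelope: how big must E be so that the holdouts who
# would pay full price to avoid surveillance can't sustain even one
# provider? All parameters below are invented for illustration.

N = 1_000_000             # customers in the market
MEAN_PRIVACY_VALUE = 4.0  # average dollar value placed on avoiding surveillance
MIN_CUSTOMERS = 50_000    # smallest customer base a provider can survive on

def holdouts(E: float) -> float:
    """Customers whose privacy value exceeds E, assuming an
    exponential distribution of privacy valuations."""
    return N * math.exp(-E / MEAN_PRIVACY_VALUE)

E = 0.0
while holdouts(E) >= MIN_CUSTOMERS:
    E += 0.25
print(f"government must offer roughly E = ${E:.2f} per customer")
# With these numbers E comes out near $12, three times the average
# privacy valuation, because the government has to price out the
# entire tail of privacy-sensitive customers, not just the average one.
```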

But there’s another possibility. Suppose a provider claims to be refusing the payment, but secretly accepts the payment and allows surveillance of its customers. If customers fall for the lie, then the provider can charge the full price P while pocketing the government payment E. Now surveillance is a profit center for the provider, as long as customers don’t catch on.

If customers know that providers might be lying, savvy customers will discount a provider’s claim to be refusing the payments. So the premium customers are willing to pay for (claims of) avoiding surveillance will be smaller, and the government can buy more surveillance more cheaply.
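
A hypothetical illustration of that discounting: if customers believe a provider’s no-surveillance claim only with probability q, the premium they’ll pay shrinks in proportion.

```python
# Hypothetical illustration: customers value genuinely unsurveilled
# service at V dollars per month, but believe a provider's "we refuse
# surveillance" claim only with probability q. A risk-neutral customer
# then pays at most q * V extra, so the less credible providers are,
# the cheaper surveillance becomes for the government.

V = 10.00  # value of verified non-surveillance (made-up number)

for q in (1.0, 0.8, 0.5, 0.2):
    print(f"claim believed with probability {q:.1f} -> premium capped at ${q * V:.2f}")
```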

The incentives here get pretty interesting. Government benefits by undermining providers’ credibility, as that lowers the price government has to pay for surveillance. Providers who are cooperating with the government want to undermine their fellow providers’ credibility, thereby making customers less likely to buy from surveillance-resisting providers. Providers who claim, truthfully or not, to be refusing surveillance want to pick fights with the government, making it look less likely that they’re cooperating with the government on surveillance.

If government wants to use surveillance, why doesn’t it require providers to cooperate? That’s a political question that deserves a post of its own.

On freezing your credit reports

In my last post, where I discussed the (likely) theft of my SSN from the State of Ohio, I briefly discussed the possibility of “freezing” my credit report. I’ve done some more investigation on how, exactly, this works.

Details seem to vary from state to state (Consumers Union has a nice summary), but you can generally write to each of the three major credit bureaus, via postal mail, and request that your account be “frozen.” This will not prevent you from getting “pre-approved” credit-card offers; for those, you must opt out separately, although you can at least do that online. Once your request takes effect, most requests to access your credit report will be denied. There are a wide variety of exceptions, mostly related to parties you’re already doing business with, which strikes me as entirely reasonable.

Cost? If you’re the victim of identity fraud (and it’s unclear whether I meet that definition), it’s free: you include a copy of your police report with your letter to each of the credit bureaus. If not, the cost is $10 per bureau. Multiply by three, and that’s $30. Married and want to do it for your spouse? Add another $30. What if you want to temporarily (or permanently) lift the block? The price varies, but it’s comparable.

Here’s the problem with this system: let’s say you’re doing the sort of things for which people legitimately want to look up your credit report (e.g., borrowing money for a car, opening a new credit card, renting a new apartment, etc.). Particularly if you’re changing jobs, moving to a new area, and so forth, you’ll be doing a lot of this all at once. As a result, precisely when you’re most often giving out your SSN and thus increasing your vulnerability, you also have to disable the block on your account, exposing yourself to the risk of identity theft.

The proper answer, of course, is to arrange for SSNs to have no more value to an identity thief than your name and address. The unanswered question, then, is what exactly can replace the SSN as an authenticator? One possibility, raised in the thread on car dealers who insist on fingerprints, is to require that these sorts of transactions be notarized. A notary public’s main function is to authenticate that a specific person signed a specific document. You already need a notary’s services when you buy or sell a house; why not require their services for any transaction that involves a personal credit report? The answer, I imagine, is cost, in both time and money. Department stores would be unable to give you “instant credit cards.” Applying to rent an apartment would become more complicated and annoying. There would be more friction, all around, in getting credit. However, if identity theft continues to be such a significant problem, maybe it’s a trade-off worth making.

(Aside: how, exactly, do you convince the notary of your identity? The answer varies, but it seems to involve a photo ID, signature, and in some cases a thumbprint. You could certainly imagine cutting the notary out of the process and pushing the same authentication process out to a cash register or wherever else, but this creates a trusted path problem. When a human notary is authenticating a paper document, there’s no question to anybody what, exactly, is being authenticated. If you give your biometric and ID card to a scanner in a store, you have no idea where that data is going and what, ultimately, is being authenticated on your behalf. Astute readers may see a connection between this and the need for election systems to have voter-verifiable paper trails, but that’s a discussion for another day.)

On stolen data with privacy-relevant information

I just received a first-class letter from the State of Ohio, telling me:

The State of Ohio has confirmed that your name and social security number was contained on a computer back-up device that was stolen. It is unlikely that someone can access the data contained in the device without specialized knowledge and equipment. Because we have no information to date that the data has been accessed, everything we are doing, or suggesting that you consider doing, is preventative.

The State of Ohio is doing everything possible to recover the stolen device and protect the personal information that was on the device. We regret that the loss of this sensitive data may place an undue burden of concern on you.

The letter explains how I can sign up with Debix for their identity protection services, and provides a PIN for me to use. (So, now I can spread my SSN further. Wonderful.)

The last time I set foot in Ohio was over three years ago, when I testified about electronic voting security issues, so it seems odd that they would still have my SSN on file. I don’t recall whether they specifically asked me for my SSN, but it’s common for these sorts of engagements to require it as part of reimbursing travel expenses. It’s also possible that my SSN was on this backup tape for other reasons. Some news stories say that sixty Connecticut citizens’ information was present on the tape; I’m from Texas, so that shouldn’t have affected me. The State of Ohio has its own official web site to discuss the incident, which apparently happened back in June, yet they’re only telling me now.

Okay, let’s see if we can figure out what’s going on here. First, the “back-up device” in question appears to be nothing more than a backup tape. They don’t say what kind of tape it was, but there are only a handful of options these days, and it’s not exactly hard to buy a tape drive, making the “specialized knowledge and equipment” line seem pretty unlikely. (As long as I’ve been doing security work, I’ve seen similar responses. The more things change…) So what actually happened? According to the official web site:

The Inspector General investigation determined that: “OAKS administrators failed to protect confidential information by authorizing state employees, including college interns, to take backup tapes containing sensitive data to their homes for overnight storage”; “OAKS, OIT (Office of Information Technology) and OBM (Office of Budget and Management) officials failed to report the theft of confidential information to state and law enforcement officials in a timely manner”; and “OAKS administrators failed to protect confidential information by allowing personnel to store sensitive data in an unsecured folder on the OAKS intranet.” The Inspector General found no evidence to suggest state agencies or employees engaged in criminal or illegal behavior surrounding these circumstances.

At its core, Ohio apparently had fantastically poor procedures along with what Jerry Saltzer refers to as the “bad news diode,” i.e., bad news never flows up the chain of command. Combine those and it shouldn’t be surprising that something would eventually go wrong. In my case, such poor procedures make it believable that nobody bothered to delete my information after it was no longer necessary to retain it. Or maybe they have some misguided anti-terrorist accounting rule where they hang onto this data “just in case.” Needless to say, I don’t know.

It’s reasonable to presume that this sort of issue is only going to become more common over time. It’s exceptionally difficult to keep your SSN truly private, particularly if reimbursement paperwork, among other things, unnecessarily requires the disclosure of a SSN. The right answer is probably an amalgamation of data destruction policies (to limit the scope of leaks when they happen), rational data management policies (to make leaks less likely), and federal regulations making it harder to convert a SSN into cash (to make leaked SSNs less valuable).

(Sidebar: when my wife and I bought a new car in 2005, the dealer asked for my SSN. “I’m paying cash. You don’t need it,” I said. They replied that I could either wait until the funds cleared, or I could let them run a credit check on me. I grumbled and caved in. At least they didn’t ask for my fingerprint.)

iPhone Unlocking Secret Revealed

The iPhone unlocking story took its next logical turn this week, with the release of a free iPhone unlocking program. Previously, unlocking required buying a commercial program or following a scary sequence of documented hardware and software tweaks.

How this happened is interesting in itself. (Caveat: this is based on the stories I’m hearing; I haven’t confirmed it all myself.) The biggest technical barrier to a software-only unlock procedure was figuring out how the unlocking program, once installed on the iPhone, could modify the machine’s innermost configuration information – something that Apple’s iPhone operating system software was trying to prevent. A company called iPhoneSimFree figured out a way to do this, and used it to develop easy-to-use iPhone unlocking software, which they started selling.

Somebody bought a copy of the iPhoneSimFree software and reverse engineered it, to figure out how it could get at the iPhone’s internal configuration. The trick, once discovered, was easy to replicate, which eliminated the last remaining barrier to the development and release of free iPhone unlocking software.

It’s a commonplace in computer security that physical control over a device can almost always be leveraged to control the device’s behavior. (This iceberg has sunk many DRM Titanics.) This principle was the basis for iPhoneSimFree’s business model – helping users control their iPhones – but it boomeranged on them when a reverse engineer applied the same principle to iPhoneSimFree’s own product. Once the secret was out, anyone could make iPhone unlocking software, and the price of that software would inevitably be driven down to its marginal cost of zero.

Intellectual property law had little to offer iPhoneSimFree. The trick turned out to be a fact about how Apple’s software worked – not copyrightable by iPhoneSimFree, and not patentable in practice. Trade secret law didn’t help either, because trade secrets are not shielded against reverse engineering (for good reason). They could have attached a license agreement to their product, making customers promise not to reverse engineer it, but that would not have been effective either. And it might not have been the smartest thing to rely on, given that their own product was surely based on reverse engineering of the iPhone.

Now that the unlocking software is out, the ball is in Apple’s court. Will they try to cram the toothpaste back into the tube? Will they object publicly but accept that the iPhone unlocking battle is essentially over? Will they try to play another round, by modifying the iPhone software? Apple tends to be clever about these things, so their strategy, whatever it is, will have something to teach us.