November 26, 2024

Voting Machine Insecurity

Recently, researchers at Johns Hopkins and Rice Universities reported serious security flaws in electronic voting technology sold by Diebold. I haven’t yet had a chance to read the paper carefully, but I know all of the authors, and I would be very surprised if they were wrong. Eric Rescorla discusses the paper and Diebold’s response.

This story follows a common pattern, in which a company claims that its secret technology is secure, only to have the security claim collapse when the system’s design finally does become known. This happens so often that security experts now routinely discount security claims that have not been subject to public scrutiny.

The researchers’ results should not be taken as evidence that Diebold machines are less secure than other secret systems. Most likely, all of the secret systems suffer from problems of a similar magnitude. If Diebold fixes the reported problems, then Diebold’s systems will probably be more secure than its competitors’.

This effect is what makes legislation like H.R. 2239 so important. Secrecy makes it difficult for vendors to differentiate their products based on security, because buyers cannot tell a secure product from an insecure one. Opening the systems up for inspection allows vendors to compete on security, and that competition helps everybody.

Conflict of Interest

Several readers have asked about the big project that has kept me from blogging much this summer. The “project” involved expert witness testimony in a lawsuit, Eolas Technologies and University of California v. Microsoft. I testified as an expert witness, called by the plaintiffs. (The case is ongoing.)

In some alternative universe, this lawsuit and my work on it would have provided fodder for many interesting blog posts. But, as so often happens here in this universe, I can’t really talk or write about most of it.

It’s depressing how often this kind of thing happens, with direct knowledge of a topic serving to disqualify somebody from talking about it. Many conflict of interest rules seem to have this effect, locking out of a discussion precisely those people who know the topic best.

The same thing often happens in discussions with the press, where people who are connected to an issue have to speak especially carefully, because their words might be attributed indirectly to one of the participants. The result can be that those unconnected to the events get most of the ink.

Now I understand why these rules and practices exist; and in most cases I agree that they are good policy. I understand why I cannot talk about what I have learned on various topics. Still, it’s frustrating to imagine how much richer our public discourse could be if everybody were free to bring their full knowledge and understanding to the table.

[I remember an interesting old blog post on a related topic from Lyn Millett over at uncorked.org; but I couldn’t find her post when I was writing this one.]

Here We Go Again

Rep. John Conyers has introduced the Author, Consumer, and Computer Owner Protection and Security (ACCOPS) Act of 2003 in the House of Representatives.

The oddest provision of the bill is this one:

(a) Whoever knowingly offers enabling software for download over the Internet and does not–

(1) clearly and conspicuously warn any person downloading that software, before it is downloaded, that it is enabling software and could create a security and privacy risk for the user’s computer; and

(2) obtain that person’s prior consent to the download after that warning;

shall be fined under this title or imprisoned not more than 6 months, or both.

(b) As used in this section, the term `enabling software’ means software that, when installed on the user’s computer, enables 3rd parties to store data on that computer, or use that computer to search other computers’ contents over the Internet.

As so often happens in these sorts of bills, the definition has unexpected consequences. For example, it would apparently categorize Microsoft Windows as “enabling software,” since Windows offers both file server facilities and network search facilities. But the original Napster client would apparently not be “enabling software,” since it did not let third parties store data on the user’s machine, and searches were handled by Napster’s central server rather than by the user’s computer.
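To make the oddity of the definition concrete, here is a minimal sketch that encodes section (b) as a simple predicate and applies it to a couple of feature profiles. The feature flags and the two example entries are my own illustrative assumptions, not anything drawn from the bill or from the products’ specifications.

```python
# Hypothetical sketch: section (b)'s "enabling software" test as a predicate.
# The feature flags below are assumptions made for illustration only.

def is_enabling_software(third_party_storage: bool,
                         searches_other_computers: bool) -> bool:
    """Section (b): software that lets third parties store data on the
    user's computer, OR use that computer to search other computers'
    contents over the Internet."""
    return third_party_storage or searches_other_computers

examples = {
    # Windows file sharing lets others write to shared folders, and its
    # network search features reach other machines.
    "Microsoft Windows": dict(third_party_storage=True,
                              searches_other_computers=True),
    # The original Napster client: peers downloaded from you, but did not
    # store data on your machine, and searches ran on a central server.
    "Original Napster client": dict(third_party_storage=False,
                                    searches_other_computers=False),
}

for name, features in examples.items():
    verdict = "enabling" if is_enabling_software(**features) else "not enabling"
    print(f"{name}: {verdict} software")
```

Under these assumed feature profiles, the operating system triggers the warning-and-consent requirement while the file-sharing client does not, which is presumably the opposite of what the drafters intended.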

Note also that the mandated security and privacy warnings would be misleading. After all, there is no reason why file storage or search services are inherently riskier than other network software. Misleading warnings impose a real cost, since they dilute users’ trust in any legitimate warnings they see.

The general approach of this bill, which we also saw in the Hollings CBDTPA, is to impose regulation on Bad Technologies. This approach will be a big success, once we work out the right definition for Bad Technologies.

Imagine the simplification we could achieve by applying this same principle to other areas of the law. For example, the entire criminal law can be reduced to a ban on Bad Acts, once we work out the appropriate definition for that term. Campaign finance law would be reduced to a ban on Corrupting Financial Transactions (with an appropriate exception for Constructive Debate).

Back in the Saddle

I haven’t been posting much lately, due to a high-intensity project that has sucked up all of my time. But now that’s over, so I should return to normal posting pace soon.

Why Aren’t Virus Attacks Worse?

Dan Simon notes a scary NYT op-ed, “Terrorism and the Biology Lab,” by Henry C. Kelly. Kelly argues convincingly that ordinary molecular biology students will soon be able to make evil bio-weapons. Simon points out the analogy to computer viruses, which are easily made and easily released. If serious bio-weapons become as common as computer viruses, we are indeed in deep trouble.

Eric Rescorla responds by noting that the computer viruses we actually see do relatively little damage, at least compared to what they might have done. Really malicious viruses, that is, ones engineered to do maximum damage, are rare. What we see instead are viruses designed to get attention and to show that the author could have done damage. The most likely explanation is that the authors of well-known viruses have written them as a sort of (twisted) intellectual exercise rather than out of spite. [By the way, don’t miss the comments on Eric’s post.]

This reminds me of a series of conversations I had a few years ago with a hotshot molecular-biology professor, about the national-security implications of bio-attacks versus cyber-attacks. I started out convinced that the cyber-attack threat, while real, was overstated; but bio-attacks terrified me. He had the converse view, that bio-attacks were possible but overhyped, while cyber-attacks were the real nightmare scenario. Each of us tried to reassure the other that really large-scale malicious attacks of the type we knew best (cyber- for me, bio- for him) were harder to carry out, and less likely, than commonly believed.

It seems to me that both of us, having spent many days in the lab, understood how hard it really is to make a novel, sophisticated technology work as planned. Since nightmare attacks are, by definition, novel and sophisticated and thus not fully testable in advance, the odds are pretty high that something would go “wrong” for the attacker. With a better understanding of how software can go wrong, I fully appreciated the cyber-attacker’s problem; and with a better understanding of how bio-experiments can go wrong, my colleague fully appreciated the bio-attacker’s problem. If there is any reassurance here, it is in the likelihood that any would-be attacker will miss some detail and his attack will fizzle.