
E-Voting Links for Election Day

Today, of course, is Election Day in the U.S. Many of our U.S. readers will be casting their votes electronically.

CITP has been front and center on the e-voting issue. Here’s a quick set of CITP e-voting links:

Finally, in keeping with tradition here, on Election Day I post photos of unguarded voting machines in the Princeton area. Here are two photos taken over the weekend:

[This entry was updated at 1:00 PM Eastern on 2 Nov 2010, to add the Checkoway et al. item.]

Understanding the HDCP Master Key Leak

On Monday, somebody posted online an array of numbers which purports to be the secret master key used by HDCP, a video encryption standard used in consumer electronics devices such as DVD players and TVs. I don’t know if the key is genuine, but let’s assume for the sake of discussion that it is. What does the leak imply for HDCP’s security? And what does the leak mean for the industry, and for consumers?

HDCP is used to protect high-def digital video signals “on the wire,” for example on the cable connecting your DVD player to your TV. HDCP is supposed to do two things: it encrypts the content so that it can’t be captured off the wire, and it allows each endpoint to verify that the other endpoint is an HDCP-licensed device. From a security standpoint, the key step in HDCP is the initial handshake, which establishes a shared secret key that will be used to encrypt communications between the two devices, and at the same time allows each device to verify that the other one is licensed.

As usual when crypto is involved, the starting point for understanding the system’s design is to think about the secret keys: how many there are, who knows them, and how they are used. HDCP has a single master key, which is supposed to be known only by the central HDCP authority. Each device has a public key, which isn’t a secret, and a private key, which only that device is supposed to know. There is a special key generation algorithm (“keygen” for short) that is used to generate private keys. Keygen uses the secret master key and a public key to generate the unique private key that corresponds to that public key. Because keygen uses the secret master key, only the central authority can do keygen.

Each HDCP device (e.g., a DVD player) has baked into it a public key and the corresponding private key. To get those keys, the device’s manufacturer needs the help of the central authority, because only the central authority can do keygen to determine the device’s private key.

Now suppose that two devices, which we’ll call A and B, want to do a handshake. A sends its public key to B, and vice versa. Then each party combines its own private key with the other party’s public key, to get a shared secret key. This shared key is supposed to be secret—i.e., known only to A and B—because making the shared key requires having either A’s private key or B’s private key.

Note that A and B actually did different computations to get the shared secret. A combined A’s private key with B’s public key, while B combined B’s private key with A’s public key. If A and B did different computations, how do we know they ended up with the same value? The short answer is: because of the special mathematical properties of keygen. And the security of the scheme depends on this: if you have a private key that was made using keygen, then the HDCP handshake will “work” for you, in the sense that you’ll end up getting the same shared key as the party on the other end. But if you tried to use a random “private key” that you cooked up on your own, then the handshake won’t work: you’ll end up with a different shared key than the other device, so you won’t be able to talk to that device.
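The structure described above behaves like a Blom-style key agreement, in which the master key is a symmetric matrix. The toy Python sketch below illustrates that structure under the parameters reported in the cryptanalysis literature (40-element keys of 56-bit numbers, public keys with exactly twenty 1 bits); it is an illustrative model, not the real HDCP implementation.

```python
import random

MOD, N = 2 ** 56, 40   # 56-bit arithmetic, 40-element keys (toy model)

def make_master_key(rng):
    # The authority's secret: a symmetric N x N matrix of values mod 2^56.
    # Symmetry is what makes both sides of the handshake agree.
    m = [[0] * N for _ in range(N)]
    for i in range(N):
        for j in range(i, N):
            m[i][j] = m[j][i] = rng.randrange(MOD)
    return m

def make_public_key(rng):
    # A public key is a 40-bit vector with exactly twenty 1 bits
    # (a "KSV" in HDCP's terminology).
    bits = [1] * 20 + [0] * 20
    rng.shuffle(bits)
    return bits

def keygen(master, pub):
    # Private key = sum (mod 2^56) of the master-matrix rows selected
    # by the 1 bits of the public key. Only the authority can do this.
    return [sum(master[i][j] for i in range(N) if pub[i]) % MOD
            for j in range(N)]

def shared_key(my_priv, their_pub):
    # Combine my own private key with the other side's public key.
    return sum(my_priv[j] for j in range(N) if their_pub[j]) % MOD

rng = random.Random(2010)
master = make_master_key(rng)                  # known only to the authority
pub_a, pub_b = make_public_key(rng), make_public_key(rng)
priv_a = keygen(master, pub_a)                 # baked into device A
priv_b = keygen(master, pub_b)                 # baked into device B

# A and B perform different computations...
key_at_a = shared_key(priv_a, pub_b)
key_at_b = shared_key(priv_b, pub_a)

# ...yet agree, because A computes pub_a * M * pub_b while B computes
# pub_b * M * pub_a, and M is symmetric.
assert key_at_a == key_at_b
```

In this model, a random “private key” not produced by keygen would fail that final check with overwhelming probability, which is how the handshake doubles as a licensing check.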

Now we can understand the implications of the master key leaking. Anyone who knows the master key can do keygen, so the leak allows everyone to do keygen. And this destroys both of the security properties that HDCP is supposed to provide. HDCP encryption is no longer effective because an eavesdropper who sees the initial handshake can use keygen to determine the parties’ private keys, thereby allowing the eavesdropper to determine the encryption key that protects the communication. HDCP no longer guarantees that participating devices are licensed, because a maker of unlicensed devices can use keygen to create mathematically correct public/private key pairs. In short, HDCP is now a dead letter, as far as security is concerned.
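Both failures can be seen concretely in a Blom-style toy model of the scheme (a simplified sketch, not real HDCP arithmetic): with the master matrix in hand, an eavesdropper who observes only the public keys exchanged in a handshake recovers the session key, and an unlicensed maker can mint fresh key pairs that handshake correctly with licensed devices.

```python
import random

MOD, N = 2 ** 56, 40   # toy parameters: 56-bit values, 40-element keys

def sym_matrix(rng):
    # Symmetric N x N master matrix -- supposed to be secret, now leaked.
    m = [[0] * N for _ in range(N)]
    for i in range(N):
        for j in range(i, N):
            m[i][j] = m[j][i] = rng.randrange(MOD)
    return m

def public_key(rng):
    bits = [1] * 20 + [0] * 20   # 40 bits, exactly twenty 1s
    rng.shuffle(bits)
    return bits

def keygen(master, pub):
    # Sum (mod 2^56) of master rows selected by the public key's 1 bits.
    return [sum(master[i][j] for i in range(N) if pub[i]) % MOD
            for j in range(N)]

def shared_key(priv, their_pub):
    return sum(priv[j] for j in range(N) if their_pub[j]) % MOD

rng = random.Random(42)
master = sym_matrix(rng)

# Legitimate devices A and B, keyed by the authority at manufacture:
pub_a, pub_b = public_key(rng), public_key(rng)
priv_a, priv_b = keygen(master, pub_a), keygen(master, pub_b)
session = shared_key(priv_a, pub_b)

# Attack 1: the eavesdropper sees only pub_a and pub_b on the wire.
# With the leaked master, keygen is no longer the authority's monopoly:
spied = shared_key(keygen(master, pub_a), pub_b)
assert spied == session

# Attack 2: an unlicensed maker mints a brand-new, mathematically
# valid key pair that handshakes correctly with licensed device B:
pub_evil = public_key(rng)
priv_evil = keygen(master, pub_evil)
assert shared_key(priv_evil, pub_b) == shared_key(priv_b, pub_evil)
```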

(It has been a dead letter, from a theoretical standpoint, for nearly a decade. A 2001 paper by Crosby et al. explained how the master secret could be reconstructed given a modest number of public/private key pairs. What Crosby predicted—a total defeat of HDCP—has now apparently come to pass.)

The impact of HDCP’s failure on consumers will probably be minor. The main practical effect of HDCP has been to create one more way in which your electronics could fail to work properly with your TV. This is unlikely to change. Mainstream electronics makers will probably continue to take HDCP licenses and to use HDCP as they are now. There might be some differences at the margin, where manufacturers feel they can take a few more liberties to make things work for their customers. HDCP has been less a security system than a tool for shaping the consumer electronics market, and that is unlikely to change.

Why did anybody believe Haystack?

Haystack, a hyped technology that claimed to help political dissidents hide their Internet traffic from their governments, has been pulled by its promoters after independent researchers got a chance to study it and found severe problems.

This should come as a surprise to nobody. Haystack exhibited the warning signs of security snake oil: the flamboyant, self-promoting front man; the extravagant security claims; the super-sophisticated secret formula that cannot be disclosed; the avoidance of independent evaluation. What’s most interesting to me is that many in the media, and some in Washington, believed the Haystack hype, despite the apparent lack of evidence that Haystack would actually protect dissidents.

Now come the recriminations.

Jillian York summarizes the depressing line of adulatory press stories about Haystack and its front man, Austin Heap.

Evgeny Morozov at Foreign Policy, who has been skeptical of Haystack from the beginning, calls several Internet commentators (Zittrain, Palfrey, and Zuckerman) “irresponsible” for failing to criticize Haystack earlier. Certainly, Z, P, and Z could have raised questions about the rush to hype Haystack. But the tech policy world is brimming with overhyped claims, and it’s too much to expect pundits to denounce them all. Furthermore, although Z, P, and Z know a lot about the Internet, they don’t have the expertise to evaluate the technical question of whether Haystack users can be tracked — even assuming the evidence had been available.

Nancy Scola, at TechPresident, offers a more depressing take, implying that it’s virtually impossible for reporters to cover technology responsibly.

It takes real work for reporters and editors to vet tech stories; it’s not enough to fact check quotes, figures, and events. Even “seeing a copy [of the product],” as York puts it, isn’t enough. Projects like Haystack need to be checked out by technologists in the know, and I’d argue that before the recent rise of techno-advocates like, say, Clay Johnson or Tom Lee, there weren’t obvious knowledgeable sources for even dedicated reporters to call to help them make sense of something like Haystack, on deadline and in English.

Note the weasel-word “obvious” in the last sentence — it’s not that qualified experts don’t exist, it’s just that, in Scola’s take, reporters can’t be bothered to find out who they are.

I don’t think things are as bad as Scola implies. We need to remember that the majority of tech reporters didn’t hype Haystack. Non-expert reporters should have known to be wary about Haystack, just based on healthy journalistic skepticism about bold claims made without evidence. I’ll bet that many of the more savvy reporters shied away from Haystack stories for just this reason. The problem is that the few who did not shy away got undeserved attention.

[Update (Tue 14 Sept 2010): Nancy Scola responds, saying that her point was that reporters’ incentives are to avoid checking up too much on enticing-if-true stories such as Haystack. Fair enough. I didn’t mean to imply that she condoned this state of affairs, just that she was pointing out its existence.]