Copyright, Censorship, and Domain Name Blacklists at Home in the U.S.

Last week, The New York Times reported that Russian police were using copyright allegations to raid political dissidents, confiscating the computers of advocacy groups and opposition newspapers “under the pretext of searching for pirated Microsoft software.” Admirably, Microsoft responded the next day with a declaration of license amnesty to all NGOs:

To prevent non-government organizations from falling victim to nefarious actions taken in the guise of anti-piracy enforcement, Microsoft will create a new unilateral software license for NGOs that will ensure they have free, legal copies of our products.

Microsoft’s authorization undercuts any claim that its software is being infringed, but the Russian authorities may well find other popular software to use as pretext to disrupt political opponents.

“Piracy” has become the new tax evasion, an all-purpose charge that can be lobbed against just about anyone. If the charge alone can prompt investigation — and any electronics could harbor infringing copies — it gives authorities great discretion to interfere with dissidents.

That tinge of censorship should raise grave concern here in the United States, where Senators Patrick Leahy and Orrin Hatch, with Senate colleagues, have introduced the “Combating Online Infringement and Counterfeits Act” (PDF).

Understanding the HDCP Master Key Leak

On Monday, somebody posted online an array of numbers that purports to be the secret master key used by HDCP, a video encryption standard used in consumer electronics devices such as DVD players and TVs. I don’t know whether the key is genuine, but let’s assume for the sake of discussion that it is. What does the leak imply for HDCP’s security? And what does it mean for the industry, and for consumers?

HDCP is used to protect high-def digital video signals “on the wire,” for example on the cable connecting your DVD player to your TV. HDCP is supposed to do two things: it encrypts the content so that it can’t be captured off the wire, and it allows each endpoint to verify that the other endpoint is an HDCP-licensed device. From a security standpoint, the key step in HDCP is the initial handshake, which establishes a shared secret key that will be used to encrypt communications between the two devices, and at the same time allows each device to verify that the other one is licensed.

As usual when crypto is involved, the starting point for understanding the system’s design is to think about the secret keys: how many there are, who knows them, and how they are used. HDCP has a single master key, which is supposed to be known only by the central HDCP authority. Each device has a public key, which isn’t a secret, and a private key, which only that device is supposed to know. There is a special key generation algorithm (“keygen” for short) that is used to generate private keys. Keygen uses the secret master key and a public key, to generate the unique private key that corresponds to that public key. Because keygen uses the secret master key, only the central authority can do keygen.

Each HDCP device (e.g., a DVD player) has baked into it a public key and the corresponding private key. To get those keys, the device’s manufacturer needs the help of the central authority, because only the central authority can do keygen to determine the device’s private key.

Now suppose that two devices, which we’ll call A and B, want to do a handshake. A sends its public key to B, and vice versa. Then each party combines its own private key with the other party’s public key, to get a shared secret key. This shared key is supposed to be secret—i.e., known only to A and B—because making the shared key requires having either A’s private key or B’s private key.

Note that A and B actually did different computations to get the shared secret. A combined A’s private key with B’s public key, while B combined B’s private key with A’s public key. If A and B did different computations, how do we know they ended up with the same value? The short answer is: because of the special mathematical properties of keygen. And the security of the scheme depends on this: if you have a private key that was made using keygen, then the HDCP handshake will “work” for you, in the sense that you’ll end up with the same shared key as the party on the other end. But if you try to use a random “private key” that you cooked up on your own, the handshake won’t work: you’ll end up with a different shared key than the other device, so you won’t be able to talk to that device.
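To make the handshake algebra concrete, here is a minimal sketch, in Python, of a Blom-style scheme with the same shape as the one described above: a symmetric master matrix, keygen as a matrix-times-vector product, and a handshake in which each device takes a dot product of its own private key with the other side’s public key. The parameters are toy values chosen for illustration (HDCP itself reportedly uses 40-element keys and arithmetic mod 2^56); this shows the mechanism, not the deployed system.

    # Toy Blom-style key agreement with the same shape as the HDCP handshake.
    # All parameters here are illustrative assumptions, not HDCP's real ones.
    import random

    P = 10007   # a small prime modulus (HDCP reportedly works mod 2^56)
    N = 4       # toy key dimension (HDCP reportedly uses 40)

    def keygen(master, pub):
        """Central authority only: private key = master matrix times public vector."""
        return [sum(master[i][j] * pub[j] for j in range(N)) % P for i in range(N)]

    def shared_key(my_priv, their_pub):
        """What each device computes during the handshake: a dot product."""
        return sum(a * b for a, b in zip(my_priv, their_pub)) % P

    random.seed(1)

    # The master key: a random *symmetric* matrix, known only to the authority.
    master = [[0] * N for _ in range(N)]
    for i in range(N):
        for j in range(i, N):
            master[i][j] = master[j][i] = random.randrange(P)

    pub_a = [random.randrange(P) for _ in range(N)]   # device A's public key
    pub_b = [random.randrange(P) for _ in range(N)]   # device B's public key
    priv_a = keygen(master, pub_a)                    # issued by the authority
    priv_b = keygen(master, pub_b)

    # A computes priv_a . pub_b; B computes priv_b . pub_a. These agree because
    # (M a) . b == (M b) . a whenever the matrix M is symmetric.
    assert shared_key(priv_a, pub_b) == shared_key(priv_b, pub_a)

    # A "private key" cooked up without keygen yields a different shared key
    # (with overwhelming probability), so the handshake fails.
    fake_priv = [random.randrange(P) for _ in range(N)]
    assert shared_key(fake_priv, pub_b) != shared_key(priv_b, pub_a)

The symmetry of the master matrix is what makes the two different computations agree, and it is also the scheme’s single point of failure: anyone who holds that matrix can run keygen for any public key at all.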

Now we can understand the implications of the master key leaking. Anyone who knows the master key can do keygen, so the leak allows everyone to do keygen. And this destroys both of the security properties that HDCP is supposed to provide. HDCP encryption is no longer effective because an eavesdropper who sees the initial handshake can use keygen to determine the parties’ private keys, thereby allowing the eavesdropper to determine the encryption key that protects the communication. HDCP no longer guarantees that participating devices are licensed, because a maker of unlicensed devices can use keygen to create mathematically correct public/private key pairs. In short, HDCP is now a dead letter, as far as security is concerned.

(It has been a dead letter, from a theoretical standpoint, for nearly a decade. A 2001 paper by Crosby et al. explained how the master secret could be reconstructed given a modest number of public/private key pairs. What Crosby predicted—a total defeat of HDCP—has now apparently come to pass.)
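The linear structure also explains why a “modest number” of key pairs is enough: if every private key is the master matrix applied to the device’s public vector, then the key pairs of n devices with linearly independent public vectors pin down the n-by-n master matrix completely, because recovering it is just solving a linear system. The sketch below illustrates that idea under the same toy assumptions as the sketch above (small dimension, prime modulus); it is a simplified illustration of the principle, not a reproduction of the Crosby et al. attack.

    # Toy illustration: rebuilding the master matrix from leaked device keys.
    # Assumes the Blom-style structure sketched above; parameters are toys.
    import random

    P = 10007   # a small prime modulus, so modular inverses are easy
    N = 4       # toy key dimension

    def keygen(master, pub):
        return [sum(master[i][j] * pub[j] for j in range(N)) % P for i in range(N)]

    random.seed(2)
    master = [[0] * N for _ in range(N)]
    for i in range(N):
        for j in range(i, N):
            master[i][j] = master[j][i] = random.randrange(P)

    # The attacker extracts the key pairs of N devices (e.g., from hardware).
    pubs  = [[random.randrange(P) for _ in range(N)] for _ in range(N)]
    privs = [keygen(master, pub) for pub in pubs]

    # Each pair says priv = M . pub. Stacking the publics as rows of a matrix
    # gives Privs = Pubs . M (using M's symmetry), so Gauss-Jordan reduction of
    # the augmented matrix [Pubs | Privs] to [I | M] recovers M outright.
    # (This assumes the public vectors are linearly independent.)
    aug = [pubs[i][:] + privs[i][:] for i in range(N)]
    for col in range(N):
        piv = next(r for r in range(col, N) if aug[r][col] != 0)
        aug[col], aug[piv] = aug[piv], aug[col]
        inv = pow(aug[col][col], P - 2, P)            # modular inverse (Fermat)
        aug[col] = [x * inv % P for x in aug[col]]
        for r in range(N):
            if r != col and aug[r][col] != 0:
                f = aug[r][col]
                aug[r] = [(a - f * b) % P for a, b in zip(aug[r], aug[col])]

    recovered = [row[N:] for row in aug]
    assert recovered == master   # N key pairs sufficed to rebuild the master key

In real HDCP the arithmetic is reportedly mod 2^56 rather than over a prime field, and the public keys are constrained bit vectors, which is why the actual attack takes more care; but the underlying reason it works is the same linearity.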

The impact of HDCP’s failure on consumers will probably be minor. The main practical effect of HDCP has been to create one more way in which your electronics could fail to work properly with your TV. This is unlikely to change. Mainstream electronics makers will probably continue to take HDCP licenses and to use HDCP as they are now. There might be some differences at the margin, where manufacturers feel they can take a few more liberties to make things work for their customers. HDCP has been less a security system than a tool for shaping the consumer electronics market, and that is unlikely to change.

Why did anybody believe Haystack?

Haystack, a hyped technology that claimed to help political dissidents hide their Internet traffic from their governments, has been pulled by its promoters after independent researchers got a chance to study it and found severe problems.

This should come as a surprise to nobody. Haystack exhibited the warning signs of security snake oil: the flamboyant, self-promoting front man; the extravagant security claims; the super-sophisticated secret formula that cannot be disclosed; the avoidance of independent evaluation. What’s most interesting to me is that many in the media, and some in Washington, believed the Haystack hype, despite the apparent lack of evidence that Haystack would actually protect dissidents.

Now come the recriminations.

Jillian York summarizes the depressing line of adulatory press stories about Haystack and its front man, Austin Heap.

Evgeny Morozov at Foreign Policy, who has been skeptical of Haystack from the beginning, calls several Internet commentators (Zittrain, Palfrey, and Zuckerman) “irresponsible” for failing to criticize Haystack earlier. Certainly, Z, P, and Z could have raised questions about the rush to hype Haystack. But the tech policy world is brimming with overhyped claims, and it’s too much to expect pundits to denounce them all. Furthermore, although Z, P, and Z know a lot about the Internet, they don’t have the expertise to evaluate the technical question of whether Haystack users can be tracked — even assuming the evidence had been available.

Nancy Scola, at TechPresident, offers a more depressing take, implying that it’s virtually impossible for reporters to cover technology responsibly.

It takes real work for reporters and editors to vet tech stories; it’s not enough to fact check quotes, figures, and events. Even “seeing a copy [of the product],” as York puts it, isn’t enough. Projects like Haystack need to be checked out by technologists in the know, and I’d argue that before the recent rise of techno-advocates like, say, Clay Johnson or Tom Lee, there weren’t obvious knowledgeable sources for even dedicated reporters to call to help them make sense of something like Haystack, on deadline and in English.

Note the weasel-word “obvious” in the last sentence — it’s not that qualified experts don’t exist, it’s just that, in Scola’s take, reporters can’t be bothered to find out who they are.

I don’t think things are as bad as Scola implies. We need to remember that the majority of tech reporters didn’t hype Haystack. Non-expert reporters should have known to be wary of Haystack, based on nothing more than healthy journalistic skepticism about bold claims made without evidence. I’ll bet that many of the more savvy reporters shied away from Haystack stories for just this reason. The problem is that the few who didn’t shy away got undeserved attention.

[Update (Tue 14 Sept 2010): Nancy Scola responds, saying that her point was that reporters’ incentives are to avoid checking up too much on enticing-if-true stories such as Haystack. Fair enough. I didn’t mean to imply that she condoned this state of affairs, just that she was pointing out its existence.]

A Software License Agreement Takes it On the Chin

[Update: This post was featured on Slashdot.]

[Update: There are two discrete ways of asking whether a court decision is “correct.” The first is to ask: is the law being applied the same way here as it has been applied in other cases? We can call this first question the “legal question.” The second is to ask: what is the relevant social or policy goal from a normative standpoint (say, technological progress), and does the court decision advance that goal? We can call this second question the “policy question.”

Eric Felten, who addressed my August 31st post at length in his Wall Street Journal article (Video Game Tort: You Made Me Play You), is clearly addressing the policy question. He describes “[t]he proliferation of annoying and obnoxious license agreements” as having great social utility because they prevent customers from “abusing” software companies. What Mr. Felten fails to grasp, however, is that I have not weighed in on the policy question at all. My point is much simpler: it addresses only the legal question, and it sets forth the (apparently controversial) proposition that courts should be faithful to the law. In the case of EULAs, that means applying the same standards, the same doctrines, and the same rules that courts have applied to analogous consumer contracts in the brick-and-mortar world. Is that too much to ask? Apparently it was not too much to ask of the federal court in Smallwood, because that was exactly how the court proceeded.

Mr. Felten’s only discussion of why the Smallwood decision may be legally incorrect involves the question of whether or not “physical” injury occurred. Although this is an interesting factual question with respect to the plaintiff’s “Negligent Infliction of Emotional Distress” claim (count 7), the court found it irrelevant to the plain-old negligence and gross negligence claims (counts 4 and 5), which were the counts my original blog post primarily addressed. It’s hard to parse Prof. Zittrain’s precise legal reasoning from the quotes in Mr. Felten’s article, but it’s possible that the two of us would agree on the law. In any event, Mr. Felten is content to bypass the legal questions and merely fulminate, superficially I might add, on the policy question.]

The case law governing software license agreements has evolved dramatically over the past 20 years, as cataloged by Doug Phillips in his book The Software License Unveiled. One recent trend in this evolution, as Phillips correctly notes, is that courts will often honor the contractual limitations of liability that appear in these agreements and seek to insulate the software company from various claims and categories of damages, notwithstanding the user’s lack of bargaining power. The case law has been animated, in large part, by the normative economics of judges associated with the University of Chicago. Certain courts, as a result, could fairly be criticized as institutionally hostile to the user public at large. Phillips notes that a New York appellate court, in Moore v. Microsoft Corp., 741 N.Y.S.2d 91 (N.Y. App. Div. 2002), went so far as to hold that a contractual limitation of liability barred pursuit of claims for deceptive trade practices. Although the general rule is that deceit-based claims, as well as intentional torts, cannot be contractually waived in advance, there are various doctrines, exceptions, and findings that a court might use (or misuse) to sidestep the general rule. Such rulings are unsurprising at this point, because the user, as chronicled by Phillips, has been dying a slow death under the decisional law, with software license agreements routinely interpreted in favor of software companies on any number of issues.

It was against this backdrop that, on August 4, 2010, a software company seeking to use a contractual limitation of liability as a basis to dismiss various tort claims met with stunning defeat. The U.S. District Court for the District of Hawaii ruled that the plaintiff’s gross negligence claims could proceed against the software company and that the contractual limitation of liability did not foreclose a potential recovery of punitive damages based on those claims. Furthermore, the matter remains in federal court in Hawaii notwithstanding a forum selection clause (section 15 of the User Agreement) in which the user apparently agreed “that any action or proceeding instituted under this Agreement shall be brought only in State courts of Travis County, State of Texas.”

The case is Smallwood v. NCsoft Corp., and it involved the massively multiplayer, subscription-based online fantasy role-playing game “Lineage II.” The plaintiff, a subscriber, alleged that the software company failed to warn of the “danger of psychological dependence or addiction from continued play” and that he had suffered physically from an addiction to the game. The plaintiff reportedly played Lineage II for 20,000 hours from 2004 through 2009. (Is there any higher accolade for a gaming company?) The plaintiff also alleged that, in September of 2009, he was “locked out” and “banned” from the game. The plaintiff claimed that the software company had told him he was banned “for engaging in an elaborate scheme to create real money transfers.” The plaintiff, in his Second Amended Complaint, couched his claims against the software company in terms of eight separate counts: (1) misrepresentation/deceit, (2) unfair and deceptive trade practices, (3) defamation/libel/slander, (4) negligence, (5) gross negligence, (6) intentional infliction of emotional distress, (7) negligent infliction of emotional distress and (8) punitive damages.

The software company undertook to stop the lawsuit dead in its tracks and filed a motion to dismiss all counts. The defendants argued, among other things, that Section 12 of the User Agreement, entitled “Limitation of Liability,” foreclosed essentially any recovery. The provision, which is common in the industry, purported to cap the amount of the software company’s liability at the amount of the user’s account fees, the price of additional features, or the amount paid by the user to the software company in the preceding six months, whichever was less. The provision also stated that it barred incidental, consequential, and punitive damages:

12. Limitation of Liability
* * *
IN NO EVENT SHALL NC INTERACTIVE . . . BE LIABLE TO YOU OR TO ANY THIRD PARTY FOR ANY SPECIAL, INCIDENTAL, CONSEQUENTIAL, PUNITIVE OR EXEMPLARY DAMAGES . . . REGARDLESS OF THE THEORY OF LIABILITY (INCLUDING CONTRACT, NEGLIGENCE, OR STRICT LIABILITY) ARISING OUT OF OR IN CONNECTION WITH THE SERVICE, THE SOFTWARE, YOUR ACCOUNT OR THIS AGREEMENT WHICH MAY BE INCURRED BY YOU . . . .

The Court considered the parties’ arguments and then penned a whopping 49-page decision granting the software company’s motion to dismiss only in part. The Court determined that the User Agreement contained a valid “choice of law” provision stating that Texas law would govern the interpretation of the contract. The Court then ruled, however, that neither Texas nor Hawaii law permits a person to waive, in advance, the ability to bring gross negligence claims. The plaintiff’s ordinary negligence claims survived as well. The gross negligence claims remained viable for the full range of tort damages, including punitive damages, whereas the straight-up negligence claims would be subject to the contractually agreed limitation on damages.

The fact that the gross negligence claims survived is significant in and of itself, but in reality having the right to sue for “gross negligence” is the functional equivalent of having the right to sue for straight-up negligence as well—thus radically broadening the scope of claims that (according to the court) cannot be waived in a User Agreement. Although it is true that negligence and gross negligence differ in theory (“negligence” = breach of the duty of ordinary care in the circumstances; “gross negligence” = conduct much worse than negligence), it is nearly impossible to pin down with precision the dividing line between the two concepts. Interestingly, Wikipedia notes that the Brits broadly distrust the concept of gross negligence and that, as far back as 1843, in Wilson v. Brett, Baron Rolfe “could see no difference between negligence and gross negligence; that it was the same thing, with the addition of a vituperative epithet.” True indeed.

The lack of a clear dividing line is an important tactical consideration. A plaintiff often pleads a single set of facts as supporting claims for both negligence and gross negligence and—in the absence of a contractual limitation on liability—expects both claims to survive a motion to dismiss, survive a motion for summary judgment, and make it to a jury. When the contractual limitation of liability is introduced into the mix, and the plaintiff is forced to give up the pure negligence claims, it hardly matters: the gross negligence claims—based on the exact same facts—cannot be waived (at least under Texas and Hawaii law) and therefore survive, at least up to the point of trial. Courts will not decide genuine factual disputes—that is the function of the jury. This is usually enough for the plaintiff, since the overwhelming majority of cases settle. Thus, a gross negligence claim, in most situations, is the functional equivalent of a negligence claim. For these reasons, the Smallwood decision, if it stands, may achieve some lasting significance in the software license wars.

Indian E-Voting Researcher Freed After Seven Days in Police Custody

FLASH: 4:47 a.m. EDT August 28 — Indian e-voting researcher Hari Prasad was released on bail an hour ago, after seven days in police custody. Magistrate D. H. Sharma reportedly praised Hari and made strong comments against the police, saying Hari has done service to his country. Full post later today.