
Comcast's Disappointing Defense

Last week, Comcast offered a defense in the FCC proceeding challenging the technical limitations it had placed on BitTorrent traffic in its network. (Back in October, I wrote twice about Comcast’s actions.)

The key battle line is whether Comcast is just managing its network reasonably in the face of routine network congestion, as it claims, or whether it is singling out certain kinds of traffic for unnecessary discrimination, as its critics claim. The FCC process has generated lots of verbiage, which I can’t hope to discuss, or even summarize, in this post.

I do want to call out one aspect of Comcast’s filing: the flimsiness of its technical argument.

Here’s one example (p. 14-15).

As Congresswoman Mary Bono Mack recently explained:

The service providers are watching more and more of their network monopolized by P2P bandwidth hogs who command a disproportionate amount of their network resources. . . . You might be asking yourself, why don’t the broadband service providers invest more into their networks and add more capacity? For the record, broadband service providers are investing in their networks, but simply adding more bandwidth does not solve [the P2P problem]. The reason for this is P2P applications are designed to consume as much bandwidth as is available, thus more capacity only results in more consumption.

(emphasis in original). The flaws in this argument start with the fact that the italicized segment (the final sentence, claiming that P2P applications are designed to consume all available bandwidth) is wrong. P2P protocols don't aim to use more bandwidth rather than less. They're not sparing with bandwidth, but they don't consume it for its own sake, and there does come a point where they don't want any more.

But even leaving aside the merits of the argument, what’s most remarkable here is that Comcast’s technical description of BitTorrent cites as evidence not a textbook, nor a standards document, nor a paper from the research literature, nor a paper by the designer of BitTorrent, nor a document from the BitTorrent company, nor the statement of any expert, but a speech by a member of Congress. Congressmembers know many things, but they’re not exactly the first group you would turn to for information about how network protocols work.

This is not the only odd source that Comcast cites. Later (p. 28) they claim that the forged TCP Reset packets that they send shouldn’t be called “forged”. For this proposition they cite some guy named George Ou who blogs at ZDNet. They give no reason why we should believe Mr. Ou on this point. My point isn’t to attack Mr. Ou, who for all I know might actually have some relevant expertise. My point is that if this is the most authoritative citation Comcast can find, then their argument doesn’t look very solid. (And, indeed, it seems pretty uncontroversial to call these particular packets “forged”, given that they mislead the recipient about (1) which IP address sent the packet, and (2) why the packet was sent.)
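To make the "forged" point concrete, here is a minimal sketch, using the Python scapy library, of how a TCP Reset carrying someone else's source address can be constructed. The addresses and port numbers are made-up documentation values; the point is only that nothing in TCP/IP stops a sender from writing another party's IP address into the packet, which is exactly why "forged" is a fair description.

```python
# Minimal sketch of a spoofed TCP Reset using scapy (addresses and ports are
# arbitrary documentation values, not anything observed on a real network).
from scapy.all import IP, TCP

rst = (
    IP(src="203.0.113.5", dst="198.51.100.7")        # src is not the real sender
    / TCP(sport=6881, dport=51413, flags="R", seq=123456)
)
print(rst.summary())
# Actually transmitting spoofed packets (scapy's send()) is deliberately left out.
```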

Comcast is a big company with plenty of resources. It’s a bit depressing that they would file arguments like this with the FCC, an agency smart enough to tell the difference. Is this really the standard of technical argumentation in FCC proceedings?

Google Objects to Microhoo: Pot Calling Kettle Black?

Last week Microsoft offered to buy Yahoo at a big premium over Yahoo’s current stock price; and Google complained vehemently that Microsoft’s purchase of Yahoo would reduce competition. There’s been tons of commentary about this. Here’s mine.

The first question to ask is why Microsoft made such a high offer for Yahoo. One possibility is that Microsoft thinks the market has drastically undervalued Yahoo, making it a good investment even at a big markup. This seems unlikely.

A more plausible theory is that Microsoft thinks Yahoo is a lot more valuable when combined with Microsoft than it would be on its own. Why might this be? There are two plausible theories.

The synergy theory says that combining Yahoo's businesses with Microsoft's businesses creates lots of extra value: the whole would be much more profitable than the two parts would be separately.

The market structure theory says that Microsoft benefits from Yahoo's presence in the market (as a counterweight to Google), that Microsoft worried Yahoo's market position was starting to slip, and that Microsoft therefore acted to prop up Yahoo by giving it credible access to capital and strong management. In this theory, Microsoft cares less (or not at all) about actually combining the businesses, and wants mostly to keep Google from capturing Yahoo's market share.

My guess is that both theories have some merit – that Microsoft’s offer is both offensive (seeking synergies) and defensive (maintaining market structure).

Google objected almost immediately that a Microsoft-Yahoo merger would reduce competition to the extent that government should intervene to block the merger or restrict the conduct of the merged entity. The commentary on Google’s complaint has focused on two points. First, at least in some markets, two-way competition between Microhoo and Google might be more vigorous than the current three-way competition between a dominant Google and two rivals. Second, even assuming that the antitrust authorities ultimately reject Google’s argument and allow the merger to proceed, government scrutiny will delay the merger and distract Microsoft and Yahoo, thereby helping Google.

Complaining has downsides for Google too: encouraging the government to be skeptical of acquisitions by dominant high-tech companies could easily boomerang, causing Google antitrust headaches of its own down the road.

So why is Google complaining, despite this risk? The most intriguing possibility is that Google is working the refs. Athletes and coaches often complain to the referee about a call, knowing that the ref won't change the call, but hoping to generate some sympathy that will pay off next time a close call has to be made. Suppose Google complains, and the government rejects its complaint. Next time Google makes an acquisition and the government starts asking questions, Google can argue that if the government didn't do anything about the Microhoo merger, then it should lay off Google too.

It’s fun to toss around these Machiavellian theories, but I doubt Google actually thought all this through before it reacted. Whatever the explanation, now that it has reacted, it’s stuck with the consequences of its reaction – just as Microsoft is stuck, for better or worse, with its offer to buy Yahoo.

Unattended Voting Machines, As Usual

It’s election day, so tradition dictates that I publish some photos of myself with unattended voting machines.

To recap: It’s well known that paperless electronic voting machines are vulnerable to tampering, if an attacker can get physical access to a machine before the election. Most of the vendors, and a few election officials, claim that this isn’t a problem because the machines are well guarded so that no would-be attacker can get to them. Which would be mildly reassuring – if it were true.

Here’s me with two unattended voting machines, taken on Sunday evening in a Princeton polling place:

Here are four more unattended voting machines, taken on Monday evening in another Princeton polling place.

I stood conspicuously next to this second set of machines for fifteen minutes, and saw nobody.

In both cases I had ample opportunity to tamper with the machines – but of course I did not.

MySpace Photos Leaked; Payback for Not Fixing Flaw?

Last week an anonymous person published a file containing half a million images, many of which had been gathered from private profiles on MySpace. This may be the most serious privacy breach yet at MySpace. Kevin Poulsen's story at Wired News implies that the leak may have been deliberate payback for MySpace's failure to fix the vulnerability that made the leak possible.

“I think the greatest motivator was simply to prove that it could be done,” file creator “DMaul” says in an e-mail interview. “I made it public that I was saving these images. However, I am certain there are mischievous individuals using these hacks for nefarious purposes.”

The MySpace hole surfaced last fall, and it was quickly seized upon by the self-described pedophiles and ordinary voyeurs who used it, among other things, to target 14- and 15-year-old users who’d caught their eye online. A YouTube video showed how to use the bug to retrieve private profile photos. The bug also spawned a number of ad-supported sites that made it easy to retrieve photos. One such site reported more than 77,000 queries before MySpace closed the hole last Friday following Wired News’ report.

MySpace plugged a similar security hole (http://grownupgeek.blogspot.com/2006/08/myspace-closes-giant-security-hole.html) in August 2006 when it made the front page of Digg, four months after it surfaced.

The implication here, not quite stated, is that DMaul was trying to draw attention to the flaw in order to force MySpace to fix it. If this is what it took to get MySpace to fix the flaw, this story reflects very badly on MySpace.

Anyone who has discovered security flaws in commercial products knows that companies react to flaws in two distinct ways. Smart companies react constructively: they’re not happy about the flaws or the subsequent PR fallout, but they acknowledge the truth and work in their customers’ interest to fix problems promptly. Other companies deny problems and delay addressing them, treating security flaws solely as PR problems rather than real risks.

Smart companies have learned that a constructive response minimizes the long-run PR damage and, not coincidentally, protects customers. But some companies seem to lock themselves into the deny-delay strategy.

Now suppose you know that a company’s product has a flaw that is endangering its customers, and the company is denying and delaying. There is something you can do that will force them to fix the problem – you can arrange an attention-grabbing demonstration that will show customers (and the press) that the risk is real. All you have to do is exploit the flaw yourself, get a bunch of private data, and release it. Which is pretty much what DMaul did.

To be clear, I’m not endorsing this course of action. I’m just pointing out why someone might find it attractive despite the obvious ethical objections.

The really interesting aspect of Poulsen’s article is that he doesn’t quite connect the dots and say that DMaul meant to punish MySpace. But Poulsen is savvy enough that he probably wouldn’t have missed the implication either, and he could have written the article to avoid it had he wanted to. Maybe I’m reading too much into the article, but I can’t help suspecting that DMaul was trying to punish MySpace for its lax security.

New $2B Dutch Transport Card is Insecure

The new Dutch transit card system, on which $2 billion has been spent, was recently shown by researchers to be insecure. Three attacks have been announced by separate research groups. Let’s look at what went wrong and why.

The system, known as OV-chipkaart, uses contactless smart cards, a technology that allows small digital cards to communicate by radio over short distances (i.e. centimeters or inches) with reader devices. Riders would carry either a disposable paper card or a more permanent plastic card. Riders would “charge up” a card by making a payment, and the card would keep track of the remaining balance. The card would be swiped past the turnstile on entry and exit from the transport system, where a reader device would authenticate the card and cause the card to deduct the proper fare for each ride.
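To fix ideas, here is a toy sketch of the stored-value flow just described: the balance lives on the card, and a reader at the gate deducts the fare. The class names, amounts, and the elided authentication step are illustrative assumptions, not the real OV-chipkaart protocol.

```python
# Toy model of a stored-value transit card (illustrative only; not the real
# OV-chipkaart data format or protocol).
class TransitCard:
    def __init__(self):
        self.balance_cents = 0

    def charge_up(self, amount_cents):
        # Rider pays at a kiosk; the balance is stored on the card itself.
        self.balance_cents += amount_cents


class Turnstile:
    def __init__(self, fare_cents):
        self.fare_cents = fare_cents

    def admit(self, card):
        # A real reader would first authenticate the card; that step is
        # elided here, and it is exactly where the two card types differ.
        if card.balance_cents < self.fare_cents:
            return False
        card.balance_cents -= self.fare_cents
        return True


card = TransitCard()
card.charge_up(2000)                          # load 20.00 euros
gate = Turnstile(fare_cents=250)
print(gate.admit(card), card.balance_cents)   # True 1750
```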

The disposable and plastic cards use different technologies. The disposable card, called Mifare Ultralight, is small, light, and inexpensive. The reusable plastic card, Mifare Classic, uses more sophisticated technologies.

The first attack, published in July 2007, came from Pieter Siekerman and Maurits van der Schee of the University of Amsterdam, who found vulnerabilities in the Ultralight system. Their main attacks manipulated Ultralight cards, for example by "rewinding" a card to a previous state so it could be re-used. These attacks looked fixable by changing the system's software, and Siekerman and van der Schee described the necessary fixes. But it was also evident that a cleverly constructed counterfeit Ultralight card would be able to defeat the system in a manner that would be very difficult to defend against.

The fundamental security problem with the disposable Ultralight card is that it doesn't use cryptography, so the card cannot keep any secrets from an attacker. An attacker who can read a card (e.g., by using standard equipment to emulate a card reader) can know exactly what information is stored on the card, and therefore can make another device that will behave identically to the card. Except, of course, that the attacker's device can always return itself to the "fully funded" state. Roel Verdult of Radboud University implemented this "cloning" attack and demonstrated it on Dutch television, leading to the recent uproar.
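Here is a toy sketch of why the lack of cryptography is fatal: if every byte on the card can be read and nothing is secret, an attacker's device can reproduce the card's behavior exactly, and it can also restore the "fully funded" state whenever it likes. The classes and fields are invented for illustration and are not the real Ultralight memory layout.

```python
# Illustrative sketch of cloning a card that has no cryptography.
# (Invented data layout; not the actual Mifare Ultralight format.)
class NoCryptoCard:
    """All state is readable; the card holds no secrets."""
    def __init__(self, balance_cents):
        self.memory = {"balance": balance_cents}

    def dump(self):
        # Standard reader equipment can read out everything the card stores.
        return dict(self.memory)


class CardEmulator:
    """Attacker's device: indistinguishable from the dumped card."""
    def __init__(self, dumped_memory):
        self.initial = dict(dumped_memory)
        self.memory = dict(dumped_memory)

    def deduct(self, fare_cents):
        self.memory["balance"] -= fare_cents

    def rewind(self):
        # The attack: return to the "fully funded" state at will.
        self.memory = dict(self.initial)


real_card = NoCryptoCard(balance_cents=2000)
clone = CardEmulator(real_card.dump())
clone.deduct(250)        # take a ride...
clone.rewind()           # ...then restore the original balance
print(clone.memory)      # {'balance': 2000}
```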

The plastic Mifare Classic card does use cryptography: legitimate cards contain secret keys that they use to authenticate themselves to readers. So attackers cannot straightforwardly clone a card. Mifare Classic was designed to use a secret encryption algorithm.
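For contrast, here is a minimal sketch of generic symmetric challenge-response authentication, the kind of protection a secret key makes possible. It uses HMAC-SHA256 purely as a stand-in, since Mifare Classic's actual Crypto-1 cipher is proprietary and not reproduced here; the point is just that a would-be clone that doesn't know the key cannot answer the reader's challenge.

```python
# Generic challenge-response sketch. HMAC-SHA256 is a stand-in for Mifare
# Classic's proprietary cipher, which is not shown here.
import hashlib
import hmac
import os

SHARED_KEY = os.urandom(16)   # provisioned into legitimate cards and readers

def card_response(card_key, challenge):
    # The card proves knowledge of the key without revealing it.
    return hmac.new(card_key, challenge, hashlib.sha256).digest()

def reader_accepts(reply, challenge):
    expected = hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(reply, expected)

challenge = os.urandom(8)     # fresh random nonce from the reader

genuine = card_response(SHARED_KEY, challenge)
clone = card_response(os.urandom(16), challenge)   # clone lacks the key

print(reader_accepts(genuine, challenge))   # True
print(reader_accepts(clone, challenge))     # False
```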

Karsten Nohl, “Starbug,” and Henryk Plötz announced an attack that involved opening up a Mifare Classic card and capturing a high-resolution image of the circuitry, which they then used to reverse-engineer the cryptographic algorithm. They didn’t publish the algorithm, but their work shows that a real attacker could get the algorithm too.

Unmasking of the algorithm should have been no problem, had the system been engineered well. Kerckhoffs’s Principle, one of the bedrock maxims of cryptography, says that security should never rely on keeping an algorithm secret. It’s okay to have a secret key, if the key is randomly chosen and can be changed when needed, but you should never bank on an algorithm remaining secret.

Unfortunately the designers of Mifare Classic did not follow this principle. Instead, they chose to combine a secret algorithm with a relatively short 48-bit key. This is a problem because once the algorithm is known, an attacker can search the entire 48-bit key space, and therefore forge cards, in a matter of days or weeks. With 48 key bits, there are only about 280 trillion possible keys, which sounds like a lot to the person on the street but isn't much of a barrier to today's computers.
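A back-of-the-envelope sketch of what searching a 48-bit key space costs. The keys-per-second figures below are assumptions chosen for illustration, not measurements of any published attack, but they show why "days or weeks" is a reasonable estimate for a moderately resourced attacker.

```python
# Rough cost of exhausting a 48-bit key space (rates are assumed, not measured).
KEYSPACE = 2 ** 48          # about 2.8e14 keys, the "280 trillion" above

for label, keys_per_second in [
    ("single CPU core, ~10 million keys/s (assumed)", 1e7),
    ("GPU or small cluster, ~1 billion keys/s (assumed)", 1e9),
    ("dedicated FPGA rig, ~50 billion keys/s (assumed)", 5e10),
]:
    seconds = KEYSPACE / keys_per_second
    print(f"{label}: {seconds / 86400:.1f} days")
```

On these assumptions the search takes roughly a year on one core, a few days on a cluster, and under two hours on dedicated hardware, which is why a 48-bit key is no real barrier.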

Now the Dutch authorities have a mess on their hands. About $2 billion has been invested in this project, but serious fraud seems likely if it is deployed as designed. This kind of disaster would have been less likely had the design process been more open. Secrecy was not only an engineering mistake (violating Kerckhoffs's Principle) but also a policy mistake, as it allowed the project to get so far along before independent analysts had a chance to critique it. A more open process, like the one the U.S. government used in choosing the Advanced Encryption Standard (AES), would have been safer. Governments seem to have a hard time understanding that openness can make you more secure.