May 3, 2024

Comcast's Disappointing Defense

Last week, Comcast offered a defense in the FCC proceeding challenging the technical limitations it had placed on BitTorrent traffic in its network. (Back in October, I wrote twice about Comcast’s actions.)

The key battle line is whether Comcast is just managing its network reasonably in the face of routine network congestion, as it claims, or whether it is singling out certain kinds of traffic for unnecessary discrimination, as its critics claim. The FCC process has generated lots of verbiage, which I can’t hope to discuss, or even summarize, in this post.

I do want to call out one aspect of Comcast’s filing: the flimsiness of its technical argument.

Here’s one example (pp. 14-15).

As Congresswoman Mary Bono Mack recently explained:

The service providers are watching more and more of their network monopolized by P2P bandwidth hogs who command a disproportionate amount of their network resources. . . . You might be asking yourself, why don’t the broadband service providers invest more into their networks and add more capacity? For the record, broadband service providers are investing in their networks, but simply adding more bandwidth does not solve [the P2P problem]. The reason for this is P2P applications are designed to consume as much bandwidth as is available, thus more capacity only results in more consumption.

(emphasis in original). The flaws in this argument start with the fact that the italicized segment (the claim that P2P applications are designed to consume as much bandwidth as is available) is wrong. P2P protocols are not designed to grab ever more bandwidth for its own sake. They’re not sparing with bandwidth, but they don’t use it for no reason, and there does come a point where they don’t want any more.
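To make that concrete, here is a deliberately simplified sketch in Python (not the actual BitTorrent implementation) of the point: a P2P client asks peers for data only while it is still missing pieces of a file, so its appetite for bandwidth, while large, is finite.

```python
# Schematic sketch only -- not real BitTorrent code.
# A P2P download requests data only while pieces of the file are missing,
# so its bandwidth demand is large but finite.

def download_file(total_pieces, fetch_piece):
    """Request each missing piece once; stop when the file is complete."""
    have = {}
    missing = set(range(total_pieces))
    while missing:                        # demand exists only while pieces are missing
        index = missing.pop()
        have[index] = fetch_piece(index)  # uses whatever bandwidth peers will give...
    return have                           # ...but wants nothing more once the file is complete

# Toy usage: "fetch" pieces from an in-memory stand-in for remote peers.
pieces = download_file(4, lambda i: f"piece-{i}".encode())
assert len(pieces) == 4
```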

But even leaving aside the merits of the argument, what’s most remarkable here is that Comcast’s technical description of BitTorrent cites as evidence not a textbook, nor a standards document, nor a paper from the research literature, nor a paper by the designer of BitTorrent, nor a document from the BitTorrent company, nor the statement of any expert, but a speech by a member of Congress. Congressmembers know many things, but they’re not exactly the first group you would turn to for information about how network protocols work.

This is not the only odd source that Comcast cites. Later (p. 28) they claim that the forged TCP Reset packets that they send shouldn’t be called “forged”. For this proposition they cite some guy named George Ou who blogs at ZDNet. They give no reason why we should believe Mr. Ou on this point. My point isn’t to attack Mr. Ou, who for all I know might actually have some relevant expertise. My point is that if this is the most authoritative citation Comcast can find, then their argument doesn’t look very solid. (And, indeed, it seems pretty uncontroversial to call these particular packets “forged”, given that they mislead the recipient about (1) which IP address sent the packet, and (2) why the packet was sent.)
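For readers who want to see why “forged” is the natural word, here is a minimal sketch, using the scapy library with made-up addresses and ports, of what an injected reset looks like. Nothing here is taken from Comcast’s filing; it simply illustrates the two misleading fields mentioned above.

```python
# Illustrative sketch (hypothetical addresses/ports): an injected TCP Reset
# claims to come from the other endpoint of the connection, which is why
# "forged" is a fair description.

from scapy.all import IP, TCP

peer_ip, victim_ip = "203.0.113.10", "198.51.100.20"   # hypothetical endpoints

rst = (
    IP(src=peer_ip, dst=victim_ip)       # spoofed source: claims to come from the peer
    / TCP(sport=6881, dport=51413,       # ports of the (made-up) BitTorrent session
          flags="R", seq=123456789)      # RST flag: "this connection is being aborted"
)

print(rst.summary())  # building the packet needs no privileges; actually sending it would
```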

Comcast is a big company with plenty of resources. It’s a bit depressing that they would file arguments like this with the FCC, an agency smart enough to tell the difference. Is this really the standard of technical argumentation in FCC proceedings?

Google Objects to Microhoo: Pot Calling Kettle Black?

Last week Microsoft offered to buy Yahoo at a big premium over Yahoo’s current stock price; and Google complained vehemently that Microsoft’s purchase of Yahoo would reduce competition. There’s been tons of commentary about this. Here’s mine.

The first question to ask is why Microsoft made such a high offer for Yahoo. One possibility is that Microsoft thinks the market had drastically undervalued Yahoo, making it a good investment even at a big markup. This seems unlikely.

A more plausible theory is that Microsoft thinks Yahoo is a lot more valuable when combined with Microsoft than it would be on its own. Why might this be? There are two plausible theories.

The synergy theory says that combining Yahoo’s businesses with Microsoft’s businesses creates lots of extra value – that is, that the whole is much more profitable than the parts would be separately.

The market structure theory says that Microsoft benefits from Yahoo’s presence in the market (as a counterweight to Google), that Microsoft worried that Yahoo’s market position was starting to slip, and that Microsoft therefore acted to prop up Yahoo by giving it credible access to capital and strong management. In this theory, Microsoft cares less (or not at all) about actually combining the businesses, and wants mostly to keep Google from capturing Yahoo’s market share.

My guess is that both theories have some merit – that Microsoft’s offer is both offensive (seeking synergies) and defensive (maintaining market structure).

Google objected almost immediately that a Microsoft-Yahoo merger would reduce competition to the extent that government should intervene to block the merger or restrict the conduct of the merged entity. The commentary on Google’s complaint has focused on two points. First, at least in some markets, two-way competition between Microhoo and Google might be more vigorous than the current three-way competition between a dominant Google and two rivals. Second, even assuming that the antitrust authorities ultimately reject Google’s argument and allow the merger to proceed, government scrutiny will delay the merger and distract Microsoft and Yahoo, thereby helping Google.

Complaining has downsides for Google too – encouraging a government skeptical of acquisitions by dominant high-tech companies could easily boomerang, causing Google antitrust headaches of its own down the road.

So why is Google complaining, despite this risk? The most intriguing possibility is that Google is working the refs. Athletes and coaches often complain to the referee about a call, knowing that the ref won’t change the call, but hoping to generate some sympathy that will pay off next time a close call has to be made. Suppose Google complains, and the government rejects its complaint. Next time Google makes an acquisition and the government comes around asking questions, Google can argue that if the government didn’t do anything about the Microhoo merger, then it should lay off Google too.

It’s fun to toss around these Machiavellian theories, but I doubt Google actually thought all this through before it reacted. Whatever the explanation, now that it has reacted, it’s stuck with the consequences of its reaction – just as Microsoft is stuck, for better or worse, with its offer to buy Yahoo.

Could Use-Based Broadband Pricing Help the Net Neutrality Debate?

Yesterday, thanks to a leaked memo, it came to light that Time Warner Cable intends to try out use-based broadband pricing on a few of its customers. It looks like the plan is for several tiers of use, with the heaviest users possibly paying overage charges on a per-byte basis. In confirming its plans to Reuters, Time Warner pointed out that its heaviest-using five percent of customers generate the majority of data traffic on the network, but still pay as though they were typical users. Under the new proposal, pricing would be based on the total amount of data transferred, rather than the peak throughput on a connection.
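Details of the tiers haven’t been made public, but the mechanics are easy to sketch. Here is a small, purely illustrative Python example; the base price, cap, and per-GB overage rate are invented, not Time Warner’s numbers.

```python
# Illustrative only: invented base price, cap, and overage rate,
# showing how a tiered, use-based bill might be computed.

def monthly_bill(gb_used, base_price=40.0, cap_gb=40, overage_per_gb=1.50):
    """Flat base price up to the cap, then a per-GB overage charge."""
    overage_gb = max(0, gb_used - cap_gb)
    return base_price + overage_gb * overage_per_gb

for usage in (5, 40, 100, 500):   # a light user, a typical user, and two heavy users
    print(f"{usage:>4} GB -> ${monthly_bill(usage):.2f}")
```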

If the current flat pricing is based on what the connection is worth to a typical customer, who makes only limited use of the connection, then the heaviest five percent of users (let’s call them super-users for short) are reaping a surplus. Bandwidth use might be highly elastic with respect to price, but I think it is also true that super-users reap a great deal more benefit from their broadband connections than other users do – think of those who pioneer video consumption online, for example.

What happens when network operators fail to capture any of this surplus? They have marginally less incentive to build out the network and drive down the unit cost of data transfer. If the pricing model changed so that network providers’ revenue remained the same in total but was based directly on how much the network is used, then the price would go down for the lightest users and up for the heaviest. If a tiered structure left prices the same for most users and raised them on the heaviest, operators’ total revenue would go up. In either case, networks would have an incentive to encourage innovative, high-bandwidth uses of their networks – regardless of what kind of use that is.
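Here is a toy version of the revenue-neutral case, again with invented numbers: hold the operator’s total revenue fixed, but charge in proportion to gigabytes transferred, and the bill falls for light users and rises for the heavy ones.

```python
# Toy calculation with invented usage figures: revenue-neutral repricing
# from a flat monthly fee to a per-GB charge.

flat_price = 40.0                                   # current flat monthly price (invented)
usage_gb = [2, 3, 5, 8, 12, 15, 20, 30, 120, 300]   # hypothetical users; the last two are super-users

total_revenue = flat_price * len(usage_gb)          # operator revenue under flat pricing
per_gb_rate = total_revenue / sum(usage_gb)         # per-GB rate that keeps revenue unchanged

for gb in usage_gb:
    print(f"{gb:>4} GB: flat ${flat_price:.2f} -> usage-based ${gb * per_gb_rate:.2f}")
```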

Gigi Sohn of Public Knowledge has come out in favor of Time Warner’s move on these and other grounds. It’s important to acknowledge that network operators still have familiar, monopolistic reasons to intervene against traffic that competes with phone service or cable. But under the current pricing structure, they’ve had a relatively strong argument to discriminate in favor of the traffic they can monetize, and against the traffic they can’t. By allowing them to monetize all traffic, a shift to use-based pricing would weaken one of the most persuasive reasons network operators have to oppose net neutrality.

Clinton's Digital Policy

This is the second in our promised series summing up where the 2008 presidential candidates stand on digital technology issues. (See our first post, about Obama.) This time, we’ll take a look at Hillary Clinton.

Hillary has a platform plank on innovation. Much of it will be welcome news to the research community: She wants to up funding for basic research, and increase the number and size of NSF fellowships for graduate students in the sciences. Beyond urging more spending (which is, arguably, all too easy at this point in the process), she indicates her priorities by urging two shifts in how science funds are allocated. First, relative to their current slice of the federal research funding pie, she wants a disproportionate amount of the increase in funding to go to the physical sciences and engineering. Second, she wants to “require that federal research agencies set aside at least 8% of their research budgets for discretionary funding of high-risk research.” Where the 8% figure comes from, and which research would count as “high risk,” I don’t know. Readers, can you help?

As for specifically digital policy questions, she highlights just one: broadband. She supports “tax incentives to encourage broadband deployment in underserved areas,” as well as providing “financial support” for state, local, and municipal broadband initiatives. Government mandates designed to help the communications infrastructure of rural America keep pace with the rest of the country are an old theme, familiar in the telephone context as universal service requirements. That program taxes the telecommunications industry’s commercial activity, and uses the proceeds to fund deployment in areas where profit-seeking actors haven’t seen fit to expand. It’s politically popular in part because it serves the interests of less-populous states, which enjoy disproportionate importance in presidential politics.

On the larger question of subsidizing broadband deployment everywhere, the Clinton position outlined above strikes me, at its admittedly high level of vagueness, as being roughly on target. I’m politically rooted in the laissez-faire, free-market right, which tends to place a heavy burden of justification on government interventions in markets. In its strongest and most brittle form, the free-market creed can verge on naturalistic fallacy: For any proposed government program, the objection can be raised, “if that were really such a good idea, a private enterprise would be doing it already, and turning a profit.” It’s an argument that applies against government interventions as such, and that has often been used to oppose broadband subsidies. Broadband is attractive and valuable, and people like to buy it, the reasoning goes–so there’s no need to bother with tax-and-spend supports.

The more nuanced truth, acknowledged by thoughtful participants all across the debate, is that subsidies can be justified if, but only if, the market is failing in some way. In this case, the failure would be a positive externality: adding one more customer to the broadband Internet conveys benefits to so many different parties that network operators can’t possibly hope to collect payment from all of them.

The act of plugging someone in creates a new customer for online merchants, a present and future candidate for employment by a wide range of far-flung employers, a better-informed and more critical citizen, and a happier, better-entertained individual. To the extent that each of these benefits is enjoyed by the customer, they will come across as willingness to pay a higher price for broadband service. But to the extent that other parties derive these benefits, the added value that would be created by the broadband sale will not express itself as a heightened willingness to pay, on the part of the customer. If there were no friction at all, and perfect foreknowledge of consumer behavior, it’s a good bet that Amazon, for example, would be willing to chip in on individual broadband subscriptions of those who might not otherwise get connected but who, if they do connect, will become profitable Amazon customers. As things are, the cost of figuring out which third parties will benefit from which additional broadband connection is prohibitive; it may not even be possible to find this information ahead of time at any price because human behavior is too hard to predict.
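A deliberately simple numeric sketch (all figures invented) may help: a connection that is not worth its price to the customer alone can still be worth providing once the benefits to third parties are counted.

```python
# Invented figures, for illustration of the externality argument only.

value_to_customer = 30.0   # the most this customer would pay per month
value_to_others = 15.0     # benefits to merchants, employers, etc., who can't easily be billed
price_of_broadband = 40.0  # market price per month

buys_on_their_own = value_to_customer >= price_of_broadband                    # False: no sale
worth_it_overall = value_to_customer + value_to_others >= price_of_broadband   # True
subsidy_to_close_gap = price_of_broadband - value_to_customer                  # $10 would do it

print(buys_on_their_own, worth_it_overall, subsidy_to_close_gap)
```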

That means there’s some amount of added benefit from broadband that is not captured on the private market – the price charged to broadband customers is higher than would be economically optimal. Policymakers, by intervening to put downward pressure on the price of broadband, could lead us into a world where the myriad potential benefits of digital technology come at us stronger and sooner than they otherwise might. Of course, they might also make a mess of things in any of a number of ways. But at least in principle, a broadband subsidy could and should be done well.

One other note on Hillary: Appearing on Meet the Press yesterday (transcript here), she weighed in on Internet-enabled transparency. It came up tangentially, when Tim Russert asked her to promise she wouldn’t repeat her husband’s surprise decision to pardon political allies over the objection of the Justice Department. The pardon process, Hillary maintained, should be made more transparent–and, she went on to say:

I want to have a much more transparent government, and I think we now have the tools to make that happen. You know, I said the other night at an event in New Hampshire, I want to have as much information about the way our government operates on the Internet so the people who pay for it, the taxpayers of America, can see that. I want to be sure that, you know, we actually have like agency blogs. I want people in all the government agencies to be communicating with people, you know, because for me, we’re now in an era–which didn’t exist before–where you can have instant access to information, and I want to see my government be more transparent.

This seems strongly redolent of the transparency thrust in Obama’s platform. If nothing else, it suggests that his focus on the issue may be helping pull the field into more explicit, more concrete support for the Internet as a tool of government transparency. Assuming that either Obama or Clinton becomes the nominee, November will offer at least one major-party presidential candidate who is on record supporting specific new uses of the Internet as a transparency tool.

Scoble/Facebook Incident: It's Not About Data Ownership

Last week Facebook canceled, and then reinstated, Robert Scoble’s account because he was using an automated script to export information about his Facebook friends to another service. The incident triggered a vigorous debate about who was in the right. Should Scoble be allowed to export this data from Facebook in the way he did? Should Facebook be allowed to control how the data is presented and used? What about the interests of Scoble’s friends?

An interesting meme kept popping up in this debate: the idea that somebody owns the data. Kara Swisher says the data belong to Scoble:

Thus, [Facebook] has zero interest in allowing people to escape easily if they want to, even though THE INFORMATION ON FACEBOOK IS THEIRS AND NOT FACEBOOK’S.

Sorry for the caps, but I wanted to be as clear as I could: All that information on Facebook is Robert Scoble’s. So, he should–even if he agreed to give away his rights to move it to use the service in the first place (he had no other choice if he wanted to join)–be allowed to move it wherever he wants.

Nick Carr disagrees, saying the data belong to Scoble’s friends:

Now, if you happen to be one of those “friends,” would you think of your name, email address, and birthday as being “Scoble’s data” or as being “my data.” If you’re smart, you’ll think of it as being “my data,” and you’ll be very nervous about the ability of someone to easily suck it out of Facebook’s database and move it into another database without your knowledge or permission. After all, if someone has your name, email address, and birthday, they pretty much have your identity – not just your online identity, but your real-world identity.

Scott Karp asks whether “Facebook actually own your data because you agreed to that ownership in the Terms of Service.” And Louis Gray titles his post “The Data Ownership Wars Are Heating Up”.

Where did we get this idea that facts about the world must be owned by somebody? Stop and consider that question for a minute, and you’ll see that ownership is a lousy way to think about this issue. In fact, much of the confusion we see stems from the unexamined assumption that the facts in question are owned.

It’s worth noting, too, that even today’s expansive intellectual property regimes don’t apply to the data at issue here. Facts aren’t copyrightable; there’s no trade secret here; and this information is outside the subject matter of patents and trademarks.

Once we give up the idea that the fact of Robert Scoble’s friendship with (say) Lee Aase, or the fact that that friendship has been memorialized on Facebook, has to be somebody’s exclusive property, we can see things more clearly. Scoble and Aase both have an interest in the facts of their Facebook-friendship and their real friendship (if any). Facebook has an interest in how its computer systems are used, but Scoble and Aase also have an interest in being able to access Facebook’s systems. Even you and I have an interest here, though probably not so strong as the others, in knowing whether Scoble and Aase are Facebook-friends.

How can all of these interests best be balanced in principle? What rights do Scoble, Aase, and Facebook have under existing law? What should public policy say about data access? All of these are difficult questions whose answers we should debate. Declaring these facts to be property doesn’t resolve the debate – all it does is rule out solutions that might turn out to be the best.