
Archives for October 2005

Net Neutrality and Competition

No sooner do I start writing about net neutrality than Ed Whitacre, the CEO of Baby Bell company SBC, energizes the debate with a juicy interview:

Q: How concerned are you about Internet upstarts like Google, MSN, Vonage, and others?

A: How do you think they’re going to get to customers? Through a broadband pipe. Cable companies have them. We have them. Now what they would like to do is use my pipes free, but I ain’t going to let them do that because we have spent this capital and we have to have a return on it. So there’s going to have to be some mechanism for these people who use these pipes to pay for the portion they’re using. Why should they be allowed to use my pipes?

The Internet can’t be free in that sense, because we and the cable companies have made an investment and for a Google or Yahoo or Vonage or anybody to expect to use these pipes [for] free is nuts!

This is a pretty dumb thing for him to say, for several reasons. First, it shows amazing disrespect for his home broadband customers, who are paying $40 or so every month to use SBC’s pipes. If I were an SBC broadband customer, I’d be dying to ask Mr. Whitacre exactly what my monthly payment is buying, if it isn’t buying access to Google, Yahoo, Vonage, and any other $%&^* Internet service I want to use. Didn’t SBC’s advertising say I was buying access to the Internet?

Second, if somebody is going to pay somebody in this situation, it’s not clear who should be doing the paying. There is some set of customers who want to use SBC broadband service to access Google. Why should Google pay SBC for this? Why shouldn’t SBC pay Google instead?

Sure, SBC would like its customers to have free access to Google, Yahoo, and Vonage. But as Mr. Whitacre would put it, the Internet can’t be free in that sense, because Google, Yahoo, and Vonage have made an investment and for SBC or anybody to expect to use those services for free is nuts!

My point is not that SBC should necessarily pay, but that there is no rule of nature saying that one layer of the protocol stack should pay another layer. If SBC gets paid by Google, it’s because SBC faces less competition and hence has more market power. As Susan Crawford observes, Mr. Whitacre speaks with “the voice of someone who doesn’t think he has any competitors.”

At this point, economists will object that it’s sometimes efficient to let ISPs levy these kinds of charges, and so requiring net neutrality from SBC may lead to an inefficient outcome. I appreciate this point, and will be writing more about it in the future.

For now, though, notice that Mr. Whitacre isn’t speaking the language of efficiency. He wants to extract payments because he can. There’s a whiff here of the CEO-tournament syndrome that infected the media world in the 1990s, as documented in Ken Auletta’s “mogul” stories. Can Mr. Whitacre make the CEOs of Google, Yahoo, and Vonage genuflect to him? Is he really the man with the biggest … market power? If there are to be side payments, will they reflect business calculation, or just ego?

It’s one thing to argue that a policy can lead to efficient results. It’s another thing entirely to show that it will lead to efficient results, in the hands of real human beings.

Discrimination Against Network Hogs

Adam Thierer has an interesting post about network neutrality over at Tech Liberation Front. He is reacting to a recent Wall Street Journal story about how some home broadband service providers (BSPs) are starting to modify their networks to block or frustrate network applications they don’t like.

Why would a BSP discriminate against an application’s traffic? The standard scenario that people worry about is that a BSP hinders traffic from Vonage or some other VoIP application, because the BSP wants to sell phone service to the customer and VoIP competes with that phone service. One can cook up a hypothetical like this whenever a BSP wants to sell an application-level service. The standard response to this worry is to suggest “net neutrality” regulation, which would require BSPs to carry all traffic on an equal footing, regardless of which application or protocol is used. There is a complicated literature about the economics of net neutrality; for now, suffice it to say that net neutrality regulation can help or hurt, depending on the precise circumstances.

Thierer opposes net neutrality regulation. He seems especially worried that neutrality might require BSPs to treat all customers the same, regardless of how much network traffic they generate. If a few customers use lots of bandwidth, this will leave less for everybody else, or alternatively will require the BSP to upgrade the network and pass the cost on neutrally to all users. It’s better, he argues, to let BSPs price differentially based on bandwidth usage.

It’s hard to argue with that proposition. I don’t think any reasonable net neutrality advocate would object to a BSP discriminating or pricing based solely on bandwidth usage. They would of course object if a BSP blocked a particular app and rationalized that act with vague excuses about saving bandwidth; but a real bandwidth limit ought to be uncontroversial.

(Technically, customers already have bandwidth limits, in the sense that a given class of service limits the maximum instantaneous bandwidth that a customer can use. What we’re talking about here are limits that are defined over a longer period, such as a day or a week.)
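To make the distinction concrete, here is a minimal sketch, in Python, of how a longer-period cap might sit on top of the usual instantaneous rate limit. The cap, the throttled rate, and the weekly reset are all assumptions invented for illustration, not any real BSP’s policy.

    # Hypothetical sketch of a longer-period bandwidth cap. All numbers and
    # names are invented for illustration; no real BSP policy is implied.
    from collections import defaultdict

    WEEKLY_CAP_BYTES = 20 * 10**9     # assumed cap: 20 GB per week
    NORMAL_RATE_KBPS = 3000           # assumed advertised rate for the tier
    THROTTLED_RATE_KBPS = 128         # assumed rate once the cap is exceeded

    usage = defaultdict(int)          # customer_id -> bytes used this week

    def record_transfer(customer_id, nbytes):
        usage[customer_id] += nbytes

    def current_rate_kbps(customer_id):
        # The instantaneous rate is still limited as before; the weekly
        # counter only decides whether this customer has been throttled.
        if usage[customer_id] > WEEKLY_CAP_BYTES:
            return THROTTLED_RATE_KBPS
        return NORMAL_RATE_KBPS

    def weekly_reset():
        usage.clear()                 # run once per billing week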

It’s already the case that some customers use much more bandwidth than others. Thierer quotes a claim that fewer than 10% of Time-Warner customers use more than 75% of bandwidth; and another BSP makes an even stronger claim. This isn’t a surprise – this kind of business is often subject to an 80/20 rule (80% of the resources used by 20% of the customers) or even a 90/10 rule.

But will ISPs actually apply bandwidth limits? Here’s Thierer:

This raises the most interesting issue in this entire debate: Why is it that BSPs are not currently attempting to meter broadband usage and price it to account for demand and “excessive” usage by some users? In my opinion, this would be the most efficient and least meddlesome way of dealing with this problem. Per-minute or per-bit pricing schemes could help conserve pipe space, avoid congestion, recover costs and enable BSPs to plow the savings into new capacity / innovation. Despite this, no BSP seems willing to engage in any sort of metering of the pipe. Why is that?

I think there are two reasons that BSPs have so far been unwilling to price discriminate. First, broadband operators are probably concerned that such a move would bring about unwanted regulatory attention. Second, and more importantly, cable and telco firms are keenly aware of the fact that the web-surfing public has come to view “all you can eat” buffet-style, flat-rate pricing as a virtual inalienable right. Internet guru Andrew Odlyzko has correctly argued that “People react extremely negatively to price discrimination. They also dislike the bother of fine-grained pricing, and are willing to pay extra for simple prices, especially flat-rate ones.”

So if BSPs aren’t willing to bandwidth-discriminate now, and doing so would anger customers, why would we expect them to start discriminating in the future? It’s not enough to point to a 90/10 rule of bandwidth usage. If, as seems likely, a 90/10 rule has been operating for a while now, and BSPs have not responded with differential pricing, then it’s not clear why anything would change in the future. Perhaps there is data showing that the customer-to-customer imbalance is getting worse; but I haven’t seen it.

Ultimately, BSPs’ general refusal to bandwidth-discriminate would seem to contradict claims that bandwidth discrimination is necessary. Still, even net neutrality advocates ought to support BSPs’ freedom to bandwidth-discriminate.

Alert readers have surely noticed by this point that I haven’t said whether I support net neutrality regulation. The reason is pretty simple: I haven’t made up my mind yet. Both sides make plausible arguments, and the right answer seems to depend on what assumptions we make about the markets and technology of the near future. I’ll probably be talking myself through the issue in occasional blog posts here over the next few weeks. Maybe, with your help, I’ll figure it out.

RFID, Present and Future

One of the advantages of teaching in a good university is the opportunity to hear smart students talk to each other about complicated topics. This semester I’m teaching a graduate seminar in technology and privacy, to a group of about ten computer science and electrical engineering students. On Monday the class discussed the future of RFID technology.

The standard scenario for RFID involves affixing a small RFID “tag” to a consumer product, such as an item of clothing sold at WalMart. (I’m using WalMart as a handy example here; anyone can use RFID.) Each tag has a unique ID number. An RFID “reader” can use radio signals to determine the ID numbers of any tags that are nearby. WalMart might use an RFID reader to take an inventory of which items are in their store, or which items are in the shopping cart of a customer. This has obvious advantages in streamlining inventory control, which helps WalMart operate more efficiently and sell products at lower prices.
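As a toy illustration of the inventory use (not any real WalMart system), a store’s back end might simply map the tag IDs reported by a reader onto its own product database; the IDs and products below are made up.

    # Toy sketch: turn a reader's list of nearby tag IDs into an inventory
    # report. Tag IDs and products are made up for illustration.
    tag_database = {
        "3F0A19": "blue jeans, size 32",
        "3F0A1A": "gallon of milk",
    }

    def take_inventory(reader_results):
        """reader_results: list of tag ID strings reported by an RFID reader."""
        found, unknown = [], []
        for tag_id in reader_results:
            if tag_id in tag_database:
                found.append(tag_database[tag_id])
            else:
                unknown.append(tag_id)    # e.g., a tag from another store
        return found, unknown

    print(take_inventory(["3F0A19", "3F0A1A", "BEEF00"]))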

This sounds fine so far, but there is a well-known problem with this scheme. When a customer buys the item and takes it home, the RFID tag is still there, so people may be able to track the customer or learn what he is carrying in his backpack, by scanning him and his possessions for RFID tags. This scares many people.

The risk of post-sale misuse of RFID tags can be mitigated by having WalMart deactivate or “kill” the tags when the customer buys the tag-containing item. This could be done by sending a special radio code to the tag. On receiving the kill code, the tag would stop operating. (Any practical kill feature would allow a special scanner to detect that a dead tag was present, but not to learn the dead tag’s ID number.)
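Here is a rough sketch of the kill behavior just described, with the kill code and interface invented for illustration: once killed, the tag still answers a bare presence query but never again reveals its ID.

    # Sketch of a killable tag. The kill code and method names are invented.
    class KillableTag:
        def __init__(self, tag_id, kill_code):
            self.tag_id = tag_id
            self.kill_code = kill_code
            self.alive = True

        def query_id(self):
            # A dead tag no longer discloses its ID number.
            return self.tag_id if self.alive else None

        def query_presence(self):
            # A special scanner can still tell that a (dead) tag is present.
            return True

        def kill(self, code):
            if code == self.kill_code:
                self.alive = False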

Killing tags is a fine idea, but perhaps the consumer wants to use the tag for his own purposes. It would be cool if my laundry hamper knew which clothes were in it and could warn me of an impending clean-sock crisis, or if my fridge knew whether it contained any milk and how long that milk had been present. These things are possible if my clothing and food containers have working RFID tags.

One way to get what we want is to have smarter tags that use cryptography to avoid leaking information to outsiders. A smart tag would know the cryptographic key of its owner, and would only respond to requests properly signed by that key; and it would reveal its ID number in such a way that only its owner could understand it. At the checkout stand, WalMart would transfer cryptographic ownership of a tag to the buyer, rather than killing the tag. Any good cryptographer can figure out how to make this work.
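Here is a minimal sketch of that idea, using only Python’s standard library. It illustrates the general approach (an HMAC-based challenge-response plus a keyed pad over the ID), not a real RFID protocol; every name, key, and message format in it is an assumption made for illustration.

    # Sketch of a crypto-enabled tag: it answers only readers that prove
    # knowledge of the owner's key, hides its ID from everyone else, and can
    # be re-keyed to a new owner at checkout. Illustrative only.
    import hmac, hashlib, os

    class SmartTag:
        def __init__(self, tag_id, owner_key):
            self.tag_id = tag_id
            self.owner_key = owner_key
            self.nonce = None

        def challenge(self):
            self.nonce = os.urandom(16)   # fresh nonce, so replies can't be replayed
            return self.nonce

        def _expected_proof(self):
            return hmac.new(self.owner_key, self.nonce, hashlib.sha256).digest()

        def respond(self, reader_proof):
            # Only answer a reader that can MAC the nonce under the owner's key.
            if not hmac.compare_digest(reader_proof, self._expected_proof()):
                return None
            # Mask the ID with a keyed pad so only the owner can recover it.
            pad = hmac.new(self.owner_key, b"id" + self.nonce, hashlib.sha256).digest()
            return bytes(a ^ b for a, b in zip(self.tag_id, pad))

        def transfer_ownership(self, reader_proof, new_key):
            # At the checkout stand: the current owner authorizes a re-key.
            if hmac.compare_digest(reader_proof, self._expected_proof()):
                self.owner_key = new_key
                return True
            return False

    # Owner-side usage: prove knowledge of the key, then unmask the reply.
    owner_key = os.urandom(32)
    tag = SmartTag(b"\x3f\x0a\x19\x00", owner_key)
    nonce = tag.challenge()
    proof = hmac.new(owner_key, nonce, hashlib.sha256).digest()
    masked = tag.respond(proof)
    pad = hmac.new(owner_key, b"id" + nonce, hashlib.sha256).digest()
    assert bytes(a ^ b for a, b in zip(masked, pad)) == tag.tag_id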

The problem at present is that garden-variety RFID tags can’t do fancy crypto. Tags don’t have their own power source but get their power parasitically from an electromagnetic “carrier wave” broadcast by the reader. This means that the tag has a very limited power budget and very limited time – not nearly enough of either to do serious crypto. Some people argue that the RFID privacy problem is an artifact of these limitations of today’s RFID tags.

If so, that’s good news, because Moore’s Law is increasing the amount of computing we can do with a fixed power or time budget. If Moore’s Law applies to RFID circuits – and it seems that it should – then the time will come in a few years when dirt-cheap RFID tags can do fancy crypto, and therefore can be more privacy-friendly than they are today. The price difference between simple tags and smart tags will be driven toward zero by Moore’s Law, so there won’t be a cost justification for using simpler but less privacy-friendly tags.

But here’s the interesting question: when nicer RFID tags become possible, will people switch over to using them, or will they keep using today’s readable-by-everybody tags? If there’s no real cost difference, there are only two reasons we might not switch. The first is that we are somehow locked in by backward compatibility, so that any switch to a new technology incurs costs that nobody wants to be the first to pay. The second is a kind of social inertia, in which people are so accustomed to accepting the privacy risks of dumber RFID technologies that they don’t insist on improvement. Either of these scenarios could develop, and if one does, we may be locked out of a better technology for quite a while.

Our best hope, perhaps, is that WalMart can benefit from a stronger technology. Current systems are subject to various uses that WalMart may not like. For example, a competitor might use RFID to learn how many of each product WalMart is stocking, or to learn where WalMart customers live. Or a malicious customer might try to kill or impersonate a WalMart tag. Smarter RFID tags can prevent these attacks. Perhaps that will be enough to get WalMart to switch.

Looking further into the future, the privacy implications of small, communicating devices will only get more serious. The seminar read a paper on “smart dust”, a more futuristic technology involving tiny, computationally sophisticated motes that might some day be scattered across an area, then picked up by passersby, as any dust mote might be. This is a really scary technology, if it’s used for evil.

Today, inventory control and remote tracking come in a single technology called RFID. Tomorrow, they can be separated, so that we can have the benefits of inventory control (for businesses and individuals) without having to subject ourselves to tracking. Tracking will be more possible than ever before, but at least we won’t have to accept tracking as a side-effect of shopping.