December 22, 2024

The Last Mile Bottleneck and Net Neutrality

When thinking about the performance of any computer system or network, the first question to ask is “Where is the bottleneck?” As demand grows, one part of the system reaches its capacity first, and limits performance. That’s the bottleneck. If you want to improve performance, often the only real options are to use the bottleneck more efficiently or to increase the bottleneck’s capacity. Fiddling around with the rest of the system won’t make much difference.

For a typical home broadband user, the bottleneck for Internet access today is the “last mile” wire or fiber connecting their home to their Internet Service Provider’s (ISP’s) network. This is true today, and I’m going to assume from here on that it will continue to be true in the future. I should admit up front that this assumption could turn out to be wrong – but if it’s right, it has interesting implications for the network neutrality debate.

Two of the arguments against net neutrality regulation are that (a) ISPs need to manage their networks to optimize performance, and (b) ISPs need to monetize their networks in every way possible so they can get enough revenue to upgrade the last mile connections. Let’s consider how the last mile bottleneck affects each of these arguments.

The first argument says that customers can get better performance if ISPs (and not just customers) have more freedom to manage their networks. If the last mile is the bottleneck, then the most important management question is which packets get to use the last mile link. But this is something that each customer can feasibly manage. What the customer sends is, of course, under the customer’s control – and software on the customer’s computer or in the customer’s router can prioritize outgoing traffic in whatever way best serves that customer. Although it’s less obvious to nonexperts, the customer’s equipment can also control how the link is allocated among incoming data flows. (For network geeks: the customer’s equipment can control the TCP window size on connections that have incoming data.) And of course the customer knows better than the ISP which packets can best serve the customer’s needs.
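
To make the incoming-flow point concrete, here is a minimal Python sketch of one lever the customer controls: shrinking a socket's receive buffer, which caps the TCP window that the customer's machine advertises on that connection and so limits how much of the last-mile link a bulk download can claim. The host name, URL, and 64 KB figure are purely illustrative; this is a sketch of the mechanism, not a recipe from any particular ISP or application.

    import socket

    LOW_PRIORITY_RCVBUF = 64 * 1024   # illustrative cap on the receive buffer

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Set before connect() so the smaller buffer bounds the TCP window advertised
    # for the whole connection; the sender can never have more than this in flight.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, LOW_PRIORITY_RCVBUF)
    sock.connect(("bulk-downloads.example.com", 80))
    sock.sendall(b"GET /big-file HTTP/1.0\r\nHost: bulk-downloads.example.com\r\n\r\n")

    received = bytearray()
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            break
        received.extend(chunk)
    sock.close()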

Another way to look at this is that every customer has their own last mile link, and if that link is not shared then different customers’ links can be optimized separately. The kind of global optimization that only an ISP can do – and that might be required to ensure fairness among customers – just won’t matter much if the last mile is the bottleneck. No matter which way you look at it, there isn’t much ISPs can do to optimize performance, so we should be skeptical of ISPs’ claims that their network management will make a big difference for users. (All of this assumes, remember, that the last mile will continue to be the bottleneck.)

The second argument against net neutrality regulation is that ISPs need to be able to charge everybody fees for everything, so there is maximum incentive for ISPs to build their next-generation networks. If the last mile is the bottleneck, then building new last-mile infrastructure is one of the most important steps that can be taken to improve the Net, and so paying off the ISPs to build that infrastructure might seem like a good deal. Giving them monopoly rents could be good policy, if that’s what it takes to get a faster Net built – or so the argument goes.

It seems to me, though, that if we accept this last argument then we have decided that the residential ISP business is naturally not very competitive. (Otherwise competition would erode those monopoly rents.) And if the market is not going to be competitive, then our policy discussion will have to go beyond the simple “let the market decide” arguments that we hear from some quarters. Naturally noncompetitive communications markets have long posed difficult policy questions, and this one looks like no exception. We can only hope that we have learned from the regulatory mistakes of the past.

Let’s hope that the residential ISP business turns out instead to be competitive. If technologies like WiMax or powerline networking turn out to be practical, this could happen. A competitive market is the best outcome for everybody, letting the government safely keep its hands off the Internet, if it can.

ICANN Says No to .xxx

Susan Crawford reports that the ICANN board has voted not to proceed with creation of the .xxx domain. Susan, who is on ICANN’s board but voted against the decision, calls it a “low point” in ICANN’s history.

[Background: ICANN is a nonprofit organization that administers the Domain Name System (DNS), which translates human-readable Internet names like “www.freedom-to-tinker.com” into the numeric IP addresses (like 192.168.1.4) that are actually used by the Internet Protocol. Accordingly, part of ICANN’s job is to decide on the creation and management of new top-level domains like .info, .travel, and so on.]

ICANN had decided, some time back, to move toward a .xxx domain for adult content. The arrangements for .xxx seemed to be ready, but now ICANN has pulled the plug. The reason, apparently, is that the ICANN board was worried that ICM, the company that would have run .xxx, could not ensure that sites in the domain complied with all applicable laws. Note that this is a different standard than other domain managers would have to meet – nobody expects the managers of .com to ensure, proactively, that .com sites obey all of the national laws that might apply to them. And of course we all know why the standard was different: governments are touchy about porn.

Susan argues that the .xxx decision is a departure from ICANN’s proper role.

ICANN’s mission is to coordinate the allocation of domain names and numbers, while preserving the operational stability, reliability, and global interoperability of the Internet. The vision of a non-governmental body managing key (but narrow) technical coordination functions for the Internet remains in my view the approach most likely to reflect the needs of the Internet community.

[…]

I am not persuaded that there is any good technical-competency or financial-competency reason not to [proceed with .xxx].

The vision here is of ICANN as a technocratic standard-setter, not a policy-maker. But ICANN, in setting the .xxx process in motion, had already made a policy decision. As I wrote last year, ICANN had decided to create a top-level domain for adult content, when there wasn’t one for (say) religious organizations, or science institutes. ICANN has put itself in the position of choosing which kinds of domains will exist, and for what purposes. Here is Susan again:

ICANN’s current process for selecting new [top-level domains], and the artificial scarcity this process creates, continues to raise procedural concerns that should be avoided in the future. I am not in favor of the “beauty contest” approach taken by ICANN thus far, which relies heavily on relatively subjective and arbitrary criteria, and not enough on the technical merits of the applications. I believe this subjective approach generates conflict and is damaging to the technically-focused, non-governmental, bottom-up vision of ICANN activity. Additionally, both XXX and TEL raise substantial concerns about the merits of continuing to believe that ICANN has the ability to choose who should “sponsor” a particular domain or indeed that “sponsorship” is a meaningful concept in a diverse world. These are strings we are considering, and how they are used at the second level in the future and by whom should not be our concern, provided the entity responsible for running them continues to comply with global consensus policies and is technically competent.

We need to adopt an objective system for the selection of new [top-level domains], through creating minimum technical and financial requirements for registries. Good proposals have been put forward for improving this process, including the selection of a fixed number annually by lottery or auction from among technically-competent bidders.

One wonders what ICANN was thinking when it set off down the .xxx path in the first place. Creating .xxx was pretty clearly a public policy decision – though one might argue about that decision’s likely effects, it was clearly not a neutral standards decision. The result, inevitably, was pressure from governments to reverse course, and a lose-lose choice between losing face by giving in to government pressure, on the one hand, and ignoring governments’ objections and thereby strengthening the forces that would replace ICANN with some kind of government-based policy agency, on the other.

We can only hope that ICANN will learn from its .xxx mistake and think hard about what it is for and how it can pursue its legitimate goals.

Nuts and Bolts of Net Discrimination: Encryption

I’ve written several times recently about the technical details of network discrimination, because understanding these details is useful in the network neutrality debate. Today I want to talk about the role of encryption.

Scenarios for network discrimination typically involve an Internet Service Provider (ISP) who looks at users’ traffic and imposes delays or other performance penalties on certain types of traffic. To do this, the ISP must be able to tell the targeted data packets apart from ordinary packets. For example, if the ISP wants to penalize VoIP (Internet telephony) traffic, it must be able to distinguish VoIP packets from ordinary packets.

One way for users to fight back is to encrypt their packets, on the theory that encrypted packets will all look like gibberish to the ISP, so the ISP won’t be able to tell one type of packet from another.

To do this, the user would probably use a Virtual Private Network (VPN). The idea is that whenever the user’s computer wanted to send a packet, it would encrypt that packet and then send the encrypted packet to a “gateway” computer that was outside the ISP’s network. The gateway computer would then decrypt the packet and send it on to its intended destination. Incoming packets would follow the same path in reverse – they would be sent to the gateway, where they would be encrypted and forwarded on to the user’s computer. The ISP would see nothing but a bi-directional stream of packets, all encrypted, flowing between the user’s computer and the gateway.
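
Here is a minimal sketch, in Python with the third-party cryptography package, of the encrypt-and-forward step at the user’s end. The gateway address, port, and key handling are made up for illustration; real VPNs (IPsec, WireGuard, OpenVPN) do this at the IP layer with proper key exchange. The point is simply what the ISP ends up seeing.

    import socket
    from cryptography.fernet import Fernet

    GATEWAY = ("203.0.113.10", 5000)   # hypothetical gateway outside the ISP's network
    key = Fernet.generate_key()        # in practice, agreed with the gateway out of band
    tunnel = Fernet(key)

    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def send_through_tunnel(packet: bytes) -> None:
        # The ISP sees only ciphertext flowing to the gateway's address; the real
        # destination and contents are hidden inside the encrypted blob.
        udp.sendto(tunnel.encrypt(packet), GATEWAY)

    def receive_from_tunnel() -> bytes:
        ciphertext, _gateway_addr = udp.recvfrom(65535)
        # The gateway encrypted the incoming packet before forwarding it to us.
        return tunnel.decrypt(ciphertext)

    send_through_tunnel(b"original packet: headers, addresses, and payload")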

The most the user can hope for from a VPN is to force the ISP to handle all of the user’s packets in the same way. The ISP can still penalize all of the user’s packets, or it can single out randomly chosen packets for special treatment, but those are the only forms of discrimination available to it. The VPN has some cost – packets must be encrypted, decrypted, and forwarded – but the user might consider it worthwhile if it stops network discrimination.

(In practice, things are a bit more complicated. The ISP might be able to infer which packets are which by observing the size and timing of packets. For example, a sequence of packets, all of a certain size and flowing with metronome-like regularity in both directions, is probably a voice conversation. The user might use countermeasures, such as altering the size and timing of packets, but that can be costly too. To simplify our discussion, let’s pretend that the VPN gives the ISP no way to distinguish packets from each other.)
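
To see how little information the ISP actually needs, here is a sketch of the size-and-timing inference. The thresholds are invented for illustration; real traffic classifiers are more sophisticated, but the principle is the same.

    from statistics import mean, pstdev

    def looks_like_voice(sizes_bytes, gaps_seconds) -> bool:
        """Guess from metadata alone whether an encrypted flow is a phone call."""
        small   = mean(sizes_bytes) < 300         # voice frames are tiny...
        uniform = pstdev(sizes_bytes) < 20        # ...and almost constant in size
        regular = pstdev(gaps_seconds) < 0.005    # metronome-like spacing (~20 ms)
        return small and uniform and regular

    # A steady 20 ms voice stream versus a bursty web download:
    print(looks_like_voice([172] * 50, [0.020] * 49))                              # True
    print(looks_like_voice([1500, 52, 1500, 1500, 64], [0.3, 0.001, 0.001, 0.4]))  # False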

The VPN user and the ISP are playing an interesting game of chicken. The ISP wants to discriminate against some of the user’s packets, but doesn’t want to inconvenience the user so badly that the user discontinues the service (or demands a much lower price). The user responds by making his packets indistinguishable and daring the ISP to discriminate against all of them. The ISP can back down, by easing off on discrimination in order to keep the user happy – or the ISP can call the user’s bluff and hamper all or most of the user’s traffic.

But the ISP may have a different and more effective strategy. If the ISP wants to hamper a particular application, and there is a way to manipulate the user’s traffic that affects that application much more than it does other applications, then the ISP has a way to punish the targeted application. Recall my previous discussion of how VoIP is especially sensitive to jitter (unpredictable changes in delay), but most other applications can tolerate jitter without much trouble. If the ISP imposes jitter on all of the user’s packets, the result will be a big problem for VoIP apps, but not much impact on other apps.
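
A back-of-the-envelope simulation shows why this works, assuming (purely for illustration) that the ISP adds up to 200 ms of random delay to every packet and that a voice packet arriving more than 100 ms late is useless.

    import random

    random.seed(1)
    added_delay = [random.uniform(0.0, 0.200) for _ in range(1000)]  # seconds of jitter

    # Voice: a packet that misses its playout deadline might as well have been dropped.
    PLAYOUT_SLACK = 0.100
    unplayable = sum(1 for d in added_delay if d > PLAYOUT_SLACK) / len(added_delay)
    print(f"voice packets arriving too late to play: {unplayable:.0%}")

    # Bulk transfer: what matters is the average arrival rate, not per-packet timing,
    # so the same jitter leaves a long download essentially unaffected.
    avg_ms = 1000 * sum(added_delay) / len(added_delay)
    print(f"average added delay per packet: {avg_ms:.0f} ms")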

So it turns out that even using a VPN, and encrypting everything in sight, isn’t necessarily enough to shield a user from network discrimination. Discrimination can work in subtle ways.

Discrimination, Congestion, and Cooperation

I’ve been writing lately about the nuts and bolts of network discrimination. Today I want to continue that discussion by talking about how the Internet responds to congestion, and how network discrimination might affect that response. As usual, I’ll simplify the story a bit to spare you a lengthy dissertation on network management, but I won’t mislead you about the fundamental issues.

I described previously how network congestion causes Internet routers to discard some data packets. When data packets arrive at a router faster than the router can forward those packets along the appropriate outgoing links, the packets will pile up in the router’s memory. Eventually the memory will fill up, and the router will have to drop (i.e., discard) some packets.
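
In code, the router’s behavior amounts to something like the following sketch: a single drop-tail queue, where the 100-packet buffer is an arbitrary stand-in for the router’s memory.

    from collections import deque

    BUFFER_LIMIT = 100          # illustrative: how many packets the router can hold
    buffer = deque()

    def packet_arrives(packet) -> bool:
        if len(buffer) >= BUFFER_LIMIT:
            return False        # memory full: the packet is simply discarded
        buffer.append(packet)   # otherwise it waits its turn for the outgoing link
        return True

    def outgoing_link_ready():
        # Forward the packet that has been waiting longest, if any.
        return buffer.popleft() if buffer else None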

Every dropped packet has some computer at the edge of the network waiting for it. Eventually the waiting computer and its communication partner will figure out that the packet must have been dropped. From this, they will deduce that the network is congested. So they will re-send the dropped packet, but in response to the probable congestion they will slow down the rate at which they transmit data. Once enough packets are dropped, and enough computers slow down their packet transmission, the congestion will clear up.
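
The endpoint’s half of the bargain looks roughly like the additive-increase, multiplicative-decrease rule that TCP implementations follow; the constants here are only illustrative.

    sending_rate = 10.0   # packets per round trip (illustrative starting point)

    def after_each_round_trip(packet_was_dropped: bool) -> None:
        global sending_rate
        if packet_was_dropped:
            # A missing packet is read as a congestion signal: re-send it and back off hard.
            sending_rate = max(1.0, sending_rate / 2)
        else:
            # No sign of congestion: probe for spare capacity, gently.
            sending_rate += 1.0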

This is a very indirect way of coping with congestion – drop packets, wait for endpoint computers to notice the missing packets and respond by slowing down – but it works pretty well.

One interesting aspect of this system is that it is voluntary – the system relies on endpoint computers to slow down when they see congestion, but nothing forces them to do so. We can think of this as a kind of deal between endpoint computers, in which each one promises to slow down if its packets are dropped.

But there is an incentive to defect from this deal. Suppose that you defect – when your packets are dropped you keep on sending packets as fast as you can – but everybody else keeps the deal. When your packets are dropped, the congestion will continue. Then other people’s packets will be dropped, until enough of them slow down and the congestion eases. By ignoring the congestion signals you are getting more than your fair share of the network.

Despite the incentive to defect, most people keep the deal by using networking software that slows down as expected in response to congestion. Why is this? We could debate the reasons, but it seems safe to say that there is a sort of social contract by which users cooperate with their peers, and software vendors cooperate by writing software that causes users to keep the deal.

One of the reasons users comply, I think, is a sense of fairness. If I believe that the burdens of congestion control fall pretty equally on everybody, at least in the long run, then it seems fair to me to slow down my own transmissions when my turn comes. One time I might be the one whose packets get dropped, so I will slow down. Another time, by chance, somebody else’s packets may be dropped, so it will be their turn to slow down. Everybody gets their turn.

(Note: I’m not claiming that the average user has thought through these issues carefully. But many software providers have made decisions about what to do, and those decisions factor in users’ wants and needs. Software developers act as proxies for users in making these decisions. Perhaps this point will get more discussion in the comments.)

But now suppose that the network starts singling out some people and dropping their packets first. Now the burden of congestion control falls heavily on them – they have to slow down and others can just keep going. Suddenly the I’ll-slow-down-if-you-do deal doesn’t seem so fair, and the designated victims are more likely to defect from the deal and just keep sending data even when the network tells them to slow down.

The implications for network discrimination should now be pretty clear. If the network discriminates by sending misleading signals about congestion, and sending them preferentially to certain machines or certain applications, then the incentive for those machines and applications to stick to the social contract and do their share to control congestion will weaken. Will this lead to a wave of defections that destroys the Net? Probably not, but I can’t be sure. I do think this is something we should think about.

We should also listen to the broader lesson of this analysis. If the network discriminates, users and applications will react by changing their behavior. Discrimination will have secondary effects, and we had better think carefully about what they will be.

[Note for networking geeks: Yes, I know about RED, congestion signaling, non-TCP protocols that don’t do backoff, and so on. I hope you’ll agree that despite all of those real-world complications, the basic argument in this post is valid.]

Nuts and Bolts of Net Discrimination, Part 2

Today I want to continue last week’s discussion of how network discrimination might actually occur. Specifically, I want to talk about packet reordering.

Recall that an Internet router is a device that receives packets of data on some number of incoming links, decides on which outgoing link each packet should be forwarded, and sends packets on the appropriate outgoing links. If, when a packet arrives, the appropriate outgoing link is busy, the packet is buffered (i.e., stored in the router’s memory) until the outgoing link is available.

When an outgoing link becomes available, there may be several buffered packets that are waiting to be transmitted on that link. You might expect the router to send the packet that has been waiting the longest – a first-come, first-served (FCFS) rule. Often that is what happens. But the Internet Protocol doesn’t require routers to forward packets in any particular order. In principle a router can choose any packet it likes to forward next.

This suggests an obvious mechanism for discriminating between two categories of traffic: a network provider can program its routers to always forward high-priority packets before low-priority packets. Low-priority packets feel this discrimination as an extra delay in passing through the network.

Recall that last week, when the topic was discrimination by packet-dropping, I distinguished between minimal dropping, which drops low-priority packets first but only drops a packet when necessary, and non-minimal dropping, which intentionally drops some low-priority packets even when it is possible to avoid dropping anything. The same kind of distinction applies to today’s discussion of discrimination by delay. A minimal form of delay discrimination only delays low-priority packets when it is necessary to delay some packet – for example when multiple packets are waiting for a link that can only transmit one packet at a time. There is also a non-minimal form of delay discrimination, which may delay a low-priority packet even when the link it needs is available. As before, a net neutrality rule might want to treat minimal and non-minimal delay discrimination differently.
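
A sketch of the minimal form, assuming just two priority classes, makes the distinction concrete: the router reorders packets only when more than one is already waiting for the link, while a non-minimal version would also hold low-priority packets back even when the link is idle.

    from collections import deque

    high_priority = deque()
    low_priority = deque()

    def enqueue(packet, is_high_priority: bool) -> None:
        (high_priority if is_high_priority else low_priority).append(packet)

    def next_packet_to_send():
        # Called whenever the outgoing link becomes free. A waiting high-priority
        # packet always goes first; a low-priority packet is sent only when no
        # high-priority packet is waiting, so nothing is delayed unnecessarily.
        if high_priority:
            return high_priority.popleft()
        if low_priority:
            return low_priority.popleft()
        return None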

One interesting consequence of minimal delay discrimination is that it hurts some applications more than others. Internet traffic is usually bursty, with periods of relatively low activity punctuated by occasional bursts of packets. If you’re browsing the Web, for example, you generate little or no traffic while you’re reading a page, but there is a burst of traffic when your browser needs to fetch a new page.

If a network provider is using minimal delay discrimination, and the high-priority traffic is bursty, then low-priority traffic will usually sail through the network with little delay, but will experience noticeable delay whenever there is a burst of high-priority traffic. The technical term for this kind of on-again, off-again delay is “jitter”.

Some applications can handle jitter with no problem. If you’re downloading a big file, you care more about the average packet arrival rate than about when any particular packet arrives. If you’re browsing the web, modest jitter will cause, at worst, a slight delay in downloading some pages. If you’re watching a streaming video, your player will buffer the stream so jitter won’t bother you much.

But applications like voice conferencing or Internet telephony, which rely on a steady stream of interactive, real-time communication, can suffer a lot if there is jitter. Users report that VoIP services like Vonage and Skype can behave poorly when subjected to network jitter.

Residential ISPs are often phone companies, or offer home phone service themselves, so they may have a special incentive to discriminate against competing Internet phone services. Causing jitter for such services, whether by minimal or non-minimal delay discrimination, could be an effective tactic for an ISP that wants to drive customers away from independent Internet telephone services.

There is some anecdotal evidence to suggest that Comcast’s residential Internet customers may be having trouble using the Vonage Internet phone service because of jitter problems.

Let’s assume for the sake of argument that these reports are accurate – that Comcast’s network has high jitter, and that this is causing problems for Vonage users. What might be causing this? One possibility is that Comcast is using delay discrimination, either minimal or non-minimal, with the goal of causing this problem. Many people would want rules against this kind of behavior.

(To be clear: I’m not accusing Comcast of anything. I’m just saying that if we assume that Comcast’s network causes high jitter, and if we assume that high jitter does cause Vonage problems, then we should consider the possibility that Comcast is trying to cause the jitter.)

Another possibility is that Comcast isn’t trying to cause problems for Vonage users, and Comcast’s management of its network is completely reasonable and nondiscriminatory, but for reasons beyond Comcast’s control its network happens to have higher jitter than other networks have. Perhaps the jitter problems are temporary. In this case, most people would agree that net neutrality rules shouldn’t punish Comcast for something that isn’t really its fault.

The most challenging possibility, from a policy standpoint (still assuming that the jitter problem exists), is that Comcast didn’t take any obvious steps to cause the problem but is happy that it exists, and is subtly managing its network in a way that fosters jitter. Network management is complicated, and many management decisions could impact jitter one way or the other. A network provider who wants to cause high jitter can do so, and might have pretextual excuses for all of the steps it takes. Can regulators tell this kind of stratagem apart from fair and justified engineering decisions that happen to cause a little temporary jitter?

Surely some discriminatory strategies are so obvious, and the offered engineering pretexts so weak, that we could block or punish them without worrying about being wrong. But there would be hard cases too. Net neutrality regulation, even if justified, will inevitably lead to some difficult line-drawing.