October 5, 2024

The Last Mile Bottleneck and Net Neutrality

When thinking about the performance of any computer system or network, the first question to ask is “Where is the bottleneck?” As demand grows, one part of the system reaches its capacity first, and limits performance. That’s the bottleneck. If you want to improve performance, often the only real options are to use the bottleneck more efficiently or to increase the bottleneck’s capacity. Fiddling around with the rest of the system won’t make much difference.

For a typical home broadband user, the bottleneck for Internet access today is the “last mile” wire or fiber connecting their home to their Internet Service Provider’s (ISP’s) network. This is true today, and I’m going to assume from here on that it will continue to be true in the future. I should admit up front that this assumption could turn out to be wrong – but if it’s right, it has interesting implications for the network neutrality debate.

Two of the arguments against net neutrality regulation are that (a) ISPs need to manage their networks to optimize performance, and (b) ISPs need to monetize their networks in every way possible so they can get enough revenue to upgrade the last mile connections. Let’s consider how the last mile bottleneck affects each of these arguments.

The first argument says that customers can get better performance if ISPs (and not just customers) have more freedom to manage their networks. If the last mile is the bottleneck, then the most important management question is which packets get to use the last mile link. But this is something that each customer can feasibly manage. What the customer sends is, of course, under the customer’s control – and software on the customer’s computer or in the customer’s router can prioritize outgoing traffic in whatever way best serves that customer. Although it’s less obvious to nonexperts, the customer’s equipment can also control how the link is allocated among incoming data flows. (For network geeks: the customer’s equipment can control the TCP window size on connections that have incoming data.) And of course the customer knows better than the ISP which packets can best serve the customer’s needs.
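
To make the receive-side point concrete, here is a minimal sketch in Python. Shrinking a socket’s receive buffer caps the TCP window the customer’s machine advertises, so a well-behaved sender slows down to match. The server name and buffer size are made-up, and modern kernels’ buffer autotuning complicates matters, so treat this as an illustration of the mechanism rather than a tuning recipe:

    import socket

    # Cap the receive buffer for a low-priority download. A small buffer
    # caps the TCP window this host advertises, so the sender's TCP slows
    # down to match, throttling this flow's share of the incoming link.
    LOW_PRIORITY_RCVBUF = 16 * 1024  # 16 KB; a deliberately small, assumed value

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Set before connect() so the window scaling negotiated during the
    # handshake reflects the small buffer.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, LOW_PRIORITY_RCVBUF)
    sock.connect(("example.com", 80))  # hypothetical bulk-download server

    sock.sendall(b"GET /big-file HTTP/1.0\r\nHost: example.com\r\n\r\n")
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            break  # transfer finished, having run at a reduced rate
    sock.close()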

Another way to look at this is that every customer has their own last mile link, and if that link is not shared then different customers’ links can be optimized separately. The kind of global optimization that only an ISP can do – and that might be required to ensure fairness among customers – just won’t matter much if the last mile is the bottleneck. No matter which way you look at it, there isn’t much ISPs can do to optimize performance, so we should be skeptical of ISPs’ claims that their network management will make a big difference for users. (All of this assumes, remember, that the last mile will continue to be the bottleneck.)

The second argument against net neutrality regulation is that ISPs need to be able to charge everybody fees for everything, so there is maximum incentive for ISPs to build their next-generation networks. If the last mile is the bottleneck, then building new last-mile infrastructure is one of the most important steps that can be taken to improve the Net, and so paying off the ISPs to build that infrastructure might seem like a good deal. Giving them monopoly rents could be good policy, if that’s what it takes to get a faster Net built – or so the argument goes.

It seems to me, though, that if we accept this last argument then we have decided that the residential ISP business is naturally not very competitive. (Otherwise competition will erode those monopoly rents.) And if the market is not going to be competitive, then our policy discussion will have to go beyond the simple “let the market decide” arguments that we hear from some quarters. Naturally noncompetitive communications markets have long posed difficult policy questions, and this one looks like no exception. We can only hope that we have learned from the regulatory mistakes of the past.

Let’s hope that the residential ISP business turns out instead to be competitive. If technologies like WiMax or powerline networking turn out to be practical, this could happen. A competitive market is the best outcome for everybody, letting the government safely keep its hands off the Internet, if it can.

Comments

  1. Andrew: OK, just so long as they don’t start trying to regulate the content that flows over the wires. They should treat it like they currently do the phone: absent evidence of criminal activity, they don’t listen in let alone care what you say. Fourth Amendment and all that.

    Oh, wait, the NSA does listen in on our phone calls now. 😛

    Hey! Stop that man! That old dude that’s tearing gouges out of the Constitution and sits in that oddly-shaped office room all the time, you know the one!

  2. Private companies should have some regulations. If you have only two private companies in your area offering broadband service and both decide to keep their prices very high, then we, the people, have to suffer.

    Government or some Internet regulating authority has to keep a check on pricing, and some norms need to be set for private companies.

  3. Agreed. IMO, anything that is a “natural monopoly” should be managed as a public good, and the management should be highly transparent; the alternative is too ghastly to contemplate. (Contracting the actual work out is acceptable, so long as transparency is maintained and the ultimate decision makers are we, the people.)

    Of course, private companies would remain free to connect up their businesses and provide services. Just as companies can open shops along that road, or market roadside assistance, companies would remain free to operate ebusinesses with Web storefronts, and to provide things like email.

  4. Wiring entire swaths of residential areas to the internet is prohibitively expensive. This is why in many areas there are only two broadband providers available: the phone company and the cable company.

    I think it’s clear that the wire connections from houses to the internet form what’s called a “natural monopoly”. A natural monopoly occurs when competition is practically impossible and thus a free market cannot develop. One example of a natural monopoly is roads. You might be able to have two competing roads side-by-side for long stretches of desert highway, but in most situations you simply cannot have several roads and then give people a choice of which one to take. Thus, roads are a natural monopoly, and you cannot have a market in them.

    It’s clear to me that the wires should be taken away from the phone companies and cable companies. The wires should be maintained by the government and given, pretty much without restriction, to any business that wants to compete for customers.

  5. Ned Ulbricht says

    Especially if I’m paying $$$ for an on-demand movie over the internet, I’m gonna be unhappy if it breaks up when 6 of my neighbors do the same thing.

    Walt,

    Over a well designed ‘net, a movie—even an “on-demand” movie—is likely to need a fair amount of throughput, but be relatively insensitive to jitter and light packet loss.

    Andrew Odlyzko argues in a number of places, as do others, that video delivery is likely to be in the form of file transfers. One place where Odlyzko makes this argument is in Internet TV: Implications for the long distance network (p.8 in PDF):

    [V]ideo is likely to be in the form of file transfers, not streaming real time traffic. There are more detailed arguments in [CoffmanO2], but the basic argument is that video will follow the example of Napster (or MP3, to be more precise), which is delivered primarily as files for local storage and replay, and not in streaming form. This local storage and replay model has been known as a possibility for a long time, cf. [Owen]. It has several advantages. It can be deployed easily (no need to wait for the whole Internet to be upgraded to provide high quality transmission). It also allows for faster than real time transmission when networks acquire sufficient bandwidth. (This will allow for sampling and for easy transfer to portable storage units.)

    The prediction that streaming multimedia traffic will not dominate the Internet has been made before, in [Odlyzko4, StArnaud]. It fits in well with the abundance of local storage we are increasingly experiencing.

    IMO, the key point is “faster than real time transmission”; the back-of-the-envelope sketch below makes it concrete.

    As long as there’s sufficient best-effort bandwidth for you and your six—or sixty—neighbors, there’s no real need to have a separately prioritized traffic class for movies.
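
    To make “faster than real time” concrete, here is some quick arithmetic; the movie bitrate and link speed are purely illustrative assumptions, not measurements:

        # Streaming a movie in real time vs. delivering it as a file.
        # All numbers are illustrative assumptions.
        movie_bitrate_mbps = 5   # assumed encoding rate of the movie
        movie_minutes = 120      # a two-hour movie
        link_mbps = 25           # assumed last-mile downstream capacity

        movie_megabits = movie_bitrate_mbps * movie_minutes * 60
        transfer_minutes = movie_megabits / link_mbps / 60

        print(f"Movie size: {movie_megabits / 8 / 1000:.1f} GB")
        print(f"Transfer: {transfer_minutes:.0f} min for {movie_minutes} min of video")
        # -> Movie size: 4.5 GB
        # -> Transfer: 24 min for 120 min of video

    With any headroom over the playback rate, the file lands well before the viewer needs it; that is what turns a jitter-sensitive stream into an ordinary best-effort download.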

  6. Walt French says

    The points are good, but as others say, it’s not necessarily the last mile that’s a user’s bottleneck. “Sensible” assumptions that not all users will be expecting max throughput simultaneously call for much less capacity on upstream parts of the internet; here, multiple users (all well within their expected bandwidth) could overload the link.

    So things aren’t *quite* as simple as suggested. Especially if I’m paying $$$ for an on-demand movie over the internet, I’m gonna be unhappy if it breaks up when 6 of my neighbors do the same thing (some quick arithmetic below puts numbers on that).

    My concern is that building all these toll roads will slow down commerce dramatically. Nickel-and-diming every content provider will naturally favor the multi-service conglomerates, aka The Media, due to the complexity that small service providers will face in dealing with multiple ISPs’ rate cards. Of course, you might get a break with AT&T if you promise not to sign a similar deal with EarthLink; that’d reinforce AT&T’s monopoly, and these days sleazy deals are winked at by AntiTrust.

    The real issue is that the ISPs are selling service (bandwidth) they can’t currently deliver, and they’d like to continue that into the future, when it will become increasingly obvious. I’d say the fix is to insist on enforceable contracts between ISPs and their oftentimes captive clientele, the end users.
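
    To put numbers on the movie-night scenario, here is a sketch of the oversubscription arithmetic; the subscriber count and link rates are assumptions for illustration only, not figures from any real ISP:

        # How oversubscription turns "enough" per-user bandwidth into congestion.
        # All numbers are illustrative assumptions.
        subscribers = 100     # homes sharing one aggregation link
        sold_rate_mbps = 3    # downstream rate each customer was sold
        uplink_mbps = 100     # assumed capacity of the shared upstream link

        stream_rate_mbps = 3  # an on-demand movie at the full sold rate
        max_streams = uplink_mbps // stream_rate_mbps

        ratio = subscribers * sold_rate_mbps / uplink_mbps
        print(f"Oversubscription ratio: {ratio:.0f}:1")
        print(f"Streams before the shared link saturates: {max_streams} of {subscribers}")
        # -> Oversubscription ratio: 3:1
        # -> Streams before the shared link saturates: 33 of 100

    At a 3:1 oversubscription ratio the shared link is fine on average, but it saturates once a third of the subscribers actually use the rate they were sold; that is exactly the movie-night problem.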

  7. Jon Healey says

    I’ve watched the development of broadband service since TCI rolled out cable modems in Fremont, CA. The bottleneck for both cable and phone-based ISPs hasn’t been the last mile so much as the shared bandwidth within the ISP’s network, before the traffic reaches the public Internet. As other commenters have noted, ISPs might sell you a 3 Mbps connection, but they don’t provide a dedicated 3 Mbps pipe for you all the way out to the Net. IMHO, one of the real questions is whether ISPs should be able to influence what happens when a great number of their customers simultaneously try to view 3 Mbps HD streams. Today, it’s a best-effort world. But w/o Net neutrality, HBO could pay to ensure that its HD streams get through the clutter — good for HBO and those trying to view it, not so good for everybody else.

  8. Avi, I can live with providers giving priority to packets on technical grounds; streaming video getting priority over peer2peer and spam, say. But should my ISP prioritize my web traffic to microsoft.com over my web traffic to freedom-to-tinker.com? As a paying customer I’d like to see my requests handled efficiently, without artificial slowdowns imposed because “premium” sites that I am not interested in are supposed to get better performance.

  9. Avi Flamholz says

    What the customer sends is, of course, under the customer’s control — and software on the customer’s computer or in the customer’s router can prioritize outgoing traffic in whatever way best serves that customer.

    I don’t think that the first part of this statement is true. Customers often send packets they don’t know about, or have no desire to send. This is especially true when they have lots of spyware or are part of a botnet.

    Your argument might be turned on its head on the basis of this point (just playing devil’s advocate here). The ISP is in a unique position to look at the packets and deduce that some of them are spyware-related. Most users are incapable of this deduction, and the ones who are capable probably don’t have spyware on their machines. An ISP could then lower the priority of spyware-generated packets or just drop them entirely.

  10. Since the ISPs’ arguments only make sense if there are other bottlenecks, can you really justify your assertion that the last mile is the current bottleneck? If you are just observing that not everyone has access to “broadband,” then I think your argument is weak: you need to show that the end user can’t actually get broadband that is broad enough, and that would be tricky, I think.

  11. Ned Ulbricht says

    For a typical home broadband user, the bottleneck for Internet access today is the “last mile” wire or fiber connecting their home to their Internet Service Provider’s (ISP’s) network.

    This statement lumps together the “last mile” waveguide and the “last mile” endpoint equipment.

    If the waveguide is copper twisted-pair, then the “wire” may indeed be the bottleneck. But if the waveguide is optical fiber, then the endpoint switching equipment is more likely the bottleneck.

    One of my networking professors lectured our class at length and in detail about Andrew Tanenbaum’s textbook statement:

    In the race between computing and communication, communication won. The full implications of essentially infinite bandwidth (although not at zero cost) have not yet sunk in to a generation of computer scientists and engineers taught to think in terms of the low Nyquist and Shannon limits imposed by copper wire. The new conventional wisdom should be that all computers are hopelessly slow, and networks should try to avoid computation at all costs, no matter how much bandwidth that wastes.

    (Tanenbaum, Computer Networks, 3rd ed., Prentice-Hall, New Jersey, 1996, p.87).

    In the years since I had that class, for new, fixed-waveguide communications facilities (i.e. not wireless) I haven’t seen any real reason to doubt Tanenbaum’s forward-looking design wisdom.

    Increasing the computational load for endpoint equipment on “new” networks doesn’t address the bottleneck.

  12. In practice, not every customer has their own last mile — cable modems typically share bandwidth among all the customers on a relatively small physical segment, and DSL lines are often aggregated onto a network connection with rather less bandwidth than the sum promised to all of the customers in question. So the ISP can get some mileage out of playing games with last-mile traffic to maximize total throughput, or even total quality of service (defined somehow or other).

    If anything, however, this reinforces Felten’s reasoning. If net neutrality is abandoned, it’s in the last-mile ISPs’ interest to always have a bottleneck, because that way they’ll always have the opportunity to sell priority access for particular kinds of bits — either to the customer or the application provider or both. (Yeah, I’m assuming that the last mile is fairly noncompetitive, but in most of the country that’s a good assumption.) In this case, granting monopoly rents could in fact be a disincentive to rapid investment in last-mile bandwidth.

  13. Those artificial bottlenecks are merely a way for the ISPs to extort more money out of content providers, and have little to do with managing network traffic effectively.

    But one of the network neutrality opponents’ arguments is that the additional money is needed to upgrade the Internet, and if the bottleneck is at the last mile to the consumer, then it would need to be upgraded first. Hence, Prof. Felten is legitimately addressing the arguments of net neutrality opponents and hasn’t missed the point at all.

  14. There are other places in the network where congestion may arise; it’s common to bundle the traffic of a group of users through “concentrators,” where the outgoing “bundle” has only a fraction of the total subscriber bandwidth. Concentrators, routers, and the backbone may get congested thanks to the practice called “overbooking.” It is easier and relatively cheaper to upgrade the backbone infrastructure than to upgrade the “last mile” to the consumers, so the consumer link will remain the most costly part to upgrade.

    If, thanks to overbooking, there is some upstream congestion, there are several ways to resolve the issue. One content-neutral way would be allocating each active customer a fair proportion of the available bandwidth and having the customer allocate it to his applications; a small sketch below shows one way to compute such a fair share. In this scenario customers are in control and can complain to their ISP when it doesn’t provide the advertised bandwidth.

    What I fear about the “have the websites pay for premium service” plans is that they discourage the ISPs from improving their backbone: when the bottlenecks get bigger there, websites will get more encouragement to pay up for premium service. In the end the consumer will be stuck with a choice between several “bad access” ISPs.

    Another issue is what obtaining “premium access” would mean for a smaller business. The big ISPs are looking at Google and such, but would a specialist news site (lwn.net, for example) be able to obtain “premium” status at a fair number of ISPs? How many ISPs are there to negotiate with in the US? What will be the pricing for a small commercial site? What will be the bureaucracy overhead for the ISPs?
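
    The content-neutral allocation described above is essentially what network folks call max-min fair sharing; here is a minimal sketch of it, where the link capacity and per-application demands are made-up numbers:

        def max_min_fair_share(capacity, demands):
            """Repeatedly split the remaining capacity equally among unsatisfied
            customers; anyone who wants less than an equal share gets exactly
            their demand, and the leftover is redistributed next round."""
            alloc = {cust: 0.0 for cust in demands}
            remaining = dict(demands)
            while remaining and capacity > 1e-9:
                share = capacity / len(remaining)
                for cust, want in list(remaining.items()):
                    grant = min(want, share)
                    alloc[cust] += grant
                    capacity -= grant
                    if grant >= want - 1e-9:
                        del remaining[cust]       # demand fully satisfied
                    else:
                        remaining[cust] = want - grant
            return alloc

        # Illustrative demands (Mbps) competing for a 10 Mbps upstream bottleneck:
        print(max_min_fair_share(10, {"web": 1, "video": 8, "p2p": 8}))
        # -> roughly {'web': 1.0, 'video': 4.5, 'p2p': 4.5}

    The light user gets everything it asked for, and the heavy flows split the rest evenly; no inspection of what the packets contain is needed.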

  15. If your assumption about where the bottleneck is isn’t fulfilled, then all the remaining conclusions are just plain wrong.

    The issue here is not about bottlenecks themselves, but the imposition of artificial bottlenecks on the ISP backbone, to prevent some of its customers from accessing some sites in the best way possible.

    Here you’ve missed the point completely.