November 13, 2008

How Fragile Is the Internet?

With Barack Obama’s election, we’re likely to see a revival of the network neutrality debate. Thus far the popular debate over the issue has produced more heat than light. On one side have been people who scoff at the very idea of network neutrality, arguing either that network neutrality is a myth or that we’d be better off without it. On the other are people who believe the open Internet is hanging on by its fingernails. These advocates believe that unless Congress passes new regulations quickly, major network providers will transform the Internet into a closed network where only their preferred content and applications are available.

One assumption that seems to be shared by both sides in the debate is that the Internet’s end-to-end architecture is fragile. At times, advocates on both sides seem to think that AT&T, Verizon, and Comcast have big levers in their network closets labeled “network neutrality” that they will set to “off” if Congress doesn’t stop them. In a new study for the Cato Institute, I argue that this assumption is unrealistic. The Internet has the open architecture it has for good technical reasons. The end-to-end principle is deeply embedded in the Internet’s architecture, and there’s no straightforward way to change it without breaking existing Internet applications.

One reason is technical. Advocates of regulation point to a technology called deep packet inspection as a major threat to the Internet’s open architecture. DPI allows network owners to look “inside” Internet packets, reconstructing the web page, email, or other information as it comes across the wire. This is an impressive technology, but it’s also important to remember its limitations. DPI is inherently reactive and brittle. It requires human engineers to precisely describe each type of traffic that is to be blocked. That means that as the Internet grows ever more complex, more and more effort would be required to keep DPI’s filters up to date. It also means that configuration problems will lead to the accidental blocking of unrelated traffic.
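
To make that concrete, here is a minimal sketch, in Python, of the signature matching at the heart of a simple DPI classifier. The signatures are illustrative rather than any vendor’s actual rule set, but they show the basic problem: every protocol to be blocked has to be described by hand, traffic the rules don’t anticipate slips through, and a rule written too broadly sweeps up unrelated traffic.

```python
# Minimal sketch of signature-based traffic classification, the core of a
# simple DPI filter. The signatures below are illustrative, not a real rule set.

SIGNATURES = [
    # BitTorrent peers open with a fixed handshake prefix.
    ("bittorrent", lambda p: p.startswith(b"\x13BitTorrent protocol")),
    # Plain HTTP requests start with a method name.
    ("http", lambda p: p.split(b" ", 1)[0] in {b"GET", b"POST", b"HEAD", b"PUT"}),
    # Unencrypted SMTP servers greet the client with "220 ".
    ("smtp", lambda p: p.startswith(b"220 ")),
]

def classify(payload: bytes) -> str:
    """Return the first matching protocol name, or 'unknown'."""
    for name, matches in SIGNATURES:
        if matches(payload):
            return name
    return "unknown"

if __name__ == "__main__":
    print(classify(b"\x13BitTorrent protocol" + bytes(8)))  # -> bittorrent
    print(classify(b"GET /index.html HTTP/1.1\r\n"))        # -> http
    # An obfuscated or encrypted handshake matches nothing, so a blocklist
    # built this way silently stops working for it...
    print(classify(b"\x8f\x02\xa1\x00\x17"))                # -> unknown
    # ...while a rule written too loosely would catch unrelated traffic instead.
```

Every new protocol, and every change to an old one, means another hand-written entry in a list like this, which is why filters of this kind tend to lag behind the traffic they are trying to control.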

The more fundamental reason is economic. The Internet works as well as it does precisely because it is decentralized. No organization on Earth has the manpower that would have been required to directly manage all of the content and applications on the Internet. Networks like AOL and CompuServe that were managed that way got bogged down in bureaucracy while they were still a small fraction of the Internet’s current size. It is not plausible that bureaucracies at Comcast, AT&T, or Verizon could manage their TCP/IP networks the way AOL ran its network a decade ago.

Of course, what advocates of regulation fear is precisely that these companies will try to manage their networks this way, fail, and screw the Internet up in the process. But I think this underestimates the magnitude of the disaster that would befall any network provider that tried to convert its Internet service into a proprietary network. People pay for Internet access because they find it useful. A proprietary Internet would be dramatically less useful than an open one because network providers would inevitably block an enormous number of useful applications and websites. A network provider that deliberately broke a significant fraction of the content or applications on its network would find many fewer customers willing to pay for it. Customers that could switch to a competitor would. Some others would simply cancel their home Internet service and rely instead on Internet access at work, school, libraries, etc. And many customers that had previously taken higher-speed Internet service would downgrade to basic service. In short, even in an environment of limited competition, reducing the value of one’s product is rarely a good business strategy.

This isn’t to say that ISPs will never violate network neutrality. A few have done so already. The most significant was Comcast’s interference with the BitTorrent protocol last year. I think there’s plenty to criticize about what Comcast did. But there’s a big difference between interfering with one networking protocol and the kind of comprehensive filtering that network neutrality advocates fear. And it’s worth noting that even Comcast’s modest interference with network neutrality provoked a ferocious response from customers, the press, and the political process. The Comcast/BitTorrent story certainly isn’t going to make other ISPs think that more aggressive violations of network neutrality would be a good business strategy.

So it seems to me that new regulations are unnecessary to protect network neutrality. They are likely to be counterproductive as well. As Ed has argued, defining network neutrality precisely is surprisingly difficult, and enacting a ban without a clear definition is a recipe for problems. In addition, there’s a real danger of what economists call regulatory capture—that industry incumbents will find ways to turn regulatory authority to their advantage. As I document in my study, this is what happened with 20th-century regulation of the railroad, airline, and telephone industries. Congress should proceed carefully, lest regulations designed to protect consumers from telecom industry incumbents wind up protecting incumbents from competition instead.

Comments

  1. Tim, this is a great post. I’m going to try to read the study in full. BTW, here’s an essay about how the telephone system has been messed up by regulation: http://bits.blogs.nytimes.com/2008/11/03/the-very-expensive-myth-of-long-distance/

  2. I think Mitch’s last question poses an interesting point. Insofar as ISPs have effective monopolies or can subject customers to strong effective lock-in, attempts to leverage their market position in bit-handling to increase revenues in other areas are pretty clearly subject to antitrust sanctions. So would the anti-neutrality folks be willing to stipulate that any differential treatment of ISP-affiliated bits and non-affiliated bits that occurs under effective-monopoly conditions should, without further adjudication, be subject to the standard treble damages, with the only question left being the amount?

  3. mtich golden says

    Here is an amusing bit of historical trivia:

    The first automatic telephone switch was invented by a guy named Strowger, who was an undertaker. The wife of a competitor of his was the one who manually connected calls in their town. Whenever someone would call asking for Strowger’s business, the caller would be connected to Strowger’s competitor instead. Repeated complaints to the phone company resulted in no improvement, so he invented a way to get the manual operators out of the loop.

    http://www.strowger.com/About-us/Strowger-Invention-of-Telephone-Switch.html

    There is nothing stopping an ISP today from doing the same thing. In fact, there is already a history of a Canadian ISP (Telus) blocking its customers from reaching the web site of a union with which it was embroiled in a conflict.

    http://yro.slashdot.org/article.pl?sid=05/08/04/1219223&from=rss

    So, the question is, how many people canceled their Telus contracts when Telus decided to do this?

    Now the question anti-regulation advocates should address is: should this be legal?

    • So how would this work? You’re suggesting that Barnes and Noble would pay Comcast to automatically redirect all traffic intended for Amazon.com to B&N’s site instead? And that customers wouldn’t notice this or be upset?

      The Telus case is a bit of a red herring. What Telus did was clumsy and stupid, but Telus also ultimately got a court order to have the site shut down. Canada isn’t exactly a police state, so this suggests it wasn’t a run-of-the-mill political website. It’s silly to extrapolate from that to more general ISP censorship.

      • That question seems like a little bit of a red herring itself. You don’t need redirection, you just need Amazon traffic to be slower and have a higher percentage of dropped packets, and for B&N’s site to run smoothly. In the absence of proof to the contrary, people will assume that Amazon is having some kind of trouble with its connection or its servers, and some percentage of them will go to the site that doesn’t freeze unpredictably just after they try to confirm their order.

        It doesn’t take much of a difference to tip marginal customers from one site to another, and it doesn’t take a large percentage change in traffic to yield millions in revenue, as the rough arithmetic sketched below suggests.

        (Amazon vs B&N is probably a bad comparison, since Amazon long since stopped being just a bookstore. Better might be YouTube vs Hulu vs Vimeo vs Netflix et al. There the market isn’t as fixed, and the products are rather more substitutable. Or the original VoIP example.)
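
        Concretely, here is a minimal Python sketch using the standard Mathis et al. approximation for steady-state TCP throughput, rate ≈ (MSS/RTT) × 1.22/√loss, of how sharply a little extra delay and loss cuts what a connection can carry. The round-trip times and loss rates below are invented for illustration.

        ```python
        # Rough illustration: the Mathis et al. approximation says steady-state
        # TCP throughput scales as (MSS / RTT) * (1.22 / sqrt(loss)), so modest
        # extra delay and loss cost a lot of throughput. Numbers are made up.

        from math import sqrt

        def tcp_throughput_bps(mss_bytes: float, rtt_s: float, loss: float) -> float:
            """Approximate steady-state TCP throughput in bits per second."""
            return (mss_bytes * 8 / rtt_s) * (1.22 / sqrt(loss))

        baseline = tcp_throughput_bps(mss_bytes=1460, rtt_s=0.050, loss=0.0001)
        degraded = tcp_throughput_bps(mss_bytes=1460, rtt_s=0.080, loss=0.001)

        print(f"baseline: {baseline / 1e6:5.1f} Mbit/s")  # roughly 28 Mbit/s
        print(f"degraded: {degraded / 1e6:5.1f} Mbit/s")  # roughly  6 Mbit/s
        ```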

        • So does Comcast advertise that it offers this “service?” Or is it something that Comcast’s president negotiates with Hulu’s president in a smoke-filled room somewhere and that only happens once? I don’t see how Comcast can do it often enough to make significant profits but secretly enough that they don’t get caught.

          • Of course they advertise the service. They tout it as “premium” or “ultra-reliable” or some other combination of nice-sounding words. They say “become a Comcast partner, and we’ll work with you to make sure your bits are less likely to get lost.” How could anyone complain about that?

            The publicity around their P4P experiments is a perfect example of how this could be done. The “partners” and “affiliates” and whatever other terms they figure out for people paying vig get the benefit of all the improvements they make in their network infrastructure, and if some of those improvements just happen to have a slight impact on the network’s performance for unimportant lower-priority traffic, who’s to complain? For a fee, they’d be willing to work with anyone. (Remember, the goal of Comcast and other big ISPs is to make money. If they can get all the people on the other end of their pipes to pay them extra, they have no problem with neutrality.)

          • Now you’re changing your example. Before, Comcast was going to kneecap particular websites at the behest of their competitors. Now you’re suggesting they’ll kneecap everyone who doesn’t pay up. But since the vast majority of websites won’t pay at the outset, this is equivalent to saying that Comcast’s business strategy will be to make virtually the entire Internet unusable. That doesn’t sound like a strategy that will improve Comcast’s bottom line.

          • P4P is a system that makes peer-to-peer run faster by identifying shorter paths and faster flows. It’s a technique that was developed in an open forum by multiple ISPs and doesn’t have any fees associated with it. It’s good for consumers because it speeds up downloads, and it’s good for ISPs because it reduces transit costs. It’s hardly an example of price manipulation, anti-competitive behavior, or any other bad thing. In fact, it’s the biggest nightmare of companies like Google, who have bought themselves a fast lane by deploying mega-server-farms all over the place with super high speed links to Internet NAPs.

            “rp”, you’re pulling things out of the air. If you can substantiate any of the charges you make against Comcast, do so with an actual link.

          • mtich golden says

            Suppose Time Warner or some other oligopoly ISP simply enters into a “partnership” with a competitor of YouTube’s. I can easily imagine that NBC or another TV network would want to set up a video-on-demand service, for example. (Or, consider that the cable companies already sell on-demand videos for a high price. YouTube is starting to compete directly with that.)

            Now, Time Warner has a financial interest in all the traffic that goes to NBC and an anti-interest in the traffic that goes to YouTube. I think it’s reasonable to expect that the ISP won’t be making its best effort to make sure that the traffic to both places is handled equally. They may or may not deliberately degrade the traffic to YouTube. The fact is, however, that you will never know whether a performance difference is just the result of some networking issue or an intentional configuration on the ISP’s routers.

            This is, of course, *already* the situation when it comes to VOIP phones. I know several people who had Vonage who told me that they had to change to Time Warner VOIP, because their internet connection just wasn’t good enough. How are they supposed to know why that happened?

            You are the one making an unwarranted assertion: namely, that the unregulated market will straighten this out. There is no reason to believe that. Customers – even highly technical ones – won’t be able to figure out what is happening, and even if they did, you’re asserting without evidence that they would be able, and sufficiently motivated, to change their ISP. ISPs do lots of things to keep people from switching, from bundling in lots of services (such as VOIP phones and cable channels) to contracts with cancellation fees. Thus, for example, I know of no one in the situation I mentioned above who chose to change ISPs and keep Vonage.

          • “This is, of course, *already* the situation when it comes to VOIP phones. I know several people who had Vonage who told me that they had to change to Time Warner VOIP, because their internet connection just wasn’t good enough. How are they supposed to know why that happened?”

            The individual customers may not know, but you can bet that Vonage does, and Vonage is in a position to raise a big stink or pay a big legal team.

          • It is very easy to test the jitter and packet loss between endpoints. In the next few years you will find that every VoIP device supports RTCP, and someone will write handy software to collect statistics on the gateways, so we will have instant publication of whose service delivers the goods and whose does not. VoIP providers will give customers a website where they can check their link quality, and customers will talk to one another.

            If you want to go a bit further, most routers will respond to expired TTL fields on UDP (read up on how traceroute works), so you can use a UDP packet that looks outwardly very similar to a VoIP packet (maybe even embedded in a live VoIP stream) with a short TTL, in order to measure round-trip time to various midpoints on the route. This allows both endpoints to actively probe out where the jitter is coming from. If you don’t care about fine details, the existing traceroute command can do this work already. (A rough sketch of such a probe follows at the end of this comment.)

            It sounds technical and complex, but then again so is VoIP, so is the Internet as a whole. Once packaged in a convenient downloadable utility the mechanism no longer matters. Every Joe will be making probes and taking notes. If the ISP blocks the probes by not responding to expired TTL, every Joe will know that they have been blocked, and will ask why.
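
            For the curious, here is a rough Python sketch of such a probe: it sends UDP datagrams with short, increasing TTLs and times the ICMP “time exceeded” replies from the routers along the path, which is essentially what traceroute does. It needs root for the raw ICMP socket, the destination address below is a placeholder, and repeating the probe at a fixed TTL would give a jitter estimate for that midpoint.

            ```python
            # Traceroute-style probe: send UDP datagrams with an increasing TTL and
            # time the ICMP "time exceeded" replies from routers along the path.
            # Needs root for the raw ICMP socket; the destination is a placeholder.

            import socket
            import time

            DEST = "192.0.2.1"   # placeholder address (TEST-NET-1), not a real service
            PORT = 33434         # traditional traceroute base port
            MAX_HOPS = 16
            TIMEOUT = 2.0

            def probe(dest_ip: str) -> None:
                for ttl in range(1, MAX_HOPS + 1):
                    recv = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
                    recv.settimeout(TIMEOUT)
                    send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
                    send.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)

                    start = time.monotonic()
                    send.sendto(b"probe", (dest_ip, PORT))
                    hop = None
                    try:
                        _, (hop, _) = recv.recvfrom(512)
                        rtt_ms = (time.monotonic() - start) * 1000
                        print(f"hop {ttl:2d}  {hop:15s}  {rtt_ms:6.1f} ms")
                    except socket.timeout:
                        print(f"hop {ttl:2d}  *  no reply")
                    finally:
                        send.close()
                        recv.close()
                    if hop == dest_ip:  # destination answered (ICMP port unreachable)
                        break

            if __name__ == "__main__":
                probe(DEST)
            ```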

  4. “It requires human engineers to precisely describe each type of traffic that is to be blocked”

    But that is nonsense – it doesn’t require humans, and the descriptions don’t need to be precise; if you are blocking all of the Internet apart from “your bit”, imprecise definitions are exactly what you need.

    The rest of your argument – that any ISP blocking large chunks of the internet would lose paying customers – is demonstrably incorrect, since several ISPs do block a lot and don’t suffer as a result (regardless of the ferocious response to Comcast/BitTorrent), and the situation is getting worse, not better.

    • Well sure, an ISP could block its customers’ access to the 99 percent of the Internet that it didn’t control. But that doesn’t seem like a very good business strategy. Who’s going to pay for Internet access that only lets you access a tiny fraction of the web?

  5. Mitch Golden’s comment about strawmen is correct. The effect of latency is much more dramatic than people realize: even for things that should be latency tolerant (e.g. displaying a web page), people don’t tolerate latency very well. Google does a _lot_ of engineering to keep the latency of search results down (and they do it very well). The reason for this is simple: a 100ms increase in latency (seemingly barely perceptible) would likely cost Google hundreds of millions of dollars of revenue per year, because measurably fewer searches would get done. The ISPs know this, too.

    That said, I’m definitely worried that regulation may be worse than the problem. It may be better to watch how market forces evolve before making decisions that are going to be very hard to undo later.

    • mtich golden says

      Actually, Ed Felten was the one who brought up latency, quite some time ago:

      http://www.freedom-to-tinker.com/blog/felten/nuts-and-bolts-network-discrimination

      I think we all agree that regulation of these matters is very tricky, and one doesn’t want to throw the baby out with the bathwater. However, that doesn’t mean that we should trust the market to straighten out these matters – especially since the ISPs are a very small oligopoly. Probably the only reason we haven’t been slammed with these sorts of issues already is that the ISPs are worried about the regulations they would face if they step over the line, so they’ve been very circumspect so far.

    • “Google does a _lot_ of engineering to keep the latency of search results down (and they do it very well).”

      From where I sit (in Sydney), using a fairly typical DSL connection, I can get a response from most Australian ISP websites in less than 30 milliseconds (including Optus, Telstra, PacNet, PIPE, etc.), but google.com.au comes in via asianetcom.net and seems to be hosted in Japan. That gives a ping time of over 150 milliseconds. Either Google don’t do their job as well as you think, or they couldn’t give a stuff about Australian customers (unlikely, since they have an office in Sydney), or they made a cost/value judgement and decided that an extra 120 milliseconds of round-trip latency doesn’t make much difference after all.

      Importantly, Google’s page loads faster than Telstra’s simply because Google’s web design is less cluttered with junk.

      By the way, a good chunk of Australian websites (targeting Australian customers) are hosted offshore purely for price reasons, mostly in the USA. If there was serious commercial value in low latency, they would fork out for local rackspace at hurt-me-plenty Australian prices.

  6. mtich golden says

    This article addresses strawman arguments. There are lots of things the pro-regulation side wants to see addressed, and only some of them are what you discuss. Here are four counterexamples (and there are others):

    1) In many cases, the content going over the network is something the ISP has a financial interest in, and so it may be subtly favored. For example, I have a VoIP phone supplied by Vonage. My broadband is supplied (ultimately) by Time Warner. Now it is relatively simple for TW to add jitter into its connections to the Vonage access points (as Prof. Felten has discussed) in order to render the Vonage service inferior or unusable. What would I then have to do? If I want a VoIP phone, I would have to get it from TW.

    If the ISP has a financial interest in some of the content going over its network, you are asking for trouble. Even if they don’t deliberately sabotage the competitor’s traffic (and we will never know whether they do), we can be relatively sure that they won’t be as quick to fix any problems that arise in it as they would in their own traffic.

    2) Along these lines, you don’t have to look very far into a packet to figure out whether to favor it. For example, if an ISP with lots of users (such as AT&T) wants to shake down Google, it can just decide to slow down packets to and from the YouTube servers. This isn’t a theoretical concern – ISPs have already proposed precisely these sorts of charges. (A sketch of how little inspection this takes follows at the end of this comment.)

    3) The issue of DPI and spying won’t necessarily come about through ISPs deciding to do something on their own. There is a great deal of collaboration among them, and when they decide to do something it may be under the cover of some purported goal other than favoring their own interests. The most likely trojan-horse issue is copyright, which they are already discussing collaborating on.

    http://bits.blogs.nytimes.com/2008/01/08/att-and-other-isps-may-be-getting-ready-to-filter/

    Of course, this could be a lever to all sorts of filtering. Comcast probably thought they would get away with their P2P filtering by advancing this argument.

    4) Your analysis is flawed because at the last mile, there is no free market in connections. In Manhattan, where I live, there are essentially two choices for the connection to your home: Verizon and TW. If they both decide to do something (such as hijacking DNS failures), there’s nothing I can do about it. Many places in the country don’t even have two options.

    The pro-regulation side doesn’t just include “activists”. It includes very hard-headed businesses (such as Google and Microsoft) who know that they will be targets of the ISP oligopoly if the ISPs ever think they can get away with it. They know very well about the open architecture of the internet and its limitations in the face of monopoly (or oligopoly) power.
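
    On point 2, here is a minimal Python sketch of how little inspection that kind of favoritism requires: classification by destination address alone, with no payload inspection at all. The prefixes and queue names are invented for the example, not anyone’s actual configuration.

    ```python
    # Sketch of "shallow" classification: the IP header alone says whose
    # servers a packet is headed for, so no deep inspection is needed to
    # favor or disfavor it. Prefixes and queue names are invented examples.

    import ipaddress

    PARTNER_PREFIXES = [ipaddress.ip_network("198.51.100.0/24")]    # hypothetical affiliate
    DISFAVORED_PREFIXES = [ipaddress.ip_network("203.0.113.0/24")]  # hypothetical competitor

    def queue_for(dst_ip: str) -> str:
        """Pick a forwarding queue from the destination address alone."""
        addr = ipaddress.ip_address(dst_ip)
        if any(addr in net for net in PARTNER_PREFIXES):
            return "fast"
        if any(addr in net for net in DISFAVORED_PREFIXES):
            return "throttled"
        return "default"

    if __name__ == "__main__":
        print(queue_for("198.51.100.7"))   # fast
        print(queue_for("203.0.113.42"))   # throttled
        print(queue_for("192.0.2.10"))     # default
    ```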

    • “Your analysis is flawed because at the last mile, there is no free market in connections.”

      You have no wireless data in Manhattan? Who is sitting on all that empty spectrum?

      I found these guys in about 20 seconds of searching: http://www.dynalinktel.com/dsl.html

      The CLECs lease raw copper (probably from Verizon) so they run their own routers, make their own peering agreements and are completely independent with respect to QoS and filtering. Verizon can bash them a bit on the copper rental charges but existing anti-monopoly laws will prevent that going too far, and if you value some particular service then it isn’t at all unreasonable that you should pay money for what you value. If enough people move to CLECs then Verizon will no longer be in a position to make filtering decisions.

      “The pro-regulation side doesn’t just include ‘activists’. It includes very hard-headed businesses (such as Google and Microsoft) who know that they will be targets of the ISP oligopoly if the ISPs ever think they can get away with it. They know very well about the open architecture of the internet and its limitations in the face of monopoly (or oligopoly) power.”

      When it came to China, there was a choice of obeying government censorship rules or making less profit. Warms the heart to know that both Google and Microsoft made the decision in favour of profits, which they would do again if it were convenient to appease the censorship regulators in any other country. These hard-headed businesses are not seeking an open architecture, nor a level playing field. They are merely looking for ways to maximise their own particular bit of leverage.

  7. Although the internet is decentralized as a whole, there are lots of parts of it that are, at least in the short run, de facto monopolies. There are lots of cases where there’s only one route from here to there. It’s quite possible that the lesson people will take from the Comcast debacle (which really hasn’t worked out that badly for Comcast in terms of market share) is that they didn’t do their meddling carefully enough.

  8. Tim Carstens says

    “But there’s a big difference between interfering with one networking protocol and the kind of comprehensive filtering that network neutrality advocates fear.”

    I’m not sure that this is a fair characterization of the opposing position. While some neutrality advocates are definitely afraid of worse situations, I think that many are reasonably concerned about the types of interference we have already seen. And while the FCC did hold hearings about Comcast’s interference with BitTorrent, it is simply not the case that consumers were generally aware of it.

    The trouble is that consumers aren’t sufficiently educated about the issue for us to rely on a free-market resolution; we either need to educate consumers or look to regulation to protect them. While technical challenges might be enough to prevent large-scale DPI, there are no comparable technical obstacles to simpler schemes that just decline to route certain traffic.

    Heck, even with consumer education it’s not obvious that consumers will demand open networks. They certainly don’t place such demands on the data service on their cell phones. Since many neutrality advocates are specifically worried about an internet with the types of unpredictable restrictions we see in cellular data, it seems to me that this scenario deserves more attention in your analysis.

    • Has it occurred to anyone that the same reasons why it isn’t safe to trust the giant ISPs of this world to make decisions on behalf of consumers apply, many times over, to trusting governments to make these decisions?

      Do a bit of searching on Stephen Conroy’s firewall project and you will see that the people with the biggest axe to grind (and the biggest stick to wield) are your elected representatives (and deep in the bureaucracy, not-so-elected representatives).

      “Heck, even with consumer education it’s not obvious that consumers will demand open networks.”

      Egads! People not wanting what they are told to want. Why, this is dreadful.

      If such foolish people are going to make silly decisions, they can’t be trusted anymore. We can’t trust them with money, because they will buy the wrong thing. We certainly can’t trust them with a vote. Oh, no, no, no, democracy is far too dangerous. It’s not obvious that they will vote for the right candidate.

  9. Why couldn’t an ISP simply use DPI to limit what kinds of packets they DO allow, rather than trying to list which ones they don’t? Wouldn’t that be
    1) easier for the ISP, by your own argument, and
    2) significantly more harmful to Internet innovation?
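
    For concreteness, here is a minimal Python sketch of the allowlist variant the question describes, in the same illustrative signature style as the DPI sketch in the article above: anything the ISP has not explicitly described gets dropped, which is exactly what would happen to new or unanticipated applications.

    ```python
    # Sketch of an allowlist ("default deny") classifier: only traffic that
    # matches a hand-written signature passes; everything else is dropped.
    # The signatures are illustrative, as in the earlier DPI sketch.

    ALLOWED = [
        ("http", lambda p: p.split(b" ", 1)[0] in {b"GET", b"POST", b"HEAD"}),
        ("smtp", lambda p: p.startswith(b"220 ")),
    ]

    def verdict(payload: bytes) -> str:
        """Pass traffic that matches an allowed signature; drop everything else."""
        for name, matches in ALLOWED:
            if matches(payload):
                return f"pass ({name})"
        return "drop"

    if __name__ == "__main__":
        print(verdict(b"GET / HTTP/1.1\r\n"))        # pass (http)
        print(verdict(b"\x13BitTorrent protocol"))   # drop
        print(verdict(b"\x16\x03\x01\x00\x05"))      # drop: even a TLS handshake,
                                                     # unless someone writes a rule for it
    ```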