December 15, 2008

The Journal Misunderstands Content-Delivery Networks

There’s been a lot of buzz today about this Wall Street Journal article that reports on the shifting positions of some of the leading figures of the network neutrality movement. Specifically, it claims that Google, Microsoft, and Yahoo! have abandoned their prior commitment to network neutrality. It also claims that Larry Lessig has “softened” his support for network neutrality, and it implies that, because Lessig is an Obama advisor, his changing stance may portend a similar shift in the president-elect’s views, which would obviously be a big deal.

Unfortunately, the Journal seems to be confused about the contours of the network neutrality debate, and in the process it has mis-described the positions of at least two of its key players, Google and Lessig. Both were quick to clarify that their views have not changed.

At the heart of the dispute is a question I addressed in my recent Cato paper on network neutrality: do content delivery networks (CDNs) violate network neutrality? A CDN is a group of servers that improve website performance by storing content closer to the end user. The most famous is Akamai, which has servers distributed around the world and which sells its capacity to a wide variety of large website providers. When a user requests content from the website of a company that uses Akamai’s service, the user’s browser may be automatically re-directed to the nearest Akamai server. The result is faster load times for the user and reduced load on the original web server. Does this violate network neutrality? If you’ll forgive me for quoting myself, here’s how I addressed the question in my paper:

To understand how Akamai manages this feat, it’s helpful to know a bit more about what happens under the hood when a user loads a document from the Web. The Web browser must first translate the domain name (e.g., “cato.org”) into a corresponding IP address (72.32.118.3). It does this by querying a special computer called a domain name system (DNS) server. Only after the DNS server replies with the right IP address can the Web browser submit a request for the document. The process for accessing content via Akamai is the same except for one small difference: Akamai has special DNS servers that return the IP addresses of different Akamai Web servers depending on the user’s location and the load on nearby servers. The “intelligence” of Akamai’s network resides in these DNS servers.

Because this is done automatically, it may seem to users like “the network” is engaging in intelligent traffic management. But from a network router’s perspective, a DNS server is just another endpoint. No special modifications are needed to the routers at the core of the Internet to get Akamai to work, and Akamai’s design is certainly consistent with the end-to-end principle.
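
To make that lookup concrete, here is a minimal sketch of my own (not from the paper) of the resolution step a browser performs; a CDN’s only trick is controlling the answer that comes back:

```python
import socket

# Resolve a hostname to an IP address, just as a browser does before
# requesting a page. A CDN like Akamai simply operates the DNS server
# that answers this query, handing back the address of a nearby edge
# server; to the routers in between, the lookup is ordinary traffic
# between two endpoints.
def resolve(hostname: str) -> str:
    results = socket.getaddrinfo(hostname, 80, proto=socket.IPPROTO_TCP)
    family, socktype, proto, canonname, sockaddr = results[0]
    return sockaddr[0]  # the address the browser will actually connect to

# Different users may get different answers for a CDN-hosted name.
print(resolve("cato.org"))
```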

The success of Akamai has prompted some of the Internet’s largest firms to build CDN-style networks of their own. Google, Microsoft, and Yahoo have already started building networks of large data centers around the country (and the world) to ensure there is always a server close to each end user’s location. The next step is to sign deals to place servers within the networks of individual residential ISPs. This is a win-win-win scenario: customers get even faster response times, and both Google and the residential ISP save money on bandwidth.

The Journal apparently got wind of this arrangement and interpreted it as a violation of network neutrality. But this is a misunderstanding of what network neutrality is and how CDNs work. Network neutrality is a technical principle about the configuration of Internet routers. It’s not about the business decisions of network owners. So if Google signs an agreement with a major ISP to get its content to customers more quickly, that doesn’t necessarily mean that a network neutrality violation has occurred. Rather, we have to look at how the speed-up was accomplished. If, for example, it was accomplished by upgrading the network between the ISP and Google, network neutrality advocates would have no reason to object. In contrast, if the ISP accomplished it by re-configuring its routers to route Google’s packets in preference to those from other sources, that would be a violation of network neutrality.
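
To illustrate the distinction, here is a purely hypothetical sketch (no real router is configured in Python, and the class and field names are mine) contrasting a neutral first-in-first-out queue with one that lets a favored source’s packets jump the line:

```python
import collections
import heapq

# A neutral router's output queue: strictly first-in, first-out,
# regardless of where each packet came from.
class NeutralQueue:
    def __init__(self):
        self._q = collections.deque()

    def enqueue(self, packet):
        self._q.append(packet)

    def dequeue(self):
        return self._q.popleft()

# A non-neutral queue: packets from one favored source are always
# dequeued ahead of everyone else's. Re-configuring routers to behave
# like this is what would violate network neutrality.
class FavoredQueue:
    def __init__(self, favored_source):
        self._favored = favored_source
        self._heap = []
        self._seq = 0  # tie-breaker preserves FIFO order within a class

    def enqueue(self, packet):
        rank = 0 if packet["src"] == self._favored else 1
        heapq.heappush(self._heap, (rank, self._seq, packet))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._heap)[2]

q = FavoredQueue("google")
q.enqueue({"src": "newgle"})
q.enqueue({"src": "google"})
print(q.dequeue()["src"])  # "google": it cut the line
```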

The Journal article had relatively few details about the deal Google is supposedly negotiating with residential ISPs, so it’s hard to say for sure which category it’s in. But what little description the Journal does give us—that the agreement would “place Google servers directly within the network of the service providers”—suggests that the agreement would not violate network neutrality. And indeed, over on its public policy blog, Google denies that its “edge caching” network violates network neutrality and reiterates its support for a neutral Internet. Don’t believe everything you read in the papers.

Comments

  1. If, for example, it was accomplished by upgrading the network between the ISP and Google, network neutrality advocates would have no reason to object. In contrast, if the ISP accomplished it by re-configuring its routers to route Google’s packets in preference to those from other sources, that would be a violation of network neutrality.

    Let’s run with this example. Presume that some particular ISP is very popular and that two competing hosting companies (let’s call them google and newgle) both want fast access to that ISP. There happens to be a fiber-optic line running into the ISP with 1G of spare capacity. Google orders a 100M link into the ISP, so the telco allots google a share of the 1G spare capacity; then newgle orders a 200M link and also gets a share of the same fiber. Obviously, these shares are virtual services, meted out by deliberate traffic shaping (sketched below) so that a single high-bandwidth fiber behaves like a bunch of parallel lower-bandwidth links. The business model wants the flexibility to sell arbitrary bandwidth exactly as ordered, while the engineering model only lays down fixed sizes of physical connection.

    The process goes on until all the spare capacity has been sold; then a startup comes along and wants to buy 50M, but the capacity is already spoken for. The telco could lay a new fiber, but the startup doesn’t have enough money to place an order big enough to justify it. The smart telco figures out that its 1G fiber is only actually fully used 2% of the time, and that 50M of bandwidth is available on that fiber 95% of the time. So it can sell the 50M to the new startup, with the proviso that it is selling contended bandwidth: sold cheap, with the priority of those packets lowered. Google and newgle still get exactly what they paid for and can’t even see the traffic backfilling their links (nor should they care). People buying bandwidth on the cheap usually get their traffic through and are mostly happy, and yet we have violated “neutrality.”

    If we actually had to lay a physical 100M cable parallel to a 200M cable and a 50M cable, then ALL users would pay more, and resources would be wasted because large amounts of that bandwidth would sit unused most of the time (and upgrades for a given customer would be a nightmare too).
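
    A minimal sketch of the kind of shaping I mean (made-up numbers; Python purely for illustration, since telcos do this in hardware):

    ```python
    import time

    # Token-bucket shaper: each customer's "virtual link" is just a
    # software-enforced share of one shared physical fiber.
    class TokenBucket:
        def __init__(self, rate_bps, burst_bits):
            self.rate = rate_bps        # contracted bandwidth (bits/sec)
            self.capacity = burst_bits  # allowed burst size (bits)
            self.tokens = burst_bits
            self.last = time.monotonic()

        def allow(self, packet_bits):
            now = time.monotonic()
            # Refill in proportion to elapsed time, capped at the burst size.
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= packet_bits:
                self.tokens -= packet_bits
                return True   # conforms to the contracted rate: forward it
            return False      # over the share: queue it (or drop it)

    # google's 100M and newgle's 200M "links" on the same 1G fiber:
    google_link = TokenBucket(rate_bps=100e6, burst_bits=1.5e6)
    newgle_link = TokenBucket(rate_bps=200e6, burst_bits=1.5e6)
    ```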

  2. What an amazing claim: “Network neutrality is a technical principle about the configuration of Internet routers.”

    Since f*cking when?

    “Network neutrality” is all kinds of things depending on who’s using the term, and I don’t think I’ve ever heard any of the prominent advocates use such a tweakish definition.

    • Marc Deveaux says

      It’s not the way most people define it, but technically that is exactly what is being done. When an ISP uses traffic-shaping software to degrade certain kinds of connections, or to insert links to its own affiliates, it is often (if not always) implemented at the routers. Since the routers are part of the basic infrastructure, though, they wind up being invisible to anyone who isn’t familiar with networks: just another beige box that does something they don’t really understand. Which is why only technical blogs will even mention them.

      • Traffic shapers, traffic policers, and Layer 2 switches are the primary means of applying QoS to packet streams, and these aren’t routers. The methods they employ consist of delaying, accelerating, or discarding packets, which are not routing activities.

        Some of the proposed legislation aims to constrain router behavior, but not all of it. Anybody who says “NN means this and only this” is BSing; it has at least three major definitions, all at odds with reality since the Internet is not neutral in its own right. Traffic shaping can either make it more neutral or less neutral, depending on how you define “neutral.”
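
        A toy contrast (hypothetical code, not any vendor’s API): a policer discards non-conforming packets outright, a shaper delays them, and neither one makes a routing decision.

        ```python
        import time
        from collections import deque

        def police(packets, conforms):
            """Policer: forward conforming packets, discard the rest."""
            return [p for p in packets if conforms(p)]

        def shape(packets, conforms, hold_secs=0.001):
            """Shaper: forward conforming packets now, delay the rest."""
            out = [p for p in packets if conforms(p)]
            held = deque(p for p in packets if not conforms(p))
            while held:
                time.sleep(hold_secs)      # delay rather than drop
                out.append(held.popleft())
            return out
        ```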

  3. The important thing is to tighten Net neutrality so that it won’t treat these solutions differently. For example: if traffic from a cache or a direct line is not counted against your monthly download limit on the basis that it is located in-house, a violation has been made.

  4. I think this highlights one of the problems with “Net Neutrality”: it’s a bit of a slippery slope in terms of definitions. Paying an ISP to give your traffic differentially better service is bad (i.e., it violates net neutrality), but what about paying the ISP to increase its capacity and therefore give you better service? That’s good. But what if the other guy’s content merely gets the same service as before (i.e., worse than yours)? Then that’s bad (it violates net neutrality). But if you do it with distributed DNS caching technology (only for your own services), that’s good (it doesn’t violate net neutrality).

    Just how are you allowed to pay ISPs to improve your service relative to the competition? Certainly not with cash. But if you do it with cache (i.e. equipment at their facilities) that’s ok?

    I don’t think the Journal misunderstands the technology so much as it is (intentionally?) taking a different spin on the soft definition of the words “net neutrality”, at least in the mind of the lay public and politicians. Google (and perhaps Lessig) just don’t like that spin – that’s fair, just don’t try to claim the moral high ground because your spin is “better”.

    “Don’t believe everything you read in the papers” – or the web.

    • Net neutrality is about not giving preferential treatment to some party’s traffic; it’s as simple as that.

      Putting a cache in an ISP’s network is not giving preferential treatment to Google — access to the cache is faster because it is on a faster (local) network. The same would be true for any other server within the same network.

      Putting a fat pipe between the ISP and Google is not preferential treatment — traffic going through other routes is not degraded (or otherwise affected) by it.

      On the other hand, allocating half of a shared link’s capacity exclusively for Google would be bad, because it would negatively affect all other traffic (by cutting its pipe in half).
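
      A quick sketch of why (made-up numbers and names, illustrative code only): with a work-conserving neutral link, idle capacity goes to whoever needs it; with an exclusive carve-out, everyone else is capped even while the favored party’s slice sits idle.

      ```python
      LINK_MBPS = 1000.0  # a hypothetical shared link

      def neutral_share(demands):
          """Work-conserving: scale everyone down only when oversubscribed."""
          total = sum(demands.values())
          scale = min(1.0, LINK_MBPS / total) if total else 1.0
          return {who: want * scale for who, want in demands.items()}

      def reserved_share(demands, favored, reservation):
          """Exclusive carve-out: the rest share only the leftover capacity,
          even if the favored party's reservation sits idle."""
          rest = {w: d for w, d in demands.items() if w != favored}
          total = sum(rest.values())
          leftover = max(0.0, LINK_MBPS - reservation)
          scale = min(1.0, leftover / total) if total else 1.0
          out = {w: d * scale for w, d in rest.items()}
          out[favored] = min(demands.get(favored, 0.0), reservation)
          return out

      demands = {"google": 0.0, "newgle": 600.0, "startup": 600.0}
      print(neutral_share(demands))                    # 500 Mbps each
      print(reserved_share(demands, "google", 500.0))  # capped at 250 each
      ```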

      • “Putting a cache in an ISP’s network is not giving preferential treatment to Google”

        Yes it is. You’re providing a shorter path for traffic to reach Google, and discriminating against providers who can’t afford the time or money to co-lo a CDN on every ISP’s network.

        Where you’re talking about gaining competitive advantages, increasing the performance of Google is logically equivalent to degrading the service of everyone that isn’t Google.

        This might not fall into the traditional notion of “network” neutrality, but the end effect is the same. Right now the difference may be minor, but that changes if the ISPs start constricting the onward flow of traffic to the Internet (because it’s a costly business, remember!) and the only companies that achieve reasonable performance are those providing CDNs local to the end user, effectively bypassing the Internet.

        • … discriminating against providers who can’t afford the time or money to co-lo a CDN on every ISP’s network.

          I would be very surprised to see a workable explanation of how to make an economic system “fair” in such a way that a person willing to throw large amounts of additional hardware at a given problem gets the same end result as a person investing minimal hardware.

          This does not happen in graphics rendering, physical equation solving, code cracking or any of the many purposes that computers are put towards. Why should it happen in communication networks? I can think of no other industry where there is some expectation that outcome should be independent of investment.