December 3, 2005

How Would Two-Tier Internet Work?

The word is out now that residential ISPs like BellSouth want to provide a kind of two-tier Internet service, where ordinary Internet services get one level of performance, and preferred sites or services, presumably including the ISPs’ own services, get better performance. It’s clear why ISPs want to do this: they want to charge big web sites for the privilege of getting preferred service.

I should say up front that although the two-tier network is sometimes explained as if there were two tiers of network infrastructure, the obvious and efficient implementation in practice would be to have a single fast network, and to impose deliberate delay or bandwidth throttling on non-preferred traffic.
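
To make this concrete, here is a minimal sketch, in Python, of how a single fast link with deliberate throttling of non-preferred traffic might work. Everything in it (the preferred-site list, the rate cap, the scheduler itself) is an assumption for illustration, not a description of any real ISP’s equipment.

    import time
    from collections import deque

    PREFERRED_SITES = {"paying-site.example"}   # hypothetical
    NON_PREFERRED_CAP_BPS = 1_000_000           # artificial ceiling for everyone else

    class ThrottlingScheduler:
        """One fast link, two queues: the 'two tiers' exist only in the scheduler."""
        def __init__(self):
            self.preferred = deque()
            self.other = deque()
            self.other_bytes_sent = 0
            self.window_start = time.monotonic()

        def enqueue(self, packet: bytes, dest_site: str):
            # Classification is the only "two-tier" step; the wire is the same.
            (self.preferred if dest_site in PREFERRED_SITES else self.other).append(packet)

        def dequeue(self):
            # Preferred traffic is always served first, at full link speed.
            if self.preferred:
                return self.preferred.popleft()
            # Non-preferred traffic is released only while under its cap,
            # accounted per one-second window.
            now = time.monotonic()
            if now - self.window_start >= 1.0:
                self.window_start = now
                self.other_bytes_sent = 0
            if self.other and self.other_bytes_sent < NON_PREFERRED_CAP_BPS / 8:
                pkt = self.other.popleft()
                self.other_bytes_sent += len(pkt)
                return pkt
            return None  # non-preferred packet waits, even on an idle link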

Whether ISPs should be allowed to do this is an important policy question, often called the network neutrality issue. It’s a harder issue than advocates on either side admit. Regular readers know that I’ve been circling around this issue for a while, without diving into its core. My reason for shying away from the main issue is simply that I haven’t figured it out yet. Today I’ll continue circling.

Let’s think about the practical aspects of how an ISP would present the two-tier Internet to customers. There are basically two options, I think. Either the ISP can create a special area for preferred sites, or it can let sites keep their ordinary URLs. As we’ll see, either option leads to problems.

The first option is to give the preferred sites special URLs. For example, if this site had preferred status on AcmeISP, its URL for AcmeISP customers would be something like freedom-to-tinker.preferred.acmeisp.com. This has the advantage of telling customers clearly which sites are expected to have preferred-level performance. But it has the big disadvantage that URLs are no longer portable from one ISP to another. Portability of URLs – the fact that a URL means the same thing no matter where you use it – is one of the critical features that makes the web work, and makes sites valuable. It’s hard to believe that sites and users will be willing to give it up.

The second option is for users to name sites using ordinary names and URLs. For example, this site would be called freedom-to-tinker.com, regardless of whether it had preferred status on your ISP. In this scenario, the only difference between preferred and ordinary sites is that users would see much better performance for preferred sites.

To an ordinary user, this would look like a network that advertises high peak performance but often has lousy performance in practice. If you’ve ever used a network whose performance varies widely over time, you know how aggravating it can be. And it’s not much consolation to learn that the poor performance only happens when you’re trying to use that great video site your friend (on another ISP) told you about. You assume something is wrong, and you blame the ISP.

In this situation, it’s hard to believe that a complaining user will be impressed by an explanation that the ISP could have provided higher performance, but chose not to because the site didn’t pay some fee. Users generally expect that producers will provide the best product they can at a given cost. Business plans that rely on making products deliberately worse, without reducing the cost of providing them, are widely seen as unfair. Given that explanation, users will still blame the ISP for the performance problems they see.

The basic dilemma for ISPs is pretty simple. They want to segregate preferred sites in users’ minds, so that users will blame the site rather than the ISP for the poor performance of non-preferred sites; but segregating the preferred sites makes the sites much less valuable because they can no longer be named in the same way on different ISPs.

How can ISPs escape this dilemma? I’m not sure. It seems to me that ISPs will be driven to a strategy of providing Internet service alongside exclusive, only-on-this-ISP content. That’s a strategy with a poor track record.

Clarification (3:00 PM EST): In writing this post, I didn’t mean to imply that web sites were the only services among which providers wanted to discriminate. I chose to use Web sites because they’re useful in illustrating the issues. I think many of the same issues would arise with other types of services, such as VoIP. In particular, there will be real tension between the ISPs’ desire to label preferred VoIP services as strongly associated with, and supported by, a particular ISP, and VoIP services’ strong incentive to portray themselves as being the same service everywhere.

Comments

  1. Look at it this way.
    My ISP is a Fixed Wireless provider.
    His T-3 provider is 11 miles from his tower via backhaul and is a phone co-op. SBC is having a fit over the fact that I dropped my 4 ISDN lines and 3 voice lines and went to VoIP. Now SBC wants the copper they installed on my property back. So if you think about the pea brains of the mega corps out there, I can understand why they want to charge Google for bandwidth. They are going broke. 30 years ago AT&T knew the data revolution was coming, and they are now crying foul since they did not do anything to profit from it.

  2. Donny Williams says

    Look, everyone: these companies are money hungry and greedy. They are trying to create a method of removing the smaller competition from the Internet market. Basically, anyone trying to compete with larger Internet companies will be severely crippled because of this new “tax”. In addition, these companies are obviously worried that VoIP services are taking away their valuable customers. Let’s face it: these guys don’t legally own the Internet and have no right to reduce performance for services that don’t pay their premiums. This is just another way for big business to control the way that consumers decide to spend their hard-earned money. I think it’s BS that the US government and US businesses think they have a right to regulate and control a global communication system like the Internet.

  3. What I really don’t understand is that I have never gotten the bandwidth that was promised me upon setup. I had DSL, but never got the full speed. I have TWC RR now, and still never get the speed they promise. If they say “always on, always fast”, shouldn’t I expect it?

    I understand the whole QoS argument, but I have never even gotten the basic terms of my agreement. Nowhere in my ToS does it say “5 Mbps, when it is available or convenient/profitable for us.” It says 5 Mbps down/387 kbps up. If they oversold their network capabilities and undercharged to grab customers, that is their fault for making poor business decisions, not mine, and I will refuse to support any company that now charges me because I am using the full extent of the service I was promised. If they are selling me a set bandwidth, charge me for it. Don’t charge me for what I do with it. (Yes, I agree that services like VoIP should include code to better handle errors/lost packets. That is the responsibility of the application, not the physical network infrastructure providers.)

  4. “How can ISPs escape this dilemma?”

    Simply by monopolizing the high-bandwidth option, and/or persuading other high-bandwidth providers to take the same position.

    Remember, in lots of places, the monopolies are local. You want a different option? Move to another town.

    Also, notice the language these people use: “I need this change as an incentive to improve network bandwidth.” They don’t even consider the incentive provided by competitive pressure. “Improve service or die” is not in their language.

    Whether or not they have a monopoly currently, they think as though they do, and they are machinating to get it back.

    I also wonder about backbone access… could they apply the same kinds of techniques to push cable providers off the network? “We’ll reduce your QoS at your access to our backbone. We’ll be sneaky about it, so it’s hard to identify how we do it, or even who’s doing it.”

  5. the zapkitty says

    So the consumer will be faced with two different stories each and every time they view an affected web page.

  6. ISPs will develop a Firefox extension which will display a “Normal” / “Superfast – powered by ISP” icon to give feedback to their customers…

  7. John Mazza says

    Nobody has mentioned another possible push-back from the website operators – they could simply detect the IP address of inbound requests and post a little message about the situation… something along the lines of “Dear SBC subscriber… Please be patient… your ISP is deliberately slowing down your GreatWebSite.com experience.”
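
    A minimal sketch of the push-back this comment describes, assuming the site can map client addresses to ISPs. The ISP name and address block below are made up; a real site would need an accurate IP-to-ISP database:

        from ipaddress import ip_address, ip_network

        # Hypothetical mapping of throttling ISPs to their address blocks.
        # "ExampleTel" and the 203.0.113.0/24 documentation range are placeholders.
        THROTTLING_ISPS = {
            "ExampleTel": [ip_network("203.0.113.0/24")],
        }

        def throttling_isp_for(client_ip: str):
            addr = ip_address(client_ip)
            for isp, blocks in THROTTLING_ISPS.items():
                if any(addr in block for block in blocks):
                    return isp
            return None

        def banner_html(client_ip: str) -> str:
            # Prepend this to the page for visitors from a throttling ISP.
            isp = throttling_isp_for(client_ip)
            if isp is None:
                return ""
            return (f"<div class='notice'>Dear {isp} subscriber: please be patient. "
                    f"Your ISP is deliberately slowing down your experience here.</div>")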

  8. Kiran Nagaraj says

    Option one is definitely hopeless! It should not even be considered. With option one you are taking away the most important USP of the Internet – parallel construction and portability.
    Instead, one could try to lobby for a global ISP organization which would act as the umbrella organization for all ISPs the world over. Every ISP everywhere would come under the purview of this organization for both technical and business standards. This has a lot of advantages, including the ones the present Internet Governance project is trying to provide.

    The point in question here, though, is the channeling of preferred traffic. All websites would pay this ISP organization, so that if Yahoo and Amazon can afford an amount, Google and eBay can too. Likewise, if newwebsitewithlessfunds1.com can afford it, so can newwebsitewithlessfunds2.com. The umbrella organization could then pass this benefit on to its member ISPs based on the traffic that they respectively handled. This would solve a lot of issues, from national competition to monopolisation.

  9. Please see the Washington Post article of 1/22/06.

    http://www.washingtonpost.com/wp-dyn/content/article/2006/01/21/AR2006012100094.html
    ———————————————————————————–
    The Coming Tug of War Over the Internet

    By Christopher Stern

    Sunday, January 22, 2006; Page B01

    Do you prefer to search for information online with Google or Yahoo? What about bargain shopping — do you go to Amazon or eBay? Many of us make these kinds of decisions several times a day, based on who knows what — maybe you don’t like bidding, or maybe Google’s clean white search page suits you better than Yahoo’s colorful clutter.

    But the nation’s largest telephone companies have a new business plan, and if it comes to pass you may one day discover that Yahoo suddenly responds much faster to your inquiries, overriding your affinity for Google. Or that Amazon’s Web site seems sluggish compared with eBay’s.
    ————————————————————————————-

  10. I’m new to this subject, but the key points seem to be:

    Ed Says: “For example, BellSouth is talking about charging sites like Google and Yahoo for the privilege of getting their packets delivered reliably to customers. If Google pays up, and Yahoo doesn’t, then BellSouth customers will see good performance when accessing Google, and worse performance when accessing Yahoo.”

    and

    johnT Says: – so the bandwidth has already been paid for on both ends of the transaction between my computer and Google’s. Now they want someone to pay for it a third time?

    So this “two-tier” system is just another name for a business model that’s been around for a long time, sometimes called “protection money” but let’s call it what it is: extortion.

    If it’s not OK for organized crime syndicates, why should it be OK for ISPs?

    I don’t know; I think it depends on how they implement the thing and make it clear to us customers. By the way, I’ve been suspicious of SBC’s “3 Mbit” service that I got: I can only get lower download speeds, tested on a wide variety of websites at any hour of the day, and it always tops out at about 205 kB/s, which suggests this limit is imposed by the ISP rather than caused by traffic. As you know, that is just about 1.6 Mbit/s. I used to have the “1.5 Mbit” service, which topped out at 1.2 Mbit/s (about 150 kB/s), which was OK, as I understand some throughput is lost with DSL connections. But the new one is not much of an improvement, and it’s supposed to be twice as fast. Does anyone have any thoughts on this, or similar experiences?

    Anyway, for instance with my connection: if they upped their own services while maintaining the speed I’m getting now for the lower ones, and didn’t charge more, I’d be OK with it (disregarding, of course, the other issue that they’re already screwing me right now).

  12. “How can ISPs escape this dilemma? I’m not sure.”

    I can think of a couple of solutions, actually.

    1. redirect requests for http://www.google.com to an interstitial ad that redirects back to the site after a delay. This is similar to a technique used by many for-pay wireless access points, and is also similar to Salon’s advertising system.

    2. custom browsers that display in their chrome whether the site being viewed is a ‘preferred’ site.

    There are a number of service degradation possibilities that can happen above the TCP/IP level, BTW:

    1. manipulating the HTTP headers to stop the browser from caching files. Plenty of other games can be played with caching.

    2. To the extent that ISPs are already using caching HTTP proxies, it would be easy to modify them to return some files faster or slower (in fact, I believe some of these already do some prioritization). Not everything can be cached in the first place, but even Google search results have *some* static page elements, like the logo, that could slow down how fast the page as a whole loads. This could be particularly critical if the piece not being loaded quickly is the AdWords JavaScript file.
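
    A minimal sketch of the header games item 1 in this list describes, assuming a proxy that already intercepts responses (the proxy machinery itself is elided, and the site list is hypothetical):

        # Strip caching headers from responses for non-paying sites, forcing
        # the browser to re-fetch everything on every visit.
        NON_PAYING_SITES = {"slow-site.example"}
        CACHE_HEADERS = ("Cache-Control", "Expires", "ETag", "Last-Modified")

        def degrade_response_headers(headers: dict, origin_host: str) -> dict:
            if origin_host not in NON_PAYING_SITES:
                return headers
            out = {k: v for k, v in headers.items() if k not in CACHE_HEADERS}
            out["Cache-Control"] = "no-store"  # nothing gets cached client-side
            return out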

  13. Remember, we are dealing with people who are still gnashing their teeth because they couldn’t charge $1000 a month for a T1 (1.5 Mb/s) line or $100 a month for ISDN (128 Kb/s). Why couldn’t they? They couldn’t because the cable companies offered high-speed data service at reasonable prices.

    Phone company economists look at the huge number of data line customers paying $40 a month and see a loss of $960 per customer per month. It drives them nuts. That’s why they come up with schemes like these.

    My guess is that if SBC gets serious, they are going to be losing even more customers to cable companies, power companies, and wireless ISPs. The technology keeps changing. Ma Bell is dead.

  14. One other fact I think needs to be explicitly stated is that companies like Google aren’t getting a free ride. They pay their ISP for the bandwidth they use every month, just like everyone else (except they pay a lot more than I do, for much greater bandwidth).

    So, I’m paying for the bandwidth coming and going from my computer, and Google is paying for the bandwidth coming and going from their computers – so the bandwidth has already been paid for on both ends of the transaction between my computer and Google’s. Now they want someone to pay for it a third time?

    How do I get into this racket?

  15. Mr. Felten,

    Quite possibly they don’t, but I don’t think many people notice the text of their URL either. They either use a hyperlink, or they bookmark a site, after which they never look at the URL.
    More importantly, I don’t think most users would care what it meant anyway (either the emblem or the URL text). They would just learn to avoid the slower sites, as others here have mentioned. Users don’t have loyalty to an ISP; they’re interested in the individual sites. So it seems to me that the ISP is in the position of either trying, through branding, to get subscribers to blame the sites for the slowdown, or (for users who don’t know or don’t care why the slowdown is occurring) simply losing subscribers because users now avoid the slower sites. Most people would probably feel the former is a little disingenuous, and the latter simply loses customers directly. Neither choice looks helpful to the ISPs’ profits.

  16. Ed,

    I agree that ISPs don’t want to maximize utilization. I believe most network operators would say that the only way to operate a highly available network is with links at around 50% utilization (so links can safely absorb shifts in traffic when, e.g., failures occur). Indeed, it is precisely for this reason that ISPs’ options for offering gold-plated service are somewhat limited; if congestion is already rare within the ISP’s network, few customers will see value in the promise to deviate from network neutrality in those rare instances when congestion does occur.

    It’s one thing to withhold a value-added service that could be sold, but actively degrading an existing service is fraught with technical and business risks. I would like to think that the network operations folks at a large ISP would push back pretty hard against any proposal that might destabilize the best-effort service they currently provide. Already ISPs make (some? most?) of their profit from commercial customers who pay a premium for high availability (e.g., an SLA promising a guaranteed level of uptime).

  17. I think it’s foolish to think that customers (unless educated) will have the understanding to blame this kind of performance degradation on their ISPs (as if most of them could do anything even if they did). The performance of the net is highly variable by site, by time of day, by state of world news, by weather and a zillion different variables. If foo.com’s service starts to be consistently lousy, most people will just assume that FooCorp hasn’t been investing enough in servers or other infrastructure (which in a sense will be true, if you consider QoS bribes to be infrastructure). I do wonder, though, whether the ISPs’ terms of service with users have been updated to include the right to alter quality based on payments from a third party.

    What does the endgame of this kind of double-ended charging look like? For example, in cases where a single company can’t provide end-to-end connections between Foo and its users, will everyone whose lines the bits transit get to exact a separate toll? Will there be another round of consolidation so that only end-to-end providers are still in business, and there are effectively half a dozen redundant internets connecting all the big players to the unwashed masses?

  18. This will also affect their common carrier status if content companies, who will be required to pay for preferred status, find infringing copyrighted material on the networks.

  19. “Google is not discussing sharing of the costs of broadband networks with any carrier. We believe consumers are already paying to support broadband access to the Internet through subscription fees and, as a result, consumers should have the freedom to use this connection without limitations.”

    Google’s Barry Schnitt, Networking Pipeline, January 18, 2006

    http://www.networkingpipeline.com/blog/archives/2006/01/google_we_wont.html

  20. Jeff Hildebrand says

    Doesn’t this idea fly in the face of the whole “common carrier” thing? If this idea gains any traction, how long will it be before someone says “well, if you can slow down non-preferred traffic, why can’t you stop illegal traffic?”

  21. Dan (and Jonathan),

    I don’t think it’s right to assume that ISPs will try to maximize the utilization of their networks. Instead, they’ll maximize the revenue they can derive. And it’s well-known from lots of businesses that it can be a profit-maximizing strategy to withhold an extra service that you can provide at no extra cost, and then charge the customer for it.

    Think of first-class seats on airplanes and trains, which are sometimes empty even though there are people sitting back in coach who could ride in the nicer seats at no extra cost to anyone. Think, also, of software products that come in Home and Professional versions, even though the difference is just a bit that is flipped somewhere and it would cost the vendor nothing extra to give everybody the Professional level product. In all of these cases, the vendor’s strategy is to differentiate the product in order to collect more money from those customers who are willing to pay more for better service, while still collecting some revenue from customers who care more about price.

    Obviously, it’s often in the interest of a service provider to give customers extra service when they can, especially in highly competitive markets. Many providers do this routinely. But it’s not safe to assume that providers will never withhold extra services that they can provide at no cost.

    In the case of ISPs, if you believe their CEOs’ rhetoric, they’re interested in doing more than just prioritizing traffic under congestion. They say that sites should have to pay to get good service, period.

  22. Edward Kuns says

    Dan,

    ISPs *do* have multi-tiered pricing already. You can pay for various capped download and upload rates. I pay more for 3 Mbit down and 3/8 Mbit up; I could pay less for one third that rate. And ISPs implement these rate limits, technically, by dropping packets.

    Thus, this technology exists today. The difference is that they are now proposing to drop packets not because YOU didn’t pay, but because the remote server didn’t pay. That’s like UPS arbitrarily delaying package delivery unless BOTH sides of the transaction pay extra. If ISPs are allowed to do this sort of thing, we risk the Internet becoming very different — as someone else pointed out above. Back in the day, instead of the Internet you had CompuServe, Delphi, AOL, GEnie, and others. Some services were on many of those systems, some only on one, and the whole user experience was less rich as a result. The explosive growth of the Internet came precisely because it was a uniform, nondiscriminatory medium.
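
    For what it’s worth, the drop-based rate limiting this comment describes is commonly modeled as a token bucket. A minimal sketch follows; the parameter values are illustrative, not any real tier:

        import time

        class TokenBucket:
            """Drop-based rate cap: packets arriving on an empty bucket are discarded."""
            def __init__(self, rate_bytes_per_sec: float, burst_bytes: float):
                self.rate = rate_bytes_per_sec
                self.capacity = burst_bytes
                self.tokens = burst_bytes
                self.last = time.monotonic()

            def allow(self, packet_len: int) -> bool:
                now = time.monotonic()
                self.tokens = min(self.capacity,
                                  self.tokens + (now - self.last) * self.rate)
                self.last = now
                if self.tokens >= packet_len:
                    self.tokens -= packet_len
                    return True   # forward the packet
                return False      # drop it: the tier's rate is exhausted

        # e.g. a "3 Mbit down" tier: 3,000,000 bits / 8 = 375,000 bytes per second
        downlink = TokenBucket(rate_bytes_per_sec=375_000, burst_bytes=50_000)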

  23. Dan Simon’s point about the economic (non)sense of degrading bandwidth and the observation that the links of large ISPs are typically way underutilized are both very well taken.

    I am also wondering whether service degradation makes sense from a practical technical standpoint. It seems to me that there are three possible ways to degrade service: throttling bandwidth, adding delay, and introducing unnecessary packet loss. Throttling bandwidth can readily be done at access links, probably using the existing routers already deployed, but it may not be effective against a service like VoIP, whose performance depends more on latency and loss than on bandwidth (assuming bandwidth isn’t throttled below the minimum amount required for a call). Adding extra delay seems like a huge headache from a technical standpoint, because it ties up router buffer resources for the packets receiving degraded service – resources which could perhaps be better used to ensure a low-loss service for the high-paying traffic. I wonder whether introducing delay is even a feature in routers, which it would have to be for an ISP to seriously consider it as a possibility. (I suppose one could send low-paying traffic on sub-optimal routes to add delay, but again this uses ISP resources in exactly the wrong way.) Discarding packets unnecessarily sounds crazy, but maybe someone could convince me otherwise.

    In the end, I think Dan is correct in his speculation that rather than degrading the low-paying traffic’s service, the best an ISP can offer high-paying traffic is a fairly limited guarantee: something like (a) that in times of congestion the low paying traffic will be preferentially dropped and (b) a queueing discipline that favors high-paying traffic to minimize latency when queues build up at routers.

    Regarding Rob Kurtz’s point about the ISP offering an Akamai-like service to preferred customers: it is possible to use a network of co-located end-hosts to do things beyond intelligent caching for HTTP. There are, in theory, overlay-network services that could improve performance for other types of applications as well.

  24. The BBC currently has some sort of “peering” arrangement with major ISPs whereby they effectively move their servers onto the ISP’s network, saving traffic for both parties. (I’m not entirely sure what this means, but the effect would seem to be similar to that of a two-tier internet – customers of these ISPs would not only get faster response times from the BBC website than they otherwise would, but faster response times than from “competing” sites.) I would imagine, e.g., Akamai has some sort of similar arrangement with big ISPs.

    See http://support.bbc.co.uk/support/peering/, http://support.bbc.co.uk/support/network/ for a network diagram.

  25. My response is pretty long, so I wrote it up on a blog posting of my own.

    http://ideas.4brad.com/node/337

  26. If we are talking corporate “business model” there are serious anti-consumer implications. If we are talking technology, I don’t have the background to comment.

    The Internet and personal computers have become a success story because, on the whole, they have a “universal” interface. As much as we may not like Microsoft’s business practices, the fact that Windows is (cringe) a de facto “universal” language allows us to easily communicate with each other – the whole world, actually. The current trend in corporate business practices, however, is to make products proprietary in nature, and that includes Microsoft. I see the Sony rootkit debacle as falling into this trend: I believe that Sony was (is) attempting to implement a proprietary system where you need a Sony player to listen to a Sony CD.

    Before the Internet exploded into the public consciousness, there was CompuServe. CompuServe was a large network of mainframe computers, which you logged into and which offered a wide variety of services. One could argue that it was a pseudo-Internet. Ed’s post seems to imply the resurrection of a CompuServe-like service in new clothing. Essentially, the Internet “turf” would be divided up into pseudo-corporate territories. So instead of simply logging onto the Internet as we do now, in the future you would log into the pseudo-world of LaLa Land USA, Inc., which provides you access to its affiliates, partners, and associates. Direct access by the consumer to the Internet might be limited and require some sort of passport. Come to think of it, Microsoft seems to have already developed the “passport” concept.

    Based on the current trend in business models towards proprietary technologies, we are talking about the balkanization of the Internet. The end result: the Internet as a Tower of Babel, where we are prevented from freely communicating.

  27. I don’t agree that competition in the commercial ISP market is in such a weak state that ISPs can get away with gratuitous degradation of service. If it were, they’d have long ago introduced two-tier pricing for consumer customers, with non-“preferred” users getting packets dropped randomly (out of spite, as it were). As things stand, though, any ISP that tried that would face massive defections to ISPs willing to provide actual “best effort” connectivity.

    In general, absent conditions of scarcity–and we’re not exactly living in a world of scarce bandwidth these days–wasting a capital good to avoid selling it below a selected (profit-making) price point makes no economic sense. That’s why, for example, airlines sell last-minute seats very cheaply–say, to “standby” customers–rather than let them go empty.

    More likely, “preferred” Internet service would come into play (only) when bandwidth really is scarce–during “flash crowd” incidents, when congestion hits individual routers, during DDoS attacks, and perhaps sometimes at or near the last hop, where pipes are sometimes too narrow for all the contending traffic. In those scenarios, auctioning off scarce bandwidth actually makes good economic sense.

    Assuming, then, that “preferred” service only buys priority at these points of congestion, do you still worry about it?

  28. Dan,

    Certainly ISPs have been selling bandwidth to customers for a long time. Customers who pay more get better service. But the practice has been that it’s up to customers to decide how to use the bandwidth they buy.

    What’s new here is that residential ISPs are talking about charging web sites and other content providers for access to the ISPs’ customers. For example, BellSouth is talking about charging sites like Google and Yahoo for the privilege of getting their packets delivered reliably to customers. If Google pays up, and Yahoo doesn’t, then BellSouth customers will see good performance when accessing Google, and worse performance when accessing Yahoo.

    Given that one goal of this approach is to extract maximum revenue from web sites and service providers, it’s clear that ISPs are likely to block and delay traffic from non-preferred sites even when the network is not congested, in order to increase sites’ incentive to buy preferred service. So it’s not just a question of which packet to drop first under congestion.

    This is not inherently evil, but it does raise complicated questions of economic efficiency, given the relatively weak state of competition in the commercial ISP market.

  29. I don’t understand why anyone considers “two-tier” service such a departure. Right now, ISPs offer several tiers of service. At each tier, performance degrades massively when a certain traffic rate is reached. In some cases, this performance degradation happens at extremely slow rates, like 56Kbits/s. To avoid this degradation, customers pay extra to upgrade to a higher tier of service.

    Does this mean that ISPs are deliberately degrading the service of lower-tier customers for the benefit of “preferred” higher-tier customers, or that they’re on a slippery slope leading to some kind of insidious content-control or censorship regimen? And if not, what difference does it make if the non-preferred traffic is dropped off the end of a queue at a router other than the last-hop one?

  30. I added a clarification to the original post, to say that I was using HTTP/Web service as an example, but the same kinds of issues I raised would appear for other services, such as VoIP and TV.

  31. Bill Tedeski says

    Your argument works for sophisticated users, but not for the average user.

    The average user will see it as a low-cost or high-cost network option and will frequent sites that perform better. In this case it would be to the advantage of “ACME Road Runner Travel” to pay the ISP so that the “Black Bart Tours” site seems slower. This way Grandma books her steamboat passage on ACME’s site, not Bart’s.

    The sophisticated user, on the other hand, knows that it is low-speed or high-speed networking that he is buying, and that the high-speed option will allow fast VPN access and better performance when he books his moon lander passage on Bart’s site.

    When you look at it this way, the ISP strategy makes cents.

  32. I think the solution, from the ISP’s point of view, is easy: don’t change the URLs, but do the performance degradation in a way that announces itself. For instance, force all HTTP transactions through a transparent proxy, and if it’s a transaction bound for a site that you want a fee from but that refuses to pay the fee, then show the user an ad before taking them to the site. Prominent on the ad screen would be whatever language you can write to try to convince the user that this is the site’s fault for not paying you.

    Something else to bear in mind: this isn’t really about HTTP. This is about VOIP and television, where “degraded performance” is not merely annoying but renders the service unusable. Phone companies want to sell you Internet access while still forcing you to deal with them for your phone service; and cable companies want to sell you Internet access while still forcing you to deal with them for your television service. If the law says they aren’t allowed to do straight-out “Buy this or you can’t buy that” tied selling, then they’ll use any means necessary to force that choice on you without technically breaking the law.
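
    A minimal sketch of the interstitial idea from the comment above (and from comment 12), assuming a transparent proxy that can substitute its own HTML. The site list, delay, and skip_ad marker are all invented, and the actual pass-through proxying is elided:

        NON_PAYING_SITES = {"slow-site.example"}
        AD_DELAY_SECONDS = 5

        def interstitial_for(host: str, query: str):
            """Return interstitial HTML for a non-paying site, or None to pass through."""
            if host not in NON_PAYING_SITES or "skip_ad=1" in query:
                return None
            # A meta-refresh sends the browser on to the real site after the delay.
            return (
                f"<html><head><meta http-equiv='refresh' "
                f"content='{AD_DELAY_SECONDS};url=http://{host}/?skip_ad=1'></head>"
                f"<body>{host} has not paid your ISP for preferred delivery; "
                f"continuing in {AD_DELAY_SECONDS} seconds…</body></html>"
            )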

  33. The ISPs need to be careful with this. If network neutrality isn’t mandated, then there is nothing to stop someone higher on the food chain, like Microsoft or Intel, from building ISP preference into the chips/OS.

    Imagine one of these companies demanding payment from an ISP, or else it will degrade service at the TCP/IP stack level for all sites.

  34. I thought the two-tier approach wasn’t so much to collect a tax from “premier” web sites as to stanch the loss of revenue from people switching to services like VoIP. That way the carriers either get paid when you make a “normal” long-distance call, get paid by your VoIP service provider (and indirectly by you) not to delay your VoIP packets, or they don’t get paid (much) and the user gets degraded service – which will be much more noticeable in a voice call than in a page load.

    IIRC, web traffic is a shrinking (though still significant) percentage of network traffic.

  35. Two-tiering seems like it would have more of an effect on small-time e-commerce sites and the colos that serve them.

    User behavior studies show that users frequently leave slow-to-load sites before the first page comes up. Recent example: a 5-second PHP delay chased away 50% of visitors.

    So if you’re running the example.com e-commerce site, and you have a choice of putting your servers at Big Colo Company A, which has preferential peering with the retail ISPs that your customers use, or Big Colo Company B, which doesn’t, you’re likely to go with A. Why should you spend ad and promotion money for incoming traffic and then drop half of new users on the floor because you didn’t spend a little more on hosting (a small fraction of your budget)?

    This creates a nice opportunity for an analyst firm to sell a convenient table of performance data for retail ISP/colo combinations.

  36. Perhaps the ISPs would take steps to improve performance of “premier” sites. One way would be to provide a service similar to what Akamai provides today, where you hit the main site, it figures out where you are coming from, and points you to a nearby cache rather than the actual site. The ISPs would operate the caches (or call them mirrors if you like) which would improve performance for those sites which pay up.

    Admittedly, this works best for real HTTP-type traffic rather than other apps such as mail, P2P, etc.
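
    A minimal sketch of that redirection step, assuming the “main site” can map client addresses to ISP-run mirrors. All networks and mirror hostnames here are invented:

        from ipaddress import ip_address, ip_network

        # Hypothetical map from subscriber address blocks to ISP-hosted caches.
        ISP_MIRRORS = {
            ip_network("198.51.100.0/24"): "cache1.isp-a.example",
            ip_network("203.0.113.0/24"): "cache2.isp-b.example",
        }

        def mirror_redirect(client_ip: str, path: str):
            """Return (status, location) for an HTTP 302 to a nearby cache, or None."""
            addr = ip_address(client_ip)
            for net, mirror in ISP_MIRRORS.items():
                if addr in net:
                    return "302 Found", f"http://{mirror}{path}"
            return None  # no ISP mirror: serve from the origin as usual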

  37. Well, I’m not sure I get this. You’re talking about a two-tier internet, but you’re referring only to web sites in your argument. Filtering web traffic is one way to technically go about applying a two-tier regime to the web.

    Of course, lots of traffic (including Rich Internet Application traffic) is not HTTP POST/GET. Are you talking about two tiers for the other traffic as well? If not, I think it would be very hard for ISPs to cap your web surfing speed, as we would probably see Tor-like tunneling to get around the issue.

    If, on the other hand, they are talking about imposing a literal two-tier system on ALL data, where out-of-network stuff is slower than in-network stuff, that’s a different story. What will happen to my email? I have my own domain name hosted at a hosting company which I pay. I frequently use different ISPs to collect my email. Does this mean I will have to suffer slowdowns?

    I have a different topic I would like to see you talk about some time. I have been very interested in censorship in certain countries for a while. I was wondering what you thought about Microsoft making a “trusted” computing platform targeted towards repressive governments. How do you think this would play out for them? What technical hurdles would there be? Why hasn’t Microsoft already explored this market?

  38. Chris,

    Do people really notice those little emblems? I don’t think I do.

  39. “They want to segregate preferred sites in users’ minds, so that users will blame the site rather than the ISP for the poor performance of non-preferred sites;”

    If it’s a simple matter of branding individual sites to the users, the ISP could ask the subscribers of the preferred service to include a prominent emblem on their site (similar to the VeriSign security tags). This allows for user discrimination without affecting the URLs.
    However, this strikes me as an unsavoury marketing tactic. As you said, the ISP isn’t providing a special service for a special price; they’re deliberately ‘breaking’ the service for those that don’t pay more. So branding sites (however it’s done) is really just an attempt to distract users from the real cause of the slowdown: the ISP’s intentional decision to cause it.