April 18, 2024

Discrimination Against Network Hogs

Adam Thierer has an interesting post about network neutrality over at Tech Liberation Front. He is reacting to a recent Wall Street Journal story about how some home broadband service providers (BSPs) are starting to modify their networks to block or frustrate network applications they don’t like.

Why would a BSP discriminate against an application’s traffic? The standard scenario that people worry about is that a BSP hinders traffic from Vonage or some other VoIP application, because the BSP wants to sell phone service to the customer and VoIP competes with that phone service. One can cook up a hypothetical like this whenever a BSP wants to sell an application-level service. The standard response to this worry is to suggest “net neutrality” regulation, which would require BSPs to carry all traffic on an equal footing, regardless of which application or protocol is used. There is a complicated literature about the economics of net neutrality; for now, suffice it to say that net neutrality regulation can help or hurt, depending on the precise circumstances.

Thierer opposes net neutrality regulation. He seems especially worried that neutrality might require BSPs to treat all customers the same, regardless of how much network traffic they generate. If a few customers use lots of bandwidth this will leave less for everybody else, or alternatively will require the BSP to upgrade the network and pass on the cost neutrally to all users. It’s better, he argues, to let BSPs price differentially based on bandwidth usage.

It’s hard to argue with that proposition. I don’t think any reasonable net neutrality advocate would object to a BSP discriminating or pricing based solely on bandwidth usage. They would of course object if a BSP blocked a particular app and rationalized that act with vague excuses about saving bandwidth; but a real bandwidth limit ought to be uncontroversial.

(Technically, customers already have bandwidth limits, in the sense that a given class of service limits the maximum instantaneous bandwidth that a customer can use. What we’re talking about here are limits that are defined over a longer period, such as a day or a week.)
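
To make the distinction concrete, here is a rough sketch, in Python, of the bookkeeping a longer-period limit would involve. The cap and window are made-up numbers, and a real BSP would of course do this inside its provisioning and billing systems rather than in a little script like this.

    import time
    from collections import deque

    class VolumeCap:
        """Tracks bytes transferred over a rolling window (say, one week)
        and reports whether a customer has exceeded a volume cap."""

        def __init__(self, cap_bytes, window_seconds):
            self.cap_bytes = cap_bytes
            self.window_seconds = window_seconds
            self.samples = deque()      # (timestamp, byte_count) pairs
            self.total = 0

        def record(self, byte_count, now=None):
            now = time.time() if now is None else now
            self.samples.append((now, byte_count))
            self.total += byte_count
            # Drop samples that have aged out of the accounting window.
            while self.samples and self.samples[0][0] < now - self.window_seconds:
                _, old = self.samples.popleft()
                self.total -= old

        def over_cap(self):
            return self.total > self.cap_bytes

    # Illustrative numbers: a 10 GB cap measured over one week.
    weekly_cap = VolumeCap(cap_bytes=10 * 10**9, window_seconds=7 * 24 * 3600)
    weekly_cap.record(2 * 10**9)
    print(weekly_cap.over_cap())   # False until cumulative usage passes the cap

By contrast, the instantaneous limit built into a service tier is enforced packet by packet, regardless of how much the customer has transferred over the week.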

It’s already the case that some customers use much more bandwidth than others. Thierer quotes a claim that fewer than 10% of Time-Warner customers use more than 75% of bandwidth; and another BSP makes an even stronger claim. This isn’t a surprise – this kind of business is often subject to an 80/20 rule (80% of the resources used by 20% of the customers) or even a 90/10 rule.

But will ISPs actually apply bandwidth limits? Here’s Thierer:

This raises the most interesting issue in this entire debate: Why is it that BSPs are not currently attempting to meter broadband usage and price it to account for demand and “excessive” usage by some users? In my opinion, this would be the most efficient and least meddlesome way of dealing with this problem. Per-minute or per-bit pricing schemes could help conserve pipe space, avoid congestion, recover costs and enable BSPs to plow the savings into new capacity / innovation. Despite this, no BSP seems willing to engage in any sort of metering of the pipe. Why is that?

I think there are two reasons that BSPs have so far been unwilling to price discriminate. First, broadband operators are probably concerned that such a move would bring about unwanted regulatory attention. Second, and more importantly, cable and telco firms are keenly aware of the fact that the web-surfing public has come to view “all you can eat” buffet-style, flat-rate pricing as a virtual inalienable right. Internet guru Andrew Odlyzko has correctly argued that “People react extremely negatively to price discrimination. They also dislike the bother of fine-grained pricing, and are willing to pay extra for simple prices, especially flat-rate ones.”

So if BSPs aren’t willing to bandwidth-discriminate now, and doing so would anger customers, why would we expect them to start discriminating in the future? It’s not enough to point to a 90/10 rule of bandwidth usage. If, as seems likely, a 90/10 rule has been operating for a while now, and BSPs have not responded with differential pricing, then it’s not clear why anything would change in the future. Perhaps there is data showing that the customer-to-customer imbalance is getting worse; but I haven’t seen it.

Ultimately, BSPs’ general refusal to bandwidth-discriminate would seem to contradict claims that bandwidth discrimination is necessary. Still, even net neutrality advocates ought to support BSPs’ freedom to bandwidth-discriminate.

Alert readers have surely noticed by this point that I haven’t said whether I support net neutrality regulation. The reason is pretty simple: I haven’t made up my mind yet. Both sides make plausible arguments, and the right answer seems to depend on what assumptions we make about the markets and technology of the near future. I’ll probably be talking myself through the issue in occasional blog posts here over the next few weeks. Maybe, with your help, I’ll figure it out.

Comments

  1. Old Smokie looks like the same commenter as Moogle over at Om Malik’s site: http://gigaom.com/2006/01/06/att-verizon-bellsouth-google/. He is equally uninformed, as well as a coward for posting anonymously.

  2. As far as Odlyzko and his arguments for a tiered internet go,
    take a look at US patent number 6,295,294: http://tinyurl.com/gcxke

    “Inventors: Odlyzko; Andrew M. (Berkeley Heights, NJ)
    Assignee: AT&T Corp. (New York, NY)”
    …..
    “The network is partitioned into logical channels and a user incurs a cost for use of each of the logical channels. The logical channels differ primarily with respect to the cost to the user. ”

    So he’s an AT&T employee that filed the patent for the tiered internet. No wonder he’s all for it.

  3. For a start, ISPs should be neutral on ports. For an ISP the port is just two bytes in the packet.

    They could still offer QoS, but the customer, not the ISP, should make all QoS decisions.
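
    A rough sketch of how an endpoint can already express its own QoS preference, by setting the DSCP bits on its own socket (Python, Linux-style socket options; the address and payload are placeholders, and whether any network along the path honors the marking is entirely up to the carriers):

        import socket

        # DSCP "Expedited Forwarding" (46), shifted into the upper six bits of
        # the IP TOS byte. Routers that honor DSCP would prioritize this traffic.
        DSCP_EF = 46
        TOS_VALUE = DSCP_EF << 2

        # A UDP socket such as a VoIP app might use; the address is a placeholder.
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)  # Linux/BSD
        sock.sendto(b"voice payload", ("192.0.2.10", 5004))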

  4. I think (in part because I’ve read the next post) that in thinking about this it’s necessary to separate (as Roger Weeks touches on in passing) network-congestion and total-transfer-limit issues from monopoly-rent issues. Anyone who owns the last-mile franchise is going to be deeply interested in degrading services provided by others with which they compete (and in the case of VOIP that’s regular phone service as well as in-house or partnered VOIP services). Network congestion, on the other hand (unless it loses large numbers of customers), is a great marketing tool.

    Transfer-limit restrictions seem to work fairly well for hosting companies, so it seems likely that it’s not the technical or even (in the medium term) the user-acceptance issues that keep them from being rolled out on the last-mile side. I think it’s more a combination of (a) the fact that simpleminded restrictions wouldn’t really solve many congestion problems (if everyone just happens to want to use their allocation right this minute) and (b) the precedent that such restrictions would set for transparency and neutrality, thus limiting broadband providers’ market-control options.

  5. Wes Felter says

    Sounds like Chris Smith should talk to Mark Cuban, who also proposed uncapped, unlimited local traffic: http://www.blogmaverick.com/entry/1234000523062839/

  6. David Harmon says

    First of all, it’s great to see somebody who’s *capable* of reserving their decision on these issues! I agree that the question isn’t played out far enough to make final decisions.

    I would also like to point out that “bandwidth usage” isn’t the only issue to consider here! VOIP and video feeds aren’t just high-bandwidth, they *also* demand low latency. “Backhaul” transfers such as Usenet, E-mail, and file transfer (including much P2P activity) can send a lot of bits, but they don’t really need to proceed at maximum speed, all the time. IM and Web want something in between, where seconds might count, but milliseconds certainly don’t. There ought to be a tiering opportunity there, but yes, it involves weakening the end-to-end principle. At least, endpoints would need to specifically ask for low latency (if they care). A rough sketch of that kind of class-based scheduling follows this comment.

    Oh yes, and Earthlink already does some bandwidth limiting: if a consumer-level high-speed account goes over its (high) monthly limit for transfers, it gets throttled to 64Kbps. IIRC, the customer can buy off the limit or upgrade to a business account. Not as sophisticated as it could be, but livable.
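
    A rough sketch of the kind of class-based scheduling the latency point suggests, in Python. The class names and packets are made up for illustration, and a real scheduler would add weights or deficits so bulk traffic is not starved:

        from collections import deque

        # Three illustrative traffic classes, ordered by latency sensitivity:
        # VoIP/video, then IM/Web, then bulk transfers like e-mail and P2P.
        CLASSES = ["realtime", "interactive", "bulk"]
        queues = {name: deque() for name in CLASSES}

        def enqueue(packet, traffic_class):
            queues[traffic_class].append(packet)

        def dequeue():
            # Strict priority: always serve the most latency-sensitive
            # non-empty class first.
            for name in CLASSES:
                if queues[name]:
                    return queues[name].popleft()
            return None

        enqueue("rtp frame", "realtime")
        enqueue("torrent chunk", "bulk")
        print(dequeue())   # the RTP frame goes out ahead of the bulk transfer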

  7. Chris Smith says

    There is a third option, which I have occasionally seen in the Toronto market – time-of-day tiering.

    This recognizes the idea that network congestion is a problem only if many other users are affected. So, the ISP uses volume caps (5GB, 30GB, etc.) but traffic during certain time periods (such as 1AM to 6AM) does not contribute to the volume metering. Interestingly, even if there is congestion at these times, since the congestion is only among the same class of (high-usage) users, it is less likely to be a business problem. This option turns the usual negative customer reaction on its head: rather than charging you more when it is expensive, I charge you less when it is low-cost. (A rough sketch of this kind of metering follows this comment.)

    Another option I would *like* to see, but never have, is to recognize that the congestion usually is not within an ISP’s network, but is congestion for the relatively scarce external bandwidth. Ideally a subscriber would be charged only for their external bandwidth usage, and – more importantly – there would be some way to discriminate between internal and external usage. This would effectively allow low-cost communication to “nearby” users in the network, and if appropriate apps can measure the cost to various nodes (which apps like BitTorrent already do to some extent) and route accordingly, then the pricing model of the ISP would contribute to more effective and efficient usage of all bandwidth.

    This is a deeply complex topic, as a connection “to the Internet” is composed of multiple links of varying speed, cost, and over-subscription. A perfect pricing model would suffer from the high transaction costs needed to determine what the “correct” price is, so some pricing inefficiencies will always exist.
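
    A rough sketch of the off-peak-exempt metering described above, in Python. The hours, cap, and byte counts are made-up numbers for illustration:

        from datetime import datetime

        OFF_PEAK_START, OFF_PEAK_END = 1, 6    # 1AM to 6AM, as in the example above
        MONTHLY_CAP_BYTES = 30 * 10**9         # e.g. a 30GB cap

        def metered_bytes(byte_count, when=None):
            # Only peak-hour traffic counts toward the monthly cap;
            # off-peak traffic is free, as in the Toronto-style plans above.
            when = when or datetime.now()
            if OFF_PEAK_START <= when.hour < OFF_PEAK_END:
                return 0
            return byte_count

        usage = 0
        usage += metered_bytes(500_000_000, datetime(2024, 4, 18, 3, 0))   # off-peak: free
        usage += metered_bytes(500_000_000, datetime(2024, 4, 18, 20, 0))  # peak: counted
        print(usage, usage > MONTHLY_CAP_BYTES)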

  8. Metering and cutting off has been happening on Hughes’ satellite internet network for years.

  9. In the UK bandwidth caps are fairly common. My 2Mbps ADSL service has a 30GB monthly limit, and when I hit that I get shunted down to 128Kbps with the option to buy additional “credits”.

    In theory, if I downloaded stuff at maximum speed then I’d hit my cap fairly quickly, but in practice I rarely come close to using it all in a month, and for “normal” users I imagine it’s never going to be an issue. It seems like a reasonable compromise: it stops the hogs from consuming all the bandwidth while still allowing moderate users to get the speed they’re paying for…

  10. If blocking an app is considered to violate a ‘basic liberty’ in Rawls’ Theory of Justice (principle 1), it is unjust to trade this ‘freedom’ for a lower priority ‘social good’ such as lower pricing or more bandwidth.

    Rawlsians would argue (principle 2) that discriminatory throttling / pricing is acceptable as long as a) everyone can access the ‘opportunity’ equally, and b) it benefits the least advantaged in society (the most).

    Assuming the least advantaged in cyberspace are the lowest-bandwidth users, they can be benefited by decreasing their cost per bit, or by decreasing delays in their traffic (i.e. raising the priority of their packets) relative to more advantaged users.

  11. The biggest problem I see with metering is its incompatibility with the internet architecture. Anyone can send me packets even if I never wanted them in the first place; it is unacceptable for me to be charged for those packets, as Cecil points out.

    That hasn’t stopped many ISPs in India from metering bandwidth and charging differential prices based on actual usage.

  12. Frankly, I would oppose neutrality regulation. The way I see it, it can hurt as much as it can help. For every ISP out there that wants to block Vonage, there’s another one that wants to set itself apart by using sophisticated QoS to speed that traffic along. I would rather allow the latter ISP to do what it wants, and set up antitrust law to prevent the former ISP from doing what it wants. This feels a little like No Child Left Behind – it means No Child Gets Ahead, Either.

    On another note, Jeff, I don’t think either of us can speak with Roger’s experience, but I imagine he’s right. I don’t think he’s talking about technical limitations, I think he means: how do you explain to someone that they’ve used too much bandwidth this month, and you’ve throttled back their connection? Then they yell at you because they have a big whatever coming up, and you give them their one-time-only grace period… etc.

    The one thing I would love would be to be able to buy my download and upload speeds separately. I’ve never seen an ISP that doesn’t peg one to the other, but I think that would be excellent.

  13. Jeffrey W. Baker says

    Roger Weeks says this is hard, but I can imagine numerous implementations. A simple SNMP polling system which monitors the 5-minute, 1-hour, and 24-hour transfer and adjusts the QoS or delay on that line card or port would do the trick (a rough sketch follows this comment). I could believe that the gigantic telcos have stone-age billing systems that might be difficult to accommodate, but I doubt it. As noted above, wireless voice providers already implement this kind of billing.

    Wes Felter thinks bandwidth tiers imply transfer demand, but I disagree. I buy the fastest service I can get, but I am not a bulk transfer user. I buy the service because I place a premium on time, and if a 6 megabit service allows me to send photos to the office in 1 minute instead of 10 minutes, I’m going to pay the difference. Similarly, I appreciate downloading software instantly instead of over the course of hours. But my long-term average transfer rate is in the single-digit kilobits per second.

    In short, I think there exists room in the economic model for pricing of bandwidth and transfer as independent variables.
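
    A rough sketch of the polling loop described in this comment, in Python. Here snmp_get and apply_throttle are hypothetical stand-ins for whatever SNMP library and device-management interface a real BSP would use; the daily limit and 128Kbps throttle are illustrative:

        import time

        IF_IN_OCTETS = "1.3.6.1.2.1.2.2.1.10"    # standard IF-MIB ifInOctets OID

        def snmp_get(host, oid):
            """Hypothetical helper: fetch one SNMP counter from the access device."""
            raise NotImplementedError

        def apply_throttle(port_index, kbps):
            """Hypothetical helper: push a rate limit to that line card or port."""
            raise NotImplementedError

        def poll_and_enforce(host, port_index, daily_limit_bytes, interval=300):
            last = snmp_get(host, f"{IF_IN_OCTETS}.{port_index}")
            window = []                              # (timestamp, byte delta) samples
            while True:
                time.sleep(interval)                 # e.g. 5-minute polling
                current = snmp_get(host, f"{IF_IN_OCTETS}.{port_index}")
                window.append((time.time(), current - last))   # ignores counter wrap
                last = current
                cutoff = time.time() - 24 * 3600
                window = [(t, d) for t, d in window if t >= cutoff]
                if sum(d for _, d in window) > daily_limit_bytes:
                    apply_throttle(port_index, kbps=128)   # slow the port, don't block apps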

  14. Roger Weeks says

    As a network admin for a small ISP, I can tell you point-blank why we don’t sell DSL, cable modem or wireless in a “metered” mode:

    It’s a giant pain in the ass. Sure, enforcing a hard bandwidth cap on a user is easy, but enforcing “User X has used 200MB of his allotted 150MB this week, we need to limit his usage or shut him off” is a much harder proposition, especially for a small business.

    However, let’s say that we decided that our in-house VoIP service that we started selling recently is the ONLY VoIP service that our broadband customers can use. I can block Vonage traffic at our border router in a matter of seconds, no problem.

    Now multiply this issue by the size of Comcast or Earthlink. Do you see why no one is selling “metered” broadband, but why people are still worried about blocking application traffic? The two items are inherently different.

  15. Bob Jonkman says

    Broaden the scope of the problem to include cell phones and PDAs: here we already have a pricing model based on consumption. SMS is based on a per-packet (per-message) pricing scheme, and voice and data access are priced by the minute (in bundles like $40 for 500 minutes or $60 for 1000 minutes). This pricing model hasn’t driven those customers away in droves. Indeed, I suspect that the customers paying the most for cell and PDA services are the very same customers using the highest broadband bandwidth. I don’t think ISPs will have any problem selling price-discriminated bandwidth; it’s all in how they market it.

  16. Wes Felter says

    ISPs already price-discriminate using multiple service tiers, and often these tiers are implemented on the same network with the same equipment (e.g. I think all cable modems run at some large bit rate like 10Mbps, but the end equipment throttles packets to produce a 5Mbps tier or a 3Mbps tier; a rough sketch of that kind of shaping follows this comment). In a network with multiple service tiers, it seems disingenuous to further restrict customers: the reason I’m paying more money for the 5Mbps tier is that I intend to use it; if I wasn’t planning on using the bandwidth then I would have signed up for a lower tier!

    I don’t see this as a net neutrality issue as much as a truth-in-advertising issue. If you’re advertising “unlimited”, it had better be unlimited. If there is a transfer limit, then tell me exactly what it is upfront and let me pay for a transfer-limit tier that can accommodate my usage. Thus I think it’s possible to have both net neutrality and pricing freedom at the same time.
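
    A rough sketch of the kind of shaping that turns one physical rate into several advertised tiers, in Python. The tier rate and burst size are made-up numbers:

        import time

        class TokenBucket:
            """Shapes traffic to a tier rate (e.g. a 5Mbps tier on a 10Mbps modem).
            Tokens are bytes; a packet may be sent only if enough tokens remain."""

            def __init__(self, rate_bps, burst_bytes):
                self.rate = rate_bps / 8.0      # refill rate in bytes per second
                self.capacity = burst_bytes
                self.tokens = burst_bytes
                self.last = time.monotonic()

            def allow(self, packet_bytes):
                now = time.monotonic()
                self.tokens = min(self.capacity,
                                  self.tokens + (now - self.last) * self.rate)
                self.last = now
                if self.tokens >= packet_bytes:
                    self.tokens -= packet_bytes
                    return True
                return False                    # caller delays or drops the packet

        tier_5mbps = TokenBucket(rate_bps=5_000_000, burst_bytes=64_000)
        print(tier_5mbps.allow(1500))           # True until the burst allowance is spent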

  17. I wonder how things would look if you had fine-grained metering and “bid” for bandwidth. You would expect the price to rise during peak hours and fall during off hours. When the total traffic is below the capacity of the BSP, the bandwidth would be available at its marginal cost (basically free).

    Users could set a cap for their $/h and the ISP throttles back bandwidth to meet the cap. Users would only be charged for bandwidth they use. Unfortunately that would include things like the DHCP lease and unwanted scanner traffic, like Cecil mentioned.

    It might not necessarily be better, but it could make an interesting experiment. (A rough sketch of the rate arithmetic follows this comment.)
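
    A rough sketch of the rate arithmetic for such a bid, in Python. The prices, budget, and line rate are made up for illustration, and a real scheme would also need some kind of live price feed:

        def allowed_rate_mbps(line_rate_mbps, price_per_gb, budget_per_hour):
            # How fast can a user go this hour without exceeding their spending cap?
            if price_per_gb <= 0:
                return line_rate_mbps           # spare capacity: marginal cost near zero
            gb_per_hour = line_rate_mbps * 3600 / 8 / 1000   # GB moved at full line rate
            affordable_gb = budget_per_hour / price_per_gb
            return min(line_rate_mbps, line_rate_mbps * affordable_gb / gb_per_hour)

        # Peak pricing of $0.50/GB, a $0.25/hour cap, on a 6Mbps line.
        print(allowed_rate_mbps(6, price_per_gb=0.50, budget_per_hour=0.25))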

  18. Exactly this sort of limit is already commonplace in the UK, with many plans being priced by monthly allowance rather than speed. Some ISPs have also terminated the contracts of some customers they’ve deemed to be using grossly excessive bandwidth.

  19. If you’re going to charge me by the byte/kbyte/mbyte on a metered connection, then I want the fastest connection technically possible. Why should I pay for double limits? As I view it, the price point is based on either connection speed or transferred data. Right now I pay based on d/u (downstream/upstream), and t (transferred) is limited only by d/u. For a d/u/t price model, I would expect d and u to be modified upward to match the decrease in t for the same price point.

    There is an obvious problem with the metered approach: what about traffic I do not want and have not requested? For example, my firewall logs are full of scanning traffic that I have no control over, other than not responding to any of it. How about that 6-hour DHCP lease? That also generates traffic from my site, but that’s only because the ISP requires it (static IPs are not an option for home users). So how do you keep things network neutral and account for such traffic in a metered approach?
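
    A rough sketch of one way a meter could avoid billing unsolicited traffic, in Python: count inbound bytes only for flows the customer initiated. The flow keys and byte counts are made up for illustration, and required traffic like DHCP would still need an explicit exemption:

        billed = 0
        initiated_flows = set()         # keys for connections the customer opened

        def on_outbound(flow_key, nbytes):
            global billed
            initiated_flows.add(flow_key)
            billed += nbytes

        def on_inbound(flow_key, nbytes):
            global billed
            if flow_key in initiated_flows:    # reply traffic the customer asked for
                billed += nbytes
            # Unsolicited packets (port scans, worms) never make it onto the bill.

        on_outbound(("203.0.113.5", 443), 1200)
        on_inbound(("203.0.113.5", 443), 48000)   # billed: part of a requested flow
        on_inbound(("198.51.100.9", 22), 60)      # not billed: unsolicited scan
        print(billed)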

  20. I think there are additional complications. One problem is that until the limits are approached, bandwidth is “free”, so it is not so much a problem of consumption as congestion. I share (legal) files over P2P using Azureus, and use a speed scheduler to throttle back consumption when my better half wishes to use the net. So it isn’t just a matter of pricing — it’s bidding, for a very perishable commodity.

    The second problem is that as long as the providers are selling “bandwidth” (DSL vs Cable vs FIOS vs WiMax), they cannot easily turn it into a market for “bandwidth-but…”. Just for example, it might be fun to triple my upstream bandwidth; I could share stuff more easily. But if it is triple bandwidth but no file sharing, I get no value, so I do not upgrade. The customers most interested in the upgrades are the customers who are using the bandwidth.

    I would predict that nothing will change until (1) congestion is a problem and (2) the bandwidth providers actually lose customers because of it. I also think that the customers who are most attentive to bandwidth are actually the ones consuming most of it. “Doing something” would drive away the very same bandwidth-margin-sensitive customers who are causing the problem; better not to rock the boat, and to take money from those guys until there is actually a problem. (And who knows how long that might actually be?)

    Of course, sooner or later someone will invent the bandwidth-hogging killer app, and all this will change. It would be kind of fun to design peer-to-peer protocols that would incorporate bandwidth “bidding” and similar things.