One of the standard arguments one hears against network neutrality rules is that network providers need to provide Quality of Service (QoS) guarantees to certain kinds of traffic, such as video. If QoS is necessary, the argument goes, and if net neutrality rules would hamper QoS by requiring all traffic to be treated the same, then net neutrality rules must be harmful. Today, I want to unpack this argument and see how it holds up in light of computer science research and engineering experience.
First, I need to make clear that guaranteeing QoS for an application means more than just giving it lots of bandwidth or prioritizing its traffic above other applications. Those things might be helpful, but they’re not QoS (or at least not the kind I’m talking about today). What QoS mechanisms (try to) do is to make specific performance guarantees to an app over a short window of time.
An example may clarify this point. If you’re loading a web page, and your network connection hiccups so that you get no traffic for (say) half a second, you may notice a short pause but it won’t be a big deal. But if you’re having a voice conversation with somebody, a half-second gap will be very annoying. Web browsing needs decent bandwidth on average, but voice conversations need better protection against short delays. That protection is QoS.
Careful readers will protest at this point that a good browsing experience depends on more than just average bandwidth. A half-second hiccup might not be a big problem, but a ten-minute pause would be too much, even if performance is really snappy afterward. The difference between voice conversations and browsing is one of degree – voice conversations want guarantees over fractions of seconds, and browsing wants them over fractions of minutes.
The reason we don’t need special QoS mechanisms for browsing is that the broadband Internet already provides performance that is almost always steady enough over the time intervals that matter for browsing.
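To make the time-window point concrete, here is a minimal sketch (all numbers invented for illustration) of how the same traffic trace can look fine when judged over six seconds yet terrible when judged over half a second:

```python
# Sketch: the same trace judged over two different time windows.
# slots_kb[i] is kilobits delivered in the i-th 100 ms slot; we report the
# worst-case average rate over any contiguous window of a given length.
# All numbers are made up for illustration.

def worst_window_rate(slots_kb, window_slots, slot_sec=0.1):
    """Minimum average rate (kbps) over any contiguous window."""
    worst = min(
        sum(slots_kb[i:i + window_slots])
        for i in range(len(slots_kb) - window_slots + 1)
    )
    return worst / (window_slots * slot_sec)

# Six seconds of traffic: a steady 100 kb per slot, with a half-second
# hiccup (five empty slots) in the middle.
trace = [100] * 25 + [0] * 5 + [100] * 30

print(worst_window_rate(trace, window_slots=5))   # 0.0 kbps: a voice call stalls
print(worst_window_rate(trace, window_slots=60))  # ~917 kbps: browsing barely notices
```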
Sometimes, too, there are simple tricks that can turn an app that cares about short delays into one that cares only about longer delays. For example, watching prerecorded audio or video streams doesn’t need QoS, because you can use buffering. If you’re watching a video, you can download every frame ten seconds before you’re going to watch it; then a hiccup of a few seconds won’t be a problem. This is why streaming audio and video work perfectly well today (when there is enough average bandwidth).
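Here is a rough sketch of the buffering trick; the frame rate, hiccup length, and player logic are all illustrative assumptions, not any real player’s implementation:

```python
# Sketch of the buffering trick (illustrative, not any real player's logic).
# The player downloads ahead of playback, so a hiccup only hurts if it
# outlasts what is already buffered.

def playback_stalls(arrival_times, startup_delay, frame_interval=1 / 30):
    """True if any frame arrives after its scheduled playout time.

    arrival_times[i] is when frame i finishes downloading (seconds);
    frame i plays at startup_delay + i * frame_interval.
    """
    return any(
        arrive > startup_delay + i * frame_interval
        for i, arrive in enumerate(arrival_times)
    )

# Frames normally arrive every 1/30 s, but a 3-second hiccup delays
# every frame from number 100 onward.
arrivals = [i / 30 + (3 if i >= 100 else 0) for i in range(300)]

print(playback_stalls(arrivals, startup_delay=0.5))  # True: half a second of buffer is not enough
print(playback_stalls(arrivals, startup_delay=10))   # False: ten seconds of buffer rides it out
```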
There are two other important cases where QoS isn’t needed. First, if an app needs higher average speed than the Net can provide, then QoS won’t help it – QoS makes the Net’s speed steadier but not faster. Second – and less obvious – if an app needs much less average speed than the Net can provide, then QoS might also be unnecessary. If speed doesn’t drop entirely to zero but fluctuates, with peaks and valleys, then even the valleys may be high enough to give the app what it needs. This is starting to happen for voice conversations – Skype and other VoIP systems seem to work pretty well without any special QoS support in the network.
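A toy decision procedure makes these two cases explicit. The rates are assumptions for illustration (64 kbps is roughly what a common voice codec sends; the rest are made up):

```python
# Toy classification of when QoS could even matter. QoS can steady the rate
# an app sees, somewhere between the link's valleys and peaks, but it cannot
# raise the link's capacity. All rates are invented for illustration.

def does_qos_matter(app_kbps, valley_kbps, peak_kbps):
    if app_kbps > peak_kbps:
        return "No: the app needs more than the link can ever supply."
    if app_kbps <= valley_kbps:
        return "No: even the link's worst moments are good enough."
    return "Maybe: the app fits at the peaks but not in the valleys."

print(does_qos_matter(app_kbps=5000, valley_kbps=800, peak_kbps=3000))  # needs too much
print(does_qos_matter(app_kbps=64,   valley_kbps=800, peak_kbps=3000))  # VoIP-style trickle
print(does_qos_matter(app_kbps=1500, valley_kbps=800, peak_kbps=3000))  # the contested middle
```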
We can’t say that QoS is never needed, but experience does teach that it’s easy, especially for non-experts, to overestimate the importance of QoS. That’s why I’m not convinced – though I could be, with more evidence – that QoS is a strong argument against net neutrality rules.
The NetEqualizer is perfect for when you have to send or receive VoIP over an Internet link. Most tools that do QoS assume you own both sides of the link. The reason the NetEqualizer is different is that it sets priority based on behavior. It can be tuned to see VoIP traffic as good, or well behaved, compared to other traffic, based on its footprint. In this sense it does not need to tag bits and explicitly identify VoIP.
-art
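A hypothetical sketch of the kind of behavior-based classification art describes, for concreteness; the thresholds and flow statistics are invented for illustration and are not NetEqualizer’s actual rules:

```python
# Hypothetical sketch of behavior-based priority: guess that a flow is
# VoIP-like from its footprint (small packets at a steady cadence) rather
# than from any tag, port, or protocol field. Thresholds are invented for
# illustration and are not NetEqualizer's actual rules.

def looks_like_voip(packet_bytes, gaps_ms):
    small_fraction = sum(b <= 300 for b in packet_bytes) / len(packet_bytes)
    mean_gap = sum(gaps_ms) / len(gaps_ms)
    jitter = max(gaps_ms) - min(gaps_ms)
    # Voice codecs typically emit small packets every 10 to 30 ms, very regularly.
    return small_fraction > 0.9 and 10 <= mean_gap <= 30 and jitter < 10

voip_flow = ([160] * 50, [20.0] * 50)        # 160-byte packets, steady 20 ms cadence
bulk_flow = ([1500] * 50, [0.5, 40.0] * 25)  # full-size packets in bursts

print(looks_like_voip(*voip_flow))  # True: treated as well behaved, gets priority
print(looks_like_voip(*bulk_flow))  # False: waits its turn when the link is busy
```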
I’ve tried the NetEqualizer you mentioned. It’s basically just “plug-and-go”, and it does a great job of keeping your important processes from getting bogged down when things get busy; otherwise you barely notice it’s there. Has anyone else tried this? I’m still open to other methods of solving this problem as well.
Has anyone tried products that specifically work to maintain the quality of VoIP, like the NetEqualizer? Wouldn’t that solve the latency problem?
As far as Odlyzko’s arguing for a tiered Internet goes, take a look at US patent number 6,295,294: http://tinyurl.com/gcxke
“Inventors: Odlyzko; Andrew M. (Berkeley Heights, NJ)
Assignee: AT&T Corp. (New York, NY)”
…..
“The network is partitioned into logical channels and a user incurs a cost for use of each of the logical channels. The logical channels differ primarily with respect to the cost to the user. ”
So he’s an AT&T employee who filed the patent for the tiered Internet. No wonder he’s all for it.
Two things …
Firstly, the “ATM paradox” – why (as Keven says) do you need to prevent people from using guaranteed bandwidth? If I have, say, a 1Mb pipe and guarantee 8 customers 128Kb each, then why shouldn’t one customer get 512Kb?
Total bandwidth must at least cover the guaranteed QoS bandwidth, otherwise if everyone tries to use their guarantee at the same time there’ll be trouble. But why can’t you just give priority to any stream that’s using less than its QoS guarantee? That way, non-QoS (or over-quota) traffic just gets forced to wait until there is spare bandwidth. (A sketch of this scheme appears after this comment.)
Secondly, I’ve seen some proofs that adding bandwidth can increase congestion (Braess’s paradox) 🙂
Cheers,
Wol
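Wol’s first point can be sketched as a two-pass scheduler. This is a minimal illustration under stated assumptions (a single scheduling interval, with a capacity of 1024 kb per interval so that eight 128 kb guarantees exactly fill the pipe), not a real scheduler:

```python
# Sketch of Wol's scheme for one scheduling interval (not a real scheduler).
# Traffic under its customer's guarantee is served first; over-quota traffic
# only gets whatever capacity is left over.

from collections import defaultdict

def serve_interval(queued, guarantee_kb, capacity_kb):
    """queued: list of (customer, packet_kb); returns kb sent per customer."""
    sent_kb = defaultdict(float)
    total = 0.0
    pending = list(queued)
    for over_quota_pass in (False, True):
        remaining = []
        for cust, size in pending:
            under = sent_kb[cust] + size <= guarantee_kb.get(cust, 0.0)
            if (under or over_quota_pass) and total + size <= capacity_kb:
                sent_kb[cust] += size
                total += size
            else:
                remaining.append((cust, size))
        pending = remaining
    return dict(sent_kb)

guarantees = {c: 128.0 for c in "ABCDEFGH"}

# Only customer A is busy, offering 512 kb: A gets all 512.
print(serve_interval([("A", 64.0)] * 8, guarantees, 1024.0))

# Everyone offers their full 128 kb and A offers 512 kb: every guarantee is
# met first, and A's extra traffic waits for a less busy interval.
queued = [("A", 64.0)] * 8 + [(c, 64.0) for c in "BCDEFGH" for _ in range(2)]
print(serve_interval(queued, guarantees, 1024.0))
```

The scheme is work-conserving: when everyone is busy, each customer gets exactly their guarantee, and when the pipe has slack, any customer may use it.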
… so far it always ends up being cheaper and more effective to over-provision bandwidth than it is to manage QoS.
Oren,
I take it you would broadly endorse Gregory Bell’s view in Failure to Thrive: QoS and the Culture of Operational Networking (Aug 2003; p.4):
Our network engineering team’s experience with years of looking at QoS mechanisms for the Internet is that so far it always ends up being cheaper and more effective to over-provision bandwidth than it is to manage QoS.
That approach, of course, doesn’t give the telcos or cable companies any leverage to charge more for a different class of service.
We do have some legitimate very-high-bandwidth research applications that require dedicated availability of lots of bandwidth – and we are working on ways to provide that with virtual lambdas on our own fiber – see the National LambdaRail initiative.
Keep up the great work!
QoS is also significant from a legal perspective because it’s an excuse for network service providers to discriminate against certain data — traditionally viruses and spam — without incurring liability. As I understand the law, ISPs are granted safe harbor from liability for the content they carry if they take a hands-off approach. The exception is that they can intervene to preserve QoS. Of course, the definition of which packets are bad and which are good isn’t always the same for the ISP and the end user.
What QoS mechanisms (try to) do is to make specific performance guarantees to an app over a short window of time.
From RFC 2990 (Informational) “Next Steps for QoS Architecture”, Huston, Nov 2000, pp.17-18:
RFC 2990 further states, “It is critical to bear in mind that none of these responses can be addressed in isolation within any effective QoS architecture.”
But these “responses” to the “QoS intention” don’t really seem to answer the question, “What is the precise nature of the problem that QoS is attempting to solve?” Instead, these responses appear to form part of what Andrew Odlyzko calls “the puzzling behavior of the telecommunications industry, as well as of the networking research community.” In his paper The Evolution of Price Discrimination in Transportation and its Implications for the Internet (Sep 2004; p.336 / p.14 in PDF), Odlyzko identifies the “basic motive” behind QoS as the economic “incentive to price discriminate.”
Following Odlyzko, then, what QoS mechanisms (try to) do is to make money.
Two points…
The first is that Net Neutrality and QoS are not necessarily opposing requirements. If anyone is allowed to request QoS for their services, then the network remains neutral. It is only when QoS is restricted to the incumbent carrier and not offered to all players that network neutrality is violated.
If BigISP wants to use QoS for their VoIP, that is fine as long as the same QoS service is offered to SmallVOIP at the same price.
The second is that some of the comments from ILECs involve the opposite of favored service. Instead of allowing non-QoS traffic to share the “best effort” category, the ILECs appear to want to introduce deliberate service degradation if content providers or competing service providers do not pay extra. This is a sort of “evil” QoS: instead of guaranteeing bandwidth for special services, it forces packet loss and/or extra latency onto competing VoIP providers or search services that do not pay the (blackmail) fee.
Even if it becomes necessary for webcams and VoIP to be given extra bandwidth to preserve QoS, that’s not a reason to reject Net Neutrality rules altogether; it’s a reason to modify them to serve that specific purpose. Net Neutrality isn’t intended to prevent optimizing service performance; it’s intended to combat the anti-consumer behavior that certain Internet providers are leaning toward, and it should be written with that in mind. Perhaps it may even be possible to allow favoritism by protocol.
The entire issue of QoS and running out of bandwidth is probably bogus. Of course we’re using more bandwidth, just as we’re using more cell phone towers and more of the radio spectrum: we’re transferring more data. Charging special fees will not solve that problem; building more bandwidth will. This is an investment, yes, but it’s one that will allow ISPs to continue to make a profit from their services for years to come.
The real distinction is between one-way and two-way communication. If it’s two-way, you need low latency; otherwise you can buffer.
Thus streaming video is always worse than downloading, unless you are having a conversation.
Why live TV is dead: http://epeus.blogspot.com/2006_01_01_epeus_archive.html#113657665445571085
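A small sketch of the budget arithmetic behind that distinction. The ~150 ms mouth-to-ear guideline for conversation comes from ITU-T G.114; the other numbers are illustrative:

```python
# Sketch of the latency budget behind the one-way / two-way distinction.
# The ~150 ms conversational mouth-to-ear guideline comes from ITU-T G.114;
# the other numbers are illustrative.

def max_buffer_ms(two_way, network_delay_ms, conversational_budget_ms=150):
    if not two_way:
        # One-way media: nobody is waiting to reply, so buffer many seconds.
        return 10_000
    # Two-way: whatever the network doesn't eat is all the buffer you get.
    return max(0, conversational_budget_ms - network_delay_ms)

print(max_buffer_ms(two_way=False, network_delay_ms=80))  # 10000 ms of headroom
print(max_buffer_ms(two_way=True, network_delay_ms=80))   # only 70 ms to play with
```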
Stuart Cheshire nailed the core issue ten years ago:
http://www.stuartcheshire.org/rants/Networkdynamics.html
or, as he said about ATM in 1998: http://www.stuartcheshire.org/rants/ATMParadox.html
ATM’s big feature is guaranteed quality of service. When you set up a TCP/IP connection, the Internet does not reserve network bandwidth for you to guarantee that your data will not suffer network congestion or loss. ATM does offer guaranteed reserved bandwidth. This is its big advantage.
Or is it? If you reserve bandwidth for one user, then you have to refuse to let anyone else use that bandwidth. Everyone always talks about reservations in the context that you are the one who gets the bandwidth and it is everyone else who is refused. What about when you are the one being refused? Reservations suddenly don’t seem so wonderful any more, do they? The only way to make sure no one is refused service is to engineer your network so that you have enough bandwidth for everyone — but if you have enough for everyone then why do they have to keep making reservations? That’s the ATM paradox.
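A minimal sketch of the paradox as admission control, with made-up numbers:

```python
# Sketch of the ATM paradox as admission control, with made-up numbers.
# A reservation system can only say yes by keeping that bandwidth away
# from everyone else.

class AdmissionControl:
    def __init__(self, capacity_kbps):
        self.capacity = capacity_kbps
        self.reserved = 0

    def request(self, kbps):
        """Reserve bandwidth, or refuse if the guarantees would overcommit."""
        if self.reserved + kbps > self.capacity:
            return False  # someone must hear "no" for guarantees to mean anything
        self.reserved += kbps
        return True

link = AdmissionControl(capacity_kbps=1000)
print([link.request(300) for _ in range(5)])  # [True, True, True, False, False]
# The only way nobody is ever refused is to provision capacity for everyone,
# at which point the reservations were not doing any work.
```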