December 3, 2024

Discrimination, Congestion, and Cooperation

I’ve been writing lately about the nuts and bolts of network discrimination. Today I want to continue that discussion by talking about how the Internet responds to congestion, and how network discrimination might affect that response. As usual, I’ll simplify the story a bit to spare you a lengthy dissertation on network management, but I won’t mislead you about the fundamental issues.

I described previously how network congestion causes Internet routers to discard some data packets. When data packets arrive at a router faster than the router can forward those packets along the appropriate outgoing links, the packets will pile up in the router’s memory. Eventually the memory will fill up, and the router will have to drop (i.e., discard) some packets.
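
For readers who like to see things in code, here is a minimal sketch of that “drop-tail” behavior. The class and the buffer model are mine, invented for illustration; real routers are far more elaborate:

    from collections import deque

    class DropTailRouter:
        """Toy model of the behavior described above: packets that arrive
        while the buffer is full are simply discarded ("drop-tail")."""

        def __init__(self, buffer_size):
            self.queue = deque()
            self.buffer_size = buffer_size
            self.dropped = 0

        def receive(self, packet):
            if len(self.queue) >= self.buffer_size:
                self.dropped += 1            # memory is full: packet is lost
            else:
                self.queue.append(packet)

        def forward(self):
            # Send one queued packet along the appropriate outgoing link.
            return self.queue.popleft() if self.queue else None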

Every dropped packet has some computer at the edge of the network waiting for it. Eventually the waiting computer and its communication partner will figure out that the packet must have been dropped. From this, they will deduce that the network is congested. So they will re-send the dropped packet, but in response to the probable congestion they will slow down the rate at which they transmit data. Once enough packets are dropped, and enough computers slow down their packet transmission, the congestion will clear up.

This is a very indirect way of coping with congestion – drop packets, wait for endpoint computers to notice the missing packets and respond by slowing down – but it works pretty well.
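
In TCP-like terms, the endpoint response is roughly “additive increase, multiplicative decrease”: creep the sending rate upward while packets are getting through, and cut it sharply when a loss suggests congestion. A heavily simplified sketch, not the real TCP state machine:

    class Sender:
        """Toy endpoint: speeds up gently while packets get through,
        slows down sharply when a drop suggests congestion."""

        def __init__(self):
            self.rate = 1.0                  # sending rate, arbitrary units

        def on_ack(self):
            self.rate += 1.0                 # all is well: probe for more

        def on_loss(self):
            # Congestion inferred: re-send the lost packet (omitted here)
            # and cut the sending rate in half.
            self.rate = max(1.0, self.rate / 2)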

One interesting aspect of this system is that it is voluntary – the system relies on endpoint computers to slow down when they see congestion, but nothing forces them to do so. We can think of this as a kind of deal between endpoint computers, in which each one promises to slow down if its packets are dropped.

But there is an incentive to defect from this deal. Suppose that you defect – when your packets are dropped you keep on sending packets as fast as you can – but everybody else keeps the deal. When your packets are dropped, the congestion will continue. Then other people’s packets will be dropped, until enough of them slow down and the congestion eases. By ignoring the congestion signals you are getting more than your fair share of the network.
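
A toy simulation makes the defector’s payoff concrete. The capacity, the rates, and the loss model below are all invented for illustration; only the qualitative outcome matters:

    def simulate(rounds=1000, capacity=10.0):
        """Two senders share a link. The compliant one halves its rate
        during congestion; the defector just keeps climbing."""
        compliant, defector = 1.0, 1.0
        got_c = got_d = 0.0
        for _ in range(rounds):
            if compliant + defector > capacity:       # congestion: drops occur
                compliant = max(1.0, compliant / 2)   # only one sender reacts
            else:
                compliant += 0.5                      # additive increase
            defector = min(defector + 0.5, capacity)  # ignores congestion
            share = min(1.0, capacity / (compliant + defector))
            got_c += compliant * share                # delivered this round
            got_d += defector * share
        return got_c / rounds, got_d / rounds

    print(simulate())   # roughly (0.9, 9.1): the defector gets about ten times as much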

Despite the incentive to defect, most people keep the deal by using networking software that slows down as expected in response to congestion. Why is this? We could debate the reasons, but it seems safe to say that there is a sort of social contract by which users cooperate with their peers, and software vendors cooperate by writing software that causes users to keep the deal.

One of the reasons users comply, I think, is a sense of fairness. If I believe that the burdens of congestion control fall pretty equally on everybody, at least in the long run, then it seems fair to me to slow down my own transmissions when my turn comes. One time I might be the one whose packets get dropped, so I will slow down. Another time, by chance, somebody else’s packets may be dropped, so it will be their turn to slow down. Everybody gets their turn.

(Note: I’m not claiming that the average user has thought through these issues carefully. But many software providers have made decisions about what to do, and those decisions factor in users’ wants and needs. Software developers act as proxies for users in making these decisions. Perhaps this point will get more discussion in the comments.)

But now suppose that the network starts singling out some people and dropping their packets first. Now the burden of congestion control falls heavily on them – they have to slow down and others can just keep going. Suddenly the I’ll-slow-down-if-you-do deal doesn’t seem so fair, and the designated victims are more likely to defect from the deal and just keep sending data even when the network tells them to slow down.

The implications for network discrimination should now be pretty clear. If the network discriminates by sending misleading signals about congestion, and sends them preferentially to certain machines or certain applications, then the incentive for those machines and applications to stick to the social contract and do their share to control congestion will weaken. Will this lead to a wave of defections that destroys the Net? Probably not, but I can’t be sure. I do think this is something we should think about.

We should also heed the broader lesson of this analysis. If the network discriminates, users and applications will react by changing their behavior. Discrimination will have secondary effects, and we had better think carefully about what they will be.

[Note for networking geeks: Yes, I know about RED, congestion signaling, non-TCP protocols that don’t back off, and so on. I hope you’ll agree that despite all of those real-world complications, the basic argument in this post is valid.]

Comments

  1. Well put. The Net Neutrality debate is hard to explain — partly because the “hands off” viewpoint is the “regulate” viewpoint and vice versa. I worry a lot about paid performance, partly because (as a Canadian) we have a single-tiered healthcare system, but a lot of the clinics that own MRIs want to let people jump the line for money. Same problem.
    I’m more concerned about my ISP’s bill showing up saying, “420 Google Searches this month @ $.05”. It’s analogous to my electric company sending me a bill that includes charges for each of the shows I watch or the bags of popcorn I make: sure, they’re powered by the power company, but they’re not what I bought from the power company. So I’m more concerned about the end of end-to-end, and how that will affect our ability to treat all packets the same, without differential billing. That has dire consequences when ISPs want to force us to use their search, or their streaming video, for example.

  2. Responding to Ned Ulbricht’s question about circuit switching vs packet switching:

    Yes, I am implying that there are some useful aspects of nailing down a path (MPLS and ATM are of that ilk). It’s sort of like buying in bulk – if you know what you are going to need, and are willing to buy a lot of it, you get better prices. Similarly, if you can inform the infrastructure of the net of your intentions, and the net can push back in some way, it is possible for everyone involved to make better resource allocation decisions. Notice that I said the net ought to be able to push back – it would be interesting to extend the Socket API to allow applications to set a socket option that says “creation of this connection can be deferred for up to N hours” (a rough sketch of what that might look like follows this comment).

    But please don’t read into my comments any longing to return to the world of telco circuits. MPLS is about as close as I want to go. And for VOIP I suspect that we will be seeing a lot of MPLS pathways being computed, along with fallback paths that can be picked up within about 50 milliseconds so that voice won’t be badly affected by route changes.

    When I was at Cisco I put together the beginnings of an experimental protocol to help do some of this path evaluation in a quick and inexpensive way. It did require cooperation from all the routers (but I was inside Cisco so that wasn’t a problem ;-). You can take a look at the partial design at: http://www.cavebear.com/fpcp/fpcp-sept-19-2000.html
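
    No such socket option exists today, so the sketch below is purely hypothetical; the congested() callback stands in for whatever “push back” signal the net might provide:

        import socket
        import time

        def connect_deferrable(addr, max_defer_hours, congested):
            """Hypothetical 'deferrable connection': connect now if the net
            is uncongested, otherwise wait and retry, giving up once the
            caller's deferral window has expired."""
            deadline = time.time() + max_defer_hours * 3600
            while congested():                   # the net "pushes back"
                if time.time() >= deadline:
                    raise TimeoutError("deferral window expired")
                time.sleep(60)                   # check again in a minute
            return socket.create_connection(addr)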

  3. This reminds me of a memorable classic paper:

    E. G. Coffman and L. Kleinrock, “Computer scheduling methods and their countermeasures,” in AFIPS Conference Proceedings, vol. 32, pp. 11–21, 1968.

    The paper points out that regardless of what scheduling algorithm you use, there will be some way for users to change their behavior so that they get better service. Since this is inevitable, their advice is to pick the change of behavior that you want to encourage, and then use a scheduling algorithm for which that behavior is the best countermeasure. (A toy illustration follows this comment.)

    Jim H.
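
    One toy illustration of that advice (my own invention, not taken from the paper): if the scheduler always serves the least-served user first, then the most effective countermeasure is to consume less, which is exactly the behavior you want to encourage.

        from collections import defaultdict, deque

        class LeastServedFirst:
            """Serve the user who has received the least service so far."""

            def __init__(self):
                self.served = defaultdict(int)     # user -> service received
                self.pending = defaultdict(deque)  # user -> queued job lengths

            def submit(self, user, job_length):
                self.pending[user].append(job_length)

            def run_next(self):
                waiting = [u for u, q in self.pending.items() if q]
                if not waiting:
                    return None
                user = min(waiting, key=lambda u: self.served[u])
                self.served[user] += self.pending[user].popleft()
                return user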

  4. Ned Ulbricht says

    Ooops–

    Missed the bytes-to-bits conversion (and the units): 1 trillion bytes per month is about 400 kilobytes per second, or roughly 3 Mbps, not 400 bits per second. Sorry.

  5. Ned Ulbricht says

    […] when they say they throttle p2p downloads, is this a net neutrality issue? Does this mean they are discriminating?

    Mr Rat,

    Perhaps I should let someone more squarely in one of the partisan camps take the first swing at your question. But, since no one else seems to be stepping up to the plate yet…

    In a word, “Maybe.”

    Your internet service provider (ISP) is in the business of selling bandwidth, typically advertised by “speed” in thousands or millions of bits per second (Kbps or Mbps). Now, there’s a difference between a peak burst “speed” and a sustained “speed” – like the difference between how fast you can go in a hundred-yard dash versus how fast you can go in a marathon. No one expects a runner’s marathon time to be as quick as a whole bunch of sprints laid end-to-end! So, if your ISP wants to advertise, for example, 8 Mbps, while limiting you in the fine print of the contract to one trillion bytes per month (around 400 bits per second), then I don’t have a problem with that if the FTC doesn’t. (The arithmetic is checked in the sketch at the end of this comment.)

    But, if your ISP discriminates against certain sites or applications, well, that’s discrimination. Let’s suppose that your ISP only lets you download updates to your Linux distro at 400 bits per second (over BitTorrent), while allowing you to access Windows Update at 8 Mbps. In that case, not only do I think the FTC should be concerned, but so should Judge Colleen Kollar-Kotelly, who is overseeing the Microsoft antitrust case.

    So, again in a word, “Maybe”.

    HTH.
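
    For the record, here is the arithmetic for the bandwidth cap mentioned above (one trillion bytes spread evenly over a 30-day month); it matches the correction in comment 4:

        BYTES_PER_MONTH = 1e12
        SECONDS_PER_MONTH = 30 * 24 * 3600            # 2,592,000

        bytes_per_sec = BYTES_PER_MONTH / SECONDS_PER_MONTH
        print(f"{bytes_per_sec / 1e3:.0f} KB/s")      # ~386 KB/s sustained
        print(f"{bytes_per_sec * 8 / 1e6:.1f} Mbps")  # ~3.1 Mbps sustained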

  6. I am completely ignorant about all this type of stuff, but I read this site in the hope that one day it may become clearer. Can someone just help with this: when they say they throttle p2p downloads, is this a net neutrality issue? Does this mean they are discriminating?

    thanks & sorry for being ignorant

  7. Anonymous Coward says

    With fair queueing, a noncompliant flow will only cause problems for itself, so it’s in everyone’s interest to behave. If I recall correctly, that was the intent from the beginning. A lot of people currently use other queueing disciplines, but if we start seeing too many noncompliant flows, people will just start switching to some variant of fair queueing. (A sketch of the idea follows this comment.)
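
    A minimal sketch of the idea, with per-flow queues served round-robin (buffer sizes and details invented for illustration):

        from collections import defaultdict, deque

        class FairQueue:
            """Each flow gets its own queue; the link serves the queues
            round-robin. A flood only overflows the flooder's own queue."""

            def __init__(self, per_flow_buffer=100):
                self.flows = defaultdict(deque)
                self.rotation = deque()        # round-robin order of flow ids
                self.per_flow_buffer = per_flow_buffer

            def receive(self, flow_id, packet):
                if flow_id not in self.rotation:
                    self.rotation.append(flow_id)
                q = self.flows[flow_id]
                if len(q) >= self.per_flow_buffer:
                    return False               # only this flow loses packets
                q.append(packet)
                return True

            def forward(self):
                # Serve the next nonempty flow in rotation, if any.
                for _ in range(len(self.rotation)):
                    flow_id = self.rotation[0]
                    self.rotation.rotate(-1)
                    if self.flows[flow_id]:
                        return self.flows[flow_id].popleft()
                return None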

  8. David Harmon says

    It sounds to me like the ISP really ought to be considered a direct party to that “social contract”. The question is, if an ISP is messing with the social contract, just what can the customer do by way of punishment, *aside* from taking their business elsewhere? (What with the increasing consolidation, there may not *be* a practical competitor!)

  9. John Hainsworth says

    On the other hand, the backoff convention COULD be enforced. Here is one possible enforcement strategy:

    A pass-through node next to the endnode keeps track of packet IDs from one or more endnodes for a few seconds, to detect resent packets. If the second or subsequent resend of a packet comes too soon according to some backoff rule, the violating packet gets dropped. Punishment might also be included: all traffic from that port, from that source IP (unpacking packets from a network address translator), or from the entire endnode might get dropped for a few seconds. (A sketch of the tracking logic follows this comment.)

    If protocol violation becomes widespread, ISPs might start doing this — maybe to protect their own bandwidth, but also maybe to avoid similar punishment by the backbone providers.

    Of course, QoS premium subscribers would have different rules.
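
    A sketch of the tracking logic described above. The threshold and the bookkeeping are placeholders; a real implementation would also need to expire old entries and decide on a punishment policy:

        import time

        class BackoffEnforcer:
            """Drop retransmissions that arrive sooner than a minimum
            backoff interval allows."""

            MIN_BACKOFF = 1.0                 # illustrative threshold, seconds

            def __init__(self):
                self.last_seen = {}           # (src, packet_id) -> timestamp

            def allow(self, src, packet_id):
                now = time.time()
                prev = self.last_seen.get((src, packet_id))
                self.last_seen[(src, packet_id)] = now
                if prev is not None and now - prev < self.MIN_BACKOFF:
                    return False              # resent too soon: drop it
                return True                   # first copy, or a patient resend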

  10. Since this particular social contract is mediated by software, which are we more likely to see: free versions of apps that abide by the contract, with the availability of “turbo” versions on the for-pay market, or vice versa?

  11. Ned Ulbricht says

    It seems to me that “fairness”, whatever that means, tends to push for pre-arranged (if only by a few seconds) connections, and push against the kind of ad hoc “I will send now!” approach we’ve used in network applications up to now.

    Karl,

    What I just read you as saying was: “In any packet-switched network, virtual-circuits are inherently better (well, more ‘fair’) than datagrams.”

    Please clarify.

    @Jim Lyon: I’d like to add one comment on your observation that “should the ISPs artificially introduce congestion inside their networks for non-preferred traffic, people’s willingness to back off altruistically will decline rapidly”.

    My ISP has put a QoS mechanism in place on some of its currently bandwidth-limited links, so that its customers can at least surf and e-mail, and do other things if traffic permits. A customer of the ISP, aware of the QoS, brags regularly that he bypasses it by changing the ports that his P2P software uses, and advises others to do the same. I have been trying for *weeks* to convince him that his behaviour harms other customers, but he won’t admit it, because he thinks the bandwidth restriction is artificial, and there seems to be no way to make him see it otherwise.

    So the question here is not whether the ISP *actually* puts artificial congestion in place, but whether a customer *believes* that it has.

    There is a concept that has been floating around for a few years – the bandwidth broker. The basic idea is pretty simple (the details are not): if you need network bandwidth, either now or some time in the future (5 seconds from now or 5 days from now), you contact the broker (a chunk of software somewhere) and indicate what you need. The broker cogitates on it for a while and says “yes”, “no”, “good luck”, or “I’ve arranged for your needs to be satisfied in X minutes.” (A toy sketch of that interface follows this comment.)

    It is that last response that is perhaps the most intriguing from the point of view of network flexibility – a lot of applications (such as email or backups) have a lot of time flexibility and can readily back off, just like lower-level protocols do. Unfortunately, we have engineered internet applications on the 1980s yuppie model of “I want it all and I want it now!”

    (At Cisco I worked on a DARPA-sponsored project in which we dynamically modeled bandwidth requests, turned those into provisioning requirements for primary and fallback MPLS routing and for router queue priority management, and went out and made it happen. It has been my feeling for a long time that this kind of management of the internet as a complex distributed system makes a lot more sense than the traditional kind of management of individual routers and end points.)

    Surprisingly, over at Enron before the collapse, they had begun to establish a business around something rather similar to the bandwidth broker. I suspect it kinda died with Enron, but it does seem to be an idea that has merit and could be a useful tool in congestion management of the net.

    And, as you say, it all depends on each of us taking a constructive attitude – “what’s best for everyone will, over the long term, be best for me”. But I don’t know how one inserts a kind of internet Golden Rule into mandatory internet policy. Nor do I know how brokering bandwidth can avoid turning into a losing proposition for those who may be rich in some things but not in money.

    What I’m getting at in all of these comments over the last few days is that the inside of an individual router is a very poor place – due to lack of information, context, or time to think – to make or perform anything but the most basic of policy choices. It seems to me that “fairness”, whatever that means, tends to push for pre-arranged (if only by a few seconds) connections, and push against the kind of ad hoc “I will send now!” approach we’ve used in network applications up to now.
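
    A toy version of the broker’s interface, with the capacity model invented for illustration; the hard part, which this sketch waves away, is the scheduling behind the “in X minutes” answer:

        class BandwidthBroker:
            """Answer bandwidth requests with yes, no, or 'in X minutes'."""

            def __init__(self, capacity_mbps):
                self.capacity = capacity_mbps
                self.committed = 0.0

            def request(self, mbps, deferrable_minutes=0):
                if self.committed + mbps <= self.capacity:
                    self.committed += mbps
                    return ("yes", 0)         # granted, starting now
                if deferrable_minutes > 0:
                    # A real broker would consult its reservation schedule
                    # to compute when capacity actually frees up.
                    return ("yes", deferrable_minutes)
                return ("no", None)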

  14. Comandante Gringo says

    This is exactly the same inner logic of class society — which produces ‘criminal’ behavior. And the solution for society, too, is the same: _even-handed fairness_.

  15. I suspect that, the majority of the time, the main point of congestion in today’s networks is the link between the client’s premises and the ISP. If this is true, then altruistic backoff benefits your family members, coworkers, or dormmates. People should be very willing to do this.

    However, you’re right that should the ISPs artificially introduce congestion inside their networks for non-preferred traffic, people’s willingness to back off altruistically will decline rapidly.

  16. Quarthinos says

    Because then they can ask end-nodes to pay money for priority, which is the entire problem…

  17. Democratic prioritization:

    One of the things that jumped into my mind after reading your very clear explanation of congestion control is that the endnodes can decide that certain traffic has higher priority than other traffic. If a protocol backs off more slowly than TCP, it will gain a relatively higher share of the bandwidth in a congested network; consequently, a protocol that backs off easily will get a lower priority. (A toy model follows this comment.)
    Why should the telcos decide packet priorities when the end-nodes know better?
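
    A toy model of that observation (all numbers invented): two senders that both back off during congestion, but by different factors. The gentler backoff sustains the larger share of the link.

        def share(backoff_a=0.5, backoff_b=0.8, rounds=10000, capacity=10.0):
            """Both senders comply, but sender B cuts its rate less sharply
            on congestion, so it ends up with more of the link."""
            a = b = 1.0
            got_a = got_b = 0.0
            for _ in range(rounds):
                if a + b > capacity:          # congestion: both back off
                    a *= backoff_a
                    b *= backoff_b
                else:                         # additive increase for both
                    a += 0.1
                    b += 0.1
                scale = min(1.0, capacity / (a + b))
                got_a += a * scale
                got_b += b * scale
            return got_a / rounds, got_b / rounds

        print(share())   # sender B's average rate is noticeably higher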