December 15, 2024

Comcast's Disappointing Defense

Last week, Comcast offered a defense in the FCC proceeding challenging the technical limitations it had placed on BitTorrent traffic in its network. (Back in October, I wrote twice about Comcast’s actions.)

The key battle line is whether Comcast is just managing its network reasonably in the face of routine network congestion, as it claims, or whether it is singling out certain kinds of traffic for unnecessary discrimination, as its critics claim. The FCC process has generated lots of verbiage, which I can’t hope to discuss, or even summarize, in this post.

I do want to call out one aspect of Comcast’s filing: the flimsiness of its technical argument.

Here’s one example (pp. 14-15).

As Congresswoman Mary Bono Mack recently explained:

The service providers are watching more and more of their network monopolized by P2P bandwidth hogs who command a disproportionate amount of their network resources. . . . You might be asking yourself, why don’t the broadband service providers invest more into their networks and add more capacity? For the record, broadband service providers are investing in their networks, but simply adding more bandwidth does not solve [the P2P problem]. The reason for this is *P2P applications are designed to consume as much bandwidth as is available*, thus more capacity only results in more consumption.

(emphasis in original). The flaws in this argument start with the fact that the italicized segment is wrong. P2P protocols don’t aim to use more bandwidth rather than less. They’re not sparing with bandwidth, but they don’t use it for no reason, and there does come a point where they don’t want any more.

But even leaving aside the merits of the argument, what’s most remarkable here is that Comcast’s technical description of BitTorrent cites as evidence not a textbook, nor a standards document, nor a paper from the research literature, nor a paper by the designer of BitTorrent, nor a document from the BitTorrent company, nor the statement of any expert, but a speech by a member of Congress. Congressmembers know many things, but they’re not exactly the first group you would turn to for information about how network protocols work.

This is not the only odd source that Comcast cites. Later (p. 28) they claim that the forged TCP Reset packets that they send shouldn’t be called “forged”. For this proposition they cite some guy named George Ou who blogs at ZDNet. They give no reason why we should believe Mr. Ou on this point. My point isn’t to attack Mr. Ou, who for all I know might actually have some relevant expertise. My point is that if this is the most authoritative citation Comcast can find, then their argument doesn’t look very solid. (And, indeed, it seems pretty uncontroversial to call these particular packets “forged”, given that they mislead the recipient about (1) which IP address sent the packet, and (2) why the packet was sent.)
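To see concretely why “forged” is the natural word, here is a minimal sketch, using the Python scapy library, of the kind of reset packet at issue. All addresses, ports, and sequence numbers are made-up illustration, not Comcast’s actual mechanism:

    from scapy.all import IP, TCP, send

    # A middlebox in the path injects a reset into someone else's conversation.
    rst = (IP(src="203.0.113.5",      # claims to come from the remote peer
              dst="198.51.100.7")     # delivered to the local subscriber
           / TCP(sport=6881, dport=51413,
                 flags="R",           # "R" (reset) claims the peer aborted
                 seq=1000))           # must fall in the receiver's window
    send(rst)

The recipient’s TCP stack tears the connection down as though the peer itself had aborted it; nothing in the packet identifies the true sender.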

Comcast is a big company with plenty of resources. It’s a bit depressing that they would file arguments like this with the FCC, an agency smart enough to tell the difference. Is this really the standard of technical argumentation in FCC proceedings?

Comments

  1. More “good deeds” from Comcast:
    http://www.dslreports.com/shownews/Comcast-HD-Image-Quality-Vs-FiOS-92969

    I can’t understand how anyone could defend them.

    Still, they finally agreed to stop “throttling”:
    http://www.dslreports.com/shownews/Comcast-Claims-Theyll-Stop-BitTorrent-Throttling-93022

  2. “Vote with your wallets people. Pick the ISPs which have fair TOS, ditch those that work only for their greedy selves.”

    Problem is, we rarely have a choice. Where I live, there are two options for broadband: the phone company (DSL) and the cable company. At the last place I lived, I couldn’t even get DSL, so it was either Comcast or no broadband at all.

  3. George Ou says: “Read this definition first…”

    I don’t have to read anything George. You know why?

    Because my ISP doesn’t use the term PEAK. They guarantee the bandwidth they sell to me 24/7/365 even though my country has considerably poorer communication infrastructure than the U.S.

    But I see some of you are being too anally retentive to stand up for your own rights even though your country has more mechanisms for consumer rights protection than mine.

    My advice:

    Vote with your wallets people. Pick the ISPs which have fair TOS, ditch those that work only for their greedy selves.

    And you, George: you should advise people to defend their own interests, not to give up on their rights.

    ISPs must always invest a sizeable chunk of their profit into expanding capacity, instead of being content with just lining their pockets and keeping the service at the same miserable level in the hope that people won’t use what they paid for.

  4. Igor says: “If I pay for a service from an ISP and if in the contract they say I have 10Mbit download and 1Mbit upload capacity then that’s it.”

    Read this definition first so that you can understand the difference between PEAK and guaranteed throughput.
    http://en.wikipedia.org/wiki/Committed_Information_Rate

  5. @George Ou:
    Are you on Comcast’s payroll when you are defending their sorry arses so vigorously here?

    If I pay for a service from an ISP and if in the contract they say I have 10Mbit download and 1Mbit upload capacity then that’s it. No IFs no BUTs — that is what they should DELIVER or I will stop delivering my money to them and find someone who will provide me (as in Internet Service PROVIDER) with the advertised level of Internet Service.

    In essence this is false advertising. They claim to deliver more than they do.

    There is simply no excuse — they should not be able to mess with your traffic unless you are breaking the law and no law has been broken so far.

    A few more points:

    1. Not allowing people to serve other people outside of Comcast’s network is discrimination. What if other providers did the same? The “Inter” part of the word “Internet” comes from “interconnected” — by denying it, they are denying the Internet as a whole.

    2. If a person from Russia or Japan or China has a file I need and is able to serve it to me faster than someone who is on the same network as I am, then such a connection is a relief for my ISP, not a burden as you are trying to paint it here.

    3. Ever heard of super-seeding? It means I seed a file in such a way that different people get different parts instead of the same ones, so that they finish the exchange amongst themselves. The point is that a lot of people using BT don’t know about super-seeding, much less how to turn it on. That is why they end up seeding the whole file several times, which is as inefficient as HTTP, FTP or email. The right way would be to educate them, not to ban P2P altogether.

    4. If you followed the MediaSentry case you are probably aware of the tricks being used to reduce the efficiency of P2P networks. I am not debating the legality of the shared content here, just the legality of using such tricks. If Comcast has an issue with P2P efficiency they should complain to those IP “watchers”, because they are the ones flooding the network with fake data and erroneous packets, not the seeders.

    You are full of bullsh*t George, get back to your crappy zdnet bloghole and stay there.

  6. Jesse, in your post http://www.freedom-to-tinker.com/?p=1256#comment-383293 I think you have captured it perfectly. And those like George, here, seem completely unable to comprehend that there can possibly be any reason to object other than “I want to continue my illegal activities.”

    And there are many of us who do not take part in these illegal activities, yet who object to the specific methods chosen to handle bandwidth problems. I don’t download music illegally. I don’t download movies illegally. I don’t share illegally. I have no interest in doing any of these illegal activities, and I am teaching my children the same.

    I notice that many schoolmates of my children have no hesitation about illegally sharing music or games. (I don’t see any illegal sharing of movies, at least not yet.) To them it’s a simple matter of “I want it and it’s easy to do, so what’s wrong with it?” I find this deeply disappointing. Many parents are not doing their job of teaching morality or the law. There *are* too many out there who have no problem with piracy. I understand this.

    But that does not mean that all who reject Comcast’s chosen mechanism (not all who defend BitTorrent) are immature, greedy, nefarious people who are just interested in taking as much as they can without paying for it. Far from it. And it’s sad and disappointing that so many in the other camp are incapable of even imagining this truth.

  7. Nathanael Nerode says

    “No matter how clever P2P programmers might be, a general limit on bandwidth is undefeatable — and completely fair. (Not to mention cheaper.)”

    That about sums it up, doesn’t it?

  8. By the way:

    “What you sound like is a little kid upset that he’s being told he can’t download his movies for free anymore, and you’re coming up with an infinite set of implausible excuses.”

    I think you’ve perfectly captured the mentality behind this filtering. Anti-BitTorrent crusaders aren’t motivated by conserving bandwidth, because if they were, they’d just cap bandwidth use and be done with it. They’re motivated by contempt for one particular activity they don’t approve of — copyright infringement — and the people who do it. That contempt is deep enough that they don’t care about stepping on other applications like Vuze and World of Warcraft that happen to get caught by the filters.

    Comcast at least has a financial incentive: they don’t want to spend the money to build up their network, but demand for bandwidth has gotten higher than their oversubscription model can handle. If they raise rates or lower the cap, they lose their marketing advantage over DSL, so instead of actually harmonizing their prices with the level of service they can provide, they’ve decided to shoo away the customers who demand the most.

    It’s sneaky, but at least it’s understandable coming from them because it’s putting money in their pockets. But when someone who isn’t on the Comcast payroll is still defending these filters, as far as I can tell, the only possible motivation is contempt for the people who are affected by them.

    (And for the record, I’m not affected by the filters.)

  9. George, you wrote, “P2P is simply too big to handle on a case-by-case basis.”

    There seems to have been a miscommunication: I’m not asking for P2P to be handled on a “case-by-case basis”. No one is.

    We’re asking for it to be treated the same as anything else, bit for bit. An ISP has no need to distinguish between a megabyte of P2P traffic and a megabyte of email. What matters is the number of megabytes, and it’s easy to measure and/or restrict that number without regard to which program is sending them.

    You wrote, “P2P = BANDWIDTH HOG”

    That’s an oversimplification. As you and I both know, any individual P2P user may or may not be a “bandwidth hog”, depending on his usage pattern and rate limit — but the ISP doesn’t need to worry about that. What really matters is how much bandwidth he uses, not what programs he’s running.

    You wrote, “and it requires a systematic response.”

    I agree, the problem of excessive bandwidth use needs a systematic response, but interfering with the BitTorrent protocol isn’t a systematic response at all. Comcast’s response is a very narrowly targeted one.

    Not only is it unfair to the responsible users, but it’s also ineffective in the long term. The most Comcast can hope to do is convince their customers to use a different protocol: either a next-generation BitTorrent that the filters can’t stop, or some other P2P protocol that the filters simply aren’t set up to stop.

    Bandwidth, however, is something that Comcast has complete control over. No matter how clever P2P programmers might be, a general limit on bandwidth is undefeatable — and completely fair. (Not to mention cheaper.)
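    A minimal sketch of the protocol-agnostic accounting described in this comment; the cap value and the per-packet hook are hypothetical:

        from collections import defaultdict

        MONTHLY_CAP_BYTES = 250 * 10**9            # assumed 250 GB cap
        usage = defaultdict(int)                   # subscriber id -> bytes used

        def record_packet(subscriber_id: str, size_bytes: int) -> bool:
            """Meter a packet; returns False once the subscriber is over cap.
            Note that nothing here looks at which protocol produced the bytes."""
            usage[subscriber_id] += size_bytes
            return usage[subscriber_id] <= MONTHLY_CAP_BYTES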

  10. Jesse says: “Not true. Why, exactly, would those smaller ISPs have to throttle P2P specifically, instead of just throttling everyone who uses excessive bandwidth?”

    I know the operators of these smaller ISPs first hand and I can tell you this is what they do. ISPs throttle P2P specifically because P2P is single-handedly responsible for hogging the bandwidth. The data from the Japanese Ministry of Internal Affairs and Communications proves this, and it mirrors everyone else’s data. P2P = BANDWIDTH HOG and it requires a systematic response. If there are any non-P2P applications hogging the network, the ISP will simply contact those people by phone, verbally warn them, and send them an email or snail mail.

    What you sound like is a little kid upset that he’s being told he can’t download his movies for free anymore, and you’re coming up with an infinite set of implausible excuses.

  11. Jesse says: “Why should the guy who spends all day sending attachments or uploading to YouTube — as rare as he might be — get a free pass, while the BitTorrent user who sets his upstream cap at 128 kbps gets punished instead?”

    There’s no “free pass” here and you’re grasping at straws.

    If there were such a person uploading to YouTube 24×7 with an automated script that constantly feeds the upstream with 100 MB files, Comcast would absolutely NOT give them a free pass. They WOULD call this person up over the phone and give them a warning to stop violating the AUP (Acceptable Use Policy).

    P2P is simply too big to handle on a case-by-case basis. Just look at Japan, where 1% of all users generate 47% of all Internet traffic.
    http://blogs.zdnet.com/Ou/?p=1063

    You’re desperately grasping at straws Jesse and it’s rather pathetic. P2P file trading is a bandwidth hog, period, end of discussion.

  12. Alan Martin says

    @ Tel:

    Grouping similar users is an interesting idea, but a one-month granularity is too large: just share two connections with a neighbor, and use them one at a time in alternate months!

    A per-user token bucket filter would achieve a similar effect with a quicker (and configurable) response time, if the non-conforming packets are allowed to contend for leftover bandwidth:
    http://en.wikipedia.org/wiki/Token_bucket

    This is a much simpler and fairer solution than filtering applications and forbidding servers, and just as effective. But we won’t see it without regulation, because it would be less profitable for the ISPs – especially when they compete with the very applications that they interfere with.
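    A minimal sketch of the per-user token bucket just described (rate and burst values are arbitrary placeholders):

        import time

        class TokenBucket:
            """Per-user bucket: tokens refill at a sustained rate up to a burst cap."""
            def __init__(self, rate_bps: float, burst_bytes: float):
                self.rate = rate_bps / 8.0      # refill rate in bytes per second
                self.capacity = burst_bytes
                self.tokens = burst_bytes
                self.last = time.monotonic()

            def conforms(self, packet_bytes: int) -> bool:
                now = time.monotonic()
                self.tokens = min(self.capacity,
                                  self.tokens + (now - self.last) * self.rate)
                self.last = now
                if self.tokens >= packet_bytes:
                    self.tokens -= packet_bytes
                    return True
                return False    # non-conforming: contend for leftover bandwidth

        # e.g. sustain 384 kbps per user with a 1 MB burst allowance (made-up numbers)
        bucket = TokenBucket(rate_bps=384_000, burst_bytes=1_000_000)

    Per the comment, non-conforming packets would be queued at lower priority to contend for leftover capacity rather than simply dropped.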

  13. And it looks like he’s just going to keep on refusing to address the central issue here. That’s a shame.

  14. George, you wrote: “That means seeding is allowed most of the day when network conditions permit but they’ll restrict seeding when the network latency goes too high. That seems like a perfectly reasonable compromise.”

    A reasonable compromise would be to lower the upstream cap, for all applications, when the network is crowded. Why should the guy who spends all day sending attachments or uploading to YouTube — as rare as he might be — get a free pass, while the BitTorrent user who sets his upstream cap at 128 kbps gets punished instead?

    You wrote: “However, there are some serious ramifications to consider in the overall scheme of things if the Government prohibits throttling or blocking of P2P. […] If the Government tells them they have to carry P2P traffic and they can’t throttle it, then those smaller ISPs go out of business.”

    Not true. Why, exactly, would those smaller ISPs have to throttle P2P specifically, instead of just throttling everyone who uses excessive bandwidth?

    That’s the point you keep ignoring. No one would be complaining if Comcast would just treat all applications equally, but you act as if that’s impossible.

  15. Jesse, if you read your own quote, Comcast is saying P2P DOES in fact constitute a server. HOWEVER, they’re not saying it’s completely impermissible, and they’re willing to make an exception for it with some conditions. You need to also read the following quote.

    “And, for years, the AUP has required customers to ensure that their “use of the Service does not restrict, inhibit, interfere with, or degrade any other user’s use of the Service nor represent … an overly large burden on the network.””

    Comcast is dealing with market-place reality and they’re caving in to consumer demand for the use of BitTorrent so they’re willing to allow an exception to the no-server clause for BitTorrent Seeding but with reasonable restrictions. That means seeding is allowed most of the day when network conditions permit but they’ll restrict seeding when the network latency goes too high. That seems like a perfectly reasonable compromise.

    Now I will admit that Comcast’s position is confusing, and the fact that they tried to cover it up is the main reason they’re in the trouble that they’re in now. Most people, even in the same industry, don’t have a lot of sympathy for them given the way Comcast has behaved in all of this. However, there are some serious ramifications to consider in the overall scheme of things if the Government prohibits throttling or blocking of P2P.

    There are plenty of smaller wireless ISPs that are barely surviving and they strictly prohibit P2P operation because they simply can’t afford it. They’re often the only game in town for rural America because the big guys don’t want to service them. If the Government tells them they have to carry P2P traffic and they can’t throttle it, then those smaller ISPs go out of business.

    The wireless companies don’t have a lot of capacity to spare because they’re a shared-medium network like Cable broadband. Those companies offer “unlimited” wireless plans that are only unlimited for certain protocols that follow typical usage patterns. Buffets offer “all-you-can-eat” but we all know that there is some reasonable limit.

    Now if you want to dispute the usage of the word “unlimited” or “all-you-can-eat”, we can certainly have a discussion about that. What I think would be a terrible mistake is if the Government comes in and implements a no-throttle or no-block rule on P2P traffic. Note that Comcast doesn’t use the word “unlimited” anywhere in their TOS or their advertising.

    Even Japan has to deal with P2P congestion problems despite the fact that they’ve got 100 Mbit fiber-to-the-home capacity:
    http://blogs.zdnet.com/Ou/?p=1039

    “I’ve designed, implemented, and supported network architectures for the Fortune 100. Statistical models have everything to do with the way phone systems and networks are provisioned and built.”

    That’s true. And it’s also not what we are talking about. Given that a certain pattern of network utilization is occurring, it is irrelevant what protocol causes said utilization.

    “Being a seeder for 15-30 minutes is a totally unrealistic and contrived usage scenario since you’ll be of far less value than what Comcast allows you to do.”

    Not contrived, but again, beside the point. One last time: given that it is as easy, indeed easier, to treat all high-volume traffic identically, and given that network utilization is network utilization regardless of the protocol that generates it, what possible rationale can there be for throttling a specific protocol?

    You know what the most satisfying part of this discussion is? It’s all academic. You should know by now how this story goes.

  17. George, the PDF actually says the opposite. From footnote 107 on the page you referenced (emphasis added):

    “Although even service providers that use P2P protocols recognize that P2P “seeding” allows a “user’s computer [to] act[] as a server to other users,” Vuze Petition at 8, Comcast does not assert that P2P seeding is impermissible under the TOS.”

  18. Jesse says: “I don’t think even Comcast has claimed that seeding violates the terms of service. Do you have a cite for that?”

    Yes they have; read Comcast’s FCC filing (page 43 in the PDF, but labeled page 40) defending themselves against Free Press and Vuze. Comcast also said that users who seek to run servers can buy the commercial-grade service from Comcast, which has no no-server clause in its terms of service.

  19. @George Ou: “What you don’t have the right to do is knowingly buy a cheaper residential-grade service and then violate the terms of service.”

    I don’t think even Comcast has claimed that seeding violates the terms of service. Do you have a cite for that?

  20. Jon says: “This has nothing to do with ‘odds’. If 25 people are saturating their upstream with BitTorrent and 1 is saturating it doing something else, then that 1 person is contributing the same amount to the instantaneous network congestion.”

    Jon, I’ll put this bluntly: you do not know what you are talking about.

    I’ve designed, implemented, and supported network architectures for the Fortune 100. Statistical models have everything to do with the way phone systems and networks are provisioned and built.

    Being a seeder for 15-30 minutes is a totally unrealistic and contrived usage scenario since you’ll be of far less value than what Comcast allows you to do. You in essence do get to seed for a couple of hours WHILE you’re downloading the file without ever having to worry about TCP resets. When you’re done with that download, chances are you’ll be allowed to be a dedicated server-mode seeder most of the day when the network isn’t busy.

    If you’re trying to argue that you have the right to host smaller files, you do. You can do that for FREE with the gigabyte of HTTP web space Comcast offers you which actually works at least 10 times faster than your seed and you don’t have to worry about jamming your own broadband uplink and you don’t have to leave the computer on or slow it down. If you want to host the file in your home, go buy a commercial-grade account. What you don’t have the right to do is knowingly buy a cheaper residential-grade service and then violate the terms of service.

  21. “That isn’t correct. The odds of 26 people using the upstream at the same time for a typical short-duration upload like email or a YouTube upload are extremely slim.”

    This has nothing to do with ‘odds’. If 25 people are saturating their upstream with BitTorrent and 1 is saturating it doing something else, then that 1 person is contributing the same amount to the instantaneous network congestion. There is no rational reason why the first 25 should be degraded and not the last — unless you are arguing that priority should be inversely related to average network utilization, but again, the particular protocol is irrelevant. For, in that case, a BitTorrent user only seeding for 15-30 minutes (and not 24×7) should not be degraded, while someone uploading full-steam on protocol X for 12 hours should.

    It is completely baseless to argue that a particular type of traffic is at fault and not the network usage patterns themselves.

  22. @George Ou: “Ok Jesse, name one application OTHER than P2P that will not only saturate the upstream, but do it continuously 24×7.”

    An FTP server. A web server or webcam. An all-night Xbox Live party. Remote Desktop and/or Slingbox with the right usage patterns.

    Not to mention all the P2P apps that Comcast *isn’t* interfering with, like eMule, Kazaa, and Freenet.

    Furthermore, even BitTorrent is hardly guaranteed to saturate one’s upstream. Every BT client has a setting for the maximum upload rate, and the same network that only supports 26 users uploading at 384 kbps can support twice as many users if they’re uploading at half that speed. Clients can also drop down to a lower upstream rate once the download is finished, and stop seeding entirely once a given share ratio has been reached.

    If Comcast wants to limit excessive bandwidth usage, it’s within their power to measure it directly, and treat all excessive usage equally. Instead, they’ve chosen to single out one particular application that *can* cause excessive use (but doesn’t necessarily) and ignore the rest.
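    A minimal sketch of the client-side controls Jesse describes; the function names, default caps, and ratio are invented for illustration, not any particular client’s API:

        # Typical BitTorrent client knobs: an upload cap while downloading, a
        # lower cap after completion, and a stop-seeding share ratio.
        def upload_cap_kbps(download_done: bool,
                            active_cap: int = 128,       # while downloading (assumed)
                            idle_cap: int = 64) -> int:  # after completion (assumed)
            return idle_cap if download_done else active_cap

        def should_keep_seeding(uploaded_bytes: int, downloaded_bytes: int,
                                target_ratio: float = 1.0) -> bool:
            # Stop once we've uploaded target_ratio times what we downloaded.
            return downloaded_bytes == 0 or uploaded_bytes / downloaded_bytes < target_ratio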

  23. Jon says: “But I thought the problem was instantaneous network usage. That is, after all, the only thing that would impact another user’s ‘experience’.”

    That isn’t correct. The odds of 26 people using the upstream at the same time for a typical short-duration upload like email or a YouTube upload are extremely slim. It’s even unlikely among a pool of 200 users in a local cable loop. A P2P seeder fully saturates the upstream 24×7, which means 26 seeders WILL simultaneously jam the network, and there is absolutely no disputing that.

    “YouTube video uploading (etc.) will cause the same congestion as a BitTorrent upload at a given time and there is no rational reason why one should be prioritized over the other.”

    Only if 26 people are uploading to YouTube 24×7. Tell me, how many people do you know who upload to YouTube 24×7? How often are you uploading to YouTube? The fact of the matter is that there are practically no YouTube users uploading 24×7. A heavy YouTube contributor might spend 4 hours a day uploading, spread out randomly throughout the day, so his average throughput is only a sixth of his peak.
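    The duty-cycle arithmetic both sides are using here, as a quick sketch (the hours-per-day figures are the commenters’ own hypotheticals):

        # Average load = peak rate x fraction of the day actually in use.
        def average_kbps(peak_kbps: float, hours_in_use_per_day: float) -> float:
            return peak_kbps * hours_in_use_per_day / 24.0

        print(average_kbps(384, 4))    # heavy YouTube uploader: 64.0 kbps average
        print(average_kbps(384, 24))   # 24x7 seeder: 384.0 kbps, the full cap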

  24. But I thought the problem was instantaneous network usage. That is, after all, the only thing that would impact another user’s ‘experience’. In fact, the ISPs have been saying the only times that matter are peak usage hours, so your average usage metrics are irrelevant. YouTube video uploading (etc.) will cause the same congestion as a BitTorrent upload at a given time and there is no rational reason why one should be prioritized over the other.

  25. Jesse says: “Er… you’re comparing Vuze (a P2P video service) to other video services like iTunes, Xbox Live Marketplace, and Netflix, which is fine for an article about video services, but there *are* other applications for the internet besides watching video.”

    Ok Jesse, name one application OTHER than P2P that will not only saturate the upstream, but do it continuously 24×7.

    Jon says: “Or uploading a video to YouTube; or sending large email attachments; or uploading some files to the company work server; or high-quality video conferencing; or ad infinitum.”

    Jon, if you ignore the duration factor, you might have a point. I looked at the total email I sent over a 2-month period. While I peaked at 384 Kbps when sending an attachment, I averaged only 0.05 Kbps sending email over those 2 months. If I were a VERY heavy YouTube uploader who spent 4 hours a day uploading, my in-use upload throughput would be 384 Kbps, but I would average only 64 Kbps, which isn’t a bandwidth hog.

    It is a universal fact that P2P is THE bandwidth hog. Japan has studied this on its 100 Mbps per-home fiber network, and even they have a problem with P2P usage dominating their infrastructure and causing congestion.

  26. “So in reality, the only application that will continuously saturate the upstream is a P2P seed, which primarily consists of video data.”

    Or uploading a video to YouTube; or sending large email attachments; or uploading some files to the company work server; or high-quality video conferencing; or ad infinitum.

    ‘Probability’ has nothing to do with it. Why, for the same network utilization, should one of the above uses be ‘okay’ while Bittorrent is relegated to second-class service? Why argue against a particular protocol when ostensibly the problem is a pattern of network utilization which can also occur by other means?

    @George: “So in reality, the only application that will continuously saturate the upstream is a P2P seed, which primarily consists of video data.”

    Er… you’re comparing Vuze (a P2P video service) to other video services like iTunes, Xbox Live Marketplace, and Netflix, which is fine for an article about video services, but there *are* other applications for the internet besides watching video.

    I mentioned some of them in my previous comment, but another notable one is Xbox Live – for gaming, not for downloading videos. Multiplayer games on Xbox Live are hosted by the individual players’ consoles, so for every game of Halo 3 being played online, someone’s using most of his home capacity to host it.

  28. To Jesse:

    “Let me rephrase that: it takes less than 26 users of *any* application who saturate the upstream at 384 kbps 24×7 to kill a DOCSIS 1.1 network.”

    Technically true, although EXTREMELY improbable. iTunes HD, Xbox Live Marketplace HD, Netflix, and YouTube put practically zero load on the upstream. Skype voice puts about 35 Kbps on the upstream when in use, but more realistically an eighth of that on average, since it’s not constantly in use. So in reality, the only application that will continuously saturate the upstream is a P2P seed, which primarily consists of video data.

    http://blogs.zdnet.com/Ou/?p=1031

  29. Comcast is also blocking e-mails. I’m no longer able to receive notifications from my Netflix or GM Card accounts. Calls and e-mails to both ends have been pointless. Verizon FiOS is looking better and better…

  30. @George Ou:

    “It takes less than 26 24×7 BitTorrent seeders who saturate the upstream at 384 kbps to kill a DOCSIS 1.1 network.”

    Let me rephrase that: it takes less than 26 users of *any* application who saturate the upstream at 384 kbps 24×7 to kill a DOCSIS 1.1 network.

    It doesn’t matter whether they’re taking up that much bandwidth because they’re using BitTorrent, or eMule, or Gnutella, or Freenet, or an FTP/web server, or because they spend all day uploading YouTube videos of their cats. What matters is how much bandwidth they use.

    “If we do this, are we not saying that any network that can’t support 24×7 upload deserves to be fined/regulated out of existence?”

    Only if they’re not honest about what it is they’re selling.

    “Do we tell the Wireless ISPs with scarce resources that they can’t prevent BitTorrent seeders from killing their network?”

    You mean BitTorrent seeders specifically? Yes, I think we should tell them that. On the other hand, if they want to go after everyone who’s using excessive bandwidth, then that’s fine.

    “I hear a lot of people being ignorant in this forum saying that we need to go to some sort of tiered or metered Internet service. You all do realize that this effectively prices BitTorrent usage out of the mainstream because it becomes cost-prohibitive to operate, right?”

    There’s nothing ignorant about that. If BT uses so much of a scarce resource, it *should* be expensive, shouldn’t it? Send me a bill for the bandwidth I use, and I’ll decide whether seeding is worth the cost.

  31. To all you people citing games and/or VoIP as a high-bandwidth culprit: you do not know what you are talking about. Online games (I play them and have measured them) use about 40 kbps up/down, and even a resource-challenged network like Comcast’s DOCSIS 1.1 network can handle that kind of load 24×7 for nearly every user on the network. VoIP uses about 80 kbps up/down, and it’s mostly sporadic usage, which means you add about 4 kbps up/down to the base load on the network on average. It takes less than 26 24×7 BitTorrent seeders who saturate the upstream at 384 kbps to kill a DOCSIS 1.1 network.

    Now Vuze says: too bad, build a new network like Verizon FiOS to support our freeloading of server bandwidth so we don’t have to pay any money, unlike YouTube, which pays millions of dollars for server bandwidth. If we do this, are we not saying that any network that can’t support 24×7 upload deserves to be fined/regulated out of existence? What do we do about Wireless ISPs who have very scarce spectrum? You all realize that Cable plays under the same limitations as Wireless ISPs, right? Cable providers will be forced to carry analog TV well beyond February 2009, when TV broadcasters no longer have to broadcast in analog. Do we tell the Wireless ISPs with scarce resources that they can’t prevent BitTorrent seeders from killing their network? Do you all realize that without Cable broadband, there’d be less competition/reason for Verizon to put out FiOS, or less reason for them to offer reasonable prices for FiOS?

    I hear a lot of people being ignorant in this forum saying that we need to go to some sort of tiered or metered Internet service. You all do realize that this effectively prices BitTorrent usage out of the mainstream because it becomes cost-prohibitive to operate, right? Maybe you all need to be careful what you wish for, because you might just get it.

  32. Internet bandwidth is just as important to the US economy as oil, so go ahead and start placing limitations on “certain” things… I’d like to kick Comcast in the balls: #1 for its crappy service, #2 for its never-even-close bandwidth, #3 for being a bunch of pompous bigots who want to eliminate or hinder the sharing of free information. Nazis suck.

  33. @spudz:

    This WordPress installation reacted with “too much load” (or something), and put my comment here instead of http://www.freedom-to-tinker.com/?p=1257 🙁
    I was referring to the last picture on the research page http://citp.princeton.edu/memory/media/ (the wallpaper above the microwave) 😉

  34. Many people here are missing the point (e.g. with “residential” vs. “commercial” service based arguments).

    Others have hit the nail on the head: it’s a supply and demand problem.

    The solution to supply and demand problems has been known for centuries: a thriving, competitive marketplace and the Invisible Hand.

    It’s high time we had such a marketplace for broadband in North America.

    Fix the lack of broadband competition and watch all the other problems disappear in the space of a few months or years.

    One sign that the current lack of competition is causing market failure is the observable lack of a tier in between $20/month consumer broadband (with major limits on upstream speed, total transfer per month, and server running) and $400/month full-blown commercial-grade service. Where’s the $30/month or $50/month or whatever “consumer premium” that allows everything the commercial-grade does but has lower up and down caps and lacks things like business-grade support, serious hosting features, a static IP, and suchlike?

    Why are there packages for surfers and e-mailers, and packages for businesses running heavily-loaded Web servers with PHP and CGI doing complex e-commerce and the like, but no packages for the heavy home user who games, YouTubes, VOIPs, file-shares, and does other high-usage things with their net connection? The answer can only be a lack of competition, because there isn’t a lack of demand, and there really isn’t even a lack of supply.

    (Cable network topologies have problems supplying uplink, because the network was originally designed for broadcast, but this doesn’t apply to DSL, which uses a phone network designed for two-way traffic from the get-go. This may be why we see DSL providers pulling fewer traffic-shaping shenanigans and having more generous upload caps. Room here for some company to lease capacity here and there and offer a hybrid service that uses cable to supply a fat down-pipe and the phone line to supply individual up-pipes, hm? If only the market were more competitive.)

  35. Jack Rodgers says

    Ultimately the issue, for all providers, is who controls the data stream. Do the providers have the right to limit usage to certain types of service and deny it to others, or do they not?

    I doubt that anyone would argue against the providers being able to deny their datastreams to spammers or malware, so there are grounds for limiting use.

    Every day some new use of the Internet is created: telephony, piping video from home TVs to the office, and most recently renting movies over the Internet. Renting movies over the Internet should really bog down the net when a million or so people decide to rent a video at the same instant.

    Perhaps the real decision will be to create separate networks for video, phone, data, and so on rather than trying to squeeze it all through one pipe?

    Or maybe a tiered system where data streams become more expensive with more use. Why should anyone pay the same for basic email and browsing as the gamer who burns up the bits 16 hours a day, or the guy piping his home TV to his office over the Internet? One may use a gigabyte a month and the other 20 gigs a day, or whatever the number is.

  36. To Bob Schmidt:

    “As a consumer maybe I want an account with throttled, bit-limited, shaped, and balanced traffic and maybe I want an account without it. I should have a choice, so should you, and carriers need to be prepared to provide it and not just unilaterally shove things onto the market.”

    You DO have that choice. It’s called a commercial-grade account from Comcast or some other provider, where the terms of service permit you to have servers and saturate the uplink 24×7. What you don’t have the right to do is buy the cheaper consumer-grade service, meant for data consumption rather than data distribution to the whole Internet, and then violate the terms and harm other customers. It isn’t about “feeling sorry” for Comcast; it’s about protecting the rights of the 95% of Comcast’s customers who want steady surfing, VoIPing, and gaming.

  37. Bob Schmidt says

    There are many conflicting issues at work here. I will comment on a couple that seem to come up over and over again, year after year over the past decade.

    First of all, we shouldn’t feel any more sorry for the ISP or the carrier contending with bandwidth demand from customers than we feel for the phone company contending with usage by teenagers who use the landline phones 24 hours a day talking to each other or swamping the local radio station’s call-in number. As anyone who works for a phone company knows, teenagers will tie up all the phone lines they can come into contact with.

    Nevertheless, it is the phone company’s job to increase capacity as required. It’s the law. They, of course, are common carriers, and so should Internet providers be (in my opinion both the ISP/carrier and the backbone providers, but in any case at a minimum the backbone providers). That would subject them to the kind of performance requirements that have made landline phone service reliable for more than 100 years.

    The distinction between phone service and “information services” is now without question a distinction without a difference. Proof: you can have voice over IP over a voice line and it is indistinguishable from voice over IP over a data line. You can have voice over data over voice, or data over voice over data. (The same is true for wireless and cable.) The presumed technical difference that was the original, and that remains the most highly vaunted, justification for exempting IP traffic from common carrier status is now clearly bogus, and it needs to be relegated to the heap of history. Ergo, net traffic providers need to be common carriers, either de jure or de facto; if not for rates, then at least for carriage and capacity. You can regulate the technology without regulating rates. See FCC Part 15.

    As to the technical issues of traffic management, routing, shaping, allocation, balancing, throttling, call it what you will, to the extent that the carriers feel the need to create new methods and protocols to accomplish this, then when these methods are being applied to Internet traffic, and as long as the carriers are not common carrier, then they should participate in an RFC standards process and gain acceptance for their methods before implementing. Congress-FCC-Industry triangulation is not a replacement for the long established Internet procedures. Carriers need to participate and agree to comply.

    At the same time, any regulatory framework must enable the carriers to seek new ways to achieve higher capacity and higher efficiency while also increasing consumer choice. As a consumer maybe I want an account with throttled, bit-limited, shaped, and balanced traffic and maybe I want an account without it. I should have a choice, so should you, and carriers need to be prepared to provide it and not just unilaterally shove things onto the market.

    With a variety of service types and levels available to customers, the issues raised by Comcast’s actions, its defense and its critics all become moot.

  38. It was implied that the direction of the traffic was the same as whatever the ISP was concerned with.
    So: “Isn’t it enough to say it uses a lot of bandwidth in a given direction and therefore needs no more justification for de-prioritization?” Why are you trying to carve out a special case for BitTorrent when, at the end of the day, you want to justify managing any heavy traffic in a given direction, regardless of protocol? If all BitTorrent users became FTP uploaders (with the same upstream traffic patterns) next week, would you go and try to make a special case for why FTP should be throttled? Or when it changes the week after to protocol X? Why not just argue the root issue?

    Regardless, I’m still interested in some empirical evidence of the claimed pathological behaviour of BitTorrent.

  39. @jon: you say: “Isn’t it enough to say it uses a lot of bandwidth and therefore needs no more justification for de-prioritization?”

    No, it’s not. Compare BitTorrent traffic to FTP traffic. A client inside the Comcast network doing an FTP download only creates traffic on the DOCSIS downstream side, where there is plenty of capacity. The same client doing the same transfer using BitTorrent creates traffic on both the downstream and the upstream side, and the upstream side is severely constrained by the cable topology. And in addition to the raw data, the BT client also has to field (typically) thousands of connection requests per minute from peers testing his speed and get peer list updates from the tracker over UDP.

    FTP typically uploads over symmetrical links; BitTorrent uploads over asymmetrical links. That alone is a huge difference, so it annoys me to read people like Templeton claiming “efficiency” for BitTorrent.

    That’s simply a lie.

  40. @Richard Bennett:

    “It places more load on an ISP’s internal network than conventional downloading methods because it uses lots of short packets and generates a great deal of protocol chatter.”

    You have made this claim several times, but you have never provided convincing supporting evidence: you only speculate that this is the case. In particular, the two papers by Jim Martin that you have cited in the past do not support this specific claim. One paper shows that DOCSIS is vulnerable to a SYN flood attack by a malicious attacker that chooses packet timing, a poor model for a BitTorrent client. The other merely shows that heavily utilizing a link with BitTorrent will slow down other users: this is an unsurprising claim and is also true of other protocols. There is no comparison to heavy HTTP traffic to demonstrate that BitTorrent has a comparatively more burdensome effect on the network.

    Again, you have given no evidence for the claim that BitTorrent, as a protocol, has some kind of extraordinary adverse effect on networks compared to other protocols.

    Finally, I fail to understand why you spend so much time picking on BitTorrent. Don’t you support traffic management of any heavy network usage regardless of protocol? Isn’t it enough to say it uses a lot of bandwidth and therefore needs no more justification for de-prioritization?

  41. Emmanuel,

    Comcast does not sell “unlimited” data service for their consumer residential product. Their terms of service say no servers and no 24×7 pipe saturation. Why are you saying Comcast is not being honest when you don’t even know the terms of service?

    As for metered Internet service, that would certainly solve the supply problem, but it would effectively price BitTorrent usage out of existence, since very few users would use it and they’d use it sparingly. Metered pricing is hostile toward consumers and innovation, and I am happy that the US market has rejected it. Even the cell phone market is moving to an “all you can eat within reason” model. That all-you-can-eat model doesn’t work if users violate the terms of service. You can’t go to an all-you-can-eat buffet and feed 30 of your friends out the side door, just like Comcast can’t afford to let residential users turn around and offer commercial-grade bandwidth to the rest of the Internet.

    As for “traffic shaping (based on quota, time of day, etc)”, they are using the only mechanism available to them on a shared-medium network. Normal Layer 3 traffic-shaping mechanisms do not work as effectively, since the damage is already done on the first mile before you even get to the router where traffic shaping is done. Comcast is using TCP RSTs to shut down excessive 24×7 upstream connections that violate the terms of service, and they’re only using the TCP RSTs when the network is slammed.

  42. (I’m assuming that Comcast are selling connectivity based on link speed rather than transfer volume. I don’t live in the US.)

    The alleged bandwidth supply-cost problem that Comcast is suffering from is a natural consequence of a fixed supply of bandwidth and no caps on end-user demand.

    Instead of pissing off their users, they really should see it as a supply-and-demand problem rather than a P2P problem.

    The solution is then going to be:

    a) Invest more resources to keep up with bandwidth demand:
    i) Be less profitable;
    ii) Charge the big bandwidth users more money;
    iii) Be more cost efficient with bandwidth supply.

    b) Be honest about their limited bandwidth supply capacity and set about
    limiting the demand through either:
    i) Data transfer quotas
    ii) Traffic shaping (based on quota, time of day, etc)

    What am I missing here?

  43. To Spudz:

    “George and Richard: nobody is asking ISPs to “foot the bill” for distributing something. The ISPs’ customers will foot the bill, in one way or another, since ISPs pass their costs on to their customers like most businesses.”

    The customer has paid for RESIDENTIAL broadband service. Residential broadband service is built on the fact that not everyone uses their bandwidth at the same time. A cable DOCSIS 1.1 network has a total of 10 Mbps upstream shared among 450 users, and it is actually closer to a wireless ISP with very limited shared resources than to a DSL network, let alone a FiOS network. Just 26 users saturating their upstream at 384 kbps will hit the theoretical peak capacity of an entire neighborhood, and the network fundamentally does not support 24×7 BitTorrent seeders or other servers.

    That’s why the terms of service say you can’t host servers and you can’t saturate the pipe 24×7. BitTorrent seeders and servers operate 24×7 at or near peak capacity, and that fundamentally breaks the network. This is not as if Comcast created some rule stating that VoIP can’t be used. These terms of service were NOT implemented for anti-competitive reasons; they were implemented to keep the network from melting down, which clearly falls under the category of REASONABLE network management. They were implemented to prevent 5% of the users from saturating the network and preventing the other 95% from accessing the broadband service they paid for.

    If you want a pipe where you can host servers and saturate the link, buy a commercial-grade connection such as a T1 for $400/month, where you can saturate up and down at 1.544 Mbps.
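    A quick check of the arithmetic behind the “26 seeders” figure this thread keeps citing, using the numbers from the comment above (the commenter’s figures, not independently verified):

        upstream_capacity_kbps = 10_000   # shared DOCSIS 1.1 upstream, per the comment
        per_user_cap_kbps = 384

        users_to_saturate = upstream_capacity_kbps / per_user_cap_kbps
        print(f"{users_to_saturate:.1f} saturating users fill the upstream")  # ~26.0
        # The remaining ~424 subscribers in the 450-user loop then contend
        # for whatever capacity is left.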

  44. Ed writes:
    “But even leaving aside the merits of the argument, what’s most remarkable here is that Comcast’s technical description of BitTorrent cites as evidence not a textbook, nor a standards document, nor a paper from the research literature, nor a paper by the designer of BitTorrent, nor a document from the BitTorrent company, nor the statement of any expert, but a speech by a member of Congress.”

    Since the FCC is a creature of Congress, I’m not surprised to see a Member cited as an expert in Comcast’s pleading. They know full well this is a political contest, not a technical one.

  45. Ingo: who is Jacob? I see a Seth but no Jacob among the commenters on this one.

    George and Richard: nobody is asking ISPs to “foot the bill” for distributing something. The ISPs’ customers will foot the bill, in one way or another, since ISPs pass their costs on to their customers like most businesses. My ISP provides flat-rate net access up to 60GB a month; any usage above that in a given month costs extra that month: $1.50 extra for 60-61GB, $3.00 for 61-62, and so forth. A higher flat rate is another possibility, and you yourself mentioned metering.
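    A minimal sketch of the overage billing just described; the flat rate itself isn’t stated in the comment, so it is left as a parameter:

        import math

        def monthly_bill(gb_used: float, flat_rate: float,
                         included_gb: int = 60, per_gb_overage: float = 1.50) -> float:
            """Flat rate up to included_gb; $1.50 per (partial) GB beyond it."""
            overage_gb = max(0, math.ceil(gb_used - included_gb))
            return flat_rate + overage_gb * per_gb_overage

        # e.g. 61.5 GB used -> flat rate + $3.00, matching the tiers above
        print(monthly_bill(61.5, flat_rate=40.0))   # flat_rate value is hypothetical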

  46. Very good point, Richard.

    Ed Felten also makes the same mistake as many others, like Brad Templeton, when he describes how “efficient” BitTorrent is. They look at “efficiency” purely from the selfish standpoint of the content distributor, not the havoc it wreaks on the Internet in terms of excessive and unnecessary load. P2P is often so stupid that it will download files all the way from China or Europe rather than from a source in the same region. If it didn’t do that, we would not need the P4P working group that wants to implement network-aware P2P.

    It’s financially “efficient” for BitTorrent and Vuze in the sense that they can offload their server bandwidth costs onto the broadband companies and freeload off their infrastructure to serve the Internet. When Comcast enforces its terms of service, which prohibit servers and 24×7 upstream saturation, and shuts down this freeloading, Vuze simply complains to the FCC and DEMANDS that Comcast spend its own money to upgrade the network to something like FiOS. Of course, it’s easy to make demands like that when you’re not footing the bill.

    What I find ironic is that Templeton’s EFF says metered Internet service is better when such a system would completely price BitTorrent or P2P usage out of existence.

  47. Brad Templeton’s description of BitTorrent is misleading and inaccurate. It places more load on an ISP’s internal network than conventional downloading methods because it uses lots of short packets and generates a great deal of protocol chatter. And many BT sessions are abandoned midway because the seeder is shut down. P4P is a much better approach.

    Interestingly, Mr. Templeton is the chairman of the Electronic Frontier Foundation, the non-profit that’s been among the harshest critics of Comcast’s efforts to control BitTorrent’s bandwidth appetite, and he’s also a board member of the for-profit BitTorrent, Inc.

    I think that’s a problem.

  48. Ed, it’s rather silly to claim Comcast’s entire defense is predicated on the few things you paraphrase and attribute to me, which weren’t actual quotations. Here is a proper framing of my position on Comcast:
    http://blogs.zdnet.com/Ou/?p=1001

    This comment you made below is flat-out deceptive:
    “P2P protocols don’t aim to use more bandwidth rather than less.”

    P2P uses less bandwidth for the content distributor and offloads that bandwidth, multiplied, onto others. It uses less bandwidth for Vuze, such that Vuze no longer needs to spend the millions of dollars that YouTube spends on server bandwidth. Vuze would rather offload that onto the broadband companies and let them pay the entire freight: not only the downstream, but the use of other people’s infrastructure to deliver content to the rest of the world.

    P2P is so inefficient in terms of excessive network load that the P4P working group is working to make it more network-aware and avoid unnecessary and overly long paths. This fantasy that P2P is “efficient” is getting out of hand. Sure, it’s “efficient” for a freeloader like Vuze that wants to avoid paying distribution costs, but it’s not efficient for the entities Vuze freeloads from.

  49. “Die moderne Frau kocht ohne Sau.” (“The modern woman cooks without a sow.”)
    I disagree with Seth and Jacob 😉

  50. A COUPLE THOUGHTS:

    (1) P2P like BitTorrent only uses bandwidth equal to whatever is being transferred. Thus, if everyone were sharing text files (e.g., books), P2P would not even be a blip on the radar, because text files are teeny-tiny things (just a few kilobytes per book).

    (2) Therefore if the pipes are getting clogged, it is not the fault of the protocol but of the CONTENT being transferred. Most P2P content is video, and said video takes up huge amounts of space (1 gig for an SD movie; 5-10 gigs for an HD movie).

    (3) Which means that even if P2P were outlawed, Comcast and other providers would still have a problem with other video-related programs such as:

    – AppleTV (downloadable video)
    - iTunes (ditto)
    – NBC.com, FOX.com, CW.com and other streaming tv sites
    – youtube.com, googlevideo.com, et cetera
    – and on and on

    Banning P2P protocols is NOT going to solve the problem. There are other sources that also provide video and eat a lot of bandwidth, so the problem still exists.

    Comcast needs to deal with the ACTUAL problem (insufficient pipe to transfer on-demand videos). Comcast needs to deal with the congestion, or else risk their users not being able to access iTunes video, NBC.com video, youtube.com video, and on and on.

    (4) Forgery

    - I have an envelope here stamped with “Federal Aviation Administration.” If I sent out that envelope, pretending to be the government’s FAA representative, I’d find myself in a lot of trouble.

    It’s forgery to pretend to be somebody else.

    Which is what Comcast is doing when they send out packets pretending to be the remote person I’m trying to talk to. They are forging somebody else’s address/letterhead (just as if they used an FAA-stamped envelope) and pretending to be somebody else.

    It’s identity theft.

  51. @ Gabriel J. Michael

    http://www.freedom-to-tinker.com/?p=1218#comment-379421

    This has already been run through once, and I believe my proposed solution is implementable on their existing platform (or would require only minor changes). It doesn’t require a change of business model, it doesn’t involve packet forging, and within that business model it is about as “fair” as you can get.

  52. “If I have a 150 amp 110V power system hookup, I can run it flat out, and it is the power company’s job to find me the power I am demanding. When the aggregate load gets too high, the power company may brown me out, cutting the power I can draw.”

    If you are running a data center, they can’t brown you out. The computer supplies will pull constant power until the input voltage dips and the UPS + backup generator kicks in. In effect, the power grid can either supply the full demand of the data center or cut it off and supply nothing (which is why any decent-size data center needs a backup generator). When the data center drops off the grid, everyone else on the grid will see a massive surge as that power redistributes itself.

    Every power company I’ve ever heard of charges for usage. It would be unthinkable to hand out a fixed-size power connection and tell the consumer to “just go nuts on this”.

  53. It seems to me the solutions we see being proposed basically break down into four categories:

    1. Implement the network infrastructure necessary to provide the level of service that is being offered: i.e., upgrade so that everyone, or at least more people, can use their connection fully.

    2. Reduce the level of service being offered to a point where the current infrastructure can handle it: i.e., stop offering 16 Mbps down / 1 Mbps up connections (Comcast actually advertises this) if you can’t deliver it.

    3. Redefine the pricing model: e.g., switch to a $x / GB model, or offer tiered services (web/e-mail/IM only is $x / mo.; “full” internet access is $3x / mo.), etc.

    4. Use questionable methods, such as these RST packets or completely blocking particular protocols, to avoid having to implement any of the above solutions.

    And it would seem that Comcast has gone this latter route.

  54. “Isn’t it the case that the only way to cleanly interrupt a TCP stream is to send a packet claiming to be from the other side?”

    Hal,

    No, see RFC 792 “Internet Control Message Protocol” (STD 5: ICMP). And see RFC 1122 “Requirements for Internet Hosts — Communication Layers” (STD 3).

    Note, though, that despite the requirement that some ICMP errors MUST be passed up to the transport layer, actual deployed code may work differently. Some stacks may silently discard ICMP for a variety of reasons, both semi-justifiable and clueless. Middleboxes may also have an effect.

    Fwiw, I’d call everyone’s attention to the well-known case of ICMP Source Quench.
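
    To illustrate the alternative those RFCs point to: a middlebox that wants to refuse a connection can send an ICMP Destination Unreachable under its own address instead of impersonating the peer. A minimal scapy sketch with made-up addresses (bearing in mind, per the caveat above, that some stacks discard ICMP errors):

        from scapy.all import IP, ICMP, TCP, send

        middlebox  = "192.0.2.1"      # the ISP device speaks as itself
        subscriber = "198.51.100.20"
        peer       = "203.0.113.10"

        # An ICMP error embeds the IP header (plus 8 bytes) of the
        # offending packet so the stack can match it to a connection.
        offending = IP(src=subscriber, dst=peer) / TCP(sport=51413, dport=6881)

        # Type 3 / code 13: destination unreachable, communication
        # administratively prohibited. No forged source address needed.
        err = IP(src=middlebox, dst=subscriber) / ICMP(type=3, code=13) / offending
        send(err)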

  55. We’ve already gone through this with the electrical power companies. For years, they argued against co-generation and alternative power sources because system stability required a central power generating core and a power using periphery. The power companies lost, and the entire business has been restructured with generation and transmission entities being separate and distinct. The internet has implicitly done this restructuring for the cable companies, but the corporations still have their active fantasy lives.

    It comes down to a consumer issue. What is the definition of a 1Mb/s download, 0.5Mb/s upload line? Any naive interpretation recognizes that I can download 86,400,000,000 bits per day, assuming there is something out there that can pump me those bits. If I have a 150amp 110V power system hookup, I can run it flat out, and it is the power companies job to find me the power I am demanding. When the aggregate load gets too high, the power company may brown me out, cutting the power I can draw. If I am a big power user, like an aluminum smelter, I might have a contract to cut my demand when asked, in exchange for a better overall rate.

    If Comcast and the other cable companies cannot provide the bit rates they advertise for technical reasons, they need to refine their terms of service and explain exactly what they are delivering. Forging packets to drop connections is the same as randomly imposing blackouts. A lot of work was done on computer utility metrics back in the 1960s and 1970s. My satellite provider states explicit maximums and explains its throttling policy quite clearly. It isn’t great, but they only have so many satellites and I have to share. Maybe someone should drag out some of that work, apply for a few patents, now that the ideas have become novel and non-obvious, and we can develop a forward-looking model for ISPs.

  56. @Hal: “So what else can someone do who interrupts the channel than engage in this so-called ‘forgery’?”

    They can drop packets, which in fact is what happens naturally when a connection is overloaded (and Comcast claims overloaded connections are what inspired this mess in the first place). TCP is designed to back off when the packets start dropping.
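
    For a sense of what drop-based shaping looks like, here is a minimal token-bucket sketch in Python. It is illustrative only (real shapers live in the kernel or on dedicated hardware), but it shows the mechanism: over-rate packets are simply dropped, the sender’s TCP stack slows down on its own, and nothing is forged.

        import time

        class TokenBucket:
            """Drop packets that exceed a sustained rate plus a burst allowance."""

            def __init__(self, rate_bytes_per_s, burst_bytes):
                self.rate = rate_bytes_per_s
                self.capacity = burst_bytes
                self.tokens = burst_bytes
                self.last = time.monotonic()

            def allow(self, packet_len):
                now = time.monotonic()
                # Refill tokens in proportion to elapsed time, up to the burst cap.
                self.tokens = min(self.capacity,
                                  self.tokens + (now - self.last) * self.rate)
                self.last = now
                if packet_len <= self.tokens:
                    self.tokens -= packet_len
                    return True    # forward the packet
                return False       # drop it; TCP will back off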

  57. There’s a glaring flaw in Comcast’s argument that no one seems to have noticed so far: the amount of bandwidth that’s “available” to an individual P2P program is *not* what people have asked Comcast to increase.

    Yes, it may be true that if you increase a customer’s upstream cap from 768 kbps to 1.5 Mbps, he’ll just start using twice as much upstream bandwidth. That’s what the previous commenters are arguing about, but it’s irrelevant.

    Comcast sells a service with a certain amount of upstream bandwidth — let’s say 1 Mbps — and they count on the fact that most people won’t want to use the whole 1 Mbps that they’re paying for, which means they can put 20 customers on a shared line that has less than 20 Mbps of capacity. But now as P2P becomes more popular, more people want to use their whole 1 Mbps, and the capacity of that shared line has to increase.

    Mary Bono Mack responds as if people were demanding that Comcast increase the 1 Mbps figure, which is flat-out wrong. The problem isn’t that individual caps are too low, it’s that Comcast doesn’t have enough real bandwidth to provide the advertised 1 Mbps to everyone who wants it. If they build up their network *without* increasing caps, that problem will be solved.
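
    A quick back-of-the-envelope in Python makes the distinction concrete (the shared-segment capacity is a made-up figure):

        per_user_cap_mbps = 1.0      # what each customer is sold
        customers = 20
        segment_capacity_mbps = 8.0  # hypothetical shared-line capacity

        ratio = customers * per_user_cap_mbps / segment_capacity_mbps
        print(f"oversubscription: {ratio:.1f}:1")        # -> 2.5:1

        for active in (4, 8, 16, 20):
            share = min(per_user_cap_mbps, segment_capacity_mbps / active)
            print(f"{active} active users -> {share:.2f} Mbps each")

    Raising the 1 Mbps cap changes nothing here; only raising the segment capacity lets everyone actually get the speed they were sold.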

  58. It’s going to be even more depressing when the FCC accepts this crappy argument and rules “no harm, no foul.”

  59. I’ve wondered about this forgery claim. Isn’t it the case that the only way to cleanly interrupt a TCP stream is to send a packet claiming to be from the other side? AFAIK there is nothing in the protocol that lets a packet say, this is from someone other than a given peer, but we are interrupting the connection to that peer, so don’t bother trying to send any more packets there. Rather, the IP address and port number serve to the TCP client software as identifiers for a particular logical channel. The only way to send channel-specific information to the TCP software is by putting the IP address and port number into the “from” fields of the packet, isn’t that the case? So what else can someone do who interrupts the channel than engage in this so-called “forgery”?

  60. I’ll jump in on the side that P2P does use all the bandwidth available to it. I’ll admit that I’ve not tested the limiting case of infinite bandwidth; I doubt anyone else has, or will in the near future. In the more commonplace situation, replacing a finite link with a larger finite link results in:

    * more peers discovering this node is fast
    * the users discovering that files move through faster, so they request more files
    * requests for larger files, since human patience is the one constant

    As funchords put it: “… and now the evil truth. Replace the P2P with any internet technology, and that statement remains true. That’s right — gaming applications, instant messaging, web applications, video on demand services, streaming radio — they’re all designed to use as much bandwidth as is available.”

    All told, yes, I’d agree that as bandwidth becomes more widely available, people find stuff to fill it up faster. Games and the web are limited by game designers and web designers, who usually have some approximate knowledge of efficiency; besides, they are aiming for a wide market and don’t want to make their applications too much of a hog. With streaming radio, you can still only listen to one channel at a time.

    P2P is worse because people just search out big lists of “want to have” files and fill up their queue and walk away leaving it chugging.

    At the end of the day, if you sell someone a fat pipe and you advertise it as “unlimited,” then you should expect people to use it as advertised. Suppose you sell someone a fast car and you keep paying for the gas no matter how far they drive… suddenly they take long tours across the country where before they only drove to work and back. It’s kind of obvious what is going to happen.

  61. Is this the same congressional Bono who wants perpetual copyright?

  62. “Anyone who has worked at an ISP knows that torrents and most P2P software does indeed use as much bandwidth as they can.”

    There is a difference between a program using as much bandwidth as it _can_ versus using as much bandwidth as is _available_, which is what the original claim was (“P2P applications are designed to consume as much bandwidth as is available”).

    The latter claim really doesn’t make any sense when you think about it. Reductio ad absurdum: it implies that if you gave BitTorrent infinite bandwidth, it would use it all. In order for that to be true, there would have to be infinite demand for whatever is being distributed, which is not true.

  63. Anyone who has worked at an ISP knows that torrents and most P2P software do indeed use as much bandwidth as they can. I don’t know how anyone can claim otherwise. When you are seeding a file and there are enough leechers, you will max out your upload and continue to do so until every leecher is running at maximum download. With the ratio of leechers to seeds at 100:1 or worse, it’s a joke to say that P2P does not eat up any bandwidth you give it.

    Having said that, Comcast still doesn’t have a defense. So what if P2P eats up bandwidth? If you are going to sell internet with unlimited downloads, tough. In Australia we have to pay for how many gigs we download, so we never have problems like this; people pay for what they use, which is fair.

  64. Perhaps Comcast’s argument is disingenuous rather than just false…

    If you look at this from a purely technical perspective, their argument is clearly wrong. They say they can’t increase bandwidth because the p2p _protocols_ will consume it all. But as discussed above, p2p is inherently more efficient and doesn’t work like this.

    But if you look at this from a political perspective, it probably is fair to say that if they increase bandwidth, p2p _users_ will consume more bandwidth. It’s not the protocol, but the users.

    This is inherently a political problem. Comcast are hiding behind a false technical argument, but their actual argument is political – they’re just not prepared to admit it.

    I know this is a technical blog, but there’s no denying this is one area where technology and politics collide.

  65. Robb Topolski wrote:

    “It gets worse — the quote again, Alex … “P2P applications are designed to consume as much bandwidth as is available” … and now the evil truth. Replace the P2P with any internet technology, and that statement remains true. That’s right — gaming applications, instant messaging, web applications, video on demand services, streaming radio — they’re all designed to use as much bandwidth as is available. There is nothing special that P2P applications do — they all open sockets using their various TCP/IP stacks. Give any one of those applications 1 KB/s of additional pipe, and it will consume it.”

    I don’t think that’s at all accurate. It would be more accurate to say that programs use as much bandwidth as is necessary. Excluding BitTorrent for the moment, the applications you cite – gaming, IM, video and audio streaming – all have limited bandwidth needs. Playing Counter-Strike requires a finite amount of bandwidth. If you provide twice the necessary amount, it will not magically expand to use all that capacity. This is even more obvious with IM, an extremely low-bandwidth application, comparatively. Streaming needs enough bandwidth to handle the stream – it will not expand to use more unless the stream is higher quality.

    BitTorrent is bandwidth-intensive. If you provide it more bandwidth, AND there are more peers available, AND it is allowed to initiate further connections, AND it is not bandwidth throttled in the actual client, then it will use more bandwidth. But even here, it will not use an infinite amount, or even indefinitely expand. Obviously, it will only receive about as much as the file size being downloaded. As for uploading, it will continue to upload until a ratio is met, or until all connected “leechers” have the file as well, or until you tell it to stop uploading.

    Furthermore, BitTorrent’s bandwidth usage is limited by the bandwidth of all the connected peers – even if you have a 10 Mbps connection, unless there are hundreds or thousands of peers, it is unlikely you will be able to saturate it, simply because there are not enough fast peers to connect to.
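
    To put that last point in code: under the simplifying assumption that your download rate is capped by the sum of the connected peers’ upload rates, one line shows why a fat pipe can sit mostly idle (all numbers hypothetical):

        def effective_download_mbps(my_downstream, peer_upstreams):
            # You cannot receive faster than the peers can collectively send.
            return min(my_downstream, sum(peer_upstreams))

        # A 10 Mbps pipe, eight peers each sending us ~0.125 Mbps:
        # the swarm, not the pipe, is the bottleneck.
        print(effective_download_mbps(10.0, [0.125] * 8))   # -> 1.0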

    “There is nothing special that P2P applications do — they all open sockets using their various TCP/IP stacks.”

    That doesn’t make any sense. I’m no computer scientist, but I’m under the impression that there is typically one IP stack implemented per operating system, and that various programs use various protocols in the protocol suite to do what they need.

  66. “P2P applications are designed to consume as much bandwidth as is available”

    I wonder what the “bandwidth throttle” in BitTorrent, Azureus, etc. does.
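
    (Such a throttle is usually just pacing. Here is a minimal sketch of the idea in Python; it is an assumption-laden illustration, not any particular client’s code:)

        import time

        def paced_send(sock, data, max_bytes_per_s, chunk=4096):
            """Send data on a connected socket, sleeping between chunks
            so the average rate stays at or below max_bytes_per_s."""
            for i in range(0, len(data), chunk):
                start = time.monotonic()
                sock.sendall(data[i:i + chunk])
                elapsed = time.monotonic() - start
                budget = chunk / max_bytes_per_s  # time this chunk should take
                if elapsed < budget:
                    time.sleep(budget - elapsed)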

    I know that downloads, streaming and P2P take a big bite out of a provider’s bandwidth. I support an ISP that throttles excess traffic the proper way, by dropping packets. Sending RSTs stinks, as the Lotus Notes users have found out. Comcast severely messed things up, and their denial didn’t help the issue.

    “Never ascribe to malice that which is adequately explained by incompetence.” I suggest adding “Comcast competence” to the English language.

  67. It gets worse — the quote again, Alex … “P2P applications are designed to consume as much bandwidth as is available” … and now the evil truth. Replace the P2P with any internet technology, and that statement remains true. That’s right — gaming applications, instant messaging, web applications, video on demand services, streaming radio — they’re all designed to use as much bandwidth as is available. There is nothing special that P2P applications do — they all open sockets using their various TCP/IP stacks. Give any one of those applications 1 KB/s of additional pipe, and it will consume it. –funchords

  68. Actually, P2P systems, working at their best, use less bandwidth than a central delivery system.

    Consider a 1 GB Linux distro hosted at ubuntu.com and two neighbours on the same ISP who both want it. They could download it the old way, by connecting to ubuntu.com. In that case ubuntu.com sends out 2 GB, 2 GB comes into the ISP, and 1 GB goes down each pipe to each user.

    Now consider a BitTorrent download. In this case 1 GB goes out of ubuntu.com (very good for them) and 1 GB comes into the ISP (good for them). As before, 1 GB goes down each pipe to each user.

    The other difference is that 500 MB goes up each pipe from each user, and the 1 GB that didn’t come in the main pipe to the ISP still transits the ISP’s internal backbone. The 500 MB that came up from each user is upstream bandwidth that otherwise largely sits unused.

    Well, almost. On a DSL ISP this is pretty true. Under DOCSIS, the upstream is badly managed, so cable ISPs don’t handle it as well as DSL. On a wireless ISP, the upstream and downstream come from the same pool, so it does have an effect.
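
    The arithmetic generalizes. A small Python sketch under the same idealized assumptions (every peer on one ISP, perfect swarming, made-up numbers):

        def traffic_gb(file_gb, n_users, swarm):
            """Return (origin egress, ISP ingress, upload per user) in GB."""
            if not swarm:                    # everyone fetches from the origin
                return file_gb * n_users, file_gb * n_users, 0.0
            # Ideal swarm: one external copy; peers exchange the rest
            # locally, splitting the upload work evenly.
            return file_gb, file_gb, file_gb * (n_users - 1) / n_users

        print(traffic_gb(1.0, 2, swarm=False))   # -> (2.0, 2.0, 0.0)
        print(traffic_gb(1.0, 2, swarm=True))    # -> (1.0, 1.0, 0.5)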

  69. James Bailey says

    George Ou explaining that Mac OS X users have their computers frequently compromised but just don’t know it:

    http://talkback.zdnet.com/5208-10533-0.html?forumID=1&threadID=18366&messageID=356678&start=101

    This is not a rational person. For him to be used as a reliable source is sad.

    James

  70. As much as I disagree with what Comcast was doing, I tend to agree with dr2chase. Get this out of the FCC!

    I first started working on congestion control in datagram networks in 1978 when I took over managing the DECnet architecture. It didn’t take me long to recognize that the problem was more a research problem than an engineering problem, and one of my first steps was to engage some of Digital Equipment’s researchers, including Raj Jain and K.K. Ramakrishnan.

    Thirty years have gone by. Networks are more than 1000 times faster. Demand still exceeds capacity on occasion. With the benefit of progress in theory and practice, I would characterize the problem today as primarily an engineering problem, but there are still aspects that are far from being completely understood.

    What disturbs me is that greedy special interests of all flavors are converting this into a political problem. As one who has watched decades of technological progress and decades of political decline, this doesn’t strike me as a favorable development.

  71. “If you build it, they will come” only applies up to a point. To use a traffic analogy (as Comcast does in its testimony), building a bigger highway might attract more people to use it, and thus not permanently reduce congestion, but that’s no reason not to build bigger highways. Without investing in greater capacity, how does Comcast plan to handle the massive bandwidth consumption of movie streaming services such as Netflix’s “Watch Instantly” or movie rental services such as iTunes? These things are only going to get more popular.

    Secondly, regardless of whether Comcast wants to call these RST packets “forged” or not, people are reporting being unable to seed torrents to non-Comcast users. If the reason you use BitTorrent is to distribute a file efficiently, Comcast’s practice makes it nigh impossible to do so. This is not simply traffic shaping, this is breaking the functionality of one particular program.

  72. And then it will have to explain to its shareholders why it lost so many customers.

    Oh, wait, they already have to do that.

  73. Anything more authoritative than necessary to convince the (current) FCC is wasted effort. Comcast has a duty to its stockholders not to spend more than necessary on this.

  74. Sigh … I don’t know if this is really a technical issue, and I’m again getting the feeling that …

    Oh, why bother, what am I thinking, of course it’s not a technical issue, and I’m only going to get in trouble by not going along with the ISP-ARE-CENSORS!!!!!!! party line 🙁 🙁 🙁

  75. I would not go so far as to say that they can tell the difference, or that they will even check. Look at some of the more ridiculous patents, and it is the patent office’s *job* to check these things.

  76. One way of avoiding wasted bandwidth would be to abolish copyright.

    Then the instantaneous diffusion of the Internet could be better harnessed for distribution of new art instead of compensating for a ridiculous prohibition against unauthorised copies of old art.

  77. Sadly, there’s truth, and then there’s political truth. The difference is in the amount of money each will put into one’s campaign coffers over the next two, four, or six years. Which one do you think Mary Bono Mack is interested in?