December 22, 2024

Comcast Gets Slapped, But the FCC Wisely Leaves Its Options Open

The FCC’s recent Comcast action—whose full text is not yet available, though it has been described in a press release and in statements from each commissioner—is a lesson in the importance of technological literacy for policymaking. The five commissioners’ views, as reflected in their statements, correlate strongly with the degree of understanding of the fact pattern that each statement reveals. Both dissenting commissioners, it turns out, materially misunderstood the technical facts on which they assert their decisions were based. But the majority, despite technical competence, avoided a bright-line rule—and that might itself turn out to be great policy.

Referring to what she introduces as the “BitTorrent-Comcast controversy,” dissenting Commissioner Tate writes that after the FCC began to look into the matter, “the two parties announced on March 27 an agreement to collaborate in managing web traffic and to work together to address network management and content distribution.” Where private parties can agree among themselves, Commissioner Tate sensibly argues, regulators ought to stand back. But as Ed and others have pointed out before, this has never been a two-party dispute. BitTorrent, Inc., which negotiated with Comcast, doesn’t have the power to redefine the open BitTorrent protocol whose name it shares. Anyone can write client software to share files using today’s version of the BitTorrent protocol – and no agreement between Comcast and BitTorrent, Inc. could change that. Indeed, if the protocol were modified to buy overall traffic reductions by slowing downloads for individual users, one might expect many users to decline to switch. For this particular issue to be resolved among the parties, Comcast would have to negotiate with all (or at least most) of the present and future developers of BitTorrent clients. A private or mediated resolution among the primary actors involved in this dispute has not taken place and isn’t, as far as I know, currently being attempted. So while I share Ms. Tate’s wise preference for mediation and regulatory reticence, I don’t think her view in this particular case is available to anyone who fully understands the technical facts.

The other dissenting commissioner, Robert McDowell, shares Ms. Tate’s confusion about who the parties to the dispute are, chastising the majority for going forward after Comcast and BitTorrent, Inc. announced their differences settled. He’s also simply confused about the technology, writing that “the vast majority of consumers” “do not use P2P software to watch YouTube” when (a) YouTube isn’t delivered over P2P software, so its traffic numbers don’t speak to the P2P issue and (b) YouTube is one of the most popular sites on the web, making it very unlikely that the “vast majority of consumers” avoid the site. Likewise, he writes that network management allows companies to provide “online video without distortion, pops, and hisses,” analog problems that aren’t faced by digital media.

The majority decision, in finding Comcast’s activities collectively to be over the line from “reasonable network management,” leaves substantial uncertainty about where that line lies, which is another way of saying that the decision makes it hard for other ISPs to predict what kinds of network management, short of what Comcast did, would prompt sanctions in the future. For example, what if Comcast or another ISP were to use the same tools only to target BitTorrent files that appear, after deep packet inspection, to violate copyright? The commissioners were at pains to emphasize that ISPs are free to police their networks for illegal content. But a filter designed to impede transfer of most infringing video would be certain to generate a significant number of false positives, and those false positives (that is, transfers of legal video impeded by the filter) would act as a thumb on the scales in favor of traditional cable service, raising the same body of concerns about competition that the commissioners cite as a background factor informing their decision to sanction Comcast. We don’t know how that one would turn out.

McDowell’s dissent highlights the ambiguity of the finding. He writes: “This matter would have had a better chance on appeal if we had put the horse before the cart and conducted a rulemaking, issued rules and then enforced them… The majority’s view of its ability to adjudicate this matter solely pursuant to ancillary authority is legally deficient as well. Under the analysis set forth in the order, the Commission apparently can do anything so long as it frames its actions in terms of promoting the Internet or broadband deployment.”

Should the commissioners have adopted a “bright line” rule, as McDowell’s dissent suggests? The Comcast ruling’s uncertainty guarantees a future of envelope-pushing and resource-intensive, case-by-case adjudication, whether in regulatory proceedings or the courts. But I actually think that might be the best available alternative here. It preserves the Commission’s ability to make the right decision in future cases without having to guess, today, what precise rule would dictate those future results. (On the flip side, it also preserves the Commission’s ability to make bad choices in the future, especially if diminished public interest in the issue increases the odds of regulatory capture.) If Jim Harper is correct that Martin’s support is a strategic gambit to tie the issue up while broadband service expands, this suggests that Martin believes, as I do, that uncertainty about future interventions is a good way to keep ISPs on their best behavior.

Comments

  1. Er, blocks like 74.14.* are blocks of 65,536 addresses, to be precise. Blocks like 74.* have 16,777,216 addresses, enough to cover a major city’s contribution to an ISP’s customer base, including suburbs and outlying towns. The ISP would hypothetically use a block of that size for a big metropolitan area, a large number of rural and semi-urban townships, or even an entire small state (with the usual proviso about getting an address from a designated location-associated pool if possible but any address at all if necessary). Within that, the smaller blocks of 65K would go to townships, boroughs, or similar areas with tens of thousands of customers, and the blocks of 256 to city blocks, individual apartment buildings, or neighborhoods, depending on customer density. Again with the proviso that these determine allocation preference for those customers’ IPs, but if necessary the customer will be assigned any IP from a larger enclosing pool, up to and including the ISP’s total allocation of IPs. (Perhaps such “out-of-town” IPs would be given shorter-than-usual lease times.)

  2. “Making P2P, VoIP, and HTTP share wires efficiently is a hard problem.”

    That’s no skin off the customer’s nose. It’s the provider’s problem, and they need to solve it to deliver what they promised to their customers.

    If they find they are unable to do so, then the customers have a cause of action against them for promising more than they could deliver.

    It’s that simple.

    As for P4P, am I the only one who gets the heebie-jeebies? Inviting ISPs to have a more active role in P2P activity seems like a disaster waiting to happen. Not only do ISPs have vested interests (e.g. often being a division of a company that also is in the content business in some way, either delivery or production) that lead them to want to sabotage P2P, but it also opens up the ISP to greater liability to or leverage from the **AA and their ilk.

    P2P clients could just prefer sources more the more initial octets the source IP has in common with the host IP, i.e. if your net-facing IP is 186.117.11.95 it would prefer 186.117.11.* to 186.117.non-11.*, the latter to 186.non-117.*, and those to non-186.* addresses. (A minimal sketch of this ranking appears at the end of this comment.)

    This should typically result in favoring other users of the same ISP as sources. My own DSL provider tends to hand out 74.14.* addresses to its users, for example, and maybe a few other 74.* blocks. Blocks of 65,536 addresses.

    ISPs could aid such a scheme by assigning IPs in a manner that clusters blocks of 256 and blocks of 65,536 addresses according to their network topology, WITHOUT any direct involvement in P2P activity. For example, if my DSL provider put people in my neck of the woods in 74.14.* (the cluster of towns here has maybe 64K people total) and in my neighborhood (say) 74.14.106.*, a hypothetical P2P client with the outlined behavior would look for sources among my neighbors first, and then regionally, probably corresponding to the DSL network’s topology and relieving gateway bottlenecks. (The ISP’s DHCP would show a PREFERENCE for assigning particular IPs in particular geographical areas; needless to say, it would get a customer an IP from successively wider pools if it needed to due to heavy local usage at the time. Similarly the P2P app would show a PREFERENCE for using “nearby” IPs, but if it had to it would use sources from successively wider address groupings.)

    The P2P app would need to know its network-facing IP, but it needs to anyway; modern P2P apps mostly can get it from the router by UPnP, in the case that the computer’s own self-reported IP isn’t the network-visible address, and otherwise need manual configuration of the app and of the router’s port-forwarding to work well. This configuration issue wouldn’t get any worse; it just wouldn’t change at all.
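    To make the idea concrete, here is a minimal sketch in Python of the prefix ranking described above; the function names and example addresses are purely illustrative, and a real client would fold this into its existing peer-selection logic.

    # Hypothetical sketch: rank candidate peers by how many leading octets they
    # share with our own network-facing IPv4 address, so same-/24 peers sort
    # ahead of same-/16, then same-/8, then unrelated addresses.

    def shared_leading_octets(ip_a, ip_b):
        """Count how many initial dotted-quad octets two IPv4 addresses share."""
        count = 0
        for octet_a, octet_b in zip(ip_a.split("."), ip_b.split(".")):
            if octet_a != octet_b:
                break
            count += 1
        return count

    def rank_peers(my_ip, candidate_ips):
        """Return candidates sorted so that 'closer' prefixes come first."""
        return sorted(candidate_ips,
                      key=lambda peer: shared_leading_octets(my_ip, peer),
                      reverse=True)

    # Example from the comment: a host at 186.117.11.95 tries 186.117.11.*
    # sources first, then 186.117.*, then 186.*, then everything else.
    print(rank_peers("186.117.11.95",
                     ["74.14.106.2", "186.117.11.40", "186.117.200.7", "186.9.9.9"]))
    # -> ['186.117.11.40', '186.117.200.7', '186.9.9.9', '74.14.106.2']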

  3. Speaking about errors of fact, I’ll call your bluff on a few.

    “The ICMP mechanism for overload management, Source Quench, never worked. This mechanism requires the router that drops packets to send a message to the sender of each dropped packet, but when the network is congested these messages contribute to the overload condition.”

    From RFC-792:

    If a gateway discards a datagram, it may send a source quench message to the internet source host of the datagram. A destination host may also send a source quench message if datagrams arrive too fast to be processed.

    Note the use of the word “MAY”; there is no requirement for one ICMP message per dropped packet, which leaves a range of strategies open.

    “So that mechanism was discarded in 1987 when the Jacobson Algorithm was patched into the entire Internet.”

    Van Jacobson’s work was only published in 1988 (check here http://ee.lbl.gov/papers/congavoid.pdf ) and RFC-1122 made it a “MUST” for endpoints to implement in 1989 but certainly never precluded the use of other mechanisms for congestion control. As for the “entire internet” being patched, I’m sure it took many years for all endpoints to adopt the algorithm. If you want to look at RFC-2001:

    The assumption of the algorithm is that packet loss caused by damage is very small (much less than 1%), therefore the loss of a packet signals congestion somewhere in the network between the source and destination. There are two indications of packet loss: a timeout occurring and the receipt of duplicate ACKs.

    This is quite a big assumption. For any channel, you can increase speed at a penalty of lower reliability or you can improve the signal at a penalty of lower speed. Thus, there is always a tradeoff for a design, where some finite packet loss will give the best overall throughput. Thanks to the above assumption being built into TCP stacks all over the world, designers now have no choice but to set that tradeoff in favour of very low packet loss (thus set their speed to be conservative). Yes, I’m aware that suitable FEC makes up some of the difference, but it is also widely accepted that endpoints should be able to adapt their congestion control algorithms. Linux supports a pluggable system of algorithms (vegas, westwood, hybla, etc.), all minor variations on a theme.

    It is common practice on satellite links to insert a silent TCP proxy that completely takes over congestion control and window control, for the purpose of increasing throughput.

    “Jacobson has a number of problems of its own, including cycling, flow unfairness, and the inability to use links to more than 75% of capacity. So it was replaced by ECN to correct all these evident flaws in 2000, but Microsoft hasn’t seen fit to enable it.”

    ECN is far from standard, and Van Jacobson’s work is far from being replaced. The default in Linux (/proc/sys/net/ipv4/tcp_ecn) is to switch it off.

    I’ve certainly seen the standard Linux TCP stack (i.e. not using ECN) run a link at something close to 95% of capacity (providing the link has low intrinsic packet loss). Is there a reference for where the 75% figure comes from?

  4. “The congestion problems caused by P2P on networks designed for downloading are not going to be resolved by some hare-brained scheme requiring changes in the Windows TCP/IP stack, they’re going to be resolved primarily at layer two and by economic means.”

    May I humbly point out that RST injection is not a layer two solution? It is a layer four (TCP) hack trying to approximate the result of a proper layer two solution. In other words, a kludgy workaround for hardware that isn’t up to the job.

    “Their network is sized and provisioned to handle a given volume of web traffic, which has a particular signature in terms of traffic direction, quantity, and duty cycle. P2P has a very different traffic signature than web browsing.”

    So did Comcast make it clear to their customers that what they were selling was only suitable for web browsing? Did they sell the service as a general purpose Internet connection?

    DOCSIS was never really designed for the Internet in the first place. It was designed for the set-top-box revolution (the revolution that never came). We were all going to be entertained by interactive video, with fat streams leaving the hub and only the button-press from the remote controller going back on the inroute. Then DOCSIS got adapted for web browsing because that’s what the customers seemed to want to do. But did anyone actually bother to go out and ask the customer?

    Should the buyer take the risk when buying a service in good faith that (on face value) looks like a general purpose service, or should the supplier take the risk on making a promise that includes a lot of guesswork?

    My feeling is that Comcast is reaping the rewards, so Comcast should be taking the risks (on the basis that risk and reward should be linked). The consumer never got a choice when it came to network design and provisioning, so why should they take the hit when things don’t fit together?

  5. “The idea being that this way if two peers within the same ISP’s network each have data the other is looking for, they will get it from each other rather than both trying to access a higher-bandwidth remote site. This limits the effect of P2P clients on the backbone, so it’s good if your bottleneck is the ISP’s outside link.”

    A good idea, especially for ADSL providers. However, useless for Comcast and their shared-medium cable network. The bottleneck is not the backbone link, it is the cable in the street.

  6. If the gateway sends outgoing packets from each subscriber at a fixed maximum rate, dropping packets that overflow a 1MB or so buffer, there will be no way any customer, no matter how well or badly behaved, can put more traffic on the outside network than is allowed through the gateway. The customer will achieve optimal throughput if the amount of unacknowledged packet data is kept below that buffer size, but even if the traffic overflows the buffer there’s no way an even-remotely-reasonably-behaving TCP client is going to put much more data on the local network than is going to make it through the gateway. The most likely bad situation for a poorly-written client would be bursts of communication, followed by packet loss, followed by a pause (waiting for acknowledgments that don’t arrive), followed by another burst, etc. Bad for the client, but relatively harmless to the overall network bandwidth, since each retransmitted cluster of packets will be preceded by almost a second of dead time. (A simplified simulation of this kind of fixed-rate, bounded-buffer gateway appears at the end of this comment.)

    The flow-control management packets would provide a way for customers to optimize their own throughput. If they ignore them and their many-simultaneous-socket TCP performance suffers as a result, that’s their problem.

    I’m not clear why you think it’s better for a gateway to deliberately break client software than for it to simply limit the speed at which packets are delivered. The latter course of action encourages P2P software authors to fit their software to resource constraints, whereas the RST approach encourages them to work around that method of attack in such a way as to diminish the useful purposes RST packets serve.

    BTW, I’m unaware of any NAT routers that will send RST packets on existing connections when their buffers are full; I thought they simply responded to a SYN with an RST, effectively indicating “connection refused” rather than “existing connection reset”. As for dial-up gateways, a loss of carrier would imply the computer no longer exists to the outside world. Neither situation is applicable to Comcast’s RST attacks.
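    To illustrate the mechanism described above (not Comcast’s actual implementation), here is a simplified per-subscriber gateway queue in Python: packets drain at a fixed rate, and anything that would overflow the buffer is dropped. The 1MB buffer figure comes from this thread; the drain rate is an arbitrary placeholder.

    from collections import deque

    class SubscriberQueue:
        """Toy model of a fixed-rate, bounded-buffer gateway queue."""

        def __init__(self, drain_rate_bytes_per_sec=256_000, buffer_limit=1_000_000):
            self.drain_rate = drain_rate_bytes_per_sec   # hypothetical fixed rate
            self.buffer_limit = buffer_limit             # ~1MB, as discussed above
            self.queue = deque()                         # pending packet sizes, in bytes
            self.queued_bytes = 0
            self.dropped_bytes = 0

        def enqueue(self, packet_size):
            """Accept a packet if it fits in the buffer; otherwise drop it."""
            if self.queued_bytes + packet_size > self.buffer_limit:
                self.dropped_bytes += packet_size
                return False
            self.queue.append(packet_size)
            self.queued_bytes += packet_size
            return True

        def drain(self, elapsed_seconds):
            """Forward queued packets at the fixed rate; return bytes sent."""
            budget = int(self.drain_rate * elapsed_seconds)
            sent = 0
            while self.queue and self.queue[0] <= budget - sent:
                size = self.queue.popleft()
                sent += size
                self.queued_bytes -= size
            return sent

    However the subscriber’s software behaves, no more than drain_rate bytes per second ever reaches the outside network; a well-behaved TCP stack simply keeps its unacknowledged data under the buffer limit and sees optimal throughput.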

  7. So many people are making so many idiotic comments I don’t have near enough time to respond to them, so let me just try to clarify a few issues that seem to be over the heads of the mass audience (I don’t include Bryan Feir in the idiotic commenter category.)

    Comcast, like all cable Internet services, runs a DOCSIS network. DOCSIS has some unique problems relative to multiple users transmitting upstream data at the same time, related to the protocol for network acquisition. Their network is sized and provisioned to handle a given volume of web traffic, which has a particular signature in terms of traffic direction, quantity, and duty cycle. P2P has a very different traffic signature than web browsing. Making P2P, VoIP, and HTTP share wires efficiently is a hard problem.

    Jumping up and down and screaming “forgery” while getting all red in the face is not helpful. Network engineers care about what works, not about what sounds good in the typical media sound bite. If RST injection solves a particular problem better than other methods, we’re going to use it, and if it doesn’t, we aren’t. When NATs overflow their mapping tables, we inject RST packets. When dial-up connections are taken down, we inject RST packets. When middleboxes have their TCP state tables overflow, we inject RST packets. Deal with it.

    The congestion problems caused by P2P on networks designed for downloading are not going to be resolved by some hare-brained scheme requiring changes in the Windows TCP/IP stack, they’re going to be resolved primarily at layer two and by economic means. Upgrading the Windows TCP/IP stack is going to be necessary in the long run, but MS has proved itself reluctant to take action on that front.

  8. Bryan Feir says

    @Spudz:

    Just a couple of explanations, other people feel free to correct me:

    P4P: Proactive network Provider Participation for P2P. A protocol update that includes the ISP as part of the P2P setup; I believe the primary focus is to include more network layout information in the peer selection process, so that P2P transfers will tend to favour more ‘local’ connections for data that can be obtained locally. The idea being that this way if two peers within the same ISP’s network each have data the other is looking for, they will get it from each other rather than both trying to access a higher-bandwidth remote site. This limits the effect of P2P clients on the backbone, so it’s good if your bottleneck is the ISP’s outside link.

    BOF: Birds Of a Feather. A relatively informal discussion group. The IETF tends to do a lot of its initial design work in BOFs, where people with similar interests get together to hash out ideas before things actually get written down. Most IETF ideas are based on ‘rough consensus (and running code)’, so the idea of the BOFs is to get enough of an agreement to actually start moving forward.

  9. “It’s a huge fallacy to assert that the collection of RFCs that we currently have addresses all possible problems with Internet engineering. New RFCs are being written as we speak because there are always new challenges, and the Internet Way is to address challenges by experimentation, and only to write RFCs based on successful experiments.”

    Most other ISPs can handle the congestion problem. Using “fair queue” buffers (Cisco has them, Linux has them, probably everyone else too) helps a lot, also making an effort to collect similar types of users together into contention groups, plus applying a bit of sensible QOS to lower the priority of the really “go nuts” type users. It’s a pretty harsh “experiment” to just decide to start spoofing packets and trying to clobber customer connections. In all honesty, I’m willing to bet that there never will be an RFC that ever advocates spoofed RST as a congestion management strategy (and I think we all know that).

    “If a TCP stack shrinks the outgoing window size in the presence of congestion, the delayed responses to packets would serve to establish the speed of transmission even without ongoing packet loss.”

    Packeteer devices (and similar) will mangle packets to artificially shrink the window in order to throttle a source. More polite than an RST, still a protocol violation. You could use a combination of window size mangling and deliberately delaying the ACK packets to gracefully slow P2P users (but it would require a reasonably advanced box). You could also count the total number of open connections and just drop SYN packets after it hit a maximum (still RFC compliant, but slightly nasty). There are lots of things that can be done. (A rough sketch of the connection-counting idea appears at the end of this comment.)

    “If a particular customer is transferring an excessive amount of data, it would be entirely proper (from a standards-compliance perspective) for a gateway to buffer those packets and retransmit them at an acceptable speed; if the backlog of packets in the gateway gets too large, the gateway can drop packets.”

    Yes, this is the completely correct way to do the job. It requires an actively managed rate limiter at the customer’s end of the link, and the rate limiter needs to be fed knowledge of the current network capacity.

    We went through this at length some months back. Comcast oversold their bandwidth by a large ratio (and failed to make this clear to customers) so everyone got given big headroom, despite the network having nowhere near the capacity to handle that. In practice, somewhere between 10:1 and 20:1 bandwidth contention is the workable limit.

    The crappy modems that Comcast used could not properly manage QOS and fair division of a shared medium. Since the inroute was the most limited and most congested, adding QOS and rate limiting at Comcast’s end of the pipe was useless, because the damage was already done by blasting the shared media. Put in simple terms, their tools were not up to the job, and they had promised more than they could deliver. The RST packets were a kludgy workaround.

    However, there is no need for regulation to fix this, all it needs is a diverse market in telcos (so customers have a choice) and the ability for users who discover problems to tell other users about the problems. Free Market to the rescue.
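    For what it’s worth, the “count the open connections and drop extra SYNs” option mentioned above is easy to sketch. The packet-classification details below are hypothetical, and a real implementation would live in a router or firewall data path rather than in Python.

    from collections import defaultdict

    MAX_CONNECTIONS_PER_SUBSCRIBER = 100   # arbitrary illustrative limit

    # source IP -> set of flows, each identified by (src_port, dst_ip, dst_port)
    open_connections = defaultdict(set)

    def handle_packet(src_ip, src_port, dst_ip, dst_port, is_syn, is_fin_or_rst):
        """Return True to forward the packet, False to silently drop it."""
        flow = (src_port, dst_ip, dst_port)
        conns = open_connections[src_ip]
        if is_syn and flow not in conns:
            if len(conns) >= MAX_CONNECTIONS_PER_SUBSCRIBER:
                return False               # drop the SYN; the sender retries and backs off
            conns.add(flow)
        elif is_fin_or_rst:
            conns.discard(flow)
        return True

    Dropping a SYN stays within normal congested-gateway behavior, which is what makes it gentler than forging an RST on an established connection.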

  10. “Two different groups held BOFs on P2P …”

    BOFs?

  11. BTW, the only way TCP flow control would start to have a problem if the router buffered up to 1MB of data and delivered outgoing packets at whatever rate was allowable, would be if the total window size on active connections exceeded 1MB. If a TCP stack shrinks the outgoing window size in the presence of congestion, the delayed responses to packets would serve to establish the speed of transmission even without ongoing packet loss. Further, even if a particular protocol would start performing poorly under certain circumstances, could it perform any worse than if a non-conformant router started forging RST packets?

  12. If a person’s outgoing Internet traffic flows through a pipe that delivers them at a fixed maximum rate, the amount of traffic the person puts onto the outside net isn’t going to exceed that rate no matter what the person does. If the person is using protocol stacks which can effectively deal with the congestion via whatever means, the person will manage to achieve net throughput close to the allowable bandwidth. If the person is using protocol stacks which do not handle congestion well, the person’s net throughput may be far below optimum, thus providing an incentive to use a better protocol stack.

    Personally, I suspect that the optimum approach would be for Comcast to send UDP packets on a reserved port number for flow-control purposes; clients could ignore this data if they wanted to, but their throughput would not be as good as what they could achieve by honoring it. Sending a flow-control packet for every incoming packet would obviously be wasteful, but if the router had a 1MB buffer for each subscriber, and sent an “XOFF” when the buffer was 70% full, another at 90%, and periodic ones at 100%, and sent an “XON” when the buffer dropped below 30%, another at 10%, and periodic XON packets at 0%, the buffer would tend to stay between 30% and 70%, with an average of two flow-control packets every ~400K of data. (A sketch of this kind of threshold signaling appears at the end of this comment.)

    The key point is that nothing in the protocols authorizes anyone except the endpoints of a connection to send RST packets, and the design intention of such packets is to break connections rather than merely slow them down.
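    As a sketch of the threshold signaling proposed above (the thresholds, and the idea of a reserved UDP port for the hints, are this comment’s proposal rather than an existing standard), the gateway side might track its buffer fill level like this:

    class FlowControlSignaler:
        """Toy gateway-side signaler: XOFF hints as the buffer fills, XON as it drains."""

        XOFF_LEVELS = (0.70, 0.90)   # fill fractions that trigger an XOFF hint
        XON_LEVELS = (0.30, 0.10)    # fill fractions that trigger an XON hint

        def __init__(self, buffer_size=1_000_000):
            self.buffer_size = buffer_size
            self.armed_xoff = set(self.XOFF_LEVELS)   # thresholds ready to fire
            self.armed_xon = set(self.XON_LEVELS)

        def update(self, used_bytes):
            """Return the flow-control hints to emit at the current fill level."""
            fill = used_bytes / self.buffer_size
            hints = []
            for level in sorted(self.armed_xoff):
                if fill >= level:
                    self.armed_xoff.discard(level)
                    hints.append("XOFF@%d%%" % round(level * 100))
            for level in sorted(self.armed_xon, reverse=True):
                if fill <= level:
                    self.armed_xon.discard(level)
                    hints.append("XON@%d%%" % round(level * 100))
            if fill >= 1.0:
                hints.append("XOFF@100%")   # repeated every update while saturated
            if fill <= 0.0:
                hints.append("XON@0%")      # repeated every update while empty
            # Re-arm any threshold the fill level has crossed back over, so each
            # crossing emits exactly one hint.
            self.armed_xoff.update(lvl for lvl in self.XOFF_LEVELS if fill < lvl)
            self.armed_xon.update(lvl for lvl in self.XON_LEVELS if fill > lvl)
            return hints

    A buffer oscillating between roughly 30% and 70% full then generates about two hints per cycle, which matches the “two flow-control packets every ~400K of data” estimate above.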

  13. Supercat, you’re way out of the loop. The ICMP mechanism for overload management, Source Quench, never worked. This mechanism requires the router that drops packets to send a message to the sender of each dropped packet, but when the network is congested these messages contribute to the overload condition. So that mechanism was discarded in 1987 when the Jacobson Algorithm was patched into the entire Internet.

    Jacobson has a number of problems of its own, including cycling, flow unfairness, and the inability to use links to more than 75% of capacity. So it was replaced by ECN to correct all these evident flaws in 2000, but Microsoft hasn’t seen fit to enable it.

    ECN fails to address the problem of multiple flows, as it’s a flow-based mechanism like Jacobson. For that reason, Briscoe proposed Re-ECN to enable better per-user fairness and congestion control. That’s still an area of study, with no RFC yet adopted.

    It’s a huge fallacy to assert that the collection of RFCs that we currently have addresses all possible problems with Internet engineering. New RFCs are being written as we speak because there are always new challenges, and the Internet Way is to address challenges by experimentation, and only to write RFCs based on successful experiments.

    Two different groups held BOFs on P2P at the recent Dublin meeting of the IETF, and that would not have happened if the consensus were that P2P has already been solved in existing RFCs. People who argue that Jacobson elegantly solves the problem of congestion aren’t paying attention. People who argue that ICMP solves it are simply negligent.

  14. The RFC documents upon which Internet protocols are based specify how gateways and routers are supposed to behave. If a particular customer is transferring an excessive amount of data, it would be entirely proper (from a standards-compliance perspective) for a gateway to buffer those packets and retransmit them at an acceptable speed; if the backlog of packets in the gateway gets too large, the gateway can drop packets. Further, there are standard protocols by which a gateway can use ICMP messaging to tell a machine to slow down even without it having to guess, based upon packet loss, that it should do so.

    Properly-written protocol suites will detect the slowdown caused by congestion and transmit less aggressively. Improperly-written suites may not back off so nicely, but a suite which backs off properly will net better performance for the user than one which doesn’t (the latter suite would have a much higher rate of packet loss than the former; trying to compensate for packet loss with aggressive retransmission would cause the customer’s bandwidth to get diluted with duplicate packets).

    If Comcast’s goal was to manage bandwidth, the proper approach would have been to use the techniques provided in Internet standards. Sending RST packets on behalf of other machines’ connections is not authorized in any RFC I’m aware of.

  15. Kevin and the Dems contradicted themselves multiple times in their press statements, and committed numerous errors of fact.

    Who do you think you’re kidding?

    (and BTW: Vuze is one of the parties who filed petitions against Comcast, David, and the combination of Free Press, Public Knowledge, and random others was the other party. The FCC ruled on both parties’ petitions.)

  16. David Robinson says

    Richard: To clarify, my post reacts to the FCC documents released on August first, which address themselves to Comcast and do not discuss the Vuze petition.

  17. This post is a troll, and a crude one at that. Kevin & the Dems said many things about the Vuze petition, none of which demonstrated “technical competence.” In fact, their statements are a maze of contradictions.

    But it was funny, so I have to give David Robinson some credit.

  18. “There are certain copyrighted works, such as the music copyrighted by RIAA member companies, for which it may be the case that no license has been granted for the distribution of the work via a peer-to-peer network.”

    I would expect that for just about every copyrighted work, it MAY be the case that no license has been granted. Since every work is copyrighted automatically these days, what you are saying is that each packet needs an explicit license. Forgive me for saying, but that is an unworkable suggestion.

    “In such cases, recognizing a copy of a given work provides prima facie indication of a possible copyright violation.”

    Again, this applies as much for any packet, unless you can explicitly recognise that you do have a license for each particular packet.

    “That means a copyright owner could, as a result of deep packet inspection that led to a certain work being recognized, form and act on a good faith belief that certain packets flowing over the network constitute copyright infringement.”

    A copyright owner might be able to recognise a work that they have ownership of and might further (in principle) have a full list of all licenses for that work (let’s pretend that fair use is dead, shall we) but a network operator could never achieve such a thing.

    “The good faith belief that infringement is occurring may, in a given case, turn out not to be correct. But my understanding is that the belief itself is enough to justify an effort to force the user to explain the copying (i.e., to force the user to assert a fair use or other defense against infringement, if such a defense is available.)”

    Again, what you are demanding is an explicit license for every packet, which is simply not going to happen.

  19. P4P?

  20. Don’t forget the contractual problem. Comcast’s customers contracted for unrestricted (vs. walled-garden) Internet service, and Comcast not only failed to supply this per the contract but lied about it. Their initial story was network management at peak times, and finally it turned out that P2P was throttled all the time.

  21. Do you think Comcast and BitTorrent’s working “together to address network management and content distribution” is really about switching BT’s applications over to P4P instead of P2P?

    Both parties have clear incentives for doing that. Comcast gets lower costs, BT gets potentially higher throughput. A lot is riding on the technical details of P4P and how it’s implemented, of course, but it’s at least possible that P4P might be a viable alternative which makes everybody happier. Bootstrapping a new protocol from scratch is hard, so the best way to get momentum behind P4P would be to get a major P2P client like BitTorrent signed up and using it.

  22. David Robinson says

    Albert — There are certain copyrighted works, such as the music copyrighted by RIAA member companies, for which it may be the case that no license has been granted for the distribution of the work via a peer-to-peer network. In such cases, recognizing a copy of a given work provides prima facie indication of a possible copyright violation. That means a copyright owner could, as a result of deep packet inspection that led to a certain work being recognized, form and act on a good faith belief that certain packets flowing over the network constitute copyright infringement.

    The good faith belief that infringement is occurring may, in a given case, turn out not to be correct. But my understanding is that the belief itself is enough to justify an effort to force the user to explain the copying (i.e., to force the user to assert a fair use or other defense against infringement, if such a defense is available.)

  23. Albert ARIBAUD says

    I am puzzled by those “BitTorrent files that appear, after deep packet inspection, to violate copyright”. As far as I understand, no content inspection can ascertain the legality of a file, because that legality comes from licensing, and deep packet inspection cannot take (possibly ad hoc) licensing into account. What am I missing?

  24. “Both dissenting commissioners, it turns out, materially misunderstood the technical facts on which they assert their decisions were based. But the majority, despite technical competence …”

    Sigh. I’m not even going to try. It won’t do any good :-(.

    Someone is wrong on the Internet …