November 25, 2024

Discrimination, Congestion, and Cooperation

I’ve been writing lately about the nuts and bolts of network discrimination. Today I want to continue that discussion by talking about how the Internet responds to congestion, and how network discrimination might affect that response. As usual, I’ll simplify the story a bit to spare you a lengthy dissertation on network management, but I won’t mislead you about the fundamental issues.

I described previously how network congestion causes Internet routers to discard some data packets. When data packets arrive at a router faster than the router can forward those packets along the appropriate outgoing links, the packets will pile up in the router’s memory. Eventually the memory will fill up, and the router will have to drop (i.e., discard) some packets.
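To make the picture concrete, here's a little sketch (in Python, with names and numbers I made up purely for illustration) of the drop-on-overflow behavior just described. A real router is vastly more complicated, but the essential logic really is this simple: if there's room in memory, queue the packet; if not, throw it away.

```python
from collections import deque

class Router:
    """Toy model of one outgoing link's queue, with simple tail drop."""

    def __init__(self, buffer_size=8):
        self.buffer = deque()            # packets waiting for the outgoing link
        self.buffer_size = buffer_size   # how many packets fit in memory
        self.dropped = 0                 # how many packets we've had to discard

    def receive(self, packet):
        """A packet arrives and wants to go out on this link."""
        if len(self.buffer) < self.buffer_size:
            self.buffer.append(packet)   # there's room: buffer it
        else:
            self.dropped += 1            # memory is full: drop the packet

    def forward(self):
        """The outgoing link is free: send the oldest waiting packet."""
        return self.buffer.popleft() if self.buffer else None
```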

Every dropped packet has some computer at the edge of the network waiting for it. Eventually the waiting computer and its communication partner will figure out that the packet must have been dropped. From this, they will deduce that the network is congested. So they will re-send the dropped packet, but in response to the probable congestion they will slow down the rate at which they transmit data. Once enough packets are dropped, and enough computers slow down their packet transmission, the congestion will clear up.

This is a very indirect way of coping with congestion – drop packets, wait for endpoint computers to notice the missing packets and respond by slowing down – but it works pretty well.
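The endpoint side of the deal can be sketched the same way. The snippet below is a loose caricature of the speed-up-slowly, slow-down-sharply behavior that well-behaved senders use; the class, the numbers, and the method names are mine, not a description of any real protocol stack.

```python
class CooperativeSender:
    """Caricature of an endpoint that keeps the congestion-control deal."""

    def __init__(self):
        self.rate = 1.0                      # sending rate, arbitrary units

    def on_ack(self):
        """A packet got through: cautiously speed up a little."""
        self.rate += 0.1

    def on_drop(self):
        """A packet was lost: assume congestion, cut the rate sharply,
        and (elsewhere) re-send the lost data."""
        self.rate = max(self.rate / 2, 0.1)
```

A defector, in the sense discussed below, would simply leave on_drop empty and keep blasting away at full speed.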

One interesting aspect of this system is that it is voluntary – the system relies on endpoint computers to slow down when they see congestion, but nothing forces them to do so. We can think of this as a kind of deal between endpoint computers, in which each one promises to slow down if its packets are dropped.

But there is an incentive to defect from this deal. Suppose that you defect – when your packets are dropped you keep on sending packets as fast as you can – but everybody else keeps the deal. When your packets are dropped, the congestion will continue. Then other people’s packets will be dropped, until enough of them slow down and the congestion eases. By ignoring the congestion signals you are getting more than your fair share of the network.

Despite the incentive to defect, most people keep the deal by using networking software that slows down as expected in response to congestion. Why is this? We could debate the reasons, but it seems safe to say that there is a sort of social contract by which users cooperate with their peers, and software vendors cooperate by writing software that causes users to keep the deal.

One of the reasons users comply, I think, is a sense of fairness. If I believe that the burdens of congestion control fall pretty equally on everybody, at least in the long run, then it seems fair to me to slow down my own transmissions when my turn comes. One time I might be the one whose packets get dropped, so I will slow down. Another time, by chance, somebody else’s packets may be dropped, so it will be their turn to slow down. Everybody gets their turn.

(Note: I’m not claiming that the average user has thought through these issues carefully. But many software providers have made decisions about what to do, and those decisions factor in users’ wants and needs. Software developers act as proxies for users in making these decisions. Perhaps this point will get more discussion in the comments.)

But now suppose that the network starts singling out some people and dropping their packets first. Now the burden of congestion control falls heavily on them – they have to slow down and others can just keep going. Suddenly the I’ll-slow-down-if-you-do deal doesn’t seem so fair, and the designated victims are more likely to defect from the deal and just keep sending data even when the network tells them to slow down.

The implications for network discrimination should now be pretty clear. If the network discriminates by sending misleading signals about congestion, and sending them preferentially to certain machines or certain applications, then the incentive for those machines and applications to stick to the social contract and do their share to control congestion will weaken. Will this lead to a wave of defections that destroys the Net? Probably not, but I can’t be sure. I do think this is something we should think about.

We should also listen to the broader lesson of this analysis. If the network discriminates, users and applications will react by changing their behavior. Discrimination will have secondary effects, and we had better think carefully about what they will be.

[Note for networking geeks: Yes, I know about RED, congestion signaling, non-TCP protocols that don’t do backoff, and so on. I hope you’ll agree that despite all of those real-world complications, the basic argument in this post is valid.]

Where to Go, and What to Read

We don’t have a “real” post today, just plugs for two good things.

(1) The NYU/Princeton interdisciplinary workshop on spyware will be next Thursday (evening) and Friday (day), in New York. It’s free and open to the public. Please let us know if you plan to come.

(2) Students in my course on Information Technology and Public Policy are writing lots of good stuff on the course blog. Every student has to post once a week. They deserve more readers, and we welcome comments and discussion. Non-students are welcome to join the course vicariously, by doing the same weekly reading as the students and participating in a weekly open discussion thread about the reading.

RIAA Says Future DRM Might "Threaten Critical Infrastructure and Potentially Endanger Lives"

We’re in the middle of the U.S. Copyright Office’s triennial DMCA exemption rulemaking. As you might expect, most of the filings are dry as dust, but buried in the latest submission by a coalition of big copyright owners (publishers, Authors’ Guild, BSA, MPAA, RIAA, etc.) is an utterly astonishing argument.

Some background: In light of the Sony-BMG CD incident, Alex and I asked the Copyright Office for an exemption allowing users to remove from their computers certain DRM software that causes security and privacy harm. The CCIA and Open Source and Industry Association made an even simpler request for an exemption for DRM systems that “employ access control measures which threaten critical infrastructure and potentially endanger lives.” Who could oppose that?

The BSA, RIAA, MPAA, and friends – that’s who. Their objections to these two requests (and others) consist mostly of lawyerly parsing, but at the end of their argument about our request comes this (from pp. 22-23 of the document, if you’re reading along at home):

Furthermore, the claimed beneficial impact of recognition of the exemption – that it would “provide an incentive for the creation of protection measures that respect the security of consumers’ computers while protecting the interests of the record labels” ([citation to our request]) – would be fundamentally undermined if copyright owners – and everyone else – were left in such serious doubt about which measures were or were not subject to circumvention under the exemption.

Hanging from the end of the above-quoted excerpt is a footnote:

This uncertainty would be even more severe under the formulations proposed in submissions 2 (in which the terms “privacy or security” are left completely undefined) or 8 [i.e., the CCIA request] (in which the boundaries of the proposed exemption would turn on whether access controls “threaten critical infrastructure and potentially endanger lives”).

You read that right. They’re worried that there might be “serious doubt” about whether their future DRM access control systems are covered by these exemptions, and they think the doubt “would be even more severe” if the “exemption would turn on whether access controls ‘threaten critical infrastructure and potentially endanger lives’.”

Yikes.

One would have thought they’d make awfully sure that a DRM measure didn’t threaten critical infrastructure or endanger lives, before they deployed that measure. But apparently they want to keep open the option of deploying DRM even when there are severe doubts about whether it threatens critical infrastructure and potentially endangers lives.

And here’s the really amazing part. In order to protect their ability to deploy this dangerous DRM, they want the Copyright Office to withhold from users permission to uninstall DRM software that actually does threaten critical infrastructure and endanger lives.

If past rulemakings are a good predictor, it’s more likely than not that the Copyright Office will rule in their favor.

Nuts and Bolts of Net Discrimination, Part 2

Today I want to continue last week’s discussion of how network discrimination might actually occur. Specifically, I want to talk about packet reordering.

Recall that an Internet router is a device that receives packets of data on some number of incoming links, decides on which outgoing link each packet should be forwarded, and sends packets on the appropriate outgoing links. If, when a packet arrives, the appropriate outgoing link is busy, the packet is buffered (i.e., stored in the router’s memory) until the outgoing link is available.

When an outgoing link becomes available, there may be several buffered packets that are waiting to be transmitted on that link. You might expect the router to send the packet that has been waiting the longest – a first-come, first-served (FCFS) rule. Often that is what happens. But the Internet Protocol doesn’t require routers to forward packets in any particular order. In principle a router can choose any packet it likes to forward next.

This suggests an obvious mechanism for discriminating between two categories of traffic: a network provider can program its routers to always forward high-priority packets before low-priority packets. Low-priority packets feel this discrimination as an extra delay in passing through the network.
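Here's a rough sketch of what such a router might do, again in Python with invented names; the two-queue structure and the "high"/"low" labels are assumptions of the sketch, not a description of any particular vendor's equipment.

```python
from collections import deque

class PriorityRouter:
    """Toy router that always forwards high-priority packets first."""

    def __init__(self):
        self.high = deque()   # buffered high-priority packets
        self.low = deque()    # buffered low-priority packets

    def receive(self, packet, priority):
        (self.high if priority == "high" else self.low).append(packet)

    def forward(self):
        """The outgoing link is free: low-priority packets wait as long
        as any high-priority packet is queued ahead of them."""
        if self.high:
            return self.high.popleft()
        if self.low:
            return self.low.popleft()
        return None
```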

Recall that last week, when the topic was discrimination by packet-dropping, I distinguished between minimal dropping, which drops low-priority packets first but only drops a packet when necessary, and non-minimal dropping, which intentionally drops some low-priority packets even when it is possible to avoid dropping anything. The same kind of distinction applies to today’s discussion of discrimination by delay. A minimal form of delay discrimination only delays low-priority packets when it is necessary to delay some packet – for example when multiple packets are waiting for a link that can only transmit one packet at a time. There is also a non-minimal form of delay discrimination, which may delay a low-priority packet even when the link it needs is available. As before, a net neutrality rule might want to treat minimal and non-minimal delay discrimination differently.
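In code, the difference between the two forms amounts to a small change in a scheduler like the sketch above. The snippet below assumes a router object with high and low queues as before; the hold_time is an arbitrary number I chose for illustration.

```python
import time

def forward(router, non_minimal=False, hold_time=0.05):
    """Pick the next packet when the link becomes free.

    Minimal delay discrimination: a low-priority packet waits only when
    a high-priority packet is contending for the same link.
    Non-minimal: low-priority packets are also held back artificially
    even when the link is sitting idle.
    """
    if router.high:
        return router.high.popleft()      # minimal: defer to high priority
    if router.low:
        if non_minimal:
            time.sleep(hold_time)         # extra delay while the link is idle
        return router.low.popleft()
    return None
```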

One interesting consequence of minimal delay discrimination is that it hurts some applications more than others. Internet traffic is usually bursty, with periods of relatively low activity punctuated by occasional bursts of packets. If you’re browsing the Web, for example, you generate little or no traffic while you’re reading a page, but there is a burst of traffic when your browser needs to fetch a new page.

If a network provider is using minimal delay discrimination, and the high-priority traffic is bursty, then low-priority traffic will usually sail through the network with little delay, but will experience noticeable delay whenever there is a burst of high-priority traffic. The technical term for this kind of on-again, off-again delay is “jitter”.
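One simple way to quantify jitter is to look at how uneven the gaps between packet arrivals are. The sketch below uses a crude max-minus-min measure and made-up timestamps, just to illustrate the idea; real VoIP systems use a more refined running estimate.

```python
def jitter(arrival_times_ms):
    """Crude jitter measure: spread of the gaps between consecutive arrivals."""
    gaps = [b - a for a, b in zip(arrival_times_ms, arrival_times_ms[1:])]
    return max(gaps) - min(gaps)

# A steady stream, one packet every 20 ms: no jitter.
print(jitter([0, 20, 40, 60, 80]))   # 0

# Same average rate, but one packet got stuck behind a burst of
# high-priority traffic: much more jitter.
print(jitter([0, 20, 70, 80, 80]))   # 50
```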

Some applications can handle jitter with no problem. If you’re downloading a big file, you care more about the average packet arrival rate than about when any particular packet arrives. If you’re browsing the web, modest jitter will cause, at worst, a slight delay in downloading some pages. If you’re watching a streaming video, your player will buffer the stream so jitter won’t bother you much.

But applications like voice conferencing or Internet telephony, which rely on steady streaming of interactive, realtime communication, can suffer a lot if there is jitter. Users report that VoIP services like Vonage and Skype can behave poorly when subjected to network jitter.

And we know that residential ISPs are often phone companies, or offer home phone service themselves, so they may have a special incentive to discriminate against competing Internet phone services. Causing jitter for such services, whether by minimal or non-minimal delay discrimination, could be an effective tactic for an ISP that wants to drive customers away from independent Internet telephone services.

There is some anecdotal evidence to suggest that Comcast’s residential Internet customers may be having trouble using the Vonage Internet phone service because of jitter problems.

Let’s assume for the sake of argument that these reports are accurate – that Comcast’s network has high jitter, and that this is causing problems for Vonage users. What might be causing this? One possibility is that Comcast is using delay discrimination, either minimal or non-minimal, with the goal of causing this problem. Many people would want rules against this kind of behavior.

(To be clear: I’m not accusing Comcast of anything. I’m just saying that if we assume that Comcast’s network causes high jitter, and if we assume that high jitter does cause Vonage problems, then we should consider the possibility that Comcast is trying to cause the jitter.)

Another possibility is that Comcast isn’t trying to cause problems for Vonage users, and Comcast’s management of its network is completely reasonable and nondiscriminatory, but for reasons beyond Comcast’s control its network happens to have higher jitter than other networks have. Perhaps the jitter problems are temporary. In this case, most people would agree that net neutrality rules shouldn’t punish Comcast for something that isn’t really its fault.

The most challenging possibility, from a policy standpoint (still assuming that the jitter problem exists), is that Comcast didn’t take any obvious steps to cause the problem but is happy that it exists, and is subtly managing its network in a way that fosters jitter. Network management is complicated, and many management decisions could impact jitter one way or the other. A network provider who wants to cause high jitter can do so, and might have pretextual excuses for all of the steps it takes. Can regulators tell this kind of stratagem apart from fair and justified engineering decisions that happen to cause a little temporary jitter?

Surely some discriminatory strategies are so obvious, and the offered engineering pretexts so weak, that we could block or punish them without worrying about being wrong. But there would be hard cases too. Net neutrality regulation, even if justified, will inevitably lead to some difficult line-drawing.

USACM Policy Statement on DRM

I’m pleased to post here a new policy statement on DRM, issued by USACM, the U.S. public policy committee of ACM, the leading professional society for computer scientists. It’s a balanced yet strong statement of principles that can be applied to many public policy questions relating to DRM. I helped to draft it, and I support it. USACM has posted the original document in PDF format.

USACM is a valuable resource on infotech policy issues. And they have a useful policy blog too.

USACM Policy Recommendations on Digital Rights Management

February 2006

BACKGROUND:

New technologies have remade the consumer entertainment landscape, allowing creative content – such as movies, television, and radio programming – to be delivered in digital form. Because exact copies of digital content can be widely and quickly distributed, some content distributors are employing technical protection systems to manage consumer uses of copyrighted content, often characterized as “digital rights management (DRM)” technology. DRM systems are intended to enable distributors to manage consumer uses of content. In theory, this may prevent the making and distribution of infringing copies of digital works. However, use of these technologies has created controversy, especially as regards issues of “fair use” and public interest. In some cases, DRM technologies have been found to undermine consumers’ rights, infringe customer privacy, and damage the security of consumers’ computers. One notable example was the software distributed with compact discs in 2005 by Sony BMG. Sony subsequently withdrew the product, which had created security and privacy vulnerabilities for consumers’ computers, because of resulting public criticism and legal action.

The marketplace should determine the success or failure of DRM technologies but, increasingly, content distributors are turning to legislatures or the courts to erect new legal mandates to replace long-standing copyright regimes. DRM systems should be mechanisms for reinforcing existing legal constraints on behavior, not mechanisms for creating new legal constraints. Striking a balance among consumers’ rights, public interest, and protection of valid copyright interests is no simple task for technologists or policymakers. For this reason, USACM has developed the following recommendations on this important issue.

RECOMMENDATIONS:

Competition: Public policy should enable a variety of DRM approaches and systems to emerge, should allow and facilitate competition among them, and should encourage interoperability among them. No proprietary DRM technology should be mandated for use in any medium.

Copyright Balance: Because lawful use (including fair use) of copyrighted works is in the public’s best interest, a person wishing to make lawful use of copyrighted material should not be prevented from doing so. As such, DRM systems should be mechanisms for reinforcing existing legal constraints on behavior (arising from copyright law or by reasonable contract), not mechanisms for creating new legal constraints. Appropriate technical and/or legal safeguards should be in place to preserve lawful uses in cases where DRM systems cannot distinguish lawful uses from infringing uses.

Consumer Protection: DRM should not be used to interfere with the rights of consumers. Neither should DRM technologies interfere with any technology or use of consumer systems that are unrelated to the copyrighted items being managed. Policymakers should actively monitor actual use of DRM and amend policies as necessary to protect these rights and interests.

Privacy and Consent: Public policy should ensure that DRM systems may collect, store, and redistribute private information about users only to the extent required for their proper operation, that they follow fair information practices, and that they are subject to informed consent by users.

Research and Public Discourse: DRM systems and policies should not interfere with legitimate research, with discourse about research results, or with other matters of public concern. Laws and regulations concerning DRM should contain explicit provisions to protect this principle.

Targeted Policies: Public policies meant to reinforce copyright should be limited to applications where copyright interests are actually at stake. Laws and regulations concerning DRM should have limited scope, applying only where there is a realistic risk of copyright infringement.