
Archives for March 2006

RIAA Says Future DRM Might "Threaten Critical Infrastructure and Potentially Endanger Lives"

We’re in the middle of the U.S. Copyright Office’s triennial DMCA exemption rulemaking. As you might expect, most of the filings are dry as dust, but buried in the latest submission by a coalition of big copyright owners (publishers, Authors’ Guild, BSA, MPAA, RIAA, etc.) is an utterly astonishing argument.

Some background: In light of the Sony-BMG CD incident, Alex and I asked the Copyright Office for an exemption allowing users to remove from their computers certain DRM software that causes security and privacy harm. The CCIA and Open Source and Industry Association made an even simpler request for an exemption for DRM systems that “employ access control measures which threaten critical infrastructure and potentially endanger lives.” Who could oppose that?

The BSA, RIAA, MPAA, and friends – that’s who. Their objections to these two requests (and others) consist mostly of lawyerly parsing, but at the end of their argument about our request comes this (from pp. 22-23 of the document, if you’re reading along at home):

Furthermore, the claimed beneficial impact of recognition of the exemption – that it would “provide an incentive for the creation of protection measures that respect the security of consumers’ computers while protecting the interests of the record labels” ([citation to our request]) – would be fundamentally undermined if copyright owners – and everyone else – were left in such serious doubt about which measures were or were not subject to circumvention under the exemption.

Hanging from the end of the above-quoted excerpt is a footnote:

This uncertainty would be even more severe under the formulations proposed in submissions 2 (in which the terms “privacy or security” are left completely undefined) or 8 [i.e., the CCIA request] (in which the boundaries of the proposed exemption would turn on whether access controls “threaten critical infrastructure and potentially endanger lives”).

You read that right. They’re worried that there might be “serious doubt” about whether their future DRM access control systems are covered by these exemptions, and they think the doubt “would be even more severe” if the “exemption would turn on whether access controls ‘threaten critical infrastructure and potentially endanger lives’.”

Yikes.

One would have thought they’d make awfully sure that a DRM measure didn’t threaten critical infrastructure or endanger lives, before they deployed that measure. But apparently they want to keep open the option of deploying DRM even when there are severe doubts about whether it threatens critical infrastructure and potentially endangers lives.

And here’s the really amazing part. In order to protect their ability to deploy this dangerous DRM, they want the Copyright Office to withhold from users permission to uninstall DRM software that actually does threaten critical infrastructure and endanger lives.

If past rulemakings are a good predictor, it’s more likely than not that the Copyright Office will rule in their favor.

Nuts and Bolts of Net Discrimination, Part 2

Today I want to continue last week’s discussion of how network discrimination might actually occur. Specifically, I want to talk about packet reordering.

Recall that an Internet router is a device that receives packets of data on some number of incoming links, decides on which outgoing link each packet should be forwarded, and sends packets on the appropriate outgoing links. If, when a packet arrives, the appropriate outgoing link is busy, the packet is buffered (i.e., stored in the router’s memory) until the outgoing link is available.

When an outgoing link becomes available, there may be several buffered packets that are waiting to be transmitted on that link. You might expect the router to send the packet that has been waiting the longest – a first-come, first-served (FCFS) rule. Often that is what happens. But the Internet Protocol doesn’t require routers to forward packets in any particular order. In principle a router can choose any packet it likes to forward next.

This suggests an obvious mechanism for discriminating between two categories of traffic: a network provider can program its routers to always forward high-priority packets before low-priority packets. Low-priority packets feel this discrimination as an extra delay in passing through the network.
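
To make this concrete, here's a minimal sketch in Python of an output-link scheduler. Everything in it is my own illustration, not code from any real router: it just shows how a queue can be drained first-come, first-served, or with strict priority so that low-priority packets wait whenever a high-priority packet is present.

```python
class OutputLink:
    """Toy scheduler for one outgoing link on a router (illustrative only)."""

    def __init__(self, strict_priority=False):
        self.strict_priority = strict_priority
        self.queue = []    # buffered (is_high_priority, packet) pairs, oldest first

    def enqueue(self, packet, high_priority=False):
        self.queue.append((high_priority, packet))

    def next_packet(self):
        """Choose the packet to transmit when the link becomes free."""
        if not self.queue:
            return None
        if self.strict_priority:
            # Discriminatory ordering: any waiting high-priority packet
            # is sent before every low-priority packet.
            for i, (high, packet) in enumerate(self.queue):
                if high:
                    return self.queue.pop(i)[1]
        # FCFS (or no high-priority packet is waiting): send the oldest packet.
        return self.queue.pop(0)[1]

link = OutputLink(strict_priority=True)
link.enqueue("low-1")
link.enqueue("low-2")
link.enqueue("high-1", high_priority=True)
print(link.next_packet())   # high-1 jumps ahead of both low-priority packets
print(link.next_packet())   # low-1
```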

Recall that last week, when the topic was discrimination by packet-dropping, I distinguished between minimal dropping, which drops low-priority packets first but only drops a packet when necessary, and non-minimal dropping, which intentionally drops some low-priority packets even when it is possible to avoid dropping anything. The same kind of distinction applies to today’s discussion of discrimination by delay. A minimal form of delay discrimination only delays low-priority packets when it is necessary to delay some packet – for example when multiple packets are waiting for a link that can only transmit one packet at a time. There is also a non-minimal form of delay discrimination, which may delay a low-priority packet even when the link it needs is available. As before, a net neutrality rule might want to treat minimal and non-minimal delay discrimination differently.
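
One way to picture the distinction is a slot-by-slot simulation, where the link can carry one packet per slot. In the sketch below (my own toy model; the "hold low-priority traffic on odd-numbered slots" rule is purely illustrative, standing in for whatever non-minimal policy a provider might choose), the minimal version delays the low-priority packet only while high-priority packets are actually competing for the link, while the non-minimal version lets the link sit idle rather than serve the low-priority packet:

```python
def run_slots(high_arrivals, low_arrivals, non_minimal=False, total_slots=50):
    """One packet can be transmitted per slot.  Arrival lists give the slot
    numbers (made-up inputs) at which packets show up.  Returns
    (arrival_slot, sent_slot) pairs for the low-priority packets."""
    hq, lq, sent = [], [], []
    for slot in range(total_slots):
        hq += [s for s in high_arrivals if s == slot]
        lq += [s for s in low_arrivals if s == slot]
        if hq:                                   # high priority always goes first
            hq.pop(0)
        elif lq:
            if non_minimal and slot % 2 == 1:
                continue                         # hold low traffic back; the link sits idle
            sent.append((lq.pop(0), slot))
    return sent

print(run_slots([0, 1, 2], [0]))                     # [(0, 3)]: delayed only by real conflicts
print(run_slots([0, 1, 2], [0], non_minimal=True))   # [(0, 4)]: delayed past an idle slot too
```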

One interesting consequence of minimal delay discrimination is that it hurts some applications more than others. Internet traffic is usually bursty, with periods of relatively low activity punctuated by occasional bursts of packets. If you’re browsing the Web, for example, you generate little or no traffic while you’re reading a page, but there is a burst of traffic when your browser needs to fetch a new page.

If a network provider is using minimal delay discrimination, and the high-priority traffic is bursty, then low-priority traffic will usually sail through the network with little delay, but will experience noticeable delay whenever there is a burst of high-priority traffic. The technical term for this kind of on-again, off-again delay is “jitter”.
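
A toy simulation makes the jitter visible. The traffic pattern below is invented: a low-priority packet arrives every other slot, high-priority packets arrive in two bursts, and the link sends one packet per slot with strict priority. The low-priority delay is zero most of the time, then spikes right after each burst:

```python
from collections import deque

high_backlog = 0
low_queue = deque()
delays = []

for slot in range(50):
    if slot % 2 == 0:
        low_queue.append(slot)        # a low-priority packet arrives every other slot
    if slot in (10, 30):
        high_backlog += 4             # a burst of four high-priority packets
    if high_backlog:                  # strict priority: high-priority traffic goes first
        high_backlog -= 1
    elif low_queue:
        delays.append(slot - low_queue.popleft())   # how long this packet waited

print(delays)   # mostly 0, but 4, 3, 2, 1 right after each burst -- that's jitter
```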

Some applications can handle jitter with no problem. If you’re downloading a big file, you care more about the average packet arrival rate than about when any particular packet arrives. If you’re browsing the web, modest jitter will cause, at worst, a slight delay in downloading some pages. If you’re watching a streaming video, your player will buffer the stream so jitter won’t bother you much.
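
The streaming case tolerates jitter because of the playout buffer. Here's a tiny sketch (the arrival times and buffer settings are invented numbers) showing why: if the player waits a bit before starting playback, even packets that were held up by a burst still arrive before they're needed.

```python
# Invented arrival times (in seconds) for consecutive video packets; the later
# packets were held up by a burst of higher-priority traffic.
arrival_times = [0.0, 0.2, 0.4, 1.1, 1.2, 1.3, 1.4, 1.6, 1.8, 2.0]

PLAYOUT_INTERVAL = 0.2   # the player consumes one packet every 200 ms
STARTUP_DELAY = 0.8      # how long the player buffers before starting playback

late_packets = 0
for i, arrived in enumerate(arrival_times):
    needed_at = STARTUP_DELAY + i * PLAYOUT_INTERVAL
    if arrived > needed_at:          # packet showed up after its playout deadline
        late_packets += 1

print(late_packets)   # 0 -- the buffer absorbed the jitter
# With STARTUP_DELAY = 0.1, the same arrivals would miss several deadlines.
```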

But applications like voice conferencing or Internet telephony, which rely on steady streaming of interactive, real-time communication, can suffer a lot if there is jitter. Users report that VoIP services like Vonage and Skype can behave poorly when subjected to network jitter.

Residential ISPs are often phone companies themselves, or offer home phone service, so they may have a special incentive to discriminate against competing Internet phone services. Causing jitter for such services, whether by minimal or non-minimal delay discrimination, could be an effective tactic for an ISP that wants to drive customers away from independent Internet telephone services.

There is some anecdotal evidence to suggest that Comcast’s residential Internet customers may be having trouble using the Vonage Internet phone service because of jitter problems.

Let’s assume for the sake of argument that these reports are accurate – that Comcast’s network has high jitter, and that this is causing problems for Vonage users. What might be causing this? One possibility is that Comcast is using delay discrimination, either minimal or non-minimal, with the goal of causing this problem. Many people would want rules against this kind of behavior.

(To be clear: I’m not accusing Comcast of anything. I’m just saying that if we assume that Comcast’s network causes high jitter, and if we assume that high jitter does cause Vonage problems, then we should consider the possibility that Comcast is trying to cause the jitter.)

Another possibility is that Comcast isn’t trying to cause problems for Vonage users, and Comcast’s management of its network is completely reasonable and nondiscriminatory, but for reasons beyond Comcast’s control its network happens to have higher jitter than other networks have. Perhaps the jitter problems are temporary. In this case, most people would agree that net neutrality rules shouldn’t punish Comcast for something that isn’t really its fault.

The most challenging possibility, from a policy standpoint (still assuming that the jitter problem exists), is that Comcast didn't take any obvious steps to cause the problem but is happy that it exists, and is subtly managing its network in a way that fosters jitter. Network management is complicated, and many management decisions could impact jitter one way or the other. A network provider who wants to cause high jitter can do so, and might have pretextual excuses for all of the steps it takes. Can regulators tell this kind of stratagem apart from fair and justified engineering decisions that happen to cause a little temporary jitter?

Surely some discriminatory strategies are so obvious, and the offered engineering pretexts so weak, that we could block or punish them without worrying about being wrong. But there would be hard cases too. Net neutrality regulation, even if justified, will inevitably lead to some difficult line-drawing.

USACM Policy Statement on DRM

I’m pleased to post here a new policy statement on DRM, issued by USACM, the U.S. public policy committee of ACM, the leading professional society for computer scientists. It’s a balanced yet strong statement of principles that can be applied to many public policy questions relating to DRM. I helped to draft it, and I support it. USACM has posted the original document in PDF format.

USACM is a valuable resource on infotech policy issues. And they have a useful policy blog too.

USACM Policy Recommendations on Digital Rights Management

February 2006

BACKGROUND:

New technologies have remade the consumer entertainment landscape, allowing creative content – such as movies, television, and radio programming – to be delivered in digital form. Because exact copies of digital content can be widely and quickly distributed, some content distributors are employing technical protection systems to manage consumer uses of copyrighted content, often characterized as “digital rights management (DRM)” technology. DRM systems are intended to enable distributors to manage consumer uses of content. In theory, this may prevent the making and distribution of infringing copies of digital works. However, use of these technologies has created controversy, especially as regards issues of “fair use” and public interest. In some cases, DRM technologies have been found to undermine consumers’ rights, infringe customer privacy, and damage the security of consumers’ computers. One notable example was the software distributed with compact discs in 2005 by Sony BMG. Sony subsequently withdrew the product, which had created security and privacy vulnerabilities for consumers’ computers, because of resulting public criticism and legal action.

The marketplace should determine the success or failure of DRM technologies but, increasingly, content distributors are turning to legislatures or the courts to erect new legal mandates to replace long-standing copyright regimes. DRM systems should be mechanisms for reinforcing existing legal constraints on behavior, not as mechanisms for creating new legal constraints. Striking a balance among consumers’ rights, public interest, and protection of valid copyright interests is no simple task for technologists or policymakers. For this reason, USACM has developed the following recommendations on this important issue.

RECOMMENDATIONS:

Competition: Public policy should enable a variety of DRM approaches and systems to emerge, should allow and facilitate competition among them, and should encourage interoperability among them. No proprietary DRM technology should be mandated for use in any medium.

Copyright Balance: Because lawful use (including fair use) of copyrighted works is in the public’s best interest, a person wishing to make lawful use of copyrighted material should not be prevented from doing so. As such, DRM systems should be mechanisms for reinforcing existing legal constraints on behavior (arising from copyright law or by reasonable contract), not as mechanisms for creating new legal constraints. Appropriate technical and/or legal safeguards should be in place to preserve lawful uses in cases where DRM systems cannot distinguish lawful uses from infringing uses.

Consumer Protection: DRM should not be used to interfere with the rights of consumers. Neither should DRM technologies interfere with any technology or use of consumer systems that are unrelated to the copyrighted items being managed. Policymakers should actively monitor actual use of DRM and amend policies as necessary to protect these rights and interests.

Privacy and Consent: Public policy should ensure that DRM systems may collect, store, and redistribute private information about users only to the extent required for their proper operation, that they follow fair information practices, and that they are subject to informed consent by users.

Research and Public Discourse: DRM systems and policies should not interfere with legitimate research, with discourse about research results, or with other matters of public concern. Laws and regulations concerning DRM should contain explicit provisions to protect this principle.

Targeted Policies: Public policies meant to reinforce copyright should be limited to applications where copyright interests are actually at stake. Laws and regulations concerning DRM should have limited scope, applying only where there is a realistic risk of copyright infringement.

Nuts and Bolts of Network Discrimination

One of the reasons the network neutrality debate is so murky is that relatively few people understand the mechanics of traffic discrimination. I think that in reasoning about net neutrality it helps to understand how discrimination would actually be put into practice. That’s what I want to explain today. Don’t worry, the details aren’t very complicated.

Think of the Internet as a set of routers (think: metal boxes with electronics inside) connected by links (think: long wires). Packets of data get passed from one router to another, via links. A packet is forwarded from router to router, until it arrives at its destination.

Focus now on a single router. It has several incoming links on which packets arrive, and several outgoing links on which it can send packets. When a packet shows up on an incoming link, the router will figure out (by methods I won’t describe here) on which outgoing link the packet should be forwarded. If that outgoing link is free, the packet can be sent out on it immediately. But if the outgoing link is busy transmitting another packet, the newly arrived packet will have to wait – it will be “buffered” in the router’s memory, waiting its turn until the outgoing link is free.

Buffering lets the router deal with temporary surges in traffic. But if packets keep showing up faster than they can be sent out on some outgoing link, the number of buffered packets will grow and grow, and eventually the router will run out of buffer memory.

At that point, if one more packet shows up, the router has no choice but to discard a packet. It can discard the newly arriving packet, or it can make room for the new packet by discarding something else. But something has to be discarded.
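
Here's a minimal sketch of that situation, under the simplest possible assumptions: a tiny fixed-size buffer and a "tail drop" policy that discards the newly arriving packet when the buffer is full. (The buffer size and policy are just for illustration; real routers have much larger buffers and more elaborate policies.)

```python
BUFFER_SIZE = 4          # illustrative; a real router buffers far more packets

buffer = []              # packets waiting for the outgoing link, oldest first
dropped = []

def packet_arrives(packet):
    """Buffer the packet if there's room; otherwise something must be discarded."""
    if len(buffer) < BUFFER_SIZE:
        buffer.append(packet)
    else:
        dropped.append(packet)        # "tail drop": discard the new arrival

def link_becomes_free():
    """Transmit the oldest buffered packet (first-come, first-served)."""
    return buffer.pop(0) if buffer else None

for p in range(6):                    # six arrivals with no transmissions in between
    packet_arrives(p)
print(buffer, dropped)                # [0, 1, 2, 3] buffered, [4, 5] discarded
```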

(This is one illustration of the “best effort” principle, which is one of the clever engineering decisions that made the Internet feasible. The Internet will do its best to deliver each packet promptly, but it doesn’t make any guarantees. It’s up to software that uses the Internet Protocol to detect dropped packets and recover. The software you’re using to retrieve these words can, and probably often does, recover from dropped packets.)

When a router is forced to discard a packet, it can discard any packet it likes. One possibility is that it assigns priorities to the packets, and always discards the packet with lowest priority. The technology doesn’t constrain how packets are prioritized, as long as there is some quick way to find the lowest-priority packet when it becomes necessary to discard something.

This mechanism defines one type of network discrimination, which prioritizes packets and discards low-priority packets first, but only discards packets when that is absolutely necessary. I’ll call it minimal discrimination, because it only discriminates when it can’t serve everybody.
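
In code, the minimal version might look like the sketch below (my own illustration, with made-up priorities and a tiny buffer): nothing is ever discarded while there is room, and only when the buffer overflows does the router evict whichever buffered packet has the lowest priority.

```python
BUFFER_SIZE = 4              # illustrative

buffer = []                  # (priority, packet) pairs; a larger number means higher priority

def packet_arrives(priority, packet):
    buffer.append((priority, packet))
    if len(buffer) > BUFFER_SIZE:      # only now is discarding something unavoidable
        lowest = min(range(len(buffer)), key=lambda i: buffer[i][0])
        buffer.pop(lowest)             # evict the lowest-priority packet

packet_arrives(1, "low-a")
packet_arrives(1, "low-b")
packet_arrives(2, "high-a")
packet_arrives(2, "high-b")
packet_arrives(2, "high-c")            # the buffer overflows; a low-priority packet has to go
print(buffer)                          # "low-a" was discarded; everything else survived
```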

With minimal discrimination, if the network is not crowded, lots of low-priority packets can get through. Only when there is an unavoidable conflict with high-priority packets is a low-priority packet inconvenienced.

Contrast this with another, more drastic form of discrimination, which discards some low-priority packets even when it is possible to forward or deliver every packet. A network might, for example, limit low-priority packets to 20% of the network’s capacity, even if part of the other 80% is idle. I’ll call this non-minimal discrimination.
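
The non-minimal version needs no congestion at all. In the sketch below (again my own toy model, using the 20% example from above), low-priority packets are dropped whenever they would push low-priority traffic over its allotted share of the link, even though nothing else wants the link:

```python
LOW_SHARE_CAP = 0.20      # low-priority traffic may use at most 20% of the slots

def simulate(total_slots, low_arrival_slots):
    """low_arrival_slots: slots at which a low-priority packet is waiting.
    There is no high-priority traffic at all, so the link is otherwise idle."""
    low_sent, log = 0, []
    for slot in range(1, total_slots + 1):
        if slot in low_arrival_slots:
            if (low_sent + 1) / slot <= LOW_SHARE_CAP:
                low_sent += 1
                log.append((slot, "sent"))
            else:
                log.append((slot, "dropped"))   # dropped even though the link is idle
    return log

print(simulate(10, set(range(1, 11))))
# Most of the packets are dropped despite a completely empty link.
```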

One of the basic questions to ask about any network discrimination regime is whether it is minimal in this sense. And one of the basic questions to ask about any rule limiting discrimination is how it applies to minimal versus non-minimal discrimination. We can imagine a rule, for example, that allows minimal discrimination but limits or bans non-minimal discrimination.

This distinction matters, I think, because minimal and non-minimal discrimination are supported by different arguments. Minimal discrimination may be an engineering necessity. But non-minimal discrimination is not technologically necessary – it makes service worse for low-priority packets, but doesn’t help high-priority packets – so it could only be justified by a more complicated economic argument, for example that non-minimal discrimination allows forms of price discrimination that increase social welfare. Vague arguments that you have to reserve some fraction of capacity for some purpose won’t cut it.

[Postscript for networking geeks: You might complain that it matters not only which packets are dropped but also which packets are forwarded first, and so on. True enough. I simplified things a bit to fit within a blog post; but it should be fairly obvious how to expand the principle I’m describing here to deal with the issues you’re raising.]