Archives for 2006

RFID Virus Predicted

Melanie Rieback, Bruno Crispo, and Andy Tanenbaum have a new paper describing how RFID tags might be used to propagate computer viruses. This has garnered press coverage, including a John Markoff story in today’s New York Times.

The underlying technical argument is pretty simple. An RFID tag is a tiny device, often affixed to a product of some sort, that carries a relatively small amount of data. An RFID reader is a larger device, often stationary, that can use radio signals to read and/or modify the contents of RFID tags. In a retail application, a store might affix an RFID tag to each item in stock, and have an RFID reader at each checkout stand. A customer could wheel a shopping cart full of items up to the checkout stand, and the RFID reader would determine which items were in the cart and would charge the customer and adjust the store’s inventory database accordingly.

Basic RFID tags are quite simple: they only carry data, which readers can read or modify. Tags cannot themselves be infected by viruses. But they can act as carriers, as I’ll describe below.

RFID readers, on the other hand, are often quite complicated and interact with networked databases. In our retail example, each RFID reader can connect to the store’s backend databases, in order to update the store’s inventory records. If RFID readers run complicated software, then they will inevitably have bugs.

One common class of bugs involves bad handling of unexpected or diabolical input values. For example, web browsers have had bugs in their URL-handling code, which caused the browsers to either crash or be hijacked when they encountered diabolically constructed URLs. When such a bug existed, an attacker who could present an evil URL to the browser (for example, by getting the user to navigate to it) could seize control of the browser.

Suppose that some subset of the world’s RFID readers had an input-processing bug of this general type, so that whenever one of these readers scanned an RFID tag containing diabolically constructed input, the reader would be hijacked and would execute some command contained in that input. If this were the case, an RFID-carried virus would be possible.

A virus attack might start with a single RFID tag carrying evil data. When a vulnerable reader scanned that tag, the reader’s bug would be triggered, causing the reader to execute a command specified by that tag. The command would reconfigure the reader to make it write copies of the evil data onto tags that it saw in the future. This would spread the evil data onto more tags. When any of those tags came in contact with a vulnerable reader, that reader would be infected, turning it into a factory for making more infected tags. The infection would spread from readers to new tags, and from tags to new readers. Before long many tags and readers would be infected.
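The propagation loop described above can be sketched as a toy simulation. All of the names here are hypothetical; this models only the spreading logic, not any real RFID system:

```python
# Toy model of RFID virus propagation: infected readers write evil
# data onto clean tags, and evil tags infect vulnerable readers.

EVIL_PAYLOAD = "evil"  # stands in for the diabolically constructed input

class Tag:
    def __init__(self, data="normal"):
        self.data = data

class Reader:
    def __init__(self, vulnerable=True):
        self.vulnerable = vulnerable
        self.infected = False

    def scan(self, tag):
        # A vulnerable reader hijacked by evil tag data starts
        # copying that data onto every tag it sees afterward.
        if self.vulnerable and tag.data == EVIL_PAYLOAD:
            self.infected = True
        if self.infected:
            tag.data = EVIL_PAYLOAD

# One evil tag infects a reader, which turns clean tags evil,
# which in turn infect a second reader.
r1, r2 = Reader(), Reader()
tags = [Tag(EVIL_PAYLOAD)] + [Tag() for _ in range(3)]
for t in tags[:2]:
    r1.scan(t)   # r1 is infected by tags[0], then writes onto tags[1]
for t in tags[1:]:
    r2.scan(t)   # r2 is infected by the newly evil tags[1]
```

After this sequence both readers are infected and every tag carries the evil data, illustrating how the infection compounds.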

To demonstrate the plausibility of this scenario, the researchers wrote their own RFID reader, giving it a common type of bug called an SQL injection vulnerability. They then constructed the precise diabolical data needed to exploit that vulnerability, and demonstrated that it would spread automatically as described. In light of this demo, it’s clear that RFID viruses can exist, if RFID readers have certain types of bugs.
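To see the shape of an SQL injection bug of the kind the researchers exploited, consider this sketch. The schema and queries are hypothetical, not the paper's actual code, but they show how splicing tag data directly into an SQL string lets a tag smuggle in extra commands:

```python
# Sketch of an SQL injection vulnerability in RFID reader middleware
# (hypothetical schema; illustrates the bug class, not the paper's code).
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE inventory (item TEXT)")

def vulnerable_scan(tag_data):
    # BUG: tag data is spliced directly into the SQL text, so a tag
    # can terminate the statement and append its own commands.
    db.executescript(f"INSERT INTO inventory (item) VALUES ('{tag_data}')")

def safe_scan(tag_data):
    # FIX: a parameterized query treats tag data as data, never as SQL.
    db.execute("INSERT INTO inventory (item) VALUES (?)", (tag_data,))

# A tag carrying this payload closes the intended statement,
# injects a destructive command, and comments out the remainder.
evil_tag = "soup'); DROP TABLE inventory; --"
```

Passing `evil_tag` to `vulnerable_scan` destroys the inventory table; passing it to `safe_scan` just stores the odd-looking string harmlessly.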

Do such bugs exist in real RFID readers? We don’t know – the researchers don’t point to any – but it is at least plausible that such bugs will exist. Our experience with Web and Internet software is not encouraging in this regard. Bugs can be avoided by very careful engineering. But will engineers be so careful? Not always. We don’t know how common RFID viruses will be, but it seems likely they will exist in the wild, eventually.

Designers of RFID-based systems will have to engineer their systems much more carefully than we had previously thought necessary.

Discrimination, Congestion, and Cooperation

I’ve been writing lately about the nuts and bolts of network discrimination. Today I want to continue that discussion by talking about how the Internet responds to congestion, and how network discrimination might affect that response. As usual, I’ll simplify the story a bit to spare you a lengthy dissertation on network management, but I won’t mislead you about the fundamental issues.

I described previously how network congestion causes Internet routers to discard some data packets. When data packets arrive at a router faster than the router can forward those packets along the appropriate outgoing links, the packets will pile up in the router’s memory. Eventually the memory will fill up, and the router will have to drop (i.e., discard) some packets.
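The fill-up-then-drop behavior is often called a drop-tail queue. Here is a minimal sketch of the idea, a simplified model rather than any specific router's implementation:

```python
from collections import deque

class Router:
    """Toy drop-tail router: buffer packets, drop when memory is full."""
    def __init__(self, buffer_size):
        self.buffer = deque()
        self.buffer_size = buffer_size
        self.dropped = 0

    def receive(self, packet):
        if len(self.buffer) < self.buffer_size:
            self.buffer.append(packet)   # room left: queue the packet
        else:
            self.dropped += 1            # memory full: discard it

    def forward(self):
        # Send the oldest buffered packet on its outgoing link, if any.
        return self.buffer.popleft() if self.buffer else None

r = Router(buffer_size=2)
for p in ["a", "b", "c", "d"]:
    r.receive(p)   # "c" and "d" arrive while the buffer is full
```

With a two-packet buffer, the third and fourth arrivals are dropped, and forwarding then drains the buffer oldest-first.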

Every dropped packet has some computer at the edge of the network waiting for it. Eventually the waiting computer and its communication partner will figure out that the packet must have been dropped. From this, they will deduce that the network is congested. So they will re-send the dropped packet, but in response to the probable congestion they will slow down the rate at which they transmit data. Once enough packets are dropped, and enough computers slow down their packet transmission, the congestion will clear up.

This is a very indirect way of coping with congestion – drop packets, wait for endpoint computers to notice the missing packets and respond by slowing down – but it works pretty well.
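The endpoint's half of this scheme is, roughly, TCP's additive-increase / multiplicative-decrease rule: creep the sending rate up while things go well, and cut it sharply when a drop signals congestion. A sketch, with simplified constants:

```python
class Endpoint:
    """Toy AIMD sender: speed up slowly, slow down sharply on a drop."""
    def __init__(self):
        self.rate = 10.0  # packets per second (arbitrary starting rate)

    def on_ack(self):
        self.rate += 1.0   # additive increase: no congestion, go a bit faster

    def on_drop(self):
        self.rate /= 2.0   # multiplicative decrease: congestion, back off hard

e = Endpoint()
for _ in range(10):
    e.on_ack()     # rate climbs gradually to 20.0
e.on_drop()        # one dropped packet halves it back to 10.0
```

The asymmetry is the point: gentle probing upward, drastic retreat on any sign of congestion, so that a congested network clears quickly.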

One interesting aspect of this system is that it is voluntary – the system relies on endpoint computers to slow down when they see congestion, but nothing forces them to do so. We can think of this as a kind of deal between endpoint computers, in which each one promises to slow down if its packets are dropped.

But there is an incentive to defect from this deal. Suppose that you defect – when your packets are dropped you keep on sending packets as fast as you can – but everybody else keeps the deal. When your packets are dropped, the congestion will continue. Then other people’s packets will be dropped, until enough of them slow down and the congestion eases. By ignoring the congestion signals you are getting more than your fair share of the network.

Despite the incentive to defect, most people keep the deal by using networking software that slows down as expected in response to congestion. Why is this? We could debate the reasons, but it seems safe to say that there is a sort of social contract by which users cooperate with their peers, and software vendors cooperate by writing software that causes users to keep the deal.

One of the reasons users comply, I think, is a sense of fairness. If I believe that the burdens of congestion control fall pretty equally on everybody, at least in the long run, then it seems fair to me to slow down my own transmissions when my turn comes. One time I might be the one whose packets get dropped, so I will slow down. Another time, by chance, somebody else’s packets may be dropped, so it will be their turn to slow down. Everybody gets their turn.

(Note: I’m not claiming that the average user has thought through these issues carefully. But many software providers have made decisions about what to do, and those decisions factor in users’ wants and needs. Software developers act as proxies for users in making these decisions. Perhaps this point will get more discussion in the comments.)

But now suppose that the network starts singling out some people and dropping their packets first. Now the burden of congestion control falls heavily on them – they have to slow down and others can just keep going. Suddenly the I’ll-slow-down-if-you-do deal doesn’t seem so fair, and the designated victims are more likely to defect from the deal and just keep sending data even when the network tells them to slow down.

The implications for network discrimination should now be pretty clear. If the network discriminates by sending misleading signals about congestion, and sending them preferentially to certain machines or certain applications, the incentive for those machines and applications to stick to the social contract and do their share to control congestion will weaken. Will this lead to a wave of defections that destroys the Net? Probably not, but I can’t be sure. I do think this is something we should think about.

We should also listen to the broader lesson of this analysis. If the network discriminates, users and applications will react by changing their behavior. Discrimination will have secondary effects, and we had better think carefully about what they will be.

[Note for networking geeks: Yes, I know about RED, congestion signaling, non-TCP protocols that don’t do backoff, and so on. I hope you’ll agree that despite all of those real-world complications, the basic argument in this post is valid.]

Where to Go, and What to Read

We don’t have a “real” post today, just plugs for two good things.

(1) The NYU/Princeton interdisciplinary workshop on spyware will be next Thursday (evening) and Friday (day), in New York. It’s free and open to the public. Please let us know if you plan to come.

(2) Students in my course on Information Technology and Public Policy are writing lots of good stuff on the course blog. Every student has to post once a week. They deserve more readers; and we welcome comments and discussion. Non-students are welcome to join the course vicariously, by doing the same weekly reading as the students, and participating in a weekly open discussion thread about the reading.

RIAA Says Future DRM Might "Threaten Critical Infrastructure and Potentially Endanger Lives"

We’re in the middle of the U.S. Copyright Office’s triennial DMCA exemption rulemaking. As you might expect, most of the filings are dry as dust, but buried in the latest submission by a coalition of big copyright owners (publishers, Authors’ Guild, BSA, MPAA, RIAA, etc.) is an utterly astonishing argument.

Some background: In light of the Sony-BMG CD incident, Alex and I asked the Copyright Office for an exemption allowing users to remove from their computers certain DRM software that causes security and privacy harm. The CCIA and Open Source and Industry Association made an even simpler request for an exemption for DRM systems that “employ access control measures which threaten critical infrastructure and potentially endanger lives.” Who could oppose that?

The BSA, RIAA, MPAA, and friends – that’s who. Their objections to these two requests (and others) consist mostly of lawyerly parsing, but at the end of their argument about our request comes this (from pp. 22-23 of the document, if you’re reading along at home):

Furthermore, the claimed beneficial impact of recognition of the exemption – that it would “provide an incentive for the creation of protection measures that respect the security of consumers’ computers while protecting the interests of the record labels” ([citation to our request]) – would be fundamentally undermined if copyright owners – and everyone else – were left in such serious doubt about which measures were or were not subject to circumvention under the exemption.

Hanging from the end of the above-quoted excerpt is a footnote:

This uncertainty would be even more severe under the formulations proposed in submissions 2 (in which the terms “privacy or security” are left completely undefined) or 8 [i.e., the CCIA request] (in which the boundaries of the proposed exemption would turn on whether access controls “threaten critical infrastructure and potentially endanger lives”).

You read that right. They’re worried that there might be “serious doubt” about whether their future DRM access control systems are covered by these exemptions, and they think the doubt “would be even more severe” if the “exemption would turn on whether access controls ‘threaten critical infrastructure and potentially endanger lives’.”

Yikes.

One would have thought they’d make awfully sure that a DRM measure didn’t threaten critical infrastructure or endanger lives, before they deployed that measure. But apparently they want to keep open the option of deploying DRM even when there are severe doubts about whether it threatens critical infrastructure and potentially endangers lives.

And here’s the really amazing part. In order to protect their ability to deploy this dangerous DRM, they want the Copyright Office to withhold from users permission to uninstall DRM software that actually does threaten critical infrastructure and endanger lives.

If past rulemakings are a good predictor, it’s more likely than not that the Copyright Office will rule in their favor.

Nuts and Bolts of Net Discrimination, Part 2

Today I want to continue last week’s discussion of how network discrimination might actually occur. Specifically, I want to talk about packet reordering.

Recall that an Internet router is a device that receives packets of data on some number of incoming links, decides on which outgoing link each packet should be forwarded, and sends packets on the appropriate outgoing links. If, when a packet arrives, the appropriate outgoing link is busy, the packet is buffered (i.e., stored in the router’s memory) until the outgoing link is available.

When an outgoing link becomes available, there may be several buffered packets that are waiting to be transmitted on that link. You might expect the router to send the packet that has been waiting the longest – a first-come, first-served (FCFS) rule. Often that is what happens. But the Internet Protocol doesn’t require routers to forward packets in any particular order. In principle a router can choose any packet it likes to forward next.

This suggests an obvious mechanism for discriminating between two categories of traffic: a network provider can program its routers to always forward high-priority packets before low-priority packets. Low-priority packets feel this discrimination as an extra delay in passing through the network.

Recall that last week, when the topic was discrimination by packet-dropping, I distinguished between minimal dropping, which drops low-priority packets first but only drops a packet when necessary, and non-minimal dropping, which intentionally drops some low-priority packets even when it is possible to avoid dropping anything. The same kind of distinction applies to today’s discussion of discrimination by delay. A minimal form of delay discrimination only delays low-priority packets when it is necessary to delay some packet – for example when multiple packets are waiting for a link that can only transmit one packet at a time. There is also a non-minimal form of delay discrimination, which may delay a low-priority packet even when the link it needs is available. As before, a net neutrality rule might want to treat minimal and non-minimal delay discrimination differently.
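Minimal delay discrimination is essentially strict priority queueing: the router still forwards a packet whenever the link is free, but it always picks a high-priority packet over a waiting low-priority one. A toy sketch (class and priority names are my own):

```python
import heapq

class DiscriminatingRouter:
    """Toy router doing minimal delay discrimination: whenever the
    outgoing link frees up, forward the highest-priority waiting packet."""
    HIGH, LOW = 0, 1  # lower number = served first

    def __init__(self):
        self.queue = []
        self.seq = 0  # tie-breaker: preserves FCFS within a priority class

    def receive(self, packet, priority):
        heapq.heappush(self.queue, (priority, self.seq, packet))
        self.seq += 1

    def forward(self):
        # Called only when the link is free -- "minimal" because a packet
        # is never held back if the link could carry it right now.
        return heapq.heappop(self.queue)[2] if self.queue else None

r = DiscriminatingRouter()
r.receive("low-1", r.LOW)
r.receive("high-1", r.HIGH)
r.receive("low-2", r.LOW)
```

Even though "low-1" arrived first, "high-1" jumps the queue; the low-priority packets go out only once no high-priority traffic is waiting. A non-minimal variant would also idle the link, or add artificial delay, when only low-priority packets are queued.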

One interesting consequence of minimal delay discrimination is that it hurts some applications more than others. Internet traffic is usually bursty, with periods of relatively low activity punctuated by occasional bursts of packets. If you’re browsing the Web, for example, you generate little or no traffic while you’re reading a page, but there is a burst of traffic when your browser needs to fetch a new page.

If a network provider is using minimal delay discrimination, and the high-priority traffic is bursty, then low-priority traffic will usually sail through the network with little delay, but will experience noticeable delay whenever there is a burst of high-priority traffic. The technical term for this kind of on-again, off-again delay is “jitter”.
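One common way to quantify jitter is as the variation in packet inter-arrival times; a simple sketch (using the standard deviation of the gaps, one of several reasonable measures):

```python
import statistics

def jitter(arrival_times):
    """Standard deviation of inter-arrival gaps: a simple jitter measure."""
    gaps = [b - a for a, b in zip(arrival_times, arrival_times[1:])]
    return statistics.stdev(gaps)

# Steady stream: one packet every 20 ms -> zero jitter.
steady = [0, 20, 40, 60, 80]
# Same average rate, but some packets delayed behind bursts of
# high-priority traffic -> substantial jitter.
bursty = [0, 35, 40, 75, 80]
```

Both streams deliver five packets in 80 ms, so a file download would see them as equivalent; a real-time voice application, which must play each packet on schedule, would notice the difference immediately.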

Some applications can handle jitter with no problem. If you’re downloading a big file, you care more about the average packet arrival rate than about when any particular packet arrives. If you’re browsing the web, modest jitter will cause, at worst, a slight delay in downloading some pages. If you’re watching a streaming video, your player will buffer the stream so jitter won’t bother you much.

But applications like voice conferencing or Internet telephony, which rely on steady streaming of interactive, realtime communication, can suffer a lot if there is jitter. Users report that VoIP services like Vonage and Skype can behave poorly when subjected to network jitter.

And we know that residential ISPs are often phone companies or offer home phone service, so they may have a special incentive to discriminate against competing Internet phone services. Causing jitter for such services, whether by minimal or non-minimal delay discrimination, could be an effective tactic for an ISP that wants to drive customers away from independent Internet telephone services.

There is some anecdotal evidence to suggest that Comcast’s residential Internet customers may be having trouble using the Vonage Internet phone service because of jitter problems.

Let’s assume for the sake of argument that these reports are accurate – that Comcast’s network has high jitter, and that this is causing problems for Vonage users. What might be causing this? One possibility is that Comcast is using delay discrimination, either minimal or non-minimal, with the goal of causing this problem. Many people would want rules against this kind of behavior.

(To be clear: I’m not accusing Comcast of anything. I’m just saying that if we assume that Comcast’s network causes high jitter, and if we assume that high jitter does cause Vonage problems, then we should consider the possibility that Comcast is trying to cause the jitter.)

Another possibility is that Comcast isn’t trying to cause problems for Vonage users, and Comcast’s management of its network is completely reasonable and nondiscriminatory, but for reasons beyond Comcast’s control its network happens to have higher jitter than other networks have. Perhaps the jitter problems are temporary. In this case, most people would agree that net neutrality rules shouldn’t punish Comcast for something that isn’t really its fault.

The most challenging possibility, from a policy standpoint (still assuming that the jitter problem exists), is that Comcast didn’t take any obvious steps to cause the problem but is happy that it exists, and is subtly managing its network in a way that fosters jitter. Network management is complicated, and many management decisions could impact jitter one way or the other. A network provider who wants to cause high jitter can do so, and might have pretextual excuses for all of the steps it takes. Can regulators tell this kind of stratagem apart from fair and justified engineering decisions that happen to cause a little temporary jitter?

Surely some discriminatory strategies are so obvious, and the offered engineering pretexts so weak, that we could block or punish them without worrying about being wrong. But there would be hard cases too. Net neutrality regulation, even if justified, will inevitably lead to some difficult line-drawing.