October 31, 2024

RFID Virus Predicted

Melanie Rieback, Bruno Crispo, and Andy Tanenbaum have a new paper describing how RFID tags might be used to propagate computer viruses. This has garnered press coverage, including a John Markoff story in today’s New York Times.

The underlying technical argument is pretty simple. An RFID tag is a tiny device, often affixed to a product of some sort, that carries a relatively small amount of data. An RFID reader is a larger device, often stationary, that can use radio signals to read and/or modify the contents of RFID tags. In a retail application, a store might affix an RFID tag to each item in stock, and have an RFID reader at each checkout stand. A customer could wheel a shopping cart full of items up to the checkout stand, and the RFID reader would determine which items were in the cart and would charge the customer and adjust the store’s inventory database accordingly.
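
To make the example concrete, here is a minimal sketch of that checkout-stand logic. Everything in it (the scan_cart stand-in for the reader, the price and inventory tables) is hypothetical, not any real RFID reader API:

    # Hypothetical checkout-stand logic: the reader reports the tag IDs
    # in the cart, and the store's databases map tags to prices and stock.
    PRICES = {"tag-0001": 2.50, "tag-0002": 4.99}
    INVENTORY = {"tag-0001": 12, "tag-0002": 7}

    def scan_cart():
        # Stand-in for the RFID reader: pretend it saw these three tags.
        return ["tag-0001", "tag-0002", "tag-0002"]

    def checkout():
        total = 0.0
        for tag_id in scan_cart():
            total += PRICES[tag_id]   # charge the customer
            INVENTORY[tag_id] -= 1    # adjust the inventory database
        return total

    print("charge: $%.2f" % checkout())   # charge: $12.48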

Basic RFID tags are quite simple: they only carry data, which readers can read or modify. Because a tag stores data but doesn’t execute programs, tags cannot themselves be infected by viruses. But they can act as carriers, as I’ll describe below.

RFID readers, on the other hand, are often quite complicated and interact with networked databases. In our retail example, each RFID reader can connect to the store’s backend databases, in order to update the store’s inventory records. If RFID readers run complicated software, then they will inevitably have bugs.

One common class of bugs involves bad handling of unexpected or diabolical input values. For example, web browsers have had bugs in their URL-handling code, which caused the browsers to either crash or be hijacked when they encountered diabolically constructed URLs. When such a bug existed, an attacker who could present an evil URL to the browser (for example, by getting the user to navigate to it) could seize control of the browser.

Suppose that some subset of the world’s RFID readers had an input-processing bug of this general type, so that whenever one of these readers scanned an RFID tag containing diabolically constructed input, the reader would be hijacked and would execute some command contained in that input. If this were the case, an RFID-carried virus would be possible.

A virus attack might start with a single RFID tag carrying evil data. When a vulnerable reader scanned that tag, the reader’s bug would be triggered, causing the reader to execute a command specified by that tag. The command would reconfigure the reader to make it write copies of the evil data onto tags that it saw in the future. This would spread the evil data onto more tags. When any of those tags came in contact with a vulnerable reader, that reader would be infected, turning it into a factory for making more infected tags. The infection would spread from readers to new tags, and from tags to new readers. Before long many tags and readers would be infected.
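
The dynamics of this feedback loop are easy to simulate. Here is a toy model (all numbers invented, every reader assumed vulnerable, contacts chosen at random) of the reader-to-tag-to-reader spread:

    import random

    random.seed(1)
    evil_tags = {0}            # the attack starts with a single evil tag
    infected_readers = set()

    for contact in range(5000):            # random reader-tag contacts
        reader = random.randrange(20)      # 20 readers, all vulnerable
        tag = random.randrange(500)        # 500 tags in circulation
        if reader in infected_readers:
            evil_tags.add(tag)             # infected reader writes evil data
        elif tag in evil_tags:
            infected_readers.add(reader)   # evil tag hijacks the reader

    print(len(infected_readers), "readers and", len(evil_tags), "tags infected")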

To demonstrate the plausibility of this scenario, the researchers wrote their own RFID reader software, giving it a common type of bug called an SQL injection vulnerability. They then constructed the precise diabolical data needed to exploit that vulnerability, and demonstrated that the infection would spread automatically as described. In light of this demo, it’s clear that RFID viruses can exist if RFID readers have certain types of bugs.
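
To illustrate the class of bug, here is a minimal sketch, using Python and SQLite, of reader middleware with an SQL injection hole. This is my own toy reconstruction of the general technique, not the researchers’ code; the table names and the payload are invented:

    import sqlite3

    def process_tag(conn, tag_data):
        # VULNERABLE: the tag's contents are pasted directly into the SQL
        # text, so a tag containing a quote character can smuggle in extra
        # statements, which executescript() will happily run.
        conn.executescript(
            "INSERT INTO inventory (item) VALUES ('%s');" % tag_data)

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE inventory (item TEXT)")
    conn.execute("CREATE TABLE reader_config (write_template TEXT)")
    conn.execute("INSERT INTO reader_config VALUES ('')")

    # A made-up malicious tag: it closes the intended INSERT, then
    # reconfigures the reader so that tags it writes later carry a payload.
    # (A true virus would have to embed its own full text here, quine-style;
    # 'infected' is just a marker.)
    evil_tag = "soup'); UPDATE reader_config SET write_template = 'infected'; --"
    process_tag(conn, evil_tag)

    print(conn.execute("SELECT write_template FROM reader_config").fetchone())
    # -> ('infected',): the reader has been reprogrammed by data on a tag.

The corresponding fix is routine: a parameterized query, such as conn.execute("INSERT INTO inventory (item) VALUES (?)", (tag_data,)), treats tag data purely as a value and never as SQL code.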

Do such bugs exist in real RFID readers? We don’t know – the researchers don’t point to any – but it is at least plausible that such bugs will exist. Our experience with Web and Internet software is not encouraging in this regard. Bugs can be avoided by very careful engineering. But will engineers be so careful? Not always. We don’t know how common RFID viruses will be, but it seems likely they will exist in the wild, eventually.

Designers of RFID-based systems will have to engineer their systems much more carefully than we had previously thought necessary.

RIAA Says Future DRM Might "Threaten Critical Infrastructure and Potentially Endanger Lives"

We’re in the middle of the U.S. Copyright Office’s triennial DMCA exemption rulemaking. As you might expect, most of the filings are dry as dust, but buried in the latest submission by a coalition of big copyright owners (publishers, Authors’ Guild, BSA, MPAA, RIAA, etc.) is an utterly astonishing argument.

Some background: In light of the Sony-BMG CD incident, Alex and I asked the Copyright Office for an exemption allowing users to remove from their computers certain DRM software that causes security and privacy harm. The CCIA and Open Source and Industry Alliance made an even simpler request for an exemption for DRM systems that “employ access control measures which threaten critical infrastructure and potentially endanger lives.” Who could oppose that?

The BSA, RIAA, MPAA, and friends – that’s who. Their objections to these two requests (and others) consist mostly of lawyerly parsing, but at the end of their argument about our request comes this (from pp. 22-23 of the document, if you’re reading along at home):

Furthermore, the claimed beneficial impact of recognition of the exemption – that it would “provide an incentive for the creation of protection measures that respect the security of consumers’ computers while protecting the interests of the record labels” ([citation to our request]) – would be fundamentally undermined if copyright owners – and everyone else – were left in such serious doubt about which measures were or were not subject to circumvention under the exemption.

Hanging from the end of the above-quoted excerpt is a footnote:

This uncertainty would be even more severe under the formulations proposed in submissions 2 (in which the terms “privacy or security” are left completely undefined) or 8 [i.e., the CCIA request] (in which the boundaries of the proposed exemption would turn on whether access controls “threaten critical infrastructure and potentially endanger lives”).

You read that right. They’re worried that there might be “serious doubt” about whether their future DRM access control systems are covered by these exemptions, and they think the doubt “would be even more severe” if the “exemption would turn on whether access controls ‘threaten critical infrastructure and potentially endanger lives’.”

Yikes.

One would have thought they’d make awfully sure that a DRM measure didn’t threaten critical infrastructure or endanger lives, before they deployed that measure. But apparently they want to keep open the option of deploying DRM even when there are severe doubts about whether it threatens critical infrastructure and potentially endangers lives.

And here’s the really amazing part. In order to protect their ability to deploy this dangerous DRM, they want the Copyright Office to withhold from users permission to uninstall DRM software that actually does threaten critical infrastructure and endanger lives.

If past rulemakings are a good predictor, it’s more likely than not that the Copyright Office will rule in their favor.

Mistrust-Based DRM

Randy Picker has an interesting post on the Chicago Law Faculty blog, describing what he calls “mistrust-based DRM”. The idea is that when an online music store gives you a song, it embeds into the song a watermark that contains your credit card number, or some other information that would let a (dishonest) person spend your money. This gives you an incentive not to distribute the song.

This is an instructive idea, but not a practical one.

In analyzing this idea, it’s helpful to divide it into two pieces: (1) embed a watermark that identifies the user, and (2) have that watermark encode a secret of the user’s, yet be readable by anyone who gets the file. Piece (1), taken alone, is a widely discussed DRM strategy which has not been used much in practice, for reasons I plan to discuss tomorrow. Today, I want to focus on the second piece.

Specifically, I want to compare two systems. In the more traditional system, the watermark is secret – it can be read only by the copyright owner or its agents – and users fear being sued for infringement if their files end up on P2P. In Randy’s system, the watermark is public – anybody can read it – and users fear being victimized by fraud if their files end up on P2P. I’ll call these two alternatives “secret-watermark” and “public-watermark”.

How do they compare? For starters, a secret watermark is much harder for an adversary to find and remove. If a watermark is public, everybody knows exactly where in the music it is stored. Common sense, and experience too, says that if you know where in a file information is stored, you can modify that part of the file and obliterate the information. But if the watermark is secret, then an adversary isn’t told where to look for it or how to change the file to remove it. Robustness of the watermark is an important issue that has been the downfall of past watermark systems.
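
A toy example makes the point. Suppose, purely hypothetically, that the scheme hid the account number in the least significant bits at a known position in the audio. Then anyone could read the mark, but anyone could just as easily erase it by overwriting exactly those bits:

    def embed(samples, payload):
        # Hide each payload bit in the LSB of successive samples.
        bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
        out = list(samples)
        for i, bit in enumerate(bits):
            out[i] = (out[i] & ~1) | bit
        return out

    def extract(samples, nbytes):
        # Read the payload back out of the LSBs.
        return bytes(
            sum(((samples[b * 8 + i] & 1) << i) for i in range(8))
            for b in range(nbytes))

    def scrub(samples, nbytes):
        # Anyone who knows where the mark lives can zero it out.
        return [(s & ~1) if i < nbytes * 8 else s
                for i, s in enumerate(samples)]

    song = list(range(200))                    # stand-in for audio samples
    marked = embed(song, b"4111-1111")         # made-up account number
    assert extract(marked, 9) == b"4111-1111"  # anyone can read it...
    assert extract(scrub(marked, 9), 9) != b"4111-1111"   # ...or erase it

Real watermarks embed the mark in subtler, more robust ways, but the asymmetry remains: publishing the mark’s location and format hands the adversary a removal recipe, while a secret mark forces the adversary to guess.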

A bigger problem with the public-watermark design, I think, is the set of forces unleashed when your design principle is to enable fraud. For example, the system will lose its force if unrelated anti-fraud measures become more effective, or if the financial system acts to protect users from fraud. Today, a consumer’s liability for fraudulent credit card transactions is capped at $50, and credit card companies often forgive even that $50. (You could use some other account information instead of the credit card number, but similar issues would still apply.) Copyright owners would be the only online merchants who wanted a higher level of fraud on the Net.

Worse yet, even law-abiding consumers would face a higher risk of fraud, because any loss or theft of their music or movie files would expose their financial information. Spyware programs could collect this information from users’ computers – and studies show that at least half of end-user PCs are infected with spyware. Law-abiding users would have a strong incentive to scrub the information out of their files, even if they had no intention of infringing. Alert anti-virus or anti-spyware vendors would be eager to provide this service.

Given the disadvantages of a public-watermark scheme, what are the arguments for it? Randy Picker argues that it gives end users an incentive to distrust fly-by-night purveyors of ripping software, worrying that they might steal the user’s information from the files and commit fraud. This isn’t entirely convincing: some such tools already contain heinous spyware that could cause users lots of harm, and reputable security suppliers are likely to provide watermark-scrubbing tools anyway. I think the threat of secret watermarks hidden in files, which fly-by-night vendors have no incentive to remove, would probably scare users enough.

On the whole, then, I think a secret-watermark scheme is better than a public-watermark one. But it should be noted that secret-watermark schemes themselves aren’t looking too good. They have mostly failed in the market, for reasons I’ll start digging into tomorrow.

Sony CD DRM Paper Released

Today Alex and I released our paper about the Sony CD DRM episode. This is the full, extended version of the paper, with a bunch of new material that hasn’t been published or posted before.

As an experiment, we posted draft sections of the paper here and asked readers for comments and feedback. The experiment was a success, giving us lots of good comments and suggestions that helped us improve the paper. Several reader-commenters are thanked in the paper’s acknowledgments section.

We also asked readers to help suggest a title for the paper. That didn’t work out so well – some suggestions were entertaining, but none were really practical. Perhaps a title of the sort we wanted doesn’t exist.

Enjoy the paper, and thanks for your help.

[UPDATE (Feb. 21): If you don’t like PDFs, you can now read the paper in your browser, thanks to an HTML+images version created by Jesse Weinstein.]

Secure Flight Mothballed

Secure Flight, the planned next-generation system for screening airline passengers, has been mothballed by the Transportation Security Administration, according to an AP story by Leslie Miller. TSA chief Kip Hawley cited security concerns and questions about the program’s overall direction.

Last year I served on the Secure Flight Working Group, a committee of outside technology and privacy experts asked by the TSA to give feedback on Secure Flight. After hearing about plans for Secure Flight, I was convinced that TSA didn’t have a clear idea of what the program was supposed to be doing or how it would work. This is essentially what later government studies of the program found. Here’s the AP story:

Nearly four years and $200 million after the program was put into operation, Hawley said last month that the agency hadn’t yet determined precisely how it would work.

Government auditors gave the project failing grades – twice – and rebuked its authors for secretly obtaining personal information about airline passengers.

The sad part of this is that Secure Flight seems to have started out as a simpler program that would have made sense to deploy.

Today, airlines are given a no-fly list and a watch-list, which they are asked to check against their passenger lists. There are obvious security drawbacks to distributing the lists to airlines – a malicious airline employee with access to the lists could leak them to the bad guys. The 9/11 Commission recommended keeping the lists within the government, and having the government check passengers’ names against the lists.

A program designed to do just that would have been a good idea. There would still be design issues to work out. For example, false matches are now handled by airline ticket agents, but that function would probably have to be moved into the government too, which would raise some logistical issues. There would be privacy worries, but they could be handled with good design and oversight.
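
The false-match problem is easy to see in miniature. Here is a sketch (names invented, and far cruder than whatever matching TSA or the airlines actually do) of normalize-and-compare screening, showing both ways it fails:

    def normalize(name):
        # Fold case and drop punctuation and spacing differences.
        return "".join(ch for ch in name.lower() if ch.isalpha())

    NO_FLY = {normalize(n) for n in ["John Doe"]}   # made-up list entry

    def screen(passenger_name):
        return normalize(passenger_name) in NO_FLY

    print(screen("Jon Doe"))    # False: a spelling variant slips through
    print(screen("JOHN  DOE"))  # True: flags every traveler sharing the name

Looser matching catches more name variants but flags more innocent travelers; someone has to adjudicate the borderline cases, which is why moving the matching into the government drags the false-match handling along with it.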

Instead of sticking to this more modest plan, Secure Flight became a vehicle for pie-in-the-sky plans about data mining and automatic identification of terrorists from consumer databases. As the program’s goals grew more ambitious and collided with practical design and deployment challenges, the program lost focus and seemed to have a different rationale and plan from one month to the next.

What happens now is predictable. The program will officially die but will actually be reincarnated with a new name. Congress has directed TSA to implement a program of this general type, so TSA really has no choice but to try again. Let’s hope that this time they make the hard choices they avoided last time, and end up with a simpler program that solves the easier problems first.

(Fellow Working Group member Lauren Gelman offers a similar take on this story. Another member, Bruce Schneier, has also blogged extensively about Secure Flight.)