November 26, 2024

Rescorla on Airport ID Checks

Eric Rescorla, at Educated Guesswork, notes a flaw in the security process at U.S. airports – the information used to verify a passenger’s ID is not the same information used to look them up in a suspicious-persons database.

Let’s say that you’re a dangerous Canadian terrorist, bearing the clearly suspicious name “Guy Lafleur”. Now, the American government is aware of your activities and puts you on the CAPPS blacklist to stop you from boarding the plane. Further, let’s assume that you’re too incompetent to get a fake ID….

You have someone who’s not on the blacklist buy you a ticket under an innocuous assumed name, say “Babe Ruth”. This is perfectly legitimate and quite easy to do…. Then, the day before the flight, you go onto the web and get your boarding pass. You print out two copies, one with your real name and one with the innocuous fake name. Remember, it’s just a web page, so it’s easy to modify. When you go to the airport, you show the security agent your “Guy Lafleur” boarding pass and your real ID. He verifies that they match but doesn’t check the watchlist, because his only job is to verify that you have a valid-looking boarding pass and that it matches your ID. Then, when you go to board the plane, you give the gate agent your real boarding pass. Since they don’t check ID, you can just walk onboard.

What’s happened is that whoever designed this system violated a basic security principle that’s one of the first things protocol designers learn: information you’re using to make a decision has to be the information you verify. Unfortunately, that’s not the case here. The identity that’s being verified is what’s written on a piece of paper and the identity that’s being used to check the watchlist is in some computer database which isn’t tied to the paper in any way other than your computer and printer, which are easy to subvert.

In a later post, he discusses some ways to fix the problem.
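
To make the mismatch concrete, here is a minimal Python sketch of the flow Rescorla describes. Everything in it – the function names, the records, the one-entry watchlist – is a hypothetical illustration for this post, not a model of any real airline or TSA system.

# Hypothetical sketch of the flawed airport flow; illustration only.

WATCHLIST = {"Guy Lafleur"}

def buy_ticket(reservation_name):
    """The airline screens the *reservation* name against the watchlist."""
    if reservation_name in WATCHLIST:
        raise PermissionError("flagged by watchlist")
    return {"reservation_name": reservation_name}

def security_checkpoint(boarding_pass_name, id_name):
    """The agent checks only that the printed pass matches the ID;
    the watchlist is never consulted at this step."""
    return boarding_pass_name == id_name

def gate(reservation, boarding_pass_name):
    """The gate agent matches the pass to the reservation, but checks no ID."""
    return boarding_pass_name == reservation["reservation_name"]

# The attack: book under an innocuous name, so the watchlist check passes...
reservation = buy_ticket("Babe Ruth")

# ...then print two boarding passes. Show the doctored one at security:
assert security_checkpoint("Guy Lafleur", id_name="Guy Lafleur")

# ...and the genuine one at the gate:
assert gate(reservation, "Babe Ruth")

# The name that was verified ("Guy Lafleur") and the name that was
# screened ("Babe Ruth") are never compared, so the passenger boards.

Any real fix has to tie the two names together – for example, by checking ID again at the gate, or by binding the printed boarding pass to the reservation record in a way the passenger can’t forge.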

Warning Fatigue

One of the many problems facing security engineers is warning fatigue – the tendency of users who have seen too many security warnings to start ignoring the warnings altogether. Good designers think carefully about every warning they display, knowing that each added warning will dilute the warnings that were already there.

Warning fatigue is a significant security problem today. Users are so conditioned to warning boxes that they click them away, unread, as if instinctively swatting a fly.

Which brings us to H.R. 2752, the “Author, Consumer, and Computer Owner Protection and Security (ACCOPS) Act of 2003”, introduced in the House of Representatives in July, and discussed by Declan McCullagh in his latest column. The bill would require a security warning, and user consent, before allowing the download of any “software that, when installed on the user’s computer, enables 3rd parties to store data on that computer, or use that computer to search other computers’ contents over the Internet.”

Most users already know that downloading software is potentially risky. Most users are already accustomed to swatting away warning boxes telling them so. One more warning is unlikely to deter the would-be KaZaa downloader.

This is especially true given that the same warning would have to be placed on many other types of programs that meet the bill’s criteria, including operating systems and web browsers. The ACCOPS warning will be just another of those dialog boxes that nobody reads.

Reading the Broadcast Flag Rules

With the FCC apparently about to announce Broadcast Flag rules, there has been a flurry of letters to the FCC and legislators about the harm such rules would do. The Flag is clearly a bad idea: it will raise the price of digital TV decoders and retard innovation in decoder design, but it won’t make a dent in infringement. It’s also pretty much inevitable that the FCC will issue rules anyway – and soon.

It’s worth noting, though, that we don’t know exactly what the FCC’s rules will say, and that the details can make a big difference. When the FCC does issue its rules, we’ll need to read them carefully to see exactly how much harm they will do.

Here is my guide to what to look for in the rules:

First, look at the criteria that an anti-copying technology must meet to be on the list of approved technologies. Must a technology give copyright owners control over all uses of content; or is a technology allowed to support legal uses such as time-shifting; or is it required to support such uses?

Second, look at who decides which technologies can be on the approved list. Whoever makes this decision will control entry into the market for digital TV decoders. Is this up to the movie and TV industries; or does an administrative body like the FCC decide; or is each vendor responsible for determining whether their own technology meets the requirements?

Third, see whether the regulatory process allows for the possibility that no suitable anti-copying technology exists. Will the mandate be delayed if no strong anti-copying technology exists; or do the rules require that some technology be certified by a certain date, even if none is up to par?

Finally, look at which types of devices are subject to design mandates. To be covered, must a device be primarily designed for decoding digital TV; or is it enough for it to be merely capable of doing so? Do the mandates apply broadly to “downstream devices”? And is something a “downstream device” based on what it is primarily designed to do, or on what it is merely capable of doing?

This last issue is the most important, since it defines how broadly the rule will interfere with technological progress. The worst-case scenario is an overbroad rule that ends up micro-managing the design of general-purpose technologies like personal computers and the Internet. I know the FCC means well, but I wish I could say I was 100% sure that they won’t make that mistake.

Recommended Reading

Ernest Miller, who has written lots of great stuff for LawMeme, now has his very own blog at importance.typepad.com.

SunnComm's Latest

SunnComm is now taking yet another position regarding Alex Halderman’s paper – that the paper is just “political activism masquerading as research”. (The quote comes from SunnComm president Peter Jacobs, responding to a question from Seth Finkelstein.) Jacobs had expressed the same sentiment earlier, on an investor discussion board, in this vitriolic message, which he apparently tried to retract later.

[I can’t resist pointing out how hilariously wrong Jacobs is when he says that nobody affiliated with the EFF has ever produced any digital content worth selling. There are many counterexamples, starting with the three founders of EFF (Mitch Kapor, John Perry Barlow, and John Gilmore) who all became rich and famous by producing copyrighted works.]

As far as I can tell, what Jacobs is arguing, essentially, is that even though Halderman’s paper does not make any political argument, the paper might affect the public policy debate about DRM. What I don’t understand is why that’s a bad thing. It seems to me that an accurate, truthful research report has more merit, rather than less, if its results are relevant to a public policy debate.

To put it another way, Halderman stands accused of relevance – a dangerous charge for an academic to face.