November 28, 2024

Whom Should We Search at the Airport?

Here’s an interesting security design problem. Suppose you’re in charge of airport security. At security checkpoints, everybody gets a primary search. Some people get a more intensive secondary search as a result of the primary search, if they set off the metal detector or behave suspiciously during the primary search. In addition, you can choose some extra people who get a secondary search even if they look clean on the primary search. We’ll say these people have been “selected.”

Suppose further that you’re given a list of people who pose a heightened risk to aviation. Some people may pose such a serious threat that we won’t let them fly at all. I’m not talking about them, just about people who pose a risk that is higher than average, but still low overall. When I say these people are “high-risk” I don’t mean that the risk is high in absolute terms.

Who should be selected for secondary search? The obvious answer is to select all of the high-risk people, and some small fraction of the ordinary people. This ensures that a high-risk person can’t fly without a secondary search. And to the extent that our secondary-searching people and resources would otherwise be idle, we might as well search some ordinary people. (Searching ordinary people at random is also a useful safeguard against abusive behavior by the searchers, by ensuring that influential people are occasionally searched.)

But that might not be the best strategy. Consider the problem faced by a terrorist leader who wants to get a group of henchmen and some contraband onto a plane in order to launch an attack. If he can tell which of his henchmen are on the high-risk list, then he’ll give the contraband to a henchman who isn’t on the list. If we always select people on the list, then he can easily detect which henchmen are on the list by having the henchmen fly (without contraband) and seeing who gets selected for a secondary search. Any henchman who doesn’t get selected is not on the high-risk list; and so that is the one who will carry the contraband through security next time, for the attack.

The problem here is that our adversary can probe the system, and use the results of those probes to predict our future behavior. We can mitigate this problem by being less predictable. If we decide that people on the high-risk list should be selected usually, but not always, then we can introduce some uncertainty into the adversary’s calculation, by forcing him to worry that a henchman who wasn’t selected the first time might still be on the high-risk list.

The more we reduce the probability of searching high-risk people, the more we increase the adversary’s uncertainty, which helps us. But we don’t want to reduce that probability too far – after all, if we trick the terrorist into giving the contraband to a high-risk henchman, we still want a high probability of selecting that henchman the second time. Depending on our assumptions, we can calculate the optimal probability of secondary search for high-risk people. That probability will often be less than 100%.
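To make the calculation concrete, here is a toy model, entirely my own construction rather than anything real screeners use: the terrorist has two henchmen, one on the high-risk list (selected with probability p) and one not (selected with baseline probability q). He probes once, then gives the contraband to a henchman who was not selected, choosing at random on ties. We can then search numerically for the p that maximizes the chance of catching the carrier:

```python
def catch_probability(p, q):
    """Chance the contraband carrier is selected at attack time.
    Toy model: one listed henchman (selected w.p. p), one unlisted
    (selected w.p. q).  The adversary probes once and gives the
    contraband to a henchman who was NOT selected on the probe,
    picking at random when both or neither were selected."""
    both = p * q                 # both selected -> carrier chosen at random
    only_listed = p * (1 - q)    # listed selected -> unlisted one carries
    only_unlisted = (1 - p) * q  # unlisted selected -> listed one carries
    neither = (1 - p) * (1 - q)  # neither selected -> carrier chosen at random
    return (both * (p + q) / 2 + only_listed * q
            + only_unlisted * p + neither * (p + q) / 2)

q = 0.05  # assumed baseline selection rate for ordinary travelers
best_p = max((i / 1000 for i in range(1001)),
             key=lambda p: catch_probability(p, q))
# best_p comes out well below 1.0: always searching listed people
# tells the adversary exactly whom to use as the carrier.
```

In this particular model the catch probability is a downward-opening quadratic in p, so the optimum (p = 0.55 when q = 0.05) is strictly interior: searching the high-risk person every time is provably not the best policy here.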

But now consider the politics of the situation. Imagine what would happen if (God forbid) a successful attack occurred, and if we learned afterward that one of the attackers had carried contraband through security, and that the authorities knew he posed a heightened risk but chose not to search him due to a deliberate strategy of not always searching known high-risk people. The recriminations would be awful. Even absent an attack, a strategy of not always searching is an easy target for investigative reporters or political opponents. Even if it’s the best strategy, it’s likely to be infeasible politically.

The “Pirate Pyramid”

This month’s Wired runs a high-decibel piece by Jeff Howe, on topsites and their denizens:

When Frank … posted the Half-Life 2 code to Anathema, he tapped an international network of people dedicated to propagating stolen files as widely and quickly as possible.

It’s all a big game and, to hear Frank and others talk about “the scene,” fantastic fun. Whoever transfers the most files to the most sites in the least amount of time wins. There are elaborate rules, with prizes in the offing and reputations at stake. Topsites like Anathema are at the apex. Once a file is posted to a topsite, it starts a rapid descent through wider and wider levels of an invisible network, multiplying exponentially along the way. At each step, more and more pirates pitch in to keep the avalanche tumbling downward. Finally, thousands, perhaps millions, of copies – all the progeny of that original file – spill into the public peer-to-peer networks: Kazaa, LimeWire, Morpheus. Without this duplication and distribution structure providing content, the P2P networks would run dry.

The story paints this as a sort of organized-crime scene, akin to a drug cartel, in which a great many people conspire, via some kind of command-and-control network, to achieve the widest distribution of the product. If true, this would be good news for law enforcers – if they chopped off the organization’s head, “the P2P networks would run dry.”

But this is the wrong way to interpret the facts, at least as I understand them. The topsites are exclusive clubs whose members compete for status by getting earlier, better content. The main goal is not to seed the common man’s P2P net, but to build status and share files within a small group. Somebody on the fringe of the group can grab a file and redistribute it to a less exclusive club, as a way of building status within that lesser club. Then somebody on the fringe of that club can redistribute it again; and so on. And so the file diffuses outward from its source, into larger and less exclusive clubs, until eventually everybody can get it. The file is distributed not because of a coordinated conspiracy, but because of the local actions of individuals seeking status. The whole process is organized; but it’s organized like a market, not like a firm.

[It goes without saying that all of this is illegal. Please don’t mistake my description of this behavior for an endorsement of it. It’s depressing that this kind of disclaimer is still necessary, but I have learned by experience that it is.]

What puts some people at the top of this pyramid, and others at the bottom? It’s not so much that the people at the bottom are incapable of injecting content into the system; it’s just that the people at the top get their hands on content earlier. Content trickles down to the P2P nets at the bottom of the pyramid, often arriving there before the content is available by other means to ordinary members of the public. Once a song or movie is widely available, there’s no real reason for an ordinary user to rip their own copy and inject it.

The upshot is that enforcement against the top of the pyramid would have some effect, but much less than the Wired article implies. The main effect would be to delay the arrival of content in the big P2P networks, at least for a while, by blocking early leaks of content from the studios and production facilities. The files would still show up – there are just too many sources – but the copyright owners would gain a short interval of exclusivity before the content showed up on P2P. Certainly the P2P networks would not “run dry.”

Don’t get me wrong. Law enforcers should go after the people at the top of the pyramid. At least they would be making examples of the right people. But we should recognize that the rivers of P2P will continue to overflow.

UPDATE (7:25 PM): Jeff Howe, author of the Wired article, offers a response in the comments.

BSA To Ask For Expansion of ISP Liability

The Business Software Alliance (BSA), a software industry group, will ask Congress to expand the liability of ISPs for infringing traffic that goes across their networks, according to a Washington Post story by Jonathan Krim.

The campaign to modify the law is part of a broader effort by the BSA to address a variety of copyright and patent issues. In a report to be released today, the group outlines its concerns but offers no specifics on how the 1998 law should be changed. But in an interview, [Adobe chief Bruce] Chizen and BSA Executive Director Robert Holleyman said Internet service providers should no longer enjoy blanket immunity from liability for piracy by users.

The article doesn’t make clear what limits BSA would put on ISP liability. Making ISPs liable for everything that goes over their networks would be a death blow to ISPs, because there is no way to look at a file and tell what might be hidden in it. (Don’t believe me? Then tell me what is hidden in this file.) Actually, BSA members sell virtual private network software that hides messages from ISPs.
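To see why content inspection is hopeless, consider how little it takes to hide data. Here is a minimal sketch of one of the simplest of countless hiding schemes, least-significant-bit steganography over a raw byte buffer (the function names and the scheme are illustrative, not any particular tool):

```python
def embed(cover: bytes, secret: bytes) -> bytes:
    """Hide `secret` in the least-significant bits of `cover`.
    The output differs from the cover only in low-order bits,
    which in media files look like ordinary noise."""
    bits = [(byte >> i) & 1 for byte in secret for i in range(8)]
    if len(bits) > len(cover):
        raise ValueError("cover file too small to hold the secret")
    stego = bytearray(cover)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit
    return bytes(stego)

def extract(stego: bytes, n_bytes: int) -> bytes:
    """Recover n_bytes of hidden data from the low bits."""
    out = bytearray()
    for j in range(n_bytes):
        out.append(sum((stego[8 * j + i] & 1) << i for i in range(8)))
    return bytes(out)
```

An ISP scanning the stego file sees only a plausible cover file with imperceptibly perturbed low bits; and if the secret is encrypted before embedding, even a scanner that knows the scheme recovers nothing but random-looking bytes.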

So the BSA must want something less than total liability. Perhaps they want to expand the DMCA subpoena-bot rule so that ISPs have to turn over a customer’s name on demand. The music industry once claimed that the existing DMCA rule requires that, but the courts disagreed. Congress could amend the DMCA to override that court decision.

Or perhaps they want to hold ISPs liable unless they deploy filtering and blocking technologies to try to stop certain files from circulating and certain protocols from being used. These technologies are only stopgap measures that would soon be overcome by P2P designers, so requiring their deployment seems like bad policy.

Most likely, this is just a tactic to put political pressure on ISPs, in the hope of extracting some concessions. I predict that either (a) this will go nowhere, or (b) ISPs will agree to allow an expansion of the subpoena-bot rule.

Predictions for 2005

Here is my list of twelve predictions for 2005.

(1) DRM technology, especially on PCs, will be seen increasingly as a security and privacy risk to end users.

(2) Vonage and other leading VoIP vendors will start to act like incumbents, welcoming regulation of their industry sector.

(3) Internet Explorer will face increasing competitive pressure from Mozilla Firefox. Microsoft’s response will be hamstrung by its desire to maintain the fiction that IE is an integral part of the operating system.

(4) As blogs continue to grow in prominence, we’ll see consolidation in the blog world, with major bloggers either teaming up with each other or affiliating with major news outlets or web sites.

(5) A TV show or movie that is distributed only on the net will become a cult hit.

(6) The Supreme Court’s Grokster decision won’t provide us with a broad, clear rule for evaluating future innovations, so the ball will be back in Congress’s court.

(7) Copyright issues will be stalemated in Congress.

(8) There will be no real progress on the spam, spyware, and desktop security problems.

(9) Congress will address the spyware problem by passing a harmless but ineffectual law, which critics will deride as the “CAN-SPY Act.”

(10) DRM technology will still fail to prevent widespread infringement. In a related development, pigs will still fail to fly.

(11) New P2P systems will marry swarming distribution (as in BitTorrent) with distributed indexing (as in Kazaa et al). Copyright owners will resort to active technical measures to try to corrupt the systems’ indices.

(12) X-ray vision technology will become more widely available (though not to the general public), spurring a privacy hoohah.

2004 Predictions Scorecard

A year ago, I offered seven predictions for 2004. Today, as penance for sins committed in 2004, it’s my duty to exhume these predictions and compare them to reality.

(1) Some public figure will be severely embarrassed by an image taken by somebody else’s picture-phone or an audio stream captured by somebody else’s pocket audio recorder. This will trigger a public debate about the privacy implications of personal surveillance devices.

The Abu Ghraib images seem to fit the bill here: pictures taken by a phonecam that severely embarrass a public figure. When I made this prediction, I had in mind pictures or recordings of the public figure in question, but the prediction as written wasn’t too far off.

Verdict: mostly right.

(2) The credibility of e-voting technologies will continue to leak away as more irregularities come to light. The Holt e-voting bill will get traction in Congress, posing a minor political dilemma for the president who will be caught between the bill’s supporters on one side and campaign contributors with e-voting ties on the other.

E-voting technologies did lose credibility as predicted. The Holt bill did gain some traction but was never close to passing. Republicans did feel some squeeze on this issue, and it became a bit of a partisan issue. (Now that the 2004 election is past, there is more hope for e-voting reform.)

Verdict: mostly right.

(3) A new generation of P2P tools that resist the recording industry’s technical countermeasures will grow in popularity. The recording industry will respond by devising new tactics to monitor and unmask P2P infringers.

P2P tools did evolve to resist technical countermeasures, for instance by using hashes to detect spoofed files. The recording industry is only now starting to change tactics. The big P2P technology of the year was BitTorrent, whose main innovation was in dispersing the bandwidth load required to distribute large files, rather than in evading countermeasures. Indeed, BitTorrent made possible a new set of countermeasures, which the copyright owners adopted near the end of the year.
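The hash defense is easy to sketch. In a BitTorrent-style design, the index publishes a digest for every piece of the file, so a downloader can discard decoy pieces immediately; this toy version (piece size shrunk for illustration, names my own) shows the idea:

```python
import hashlib

PIECE_SIZE = 4  # toy value; real clients use pieces of 256 KiB or more

def piece_hashes(data: bytes) -> list[str]:
    """Digests published alongside the file (as in a .torrent index)."""
    return [hashlib.sha1(data[i:i + PIECE_SIZE]).hexdigest()
            for i in range(0, len(data), PIECE_SIZE)]

def verify_piece(index: int, piece: bytes, published: list[str]) -> bool:
    """A downloader keeps a received piece only if its digest matches
    the published one, so a spoofed piece never enters the file."""
    return hashlib.sha1(piece).hexdigest() == published[index]
```

This is why simple spoofing (uploading decoy files or garbage pieces) stopped working: the attacker would have to corrupt the published index itself, which is exactly the newer countermeasure the post mentions.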

Verdict: mostly right.

(4) Before the ink is dry on the FCC’s broadcast flag order, the studios will declare it insufficient and ask for a further mandate requiring watermark detectors in all analog-to-digital converters. The FCC will balk at the obvious technical and economic flaws in this proposal.

The studios did seem to want a watermark-based system to close the analog hole, but they were held back by its total infeasibility. My main error here was to misjudge the time scale.

Verdict: mostly wrong.

(5) DRM technology will still be ineffective and inflexible. A few people in the movie industry will wake up to the hopelessness of DRM, and will push the industry to try another approach. But they won’t be able to overcome the industry’s inertia – at least not in 2004.

DRM technology was nearly useless, as predicted. We’re starting to hear faint rumblings within the movie industry that a different approach would be wise. But, as predicted, the industry isn’t paying much attention to them.

Verdict: right.

(6) Increasingly, WiFi will be provided as a free amenity rather than a paid service. This will catch on first in hotels and cafes, but by the end of the year free WiFi will be available in at least one major U.S. airport.

Even some New Jersey diners now offer free WiFi. The Pittsburgh airport has offered free WiFi for nearly a year. And some airline clubrooms offer free WiFi that is accessible from nearby terminal areas.

Verdict: right.

(7) Voice over IP (VoIP) companies like Vonage will be the darlings of the business press, but the most talked-about VoIP-related media stories will be contrarian pieces raising doubt about the security and reliability implications of relying on the Internet for phone service.

VoIP got plenty of attention, but these companies were not “darlings of the business press”. Security/reliability contrarian stories didn’t get much play. This prediction went too far.

Verdict: mostly wrong.

Overall score: two right, three mostly right, two mostly wrong, none wrong. I’m a bit surprised to have done so well. Obviously this year’s predictions need to be more outrageous. I’ll offer them later in the week.

[UPDATE (1:15 PM): I originally wrote that the first prediction was wrong. Then an anonymous commenter pointed out that Abu Ghraib would qualify. See also the incident in India referenced in the comments.]