March 28, 2024

Twenty-First Century Wiretapping: Content-Based Suspicion

Yesterday I argued that allowing police to record all communications that are flagged by some automated algorithm might be reasonable, if the algorithm is being used to recognize the voice of a person believed (for good reason) to be a criminal. My argument, in part, was that that kind of wiretapping would still be consistent with the principle of individualized suspicion, which says that we shouldn’t wiretap someone unless we have strong enough reason to suspect them, personally, of criminality.

Today, I want to argue that there are cases where even individualized suspicion isn’t necessary. I’ll do so by introducing yet another hypothetical.

Suppose we have reliable intelligence that al Qaeda operatives have been instructed to use a particular verbal handshake to identify each other. Operatives will prove they were members of al Qaeda by carrying out some predetermined dialog that is extremely unlikely to occur naturally. Like this, for instance:

First Speaker: The Pirates will win the World Series this year.
Second Speaker: Yes, and Da Vinci Code is the best movie ever made.

The police ask us for permission to run automated voice recognition algorithms on all phone conversations, and to record all conversations that contain this verbal handshake. Is it reasonable to give permission?

If the voice recognition is sufficiently accurate, this could be reasonable – even though the wiretapping is not based on advance suspicion of any particular individual. Suspicion is based not on the identity of the individuals speaking, but on the content of the communication. (You could try arguing that the content causes individualized suspicion, at the moment it is analyzed, but if you go that route the individualized suspicion principle doesn’t mean much anymore.)
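To make the trigger concrete, here is a minimal sketch in Python. It assumes each call has already been transcribed into (speaker, utterance) pairs, which glosses over the hardest part, speech recognition itself; the phrases come from the hypothetical above, and the function names are purely illustrative.

```python
# A toy content-based trigger, assuming clean transcripts. A real system
# would work on noisy speech-to-text output with confidence scores.

HANDSHAKE = [
    "the pirates will win the world series this year",
    "da vinci code is the best movie ever made",
]

def normalize(utterance: str) -> str:
    """Lowercase and strip punctuation so matching isn't trivially evaded."""
    return "".join(
        ch for ch in utterance.lower() if ch.isalnum() or ch.isspace()
    ).strip()

def handshake_present(transcript) -> bool:
    """True if the handshake phrases occur in order, from different speakers."""
    idx = 0
    last_speaker = None
    for speaker, utterance in transcript:
        if HANDSHAKE[idx] in normalize(utterance) and speaker != last_speaker:
            last_speaker = speaker
            idx += 1
            if idx == len(HANDSHAKE):
                return True   # record this call
    return False

call = [
    ("A", "The Pirates will win the World Series this year."),
    ("B", "Yes, and Da Vinci Code is the best movie ever made."),
]
print(handshake_present(call))  # True: this call would be recorded
```

Note that even this toy version has to normalize case and punctuation, a hint of how brittle literal phrase matching is in practice.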

Obviously we wouldn’t give the police carte blanche to use any kind of content-based suspicion whenever they wanted. What makes this hypothetical different is that the suspicion, though content-based, is narrowly aimed and is based on specific evidence. We have good reason to believe that we’ll be capturing some criminal conversations, and that we won’t be capturing many noncriminal ones. This, I think, is the general principle: intercepted communications may only be made known to a human based on narrowly defined triggers (whether individual-based or content-based), and those triggers must be justified based on specific evidence that they will be fruitful but not overbroad.

You might argue that if the individualized suspicion principle has been good enough for the past [insert large number] years, it should be good enough for the future too. But I think this argument misses an important consequence of changing technology.

Back before the digital revolution, there were only two choices: give the police narrow warrants to search or wiretap specific individuals or lines, or give the police broad discretion to decide whom to search or wiretap. Broad discretion was problematic because the police might search too many people, or might search people for the wrong reasons. Content-based triggering, where a person got to overhear the conversation only if its content satisfied specific trigger rules, was not possible, because the only way to tell whether the trigger was satisfied was to have a person listen to the conversation. And there was no way to unlisten to that conversation if the trigger wasn’t present. Technology raises the possibility that automated algorithms can implement triggering rules, so that content-based triggers become possible – in theory at least.

Given that content-based triggering was infeasible in the past, the fact that traditional rules don’t make provision for it does not, in itself, end the argument. This is the kind of situation that needs to be evaluated anew, with proper respect for traditional principles, but also with an open mind about how those principles might apply to our changed circumstances.

By now I’ve convinced you, I hope, that there is a plausible argument in favor of allowing government to wiretap based on content-based triggers. There are also plausible arguments against. The strongest ones, I think, are (1) that content-based triggers are inconsistent with the current legal framework, (2) that content-based triggers will necessarily make too many false-positive errors and thereby capture too many innocent conversations, and (3) that the infrastructure required to implement content-based triggers creates too great a risk of abuse. I’ll wrap up this series with three more posts, discussing each of these arguments in turn.

Comments

  1. enigma_foundry writes:
    “The Information Bomb” is not available on-line.

    Please do not tease people on Internet newsgroups, blogs, and forums with references to unfree material. This includes anything locked up behind any kind of registerwall, pay- or otherwise. If the information cannot be used by everyone equally, regardless of race, creed, geographic location, or income, then it should not be waved tantalizingly under everyone’s noses. There are six billion of us, and most of us don’t use US dollars, don’t have credit cards, can’t give a valid US zipcode to some registration form, don’t want more spam resulting from giving out our email address on some registration form, don’t want to have to jump through hoops with multiple mail providers or constantly creating new mail accounts to avoid said spam, and don’t want to have to remember yet another login and password for yet another site. In fact, most of us live on less than a dollar a day, which makes anything locked behind a paywall absolutely out of the question.

    Mentioning such “resources” online is tantamount to a bait-and-switch. Same with posting a URL ostensibly to an “interesting article” that in fact leads to an ad, a registration form, a CC# form, a login page, or the like.

    Please do not do it again. If it isn’t freely viewable without hoop-jumping, it may as well not exist for the vast majority of us, and mentioning it wastes our time and frustrates us. And remember: we outnumber you 5,999,999,999 to 1, so best take us seriously.

  2. enigma_foundry says

    The book “The Information Bomb” by Paul Virilio is not available on-line.

    His work is underrated, perhaps because the quality of his writing does not, IMHO, do justice to the quality of his ideas.

    He has many interesting observations about accidents, and I will summarize them later this summer, if I can help complete the Wikipedia article about Paul V.

    I will leave this excerpt from wikipedia and the link below:

    “The integral accident
    Technology cannot exist without the potential for accidents. For example, the invention of the locomotive also entailed the invention of the rail disaster. Virilio sees the Accident as a rather negative growth of social positivism and scientific progress. The growth of technology, namely television, separates us directly from the events of real space and real time. We lose wisdom, lose sight of our immediate horizon and resort to the indirect horizon of our dissimulated environment. From this angle, the Accident can be mentally pictured as a sort of “fractal meteorite” whose impact is prepared in the propitious darkness, a landscape of events concealing future collisions. Even Aristotle claimed that “there is no science of the accident,” but Virilio disagrees, pointing to the growing credibility of simulators designed to escape the accident — an industry born from the unholy marriage of post-WW2 science and the military-industrial complex….”

    http://en.wikipedia.org/wiki/Virilio

  3. Has anyone got something equivalent to this “information bomb” that isn’t behind a registerwall or paywall?

  4. cm wrote:
    Neo: Just guessing, based on a straightforward google search:
    [link snipped]

    That’s no good, you need a CC# to access the content that way. I said a direct URL to the content.

    cm also wrote:
    The idea is that surveillance based on content triggers is mostly useful for generating investigation leads, not court evidence, so it does not need to be of a high legal standard.

    Fruit of the poisonous tree. Don’t you watch Law & Order? 🙂

  5. A little comment on counterarguments 2 and 3:

    I don’t see false identification as that much of a problem; with more refined decision systems, these systems should be able to work very accurately. The real problem here is providing the system with enough information to make it “judge” the way you want it to.

    As for abuse of the implemented systems… that’s what bugs me the most. Seeing how easily people are corrupted, you would need a huge administrative body around the system, one that would spend most of its effort policing itself rather than the system, in order to prevent abuse efficiently. This, of course, would cost a lot of money, money no one is willing to spend unless it’s absolutely necessary.
    Quite a dilemma here…

  6. john erickson: Very simple; to a good approximation it is undefinable, and hence the only practicable solution is that content is “whatever we can lay our hands on”. The idea is that surveillance based on content triggers is mostly useful for generating investigation leads, not court evidence, so it does not need to be of a high legal standard.

    The general problem in investigating anything is usually not securing the evidence, but figuring out where to look.

  7. Neo: Just guessing, based on a straightforward google search:
    http://www.amazon.com/gp/product/1844670597

  8. Enigma_foundry, you mention what sounds like a very interesting blog or resource, but fail to provide a URL. Please post a URL at which the content of this “information bomb” can be read. Thank you.

  9. john erickson says

    On this question of “content-based triggering,” WHAT qualifies as CONTENT?

    Any attempt to implement content-based triggering in surveillance law must first hinge on the definition of a “content trigger,” and then on what content may be collected given that trigger. Any definition of a trigger will depend upon whether the subsequent pattern recognition is being applied in real-time or through post-collection analysis.

    The most practical approach for real-time recognition on streams would seem to be some kind of dedicated finite state machine approach, which is performance-limited by the ability to bind keys to recognized phrase elements in the streaming content. This approach is being used very effectively for text-based content filtering; see the recent SciAm article on IBM Zurich’s work in this area, and extrapolate as required…
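    The finite-state idea can be sketched in a few lines: feed words from the stream one at a time, and report whenever a target phrase completes. This toy matcher is only an illustration of the streaming approach, not the IBM Zurich system mentioned above; a production matcher would use an Aho-Corasick automaton with failure links so its cost per word does not grow with the number of patterns.

```python
# Toy streaming phrase matcher: advances a set of partial matches
# one word at a time, emitting any phrase that just completed.

class PhraseMatcher:
    def __init__(self, phrases):
        self.phrases = [tuple(p.lower().split()) for p in phrases]
        self.partials = []   # (phrase_index, next_word_position) pairs

    def feed(self, word):
        """Advance the machine by one word; return phrases completed now."""
        word = word.lower()
        hits = []
        # every phrase may also start fresh at this word
        candidates = self.partials + [(i, 0) for i in range(len(self.phrases))]
        self.partials = []
        for i, pos in candidates:
            if self.phrases[i][pos] == word:
                if pos + 1 == len(self.phrases[i]):
                    hits.append(" ".join(self.phrases[i]))
                else:
                    self.partials.append((i, pos + 1))
        return hits

m = PhraseMatcher(["world series", "best movie ever"])
for w in "the pirates will win the world series this year".split():
    for hit in m.feed(w):
        print("trigger:", hit)   # fires once, on "series"
```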

    Once tagging of the streams has occurred, it becomes a question of what constitutes a trigger. One approach is very literal: trigger if some logical combination of terms is found, including (and esp.) some precise sequence. But a much more effective approach would be to determine the distance between a vector sequence of key terms and a set of targets (patterns). But how do we encode such thresholds IN LAW?
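    The distance-to-pattern idea might be prototyped like this: represent a conversation as a vector of key-term counts and fire when it is close enough, by cosine similarity, to a stored target. The terms, target, and threshold below are invented for illustration; how to write such a threshold into a warrant is exactly the question posed above.

```python
# Sketch of a distance-based trigger over key-term count vectors.
import math

KEY_TERMS = ["pirates", "series", "vinci", "movie"]
TARGET = [1, 1, 1, 1]   # the idealized handshake conversation
THRESHOLD = 0.8         # arbitrary; a legal rule would have to fix this

def key_term_vector(text):
    """Count occurrences of each key term in the text."""
    words = text.lower().split()
    return [words.count(t) for t in KEY_TERMS]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def triggers(text):
    return cosine(key_term_vector(text), TARGET) >= THRESHOLD

print(triggers("the pirates will win the series yes and da vinci is the best movie"))
```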

    Finally, given a trigger event, what is a legal sample under the law? From an evidentiary standpoint, it would seem the best strategy would be to review the region of the original source that produced the sequence of keys that caused the particular trigger. The most useful information is unlikely to be in any precisely defined region or “moment”; it is more likely to be in some imprecisely defined band of time, described statistically. How do we encode THAT in a warrant?

    This brings to mind the work of Michael F. Cohen (Microsoft Research) on “Capturing the Moment,” in which (for example) a shutter click actually captures a video sample spanning from x seconds BEFORE to x seconds after the click. Given this archive of the “moment” (and not just the instant of the click), his group has come up with all sorts of clever applications.

    THE POINT is, due to the inaccuracies of speech feature extraction, pattern recognition based on extracted features, etc., authorities would need to get warrants for periods of time NEAR the trigger in order to get an accurate sample of content/evidence. Warrants for surveillance (it would seem) must somehow encode such moments.
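    The “warrant for the region near the trigger” could be sketched as a rolling buffer: retain the last few samples continuously, and when a trigger fires, keep collecting until the post-trigger window is full. The buffer sizes here stand in for whatever window a warrant would actually specify.

```python
# Rolling "capture the moment" buffer around a trigger event.
from collections import deque

class MomentBuffer:
    """Keep the last `before` samples continuously; once a trigger fires,
    collect `after` more samples, then emit the whole surrounding region."""
    def __init__(self, before, after):
        self.pre = deque(maxlen=before)   # continuously updated history
        self.after = after
        self.post = None                  # becomes a list while capturing

    def push(self, sample, trigger=False):
        if self.post is not None:
            self.post.append(sample)
            if len(self.post) >= self.after + 1:   # trigger sample + `after` more
                window, self.post = list(self.pre) + self.post, None
                return window                      # the full "moment"
        elif trigger:
            self.post = [sample]
        else:
            self.pre.append(sample)
        return None

buf = MomentBuffer(before=3, after=2)
for t in range(10):
    window = buf.push(t, trigger=(t == 5))
    if window:
        print(window)   # prints [2, 3, 4, 5, 6, 7]
```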

  10. There was also a small-scale scandal around CCTV surveillance in some European city where the (male) surveillance personnel allegedly got off on zooming in on and “checking out” women. Probably based on an internal complaint where some staff member deemed this unacceptable.

  11. I find it much more plausible than the quoted example that the “content” tracked is conversations around the respective policymakers’ pet peeves — criticism of social trends/policies, organizing efforts, and “interesting” proprietary technical and financial information, as well as privately motivated abuses. More likely than not, this is always happening to the extent technically/procedurally feasible.

    A while ago I read an article about how a cop tracked down a woman he had randomly met and fancied, using her license plate number, and then stalked her.

  12. enigma_foundry says

    “(3) that the infrastructure required to implement content-based triggers creates too great a risk of abuse.”

    This is the most important point, but the way it is actually stated, it is a dramatic _understatement_ of the problem.

    First consider how the ergonomics of power will be changed by advances in technology. The value of being in power changes, so different elements will become attracted to power. So not only has the potential for abuse increased, but power has become more valuable to abusers.

    I think a little reading of Paul Virilio would be appropriate here. He seems to have an instinctual understanding of how technology and human traits can interact in very destructive ways. Read The Information Bomb in particular, and remember that several of the articles in it had been published before 9/11.

    These advances in technology could lead to a catastrophic accident: those in power gaining the ability to perpetuate their power in a way that makes it difficult to imagine how they would be dislodged.

    Remember, Hitler and Putin were both elected by democracies. Given the wrong circumstances, such a demagogue could come to power in the USA as well.

    There are certain machineries of power that are too dangerous to build in the first place.

    Just say no.

  13. Mark Lee says

    In this case, the police know the passphrase but none of the group’s members. Is this a realistic example? In the real world, this sort of evidence would likely come from an informant, and if the informant knows, they would have learned it from somebody, and you now have individualized suspicion on that somebody.

    Oh, but what if the info was found on a notebook in Afghanistan? Well, assuming that you can trust the info on the notebook, if the passphrase is on the book, I’d expect that there would be something that would allow you to trace other members of the group. They found that sort of stuff on Khalid Mohammed’s computer, IIRC.

    But enough of that; let’s assume there is a reasonable chance something like this could happen. Sure, you can argue that this is a reasonable case for a blanket search of phone content, but as other posters have already commented, with an innocuous enough code phrase there will be massive false positives. And searching the content of conversations is a serious First Amendment problem: if people know that the government is searching all phone content, there is a huge potential to chill protected speech. It’s not at all clear that the balance would favor the search.

    Also, I don’t believe it’s hard to come up with sympathetic scenarios for the technology; the real problem is the potential for abuse. If the search capability is allowed at all, it would have to be strictly regulated.

  14. Mr. Felten, interesting points, and I have enjoyed reading your dialog on this issue and many others.

    A couple of problems come to mind. Not all conversations are, for lack of a better word, “warrantable.” I cannot go to a judge and get a warrant for dialog covered under attorney–client privilege. So couldn’t a prospective bad guy just claim he was discussing baseball and recent movies with his lawyer? And wouldn’t it be illegal to record those conversations at all, or to scan them in any way?

    I do believe that for your proposed system to work at all, the content-based triggers would have to be extremely specific and individualized, to avoid a huge number of false positives. If you suspect (via other means) Alice to be an evildoer who might use the code phrases in her conversations, and she does so in talking to Eve, then you might need to scan her and Eve’s incoming and outgoing conversations (since you now have cause to suspect Eve and request a warrant for her). But conversations, say, from Bob to Carol would be unrelated (assuming Bob and Carol don’t have any predetermined connection to Alice or Eve).

    In this case you already have pre-existing suspicion and a conventional warrant application would perhaps be justified, and the voice recognition system amounts to a time saver in not having to have a person listen to the audio recordings. In this limited case, you could see the use of the content based triggers to scan a massive volume of conversations between suspects in faster than realtime. If one is investigating a criminal network of size N, how many people are needed to manually listen to all the N^2 possible calls made?

    As an amusing false-positive example, there was once an episode of L.A. Law in which a main character (Douglas Brackmann, I believe) was discussing the list of items on a sushi menu with a neighboring female diner at an upscale sushi bar, including items such as a “spicy tuna handroll,” and was then arrested by the diner (a police officer) for solicitation. The conversation was full of double entendre, but legally appropriate for the context. Unless you knew the context surrounding the conversation, it could be miscategorized.

  15. This passphrase hypothetical reminds me a little of the “ticking-bomb” hypothetical that usually prefaces discussions of breaking other civilized/constitutional norms. Is there any realistic prospect that voice-recognition or voice-analysis software will reach this level of specificity, or that one would have the kind of detailed intelligence required to know about passphrases while being blissfully unaware of who was using them (assuming they’re used at all by the folks we’re ostensibly targeting)?

    I think that this kind of discussion, if taken seriously, may attain some of the qualities of “serious liberal” discussion of the prospects for the invasion of Iraq, where hypotheticals about how one might do an invasion properly somehow morphed into an appearance of support for an invasion done by the administration actually in power.

  16. Are you assuming that you can recognise the key phrases no matter what language they are spoken in? And you still seem to be avoiding the issue of the conversation being encrypted.
    Because of all the problems already highlighted, I think it’s clear that your scanning technology could only be useful in scenarios other than the ones you have described; clearly it’s dangerous to approve the use of a technology when you can’t describe when and why it might be useful.

  17. While the system you describe could theoretically be created, the number of instances where the enemy is not only using key phrases for ID, but those key phrases are known to authorities is probably pretty small. Add in the existence of untraceable cell phones, and the utility of the system drops even farther. Toss in the fact that you will be looking for hits where you don’t already know that the guy pinged is a suspect (if you already know, you don’t need the system) and it drops still farther.

    Any organization has to justify its expenses to someone, even if only itself. So once a multi-billion-dollar system for monitoring every telephone call in America is in place, what will the organization’s response be to the question, “We spend $X billion on a system we only use 2-3 times a year, when we find out a new identifying key phrase?”

    So what you are proposing is a system that is not useful enough to justify its cost, unless it is used more broadly than your proposal. So it gets bumped up to content of conversation rather than just code keys. Ever hear kids discussing the latest shoot-em-up? Phrases indistinguishable from serious threats except by context of the entire conversation will be in there. Then do you have the computer exclude any conversations that include video game names or listen to every one of them? If you exclude, then the bad guys will start adding them to their conversations to give false negatives.

  18. The voiceprint thing works since it requires the police to get a warrant on a person. The content scan doesn’t, since it makes the target a suspect for what he says in a private conversation, which has way too many free speech and privacy issues to actually be permissible in practice.

  19. Let me put one of Loyal Citizen’s point a little more succinctly.

    Ethically speaking, I think it’s pretty uncontroversial that any kind of mass wiretapping would need public approval to be acceptable. The gist of Ed’s argument has been that public wiretapping is acceptable under certain, limited circumstances, such as the hypothetical situation he described. Now, suppose that, as citizens, we agree with Ed and decide to allow mass wiretapping for the specific purpose of detecting key phrases. How many criminal organizations do you think are likely to use this method to confirm group membership?

    The paradox is this: Mass wiretapping for the purpose of detecting key phrases is acceptable only if the public is aware that it is happening. Unfortunately, it is only *effective* if the public (and by extension the criminals) is *unaware* that it is happening.

    Let me add one more premise: Mass wiretapping is only acceptable if the purpose for which it is used is effective. I think it’s clear where this leads: Mass wiretapping for the purpose of detecting key phrases is unacceptable whether or not the public is aware of it.

    I should add that there is one potential benefit of this kind of wiretapping: by making it unattractive to use key phrases, it makes it that much harder for criminal organizations (and all others) to operate in secret, since a different method will need to fill the role of the key phrases. That being so, there is no need to use public wiretapping to achieve this result; enforcers can achieve the same result by taking Loyal Citizen’s advice and making a policy of publishing key phrases once they are known. Secret phrases do not function if they are not secret.

  20. In order to enable such warrants, Congress would have to pass legislation. Such legislation, enabling content-based restrictions on speech, would be unconstitutional. So if we as a society want to make this policy change, we’ve made it hard for ourselves. That doesn’t make it unwise, of course.

    But there is a problem with your strawman construction of tradition. We’ve had the ability to produce automated content-based filters for over a century, and have not done so. Telegraph operators routinely used electromechanical devices for such. So why didn’t we? I think it’s because some human has to construct the filter, and there’s too much potential for abuse. Unlike a description of a physical place or a real person, a description of content is an interesting program. It admits complexity.

    For the same reason we don’t allow warrants for “any room that Alice and Bob both enter, after they’ve been there for long enough to have a conversation,” we don’t and shouldn’t allow arbitrary complexity in any warrant’s description. Any content-based warrant will necessarily be over that line.

  21. Loyal Citizen says

    I would answer, no, it’s not reasonable to give permission.

    In my opinion, any one of the three reasons you mentioned would be sufficient. In addition, you introduce the Panopticon issue: no one knows when they are being listened to, so they self-censor, or change communication modes so that the monitoring is useless. Unless you are arguing that the list of ‘bad’ phrases is made public?

    It really all comes down to this– are we interested in law enforcement, or crime prevention? They are two very different things, and with very different goals- one is attainable, and one is not.

  22. But there’s not really a mechanism for getting this kind of warrant. A search warrant, according to the 4th Amendment, must “particularly describ[e] the place to be searched, and the persons or things to be seized”. A FISA warrant, as I understand it, must name the person to be wiretapped. How to deal with this lack of (current) mechanism will be the topic of the next post.

  23. These hypotheticals are all interesting and enlightening, but we already have a mechanism for dealing with cases like these.

    The mechanism is called “Go to a judge and get a warrant.” If there’s a reasonable basis for a warrant, the judge will issue one and there’s no problem.

    The real concern is not the possibility/reality of broad technical searches; it’s that there’s no oversight in the system. Worse, there is an oversight mechanism which appears to have been bypassed. Even if the people carrying out the surveillance are pure as the driven snow, bypassing the checks and balances looks really bad.