December 26, 2024

21st Century Wiretapping: Risk of Abuse

Today I’m returning, probably for the last time, to the public policy questions surrounding today’s wiretapping technology. Thus far in the series (1, 2, 3, 4, 5, 6, 7, 8) I have described how technology enables wiretapping based on automated recognition of certain features of a message (rather than individualized suspicion of a person), I have laid out the argument in favor of allowing such content-triggered wiretaps given a suitable warrant, and I have addressed some arguments against allowing them. These counterarguments, I think, show that content-triggered wiretaps must be used carefully and with suitable oversight, but they do not justify forgoing such wiretaps entirely.
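To make the mechanism concrete, here is a toy sketch of what a content trigger might look like. It is only an illustration of the idea (the patterns and the scan-everything loop are mine, not any real system’s): an automated filter sees every message and escalates the ones that match its triggers to human analysts.

```python
import re

# Hypothetical trigger patterns; a real system would use far richer
# classifiers than keyword regexes.
TRIGGERS = [re.compile(p, re.IGNORECASE)
            for p in [r"\bdetonator\b", r"\bsafe house\b"]]

def should_escalate(message: str) -> bool:
    """True if the message matches any trigger and goes to a human."""
    return any(t.search(message) for t in TRIGGERS)

# Note what the design requires: the filter must see *every* message.
# That is the infrastructure point discussed below.
for msg in ["lunch at noon?", "keys to the safe house are under the mat"]:
    if should_escalate(msg):
        print("FLAGGED:", msg)
```

The point to notice is architectural: whether or not any particular message is flagged, the filter must sit where it can read all of them.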

The best argument against content-triggered wiretaps is the risk of abuse. By “abuse” I mean the use of wiretaps, or information gleaned from wiretaps, illegally or for the wrong reasons. Any wiretapping regime is subject to some kind of abuse – even if we ban all wiretapping by the authorities, they could still wiretap illegally. So the risk of abuse is not a new problem in the high-tech world.

But it is a worse problem than it was before. The reason is that to carry out content-triggered wiretaps, we have to build an infrastructure that makes all communications available to devices managed by the authorities. This infrastructure enables new kinds of abuse, for example the use of content-based triggers to detect political dissent or, given enough storage space, the recording of every communication for later (mis)use.

Such serious abuses are not likely, but given the harm they could do, even a tiny chance that they could occur must be taken seriously. The infrastructure of content-triggered wiretaps is the infrastructure of a police state. We don’t live in a police state, but we should worry about building police state infrastructure. To make matters worse, I don’t see any technological way to limit such a system to justified uses. Our only real protections would be oversight and the threat of legal sanctions against abusers.

To sum up, the problem with content-triggered wiretaps is not that they are bad policy by themselves. The problem is that doing them requires some very dangerous infrastructure.

Given this, I think the burden should be on the advocates of content-triggered wiretaps to demonstrate that they are worth the risk. I won’t be convinced by hypotheticals, even vaguely plausible ones. I won’t be convinced, either, by vague hindsight claims that such wiretaps coulda-woulda-shoulda captured some specific bad guy. I’m willing to be convinced, but you’ll have to show me some evidence.

Comments

  1. Did you see this?

    http://arstechnica.com/news.ars/post/20070504-bush-administration-proposes-retroactive-immunity-for-phone-companies.html

    retroactive immunity for the phone companies that cooperated with the NSA!

  2. btw: the reason i wrote this was my observation that moral standards, even very basic ones, do not always correlate with laws.

  3. @ ed felten:
    “But many other cases are easy.”
    no one will talk about the cases that are easy. public interest will always be with the difficult cases (a psychology thing …) so i think those are more important than the majority of the cases that are “clear”.

    “reasons for wiretapping that are wrong:
    (c) corporate espionage”
    uhhh, no.
    see, the line here is pretty thin. if you own a company and you think a competitor is doing something illegal or immoral, would it be morally justified to use wiretapping to prove your view?
    of course it would be against the law, but morally, i think there are some cases where this could be justified.
    and here we have the very first controversy … do we stick only to the law?

  4. “d) environmental degradation is a clear and present danger to the lives and safety of all citizens.

    Gee, which group does not belong with the others….?

    HINT: I have made it bold”

    Perhaps you were not paying attention last autumn when the entire city of New Orleans suddenly reverted to its ancestral swampy nature?

  5. “Here is a design principle that tells us to assume that a security system will one day be under the control of someone you don’t trust.”

    An excellent reason not to trust “trusted computing” if they ever really start pushing its deployment, IMO.

    “Also highly recommended as a follow-up are Paul Virilio’s writings, esp. … ‘The Information Bomb’”

    I thought we already had this discussion. Unfree references are uncool. If I can’t follow a reference up with one click and zero dollars and zero cents, I don’t want to see it, and this probably goes for many if not most others too. And of course, as I pointed out before, most of the people in the world will not be able to get past any paywall, either because they don’t have the money, or because they don’t have access to the US credit-card system, convenient currency conversion, or even perhaps convenient postal mail where they live.

  6. “That list of wrong reasons is nice, but b and c at least are not universally agreed. If, for example, government security forces believe (for some value of “believe”) that the personal or political enemies of the current administration would pursue unacceptable policies leading to danger to the republic, surveillance of them and others in their camp would be an obvious safety measure.”

    In this case, though, the justification for surveillance is that he’s “evidently a threat to the state”, not that he’s a “personal or political enemy of the current administration”.

    “Ditto if some company were believed to pose a direct danger to national security by virtue of its plans or products (eg a p2p voip company), or an indirect danger by threatening the stability and profitability of an important government contractor. (And certainly any information about foreign companies is fair game…)”

    I find this extremely disturbing. A company developing and marketing a product within the bounds of the law should not be treated as a possible enemy of the state just because, say, they market crypto. Either outlaw crypto or leave the crypto vendor alone. Likewise, a company competing with a government contractor is doing its job in a capitalist economy. Suggesting they should be monitored (and presumably interfered with depending on the results of that monitoring) merely because they compete is coming awfully close to suggesting we change the flag to a hammer and sickle.

    As for spying on foreign companies, it seems wrong, but legally it’s a matter of international treaties and their interpretation; domestic law makes clear though that espionage directed at domestic targets without probable cause (likely criminal activity — not merely “something we’d rather they didn’t do” such as distributing crypto) is unconstitutional.

  7. enigma_foundry says

    “And on the other side, it doesn’t even get into the list of reasons that might be right on some level but pose an end to freedom if implemented. We know, for example, that many dangerous domestic groups believe at least one of the following things: a) certain medical services should not be provided by anyone, b) the government has far outstripped its legitimate authority, c) animals have enforceable rights possibly comparable to those of humans, d) environmental degradation is a clear and present danger to the lives and safety of all citizens. Obviously content-based surveillance should be flagging any of those items.”

    Gee, which group does not belong with the others….?

    (HINT: I have made it bold)

  8. enigma_foundry says

    “I have also written however about how building the infrastructure may be the greatest danger. In search of a pithy term for each danger, I call this the ‘pushbutton police state.’ Because with the infrastructure in place, it becomes just a question of policy, and in the extreme simply one of flipping a bit in some software, to switch from a free society to a police state.”

    I would add my voice to that chorus (see my several posts on previous articles). I would also take this point a bit further and state that the existence of such an infrastructure will make the seat of power more attractive to those who would misuse it, almost certainly ensuring that it would be misused.

    The existence of such an infrastructure in a large, rich and well-armed society is a catastrophe waiting to happen.

    Also highly recommended as a follow-up are Paul Virilio’s writings, esp. his idea of ‘the integral accident’ (the concept that all inventions contain their accidents: the inventor of the train also invented derailment) and his observations throughout ‘The Information Bomb’ that the interaction of human behavior with new technology leading to these failures has been under-studied.

    There is a reason why so many observers are coming to realize how dangerous this infrastructure is….

  9. “I have also written however about how building the infrastructure may be the greatest danger. In search of a pithy term for each danger, I call this the ‘pushbutton police state.’ Because with the infrastructure in place, it becomes just a question of policy, and in the extreme simply one of flipping a bit in some software, to switch from a free society to a police state.”

    This is an extremely important point, and should be a design principle for security engineers.

    I have seen some papers proposing “balanced” or “privacy-friendly” DRM systems, where a 3rd party keeps customer information encrypted, and releases it to copyright holders only if pirated copies are presented as evidence.

    It’s the same big mistake: they propose an infrastructure that allows total monitoring, and the only thing that makes it “balanced” is the usage policy. Worse, in that case the 3rd party has no legal authority to choose the policy, and can be compelled by law to reveal that information.

    I think the moral is that if you ever want a system that strikes a “balance” between privacy and monitoring capability, the balance has to be architectural in nature. It has to be so inherent in the way the data is stored that the rules can’t be tweaked by someone who effectively inherits the system.

    Just like Kerckhoffs’s Criterion guides us to design under the assumption of algorithms being leaked, here is a design principle that tells us to assume that a security system will one day be under the control of someone you don’t trust.
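    To illustrate what an “architectural” balance could look like, here is a minimal sketch (my own hypothetical construction, not taken from any of the DRM papers mentioned above): split the key protecting customer identities between two independent parties, so that neither one, compelled or corrupted, can read the data alone.

```python
import os

def split_key(key: bytes) -> tuple[bytes, bytes]:
    """2-of-2 XOR secret sharing: each share alone is uniform noise."""
    share_a = os.urandom(len(key))
    share_b = bytes(x ^ y for x, y in zip(share_a, key))
    return share_a, share_b

def recover_key(share_a: bytes, share_b: bytes) -> bytes:
    """Only both shares together reconstruct the key."""
    return bytes(x ^ y for x, y in zip(share_a, share_b))

key = os.urandom(32)  # key that encrypts the customer's identity
escrow_share, oversight_share = split_key(key)

# Whoever "inherits" the escrow database holds only noise; the policy
# for releasing data is enforced by the architecture, not by goodwill.
assert recover_key(escrow_share, oversight_share) == key
```

    Here the balance survives a change of management: altering the release policy requires the cooperation of both share-holders, not a quiet decision by whoever controls one database.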


  10. That list of wrong reasons is nice, but b and c at least are not universally agreed. If, for example, government security forces believe (for some value of “believe”) that the personal or political enemies of the current administration would pursue unacceptable policies leading to danger to the republic, surveillance of them and others in their camp would be an obvious safety measure. Ditto if some company were believed to pose a direct danger to national security by virtue of its plans or products (eg a p2p voip company), or an indirect danger by threatening the stability and profitability of an important government contractor. (And certainly any information about foreign companies is fair game…)

    And on the other side, it doesn’t even get into the list of reasons that might be right on some level but pose an end to freedom if implemented. We know, for example, that many dangerous domestic groups believe at least one of the following things: a) certain medical services should not be provided by anyone, b) the government has far outstripped its legitimate authority, c) animals have enforceable rights possibly comparable to those of humans, d) environmental degradation is a clear and present danger to the lives and safety of all citizens. Obviously content-based surveillance should be flagging any of those items.

  11. JCN,

    I’ll grant that in some specific cases, it can be hard to tell which reasons for wiretapping are okay. But many other cases are easy. Here are some reasons for wiretapping that are wrong:
    (a) voyeurism
    (b) desire to harass personal enemy/rival
    (c) corporate espionage

    I passed over the “wrong reasons” point so quickly because that wasn’t the main point of the post. Public policy should define which reasons for wiretapping are acceptable and which are not.

  12. “or for the wrong reasons.”

    what are wrong reasons anyway?
    is there really a way to distinguish wrong from right reasons? don’t they almost solely depend on the moral values of the affected person?

    i gave up thinking about reasons as wrong or right as it does not lead anywhere. we would all be better off if we implemented some social system that ensures that elements that are counter-productive to the system get cast out by society. that would solve this whole “wrong/right” issue in one sweep.

    but what do i know …

  13. paul: My comment above expresses the same idea in historical context, albeit in a much less eloquent form.

  14. The notion that “such serious abuses [as storing everything or targeting political dissent] are unlikely” strikes me as almost humorous. We’ve already seen examples of peace activists, animal rights protesters and civil libertarians being placed on various kinds of intelligence-gathering and watch lists as possibly-violent terrorist sympathizers.

    Add the fact that such people are much easier to find and categorize than real terrorists, and you have a recipe for abuse that is effectively designed into any content-scanning system.

  15. Agreed. The capability for mass civil disobedience must always be preserved in a free society, or it may not remain free for much longer. This does, of course, mean there can never be perfect detection of crime, nor perfect enforcement of law, with the costs that implies, at least when a large enough number of people choose to reject a law simultaneously.

  16. Govt Skeptic says

    Someone wrote:
    “So, I’ll trust Democratic and Constitutional mechanisms to protect us from the police state.”
    The practice of trusting the state not to abuse its citizens has a very poor performance record.
    I agree with other posters that certain individual practices can frustrate govt intrusion. Crypto, convoluted networks, and rootkit-free endpoints are just a few of these practices. From a second amendment point of view, these are the “arms” that enable our “well-regulated militia” of information-flow.
    In another sense, these practices are our canaries in the coal mine. Watch for those who would take away our rights to these practices, offering safety in their place. Of those who prefer security to liberty, only this is certain: Nothing will make them secure from subjugation.

  17. One of you wrote, “I jokingly describe a world where at government HQ there is a big lever switch marked ‘free society’ and ‘police state’ which can be thrown at any time once the infrastructure is in place. And a big sign on it saying, ‘Danger: Do not throw this switch!’”

    If the interests currently peddling so-called “trusted computing” or “trustworthy computing” have their way, all of our microprocessor-containing tools (including, but not restricted to, traditional “computers”) will basically have rootkits that grant ultimate power and control to others than the hardware’s owners, and those others will delegate control to rigid, machine-enforced laws. A cop in every box, in other words, and moreover, one incapable of reason, common sense, or any concepts of justice or right and wrong; just of encoded, rigid laws.

    And all of it ultimately controlled back in some (probably corporate rather than government) HQ where the master root level Fritz-chip keys are made and issued.

    Then there really will be a switch that can be toggled between “free society” and “police state”, and if it is thrown, it’s instant fascist dictatorship.

    J.R.R. Tolkien put it more eloquently.

    “One Ring to rule them all, One Ring to find them, One Ring to bring them all and in the darkness bind them.”

  18. Defusing the risk. Enigma_foundry has good points. I think these ingredients would all help tremendously to mitigate the risks of “bad infrastructure”.

    1. Network neutrality.
    2. Widespread, nondiscriminatory availability of encryption tools.
    3. “Right to route”.
    4. Greater competition in the ISP marketplace.
    5. Wireless networking. A slow peer-to-peer wireless infrastructure is easy to create, if wireless devices can talk to their neighbors, which can talk to theirs, and so on. Information back roads, with here and there a cell tower or WLAN providing an on-ramp to the superhighway. It might take many hops, but a packet could cross a continent without touching the backbones, and even if it did, the wired leg would be only a part of its journey. Identifying the source and destination exactly would be difficult. Blocking, controlling, or preventing encrypted traffic would be virtually impossible. There would always be digital backchannels. There are anyway: people can dead-drop CD-Rs in a hollowed-out tree trunk or the like. But right now, anyone who could take a lot of (often encrypted) e-commerce traffic hostage would gain a lot of negotiating leverage. A strong wireless infrastructure would stop that, and with a lot of low-power nodes operated by ordinary citizens instead of a handful by big corporations, it would be difficult for any bad actor, corporate or government, to “lean on” all of the nodes to make them “see things their way” and sell out their users to nefarious interests. (A toy sketch of the multi-hop idea follows below.)
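    A toy model of the multi-hop idea (my own sketch; the node names and topology are invented): even with the backbone node avoided entirely, a packet can still find a path through neighboring citizen-operated nodes.

```python
from collections import deque

# Hypothetical wireless mesh: each node hears only its radio neighbors.
MESH = {
    "alice":    {"bob", "carol"},
    "bob":      {"alice", "dave"},
    "carol":    {"alice", "dave", "backbone"},
    "dave":     {"bob", "carol", "erin"},
    "erin":     {"dave"},
    "backbone": {"carol"},
}

def route(src: str, dst: str, avoid: frozenset = frozenset()):
    """Breadth-first search for a multi-hop path skipping 'avoid' nodes."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in MESH[path[-1]] - seen - avoid:
            seen.add(nxt)
            queue.append(path + [nxt])
    return None

# The packet crosses the mesh without ever touching the backbone.
print(route("alice", "erin", avoid=frozenset({"backbone"})))
# -> ['alice', 'bob', 'dave', 'erin']
```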

  19. The other question, which I ask seriously, is how long the era of the “stupid” terrorist lasts. The “stupid” terrorist (or criminal) is the one who plans their activities through a medium which is known to be under surveillance. There certainly are stupid criminals. John Gotti, who you would think should have suspected he was subject to wiretaps, still made plans on the phone.

    On the other hand we have bin Laden, who never uses the phone himself, and whose people use unmonitored Islamic money-transfer systems.

    For a surveillance system to work, the targets must either not be aware of the surveillance, or more commonly not believe it is going to happen to them. With human surveillance, like wiretaps, it’s easy to be fooled into thinking it is not happening to you, because by definition it is not happening to 99.9% of the people. With automatic surveillance, a rational party will assume it’s happening to them, but may also assume that they are avoiding the magic triggers that promote their communications to human eyes.

    So the question is, how long does this state (where people presume they can beat the surveillance, are incorrect, and thus get caught) last? Is it a temporary or permanent condition? If temporary, is a permanent arms race possible? Or does strong crypto end the arms race for traffic interception, if not for traffic analysis?

  20. Brad, Ed: Ubiquitous surveillance will not remain confined to the domain of law enforcement and spooks. It will engender broader consequences that pervade every aspect of social interaction, and ultimately lead to an atmosphere and mindset where nobody wants to “stick out” or “rock the boat” in whichever way. This is exactly what happened in the “communist” Eastern bloc.

    Which is of course the point of the whole exercise. Unfortunately there are the “serious side effects” of people losing every incentive to take an active role in anything (stepping outside their assigned place) and to contribute to anything outside their own private sphere (which may well include like-minded associates in family, friends, and professional circles).

    The latter is what brought the Eastern bloc down. You cannot rob people of prospects and/or install a system of top-down control/management and at the same time expect them to take the initiative that is needed to move things forward.

    (The alert reader will have spotted that I took a jump from surveillance to top-down control. That’s intentional; the driving force behind these surveillance efforts is a desire to practice top-down social management. Initially it is focused on crime/terrorism; then those concepts get broadened to more generalized notions of people not “playing by the rules”. In the Eastern-bloc context, initially “counterrevolutionaries” and “enemy collaborators” were (nominally) targeted. Later it morphed into targeting people who questioned the top-down wisdom of the party, or who sought alternatives to top-down lifestyle prescriptions, by associating them with “moral rot” and “weakening the cause”.)

    I should not have to point out the present-day analogies, as they are plainly obvious.

  21. The proof that content-triggered wiretaps work (and the degree to which they work) will probably come from their deployment in foreign espionage, where the privacy concerns of citizens outside the United States are less of an issue for the U.S. The ironic thing about this is that I think there is no treaty or law which prevents the UK from spying on United States citizens, with good old Uncle Sam then benefiting from the proceeds of “intelligence sharing”.

    Meeting your current requirements of proof is a fool’s errand. Any case of the technology working could be labeled as “not justifying the risk to privacy”. Even economic analysis is difficult, because our system places a premium on privacy which cannot easily be quantified.

    So, I’ll trust Democratic and Constitutional mechanisms to protect us from the police state. I’m well aware that the potential for abuse is real (as it is with police violence, lying to the public, etc.), but when scientists debate whether the mean time to infect all vulnerable internet hosts is 400 or 500 milliseconds, I’m not willing to outlaw research into automated methods of protecting our society from the many forces that might wish to topple it.

  22. I apologize for not having remembered all the prior postings. But as you do understand this threat, do you feel it is appropriate to say there are no external policy issues regarding ubiquitous computerized surveillance? You are correct in pointing out the problems that arise from both the risk of abuse and the dangers of building the infrastructure, but if you want to list all the problems you need to list again the Heisenberg factor and, as I suggested earlier, the problem of enhanced human false positives, which are both above and beyond algorithmic false positives and also triggered by them.

    I have also written however about how building the infrastructure may be the greatest danger. In search of a pithy term for each danger, I call this the “pushbutton police state.” Because with the infrastructure in place, it becomes just a question of policy, and in the extreme simply one of flipping a bit in some software, to switch from a free society to a police state. No need to waste your time sending tanks to the radio stations and rounding up the usual suspects.

    I jokingly describe a world where at government HQ there is a big lever switch marked “free society” and “police state” which can be thrown at any time once the infrastructure is in place. And a big sign on it saying, “Danger: Do not throw this switch!”

  23. Brad,

    Actually I have written about that precise issue, earlier in the series. Here’s a quote:

    “Privacy Threat: Ordinary citizens will feel less comfortable and will feel compelled to speak more cautiously, due to the knowledge that wiretappers might be listening.”

    See http://www.freedom-to-tinker.com/?p=1013

  24. I don’t doubt that people get used to it. But for the players on Survivor, you must understand that this phrase just means they get past having a running loop of “Holy crap, there’s a camera crew in my face” in their brain. They do not become unaware that it’s there. Massive surveillance would not work if people were constantly aware of it; they will forget from time to time and get caught at things. (Or, like the 9/11 hijackers, they just didn’t care that they were being videotaped going to the planes.)

    But that doesn’t mean you get zero effect on your behaviour because you feel watched. It just means you get less than the full possible effect.

  25. What kind of evidence would suffice? And how would it be gathered?

  26. In reference to Brad’s Heisenberg Problem, I’m not so sure how true that concept is. I mean, people get desensitized to being watched. For example, look at the folks on reality shows. I’m unable to find the reference, but in an interview one participant of ‘Survivor’ or some other such show was asked about having cameras around 24/7. They responded, ‘you just get used to it’. Of course, the intelligent and disciplined criminals are aware and modify their behaviors accordingly, but the majority don’t.

    The same can be applied to network usage and the yutzes who surf for inappropriate material while at work. There has been enough media coverage for people to know that what they do at work can be and is being monitored, yet they do dumb things thinking they won’t be caught.

  27. When you say that content-driven wiretaps are not bad policy in themselves, you ignore what I call the Heisenberg problem. Namely, that the act of watching changes the watched, or more to the point, the feeling of being watched changes the behaviour and restricts the freedom of the people.

    You are not as free at home with your mother for Thanksgiving as you are on your own for the first time at university.

    Content based wiretapping means we are all being watched, all the time. Just by computers which will report to humans if we do something to trigger their algorithms, but still watched. As such we can get the emotional feeling of being watched, even if it is a new, less human sort of watching, and it is the feeling of being watched that makes the difference here, not the abuses of the watching. (Though the fear of abuses does indeed affect the feeling and its results.)

    You can’t monitor everybody without giving them that feeling and thus curtailing their freedom. “A watched populace never boils” as I have written in essays on my web site.

    So there are policy implications of data-mining surveillance, and it is incorrect to say there are not.

  28. Strong crypto is too widely available to be suppressed. The only viable tactic for ensuring widespread eavesdropping is to treat all uses of strong crypto as an automatic suspect flag*. If you encrypt your communications without using the government-issued backdoor, then that needs to be sufficient evidence to allow the feds to hack into your computer, or break into your house and grab your files, or just throw you in jail until you cough up your passphrase. The only way to make sure criminals can’t hide by using crypto is to make all crypto-users criminals.

    So one way to prevent this infrastructure from being constructed is to use crypto as much as possible. If you do a lot of VoIP, grab Zfone and try to get your buddies to use it too. If you use IM products, use OTR encryption. Tunnel stuff over ssh. Etc. If a large enough number of non-criminals use crypto, that will hopefully put up a significant barrier to the construction of general eavesdropping.
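    As a concrete starting point, here is a minimal sketch of end-to-end encryption in Python using the PyNaCl library (the library choice is mine for illustration; OTR, Zfone, and ssh tunnels serve the same end for their respective channels):

```python
from nacl.public import PrivateKey, Box

# Each party generates a keypair; only the public halves are exchanged.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts to Bob: authenticated public-key encryption, so an
# eavesdropper on the wire sees only ciphertext.
alice_box = Box(alice_key, bob_key.public_key)
ciphertext = alice_box.encrypt(b"meet at noon")

# Bob decrypts with his private key and Alice's public key.
bob_box = Box(bob_key, alice_key.public_key)
assert bob_box.decrypt(ciphertext) == b"meet at noon"
```

    The more routine traffic looks like this, the less an automated trigger on the wire has to work with.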

    This is where you should insert the argument referring to piracy and how everybody does it but it’s still illegal. But I can’t think of a better technique.

    * I’m making the assumption that the NSA has not and will not crack modern crypto algorithms. (“Will not”, because they can always record the encrypted conversations for cracking later.) I believe that “has not” is a good assumption, but obviously have no proof. I also believe that “will not” would involve the creation of such disruptive technology that the government listening in on your old phone calls is going to be a very small worry compared to the rest.

  29. enigma_foundry says

    “The reason is that to carry out content-triggered wiretaps, we have to build an infrastructure that makes all communications available to devices managed by the authorities.”

    What can we do to prevent this infrastructure from being built, if we believe it to be dangerous?

    Certainly in the legislative arena we must remain active and vigilant, but I am wondering if there aren’t some technical devices or software which could do an end run around any such infrastructure.

    Would net neutrality make such an infrastructure easier to defeat?

  30. “I’m willing to be convinced, but you’ll have to show me some evidence.”

    I agree. However, I doubt the evidence that would convince me will be shown in the near future. Maybe I am just too greedy with my privacy.

    “coulda-woulda-shoulda” indeed.