April 23, 2014

Encryption as protest

As a computer scientist who studies Privacy-Enhancing Technologies (PETs), I remember my surprise when I first learned that some groups of people view and use them very differently from the way I’m used to. In computer science, PETs are used to protect anonymity or confidentiality, often through cryptography, and are intended to be bullet-proof against an adversary who is trying to breach privacy.

By contrast, Helen Nissenbaum and others have developed a political and ethical theory of obfuscation [1], “a strategy for individuals, groups or communities to hide; to protect themselves; to protest or enact civil disobedience, especially in the context of monitoring, aggregated analysis, and profiling.” CV Dazzle and Ad Nauseam are good examples.
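To make this concrete, here is a toy sketch of noise-based obfuscation in the spirit of TrackMeNot (another Nissenbaum project, which issues decoy search queries). Everything below (the decoy list, the endpoint, the pacing) is invented for illustration; none of these tools works exactly this way.

    # Toy sketch of noise-based obfuscation: emit plausible decoy traffic
    # so a profiler cannot tell real queries from chaff. Illustrative only.
    import random
    import time
    import urllib.parse
    import urllib.request

    # Hypothetical decoys; a real tool draws from a large, evolving pool.
    DECOY_QUERIES = [
        "weather tomorrow", "pasta recipes", "used bikes for sale",
        "local election results", "how to fix a leaky faucet",
    ]

    def send_decoy(query: str) -> None:
        # example.com stands in for a search engine; a real tool would
        # mimic the user's own engine, headers, and cookies.
        url = "https://example.com/search?" + urllib.parse.urlencode({"q": query})
        try:
            urllib.request.urlopen(url, timeout=5)
        except OSError:
            pass  # decoys are best-effort; failures are fine

    def run_obfuscation() -> None:
        while True:
            send_decoy(random.choice(DECOY_QUERIES))
            # Irregular pacing so the decoys are not trivially filtered out.
            time.sleep(random.uniform(30, 300))

Note that the point of the noise is not to be unbreakable but to raise the target’s cost of analysis, which is exactly why adversarial thinking has so little purchase on it.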

Let’s consider the use of traditional PETs like Tor for obfuscation. The computer science literature is very comfortable with the first two uses (hiding and protecting oneself), but not the last two (protest and civil disobedience). Adversarial thinking, the model the computer security community uses to analyze PETs, has nothing to say about these uses.

In this post, I want to examine the hypothesis that users of encryption tools also have protest and civil disobedience in mind, instead of (or in addition to) self-defense and anonymity. Encryption is explicitly outside the ambit of obfuscation as its theorists conceive it. Brunton and Nissenbaum are downright deferential:

Obfuscation, as we have presented it here, is at once richer and less rigorous than academically well–established methods of digital privacy protection, like encryption. It is far more ad hoc and contextual, without the quantifiable protection of cryptographic methods — a “weapon of the weak”

Can there ever be a science of obfuscation? With encryption, for example, algorithms have standard metrics based on objective measures such as key length, machine power, and length of time to inform community evaluations of their strength. By contrast, the success of obfuscation is a function of the goals and motives of both those who obfuscate and those to whom obfuscation is directed, the targets. We are tempted, for this reason, to characterize obfuscation as a relatively weak practice. Yet, when strong solutions, such as avoidance, disappearance, hiding (e.g., through encryption) are not available and flat out refusal is not permitted, obfuscation may emerge as a plausible alternative, perhaps the only alternative.

My claim, then, is that even when the supposedly strong weapon of encryption is available, it is often used for the “weak” purpose of obfuscation, specifically as a form of protest.

The key difference when encryption is used as protest is that it is a collective and participatory activity, rather than individualistic. Such users hope, in conjunction with other users, to make life a little bit harder for the powers that be and to protest the surveillance regime. Further, they would like to signal to their peers that they are conscientious citizens who will not accept the status quo. [2]

As a corollary, users will seek the simplest possible tools to achieve the objectives of protest and signaling, even at the expense of security. This is because there aren’t any major personal benefits to encrypting, nor any repercussions if the encryption is defeated. It’s a bit like recycling — we’d like to act responsibly, but won’t do it if it’s too hard. Of course, users always favor convenience over security more than developers would like, but this is an extreme version.

My evidence for all this is primarily anecdotal, but there’s a lot of it. Here’s one comment that brings together a lot of what I’ve said above:

IMO, the “encryption as protest” idea has a lot of merit. A big challenge is finding tools that make it easy for non-technical users. I’ve been toying with the apps from these guys for encrypting text and voice on Android-based devices. All in all, they are pretty darn easy to use and seem to do the trick.

On a more puerile level, I’ve also been trying to figure out how to merge the “forbidden keyword” idea with photo-bombing our good workers at the NSA with pictures of my naked butt. Since we know they don’t look at stuff like “Let’s ram buildings in America with commercial airliners” … I was thinking maybe something like an email saying “Let’s all picket the XL pipeline next Thursday with this poster” might be a better trigger.

So far, so good — some people use encryption to hide; many others use it to protest. But here’s the catch. There is an inescapable trade-off between convenience and security — tools that put security first require user training and informed decisions, whereas insecure ones pitch themselves as one-click solutions. [3] Because of the network effects in the market for these tools, those that cater to the lowest common denominator will win. This might explain why the vast majority of encryption tools that have cropped up over the last year appear to be insecure.

What can we learn from this? First, security and privacy researchers should study how users actually use PETs instead of assuming that all users have the same set of values and preferences. Even an insecure encryption tool is perfectly fine if used for obfuscation or protest. The security community regularly gives users too little credit. Conversely, users who do care about cryptographic security of their encryption tools should be aware that there are a lot of misleading claims out there, and strong security is probably not achievable without effort, training, and vigilance. Finally, I’d love for someone with a background in sociology or digital anthropology to research this topic and improve our understanding of why people encrypt!

[1] While it is tempting as a technologist to map concepts like obfuscation into existing technical terms, it is important to remember that no such mapping exists. Obfuscation is not a technology but the act of using certain technologies for certain ends.

[2] Kate Crawford takes this a step further and argues that it has become a status symbol to blend in, whether digitally or in the real world, and connects it to the normcore fashion trend.

[3] This was the root of Pete Zimmerman’s criticism of Wickr last week.

Thanks to Solon Barocas for comments on a draft.

Comments

  1. “To encrypt is to indicate the desire for privacy, and to encrypt with weak cryptography is to indicate not too much desire for privacy.”

  2. One of the interesting things about using encryption as a form of protest is that it is also a communitarian act designed to lower the signal-to-noise ratio for those trying to find people using encryption for concealment or secrecy. Even if General Eve has effectively infinite computational resources, those may not be as important as the cognitive/attention resources required to decide whether encryption is a useful flagging tool.

  3. John Millington says

    I’ve long felt that if more people would just use PGP/GPG, but _blow_ _off_ out-of-band fingerprint verification, certifying each other, and worrying about being safe from MitM, it’d be a great thing. You just generate a key and start using it, preferably with automatic uploading to, and searching of, the Net’s keyservers. All without ANY scary warnings that you don’t have a trust path. I’d _think_ a lot of the complexity would be avoided by doing this, possibly to the point that many laymen could handle it.

    And yet, despite being vulnerable to attack (any time-travelling nerd from the early 1990s would be absolutely horrified), it would be in-your-face encryption for anyone who is looking at it, and still provide _some_ protection (against totally-passive snooping).

    Best of all, people could always “get serious” later (where they think in terms of actual security, rather than protest), at the cost of having to learn a few things. Just check a preference box that enables a more “classical” MitM-conscientious UI.

    • Seth Schoen says

      Someone has actually made and uploaded a fake key for me and I’ve received several messages mistakenly encrypted using that key rather than my real key. Of course, I couldn’t read them! So even if you don’t care much about the MITM risk, there is a DoS risk to having people start casually using PGP without a trustworthy key exchange mechanism.
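    A minimal sketch of the opportunistic workflow Millington describes, assuming the python-gnupg bindings (the names, addresses, passphrase, and keyserver are all hypothetical). The always_trust flag below is the “blow off the web of trust” step, and it is exactly the opening that the fake-key scenario Schoen describes exploits:

        # Opportunistic PGP per the comment above: generate a key, publish
        # it, and accept whatever a keyserver search returns, with no
        # fingerprint verification. Sketch only; all details hypothetical.
        import gnupg

        gpg = gnupg.GPG()  # local gpg binary, default keyring

        # One-time setup: make a key and push it to a keyserver.
        key = gpg.gen_key(gpg.gen_key_input(name_email="alice@example.com",
                                            passphrase="correct horse"))
        gpg.send_keys("keyserver.ubuntu.com", key.fingerprint)

        # To write to someone, take the first search result at face value;
        # this is where a planted fake key would silently win.
        found = gpg.search_keys("bob@example.com", keyserver="keyserver.ubuntu.com")
        gpg.recv_keys("keyserver.ubuntu.com", found[0]["keyid"])

        ciphertext = gpg.encrypt("see you Thursday",
                                 found[0]["keyid"],
                                 always_trust=True)  # skip trust-path checks
        print(str(ciphertext))

    As the comment says, this still protects against purely passive snooping; the MitM and denial-of-service failure modes are the price of the one-click experience.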

  4. Awesome, as always is the case with things that come out of your head and get written down…

    I’ve started signing all my email. It rubs hardcore opsec/commsec people the wrong way as I’m essentially giving up repudiability for what I consider to be an important aspect of this conversation: I want people to say, “WTF is this ‘-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 …’ crap?” And a fair number of people ask me and I get a chance to talk about encryption and even digital signatures if they’ll listen to me long enough. :)
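    A sketch of that clear-signing step, again assuming the python-gnupg bindings (the key id and passphrase are placeholders for a key that already exists in the local keyring):

        # Clear-sign an outgoing message so the PGP armor is visible to
        # every recipient: signature first, security questions later.
        import gnupg

        gpg = gnupg.GPG()
        signed = gpg.sign("Lunch on Thursday?",
                          keyid="alice@example.com",
                          passphrase="correct horse",
                          clearsign=True)
        # str(signed) begins with "-----BEGIN PGP SIGNED MESSAGE-----",
        # which is precisely the part that prompts recipients to ask.
        print(str(signed))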