January 22, 2019

Privacy as a Social Problem, Not a Technology Problem

Bob Blakley had an interesting post Monday, arguing that technologists tend to frame the privacy issue poorly. (I would add that many non-technologists use the same framing.) Here’s a sample:

That’s how privacy works; it’s not about secrecy, and it’s not about control: it’s about sociability. Privacy is a social good which we give to one another, not a social order in which we control one another.

Technologists hate this; social phenomena aren’t deterministic and programmers can’t write code to make them come out right. When technologists are faced with a social problem, they often respond by redefining the problem as a technical problem they think they can solve.

The privacy framing that’s going on in the technology industry today is this:

Social Frame: Privacy is a social problem; the solution is to ensure that people use sensitive personal information only in ways that are beneficial to the subject of the information.

BUT as technologists we can’t … control peoples’ behavior, so we can’t solve this problem. So instead let’s work on a problem that sounds similar:

Technology Frame: Privacy is a technology problem; since we can’t make people use sensitive personal information sociably, the solution is to ensure that people never see others’ sensitive personal information.

We technologists have tried to solve the privacy problem in this technology frame for about a decade now, and, not surprisingly (information wants to be free!) we have failed.

The technology frame isn’t the problem. Privacy is the problem. Society can and routinely does solve the privacy problem in the social frame, by getting the vast majority of people to behave sociably.

This is an excellent point, and one that technologists and policymakers would be wise to consider. Privacy depends, ultimately, on people and institutions showing a reasonable regard for the privacy interests of others.

Bob goes on to argue that technologies should be designed to help these social mechanisms work.

A sociable space is one in which people’s social and antisocial actions are exposed to scrutiny so that normal human social processes can work.

A space in which tagging a photograph publicizes not only the identities of the people in the photograph but also the identities of the person who took the photograph and the person who tagged the photograph is more sociable than a space in which the only identity revealed is that of the person in the photograph – because when the picture of Jimmy holding a martini washes up on the HR department’s desk, Jimmy will know that Johnny took it (at a private party) and Julie tagged him – and the conversations humans have developed over tens of thousands of years to handle these situations will take place.

Again, this is an excellent and underappreciated point. But we need to be careful how far we take it. If we go beyond Bob’s argument, and we say that good design of the kind he advocates can completely solve the online privacy problem, then we have gone too far.

Technology doesn’t just move old privacy problems online. It also creates new problems and exacerbates old ones. In the old days, Johnny and Julie might have taken a photo of Jimmy drinking at the office party, and snail-mailed the photo to HR. That would have been a pretty hostile act. Now, the same harm can arise from a small misunderstanding: Johnny and Julie might assume that HR is more tolerant, or that HR doesn’t watch Facebook; or they might not realize that a site allows HR to search for photos of Jimmy. A photo might be taken by Johnny and tagged by Julie, even though Johnny and Julie don’t know each other. All in all, the photo scenario is more likely to happen today than in the pre-Net age.

This is just one example of what James Grimmelmann calls Accidental Privacy Spills. Grimmelmann tells the story of a private email message that was forwarded and re-forwarded to thousands of people, not by malice but because many people made the seemingly harmless decision to forward it to a few friends. This would never have happened with a personal letter. (Personal letters are sometimes publicized against the wishes of the author, but that’s very rare and wouldn’t have happened in the case Grimmelmann describes.) As the cost of capturing, transmitting, storing, and searching photos and other digital information falls to near-zero, it’s only natural that more capturing, transmitting, storing, and searching of information will occur.

Good design is not the whole solution to our privacy problem. But design has the huge advantage that we can get started on it right away, without needing to reach some sweeping societal agreement about what the rules should be. If you’re designing a product, or deciding which product to use, you can support good privacy design today.

Comments

  1. Now, the same harm can arise from a small misunderstanding: Johnny and Julie might assume that HR is more tolerant, or that HR doesn’t watch Facebook; or they might not realize that a site allows HR to search for photos of Jimmy.

    I completely agree; privacy is largely something that is given not enforced. In the case you outline, I would blame HR for violating Jimmy’s privacy (it has no business in the personal sphere of his life, no matter how interested it might be). Yes, intelligent technological design might help Jimmy keep his embarrassing photos secret, but that’s not really the goal here. There is social utility in allowing Jimmy’s photo to be tagged in a semi-public venue (e.g. Facebook), just as his original activity was appropriate in the semi-public venue of the office party. Protecting Jimmy’s privacy isn’t about keeping his personal life secret from HR, it’s about HR recognizing that Jimmy’s personal life is none of their business.

    That said, I think we are still working out norms for how to deal with privacy in a world where it takes very little effort to intrude on someone else’s privacy. I think we will need to simultaneously become more tolerant of “spill” from people’s personal lives (since virtually everyone has something embarrassing to be found online) and also more aware that our personal lives are not 100% separate from our public lives. And a great many people need to become aware that they do have public lives as long as they participate in society (online or off). There are some personal indiscretions — domestic violence and rape come to mind — that we do consider to be public / social problems, regardless of the environment in which they happen.

  2. Yes, but.

    I agree with much of the above, but it seems to me to be an elaboration of the idea that there is no technological solution to a social problem.

    In particular, right here: “Privacy depends, ultimately, on people and institutions showing a reasonable regard for the privacy interests of others.” – this contains the core of the problem.

    The phrasing treats it as a matter of good will. But in reality, this involves matters such as data-privacy laws, employment laws, and more, all of which tend to be vociferously opposed in the US by certain factions.

    Sure, good design helps – but it’s possible for that to become the equivalent of looking for lost items under the street lamp because that’s where the light is.

  3. This article ties itself into a logical pretzel. Reading it, I find that the nature of the privacy debate is not clearly articulated. Refreshingly, the article does mention that privacy is a social good that we should respect. Nevertheless, I am left confused by the point of privacy as a technological/social problem.

    Maybe I am being simplistic, but privacy belongs to the recipient of any message, not the instigator. Privacy as a social good means that the instigator must not initiate contact with the recipient without their permission. From the technology side, what this means is that the recipient is provided with an automatic opt-out option, and if personal data is left behind with the collector, the collector does not share it, sell it, or otherwise market it. What happens in Vegas stays in Vegas. The technologists should be able to handle that quite easily.

    Now the post alludes to private data that is collected as a collateral activity, such as Google Street View accidentally catching some husband visiting his mistress. In that situation, you were out in public; too bad. When you are in public, the data that is collected is public, and we should not have any expectation of privacy. In that type of situation, I don’t see why technology should have any obligation to protect “lost” privacy.

  4. You’re right to caution that technology creates new situations that relate to privacy, and hence must have a role in privacy protection. But I think you still take the original argument too far, in suggesting that “ordinary” privacy (that is, the privacy of things not created by technology) leaves no role for technological protection.

    People have always guarded their privacy with a mix of social and technological means. If this means that all humanity, from the first ape descendant on down, was socially incompetent and had recourse to technology only to cover for that … well, OK, so be it, we’re not up to the task of guarding privacy solely through social means. Bitter pill, but there it is.

    If there are people who think privacy is *exclusively* a technological problem (and I sometimes suspect there are), then they’re quite mistaken, and need your reminder. We have the entire history of technology to demonstrate that technology alone is not enough. But if there are those who think privacy can be protected exclusively socially … well, that history’s just as long and just as unsuccessful.

    But there is one thing that belongs exclusively to the social, rather than technological, realm: the definition and motivation of privacy. There’s no technological reason why any given fact should be protected more than any other: they’re all just bits to our modern binary world. The “whys” belong solely to the social. This is the most important reason why an exclusively technological view of privacy fails: because it lacks purpose, vision, consensus, or value.