December 5, 2024

Eternal vigilance is a solvable technology problem: A proposal for streamlined privacy alerts

Consider three recent news articles about online privacy:

  • Google+ added a new feature that shows view counts on everything you post, including your photos. It’s enabled by default, but if you don’t want to be part of the popularity contest, there’s a setting to turn it off.

  • There is a new privacy tool called XPrivacy for Android that protects you from apps that are hungry for your personal information (it does this by feeding them fake data).

  • A new study reveals that several education technology providers have intrusive privacy policies. Students and parents might want to take this into account in making choices about online education services.

These are just a few examples of the dozens of articles that come out every month informing privacy-conscious users that they need to change some setting, install a tool, or otherwise take some action to protect their privacy. In particular, companies often release new features with permissive defaults and an opt-out setting. It seems that online privacy requires eternal vigilance.

Eternal vigilance is hard. Even as a privacy researcher I often miss privacy news that affects me; for the majority of people who don’t have as much time to devote to online privacy, the burden is just too much. But before concluding that the situation is hopeless, let’s ask if there’s a technological solution.

There seem to be two problems with the status quo. First, there is no way to separate the articles on privacy that provide direct, actionable solutions from those that conclude “this is an outrage!” or “write to your congressperson today!” [*] Second, only a small fraction of these stories affect any given user because they only affect specific demographics or users of a specific product.

Here’s how we could build a “privacy alert” system that solves these problems. It has two components. The first is a privacy “vulnerability tracker” similar to well-established security vulnerability trackers (1, 2, 3). Each privacy threat is tagged with its severity and the products or demographics it affects, and includes a list of steps users can take. The second component is a user-facing privacy tool that knows the user’s product choices, overall privacy preferences, and so on, and uses this knowledge to filter the vulnerability database and generate alerts tailored to the user.
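To make the design concrete, here is a minimal sketch in Python of what a tracker entry and the tailored filtering might look like. The field names, severity scale, and example entry are all hypothetical; nothing here prescribes an actual schema.

    from dataclasses import dataclass, field

    @dataclass
    class PrivacyVulnerability:
        """One entry in the privacy vulnerability tracker."""
        id: str
        title: str
        severity: int                                   # e.g. 1 (low) to 5 (critical)
        products: set = field(default_factory=set)      # affected products, if any
        demographics: set = field(default_factory=set)  # affected groups, if any
        steps: list = field(default_factory=list)       # actionable steps for users

    @dataclass
    class UserProfile:
        """What the user-facing tool knows about the user."""
        products: set
        demographics: set
        min_severity: int = 1                           # overall privacy preference

    def tailored_alerts(db, user):
        """Yield only the vulnerabilities that affect this user."""
        for vuln in db:
            affects_product = bool(vuln.products & user.products)
            affects_group = bool(vuln.demographics & user.demographics)
            if (affects_product or affects_group) and vuln.severity >= user.min_severity:
                yield vuln

    # Example: a Google+ user gets alerted to the view-counts default.
    db = [
        PrivacyVulnerability(
            id="PV-0001",
            title="Google+ shows view counts on posts and photos by default",
            severity=2,
            products={"google-plus"},
            steps=["Open Google+ settings", "Turn off the view-counts setting"],
        ),
    ]
    user = UserProfile(products={"google-plus", "android"}, demographics=set())
    for alert in tailored_alerts(db, user):
        print(alert.title, "->", alert.steps)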

While the core design is very simple, we can imagine a number of bells and whistles. The vulnerability database could use crowdsourcing to increase coverage and timeliness, and offer an open API so that anyone can build on the data. If the user-facing tool taps into browsing history and other personal information, it can automatically infer which vulnerabilities are relevant to the user. Of course, this raises its own privacy concerns, so the tool would have to be offered by a company or organization that the user trusts.
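As a sketch of how the open API might work, assuming the tracker publishes its database as a plain JSON feed (the URL and field names below are invented for illustration), the user-facing tool could simply download the feed and filter it on the user’s own machine:

    import json
    from urllib.request import urlopen

    # Placeholder URL -- the tracker and its feed format are hypothetical.
    FEED_URL = "https://privacy-tracker.example.org/api/v1/vulnerabilities.json"

    def fetch_vulnerabilities(url=FEED_URL):
        """Download the full public vulnerability feed as a list of dicts."""
        with urlopen(url) as resp:
            return json.load(resp)

    def relevant_entries(entries, user_products):
        """Filter client-side, so the user's profile never leaves their device."""
        return [e for e in entries
                if set(e.get("products", [])) & set(user_products)]

One advantage of pulling the whole feed and filtering locally is that the user’s profile never has to be shared with anyone, which partly softens the trust concern mentioned above.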

One wishes that a tool like this weren’t necessary — wouldn’t it be great if companies had privacy-protective defaults and clearly communicated new features and privacy policy changes to users? But that’s not the world we live in, despite occasional positive examples like Facebook’s new location feature. Given this reality, some pundits emphasize personal responsibility while others call for change, often in the form of regulation. While these positions are philosophically very different, from a practical perspective the technological tool outlined here might be able to bridge the gap between them.

The ideas in this post aren’t fundamentally new, but by describing how the tool could work I hope to encourage people to work on it. It would make for a neat student project, and I’d be happy to collaborate with someone who wants to build it.

[*] Of course, sometimes there simply aren’t any meaningful protective steps an individual can take in response to a privacy intrusion, and collective action is the only recourse. But this post is about simplifying the task of dealing with the 90% of privacy threats that do have individual-level solutions.

Thanks to Jonathan Mayer for reviewing a draft.


Comments

  1. Andrew McNaughton says

    You need to be a bit careful about a technological privacy solution that starts with collecting info about the user. You say “Of course, this raises its own privacy concerns, so the tool would have to be offered by a company or organization that the user trusts.” I don’t think this is enough. The service needs to be able to operate on a zero-knowledge basis.

  2. Richard Beaumont says

    Arvind, I believe that we have some of the pieces of the puzzle in place, and have had similar ideas, though perhaps on a less ambitious scale. Do reach out by email if you want to explore them and take this forward.