November 24, 2024

Do Not Track: Not as Simple as it Sounds

Over the past few weeks, regulators have rekindled their interest in an online Do Not Track proposal in hopes of better protecting consumer privacy. FTC Chairman Jon Leibowitz told a Senate Commerce subcommittee last month that Do Not Track is “one promising area” for regulatory action and that the Commission plans to issue a report in the fall about “whether this is one viable way to proceed.” Senator Mark Pryor (D-AR), who sits on the subcommittee, is also reportedly drafting a new privacy bill that includes some version of this idea, of empowering consumers with blanket opt-out powers over online tracking.

Details are sparse at this point about how a Do Not Track mechanism might actually be implemented. There are a variety of possible technical and regulatory approaches to the problem, each with its own difficulties and limitations, which I’ll discuss in this post.

An Adaptation of “Do Not Call”

Because of its name, Do Not Track draws immediate comparisons to arguably the most popular piece of consumer protection regulation ever instituted in the US—the National Do Not Call Registry. If the FTC were to take an analogous approach for online tracking, a consumer would register his device’s network identifier—its IP address—with the national registry. Online advertisers would then be prohibited from tracking devices that are identified by those IP addresses.

Of course, consumer devices rarely have persistent long-term IP addresses. Most ISPs assign IP addresses dynamically (using DHCP) and a single device might be assigned a new IP address every few minutes. Consumer devices often also share the same IP address at the same time (using NAT) so there’s no stable one-to-one mapping between IPs and devices. Things could be different with IPv6, where each device could have its own stable IP address, but the Do Not Call framework, directly applied, is not the best solution for today’s online world.

The comparison is still useful though, if only to caution against the assumption that Do Not Track will be as easy, or as successful, as Do Not Call. The differences between the problems at hand and the technologies involved are substantial.

A Registry of Tracking Domains

Back in 2007, a coalition of online consumer privacy groups lobbied for the creation of a national Do Not Track List. They proposed a reverse approach: online advertisers would be required to register with the FTC all domain names used to issue persistent identifiers to user devices. The FTC would then publish this list, and it would be up to the browser to protect users from being tracked by these domains. Notice that the onus here is fully on the browser—equipped with this list—to protect the user from being uniquely identified. Meanwhile, online advertisers would still have free rein to try any method they wish to track user behavior, so long as it happens from these tracking domains.

We’ve learned over the past couple of years that modern browsers, from a practical perspective, can be limited in their ability to protect the user from unique identification. The most stark example of this is the browser fingerprinting attack, which was popularized by the EFF earlier this year. In this attack, the tracking site runs a special script that gathers information about the browser’s configurations, which are unique enough to identify the browser instance in nearly every case. The attack takes advantage of the fact that much of the gathered information is used frequently for legitimate purposes—such as determining which plugins are available to the site—so a browser which blocks the release of this information would surely irritate the user. As these kinds of “side-channel” attacks grow in sophistication, major browser vendors might always be playing catch-up in the technical arms race, leaving most users vulnerable to some form of tracking by these domains.

The x-notrack Header

If we believe that browsers, on their own, will be unable to fully protect users, then any effective Do Not Track proposal will need to place some restraints on server tracking behavior. Browsers could send a signal to the tracking server to indicate that the user does not want this particular interaction to be tracked. The signaling mechanism could be in the form of a standard pre-defined cookie field, or more likely, an HTTP header that marks the user’s tracking preference for each connection.

In the simplest case, the HTTP header—call it x-notrack—is a binary flag that can be turned on or off. The browser could enable x-notrack for every HTTP connection, or for connections to only third party sites, or for connections to some set of user-specified sites. Upon receiving the signal not to track, the site would be prevented, by FTC regulation, from setting any persistent identifiers on the user’s machine or using any other side-channel mechanism to uniquely identify the browser and track the interaction.

While this approach seems simple, it could raise a few complicated issues. One issue is bifurcation: nothing would prevent sites from offering limited content or features to users who choose to opt-out of tracking. One could imagine a divided Web, where a user who turns on the x-notrack header for all HTTP connections—i.e. a blanket opt-out—would essentially turn off many of the useful features on the Web.

By being more judicious in the use of x-notrack, a user could permit silos of first-party tracking in exchange for individual feature-rich sites, while limiting widespread tracking by third parties. But many third parties offer useful services, like embedding videos or integrating social media features, and they might require that users disable x-notrack in order to access their services. Users could theoretically make a privacy choice for each third party, but such a reality seems antithetical to the motivations behind Do Not Track: to give consumers an easy mechanism to opt-out of harmful online tracking in one fell swoop.

The FTC could potentially remedy this scenario by including some provision for “tracking neutrality,” which would prohibit sites from unnecessarily discriminating against a user’s choice not to be tracked. I won’t get into the details here, but suffice it to say that crafting a narrow yet effective neutrality provision would be highly contentious.

Privacy Isn’t a Binary Choice

The underlying difficulty in designing a simple Do Not Track mechanism is the subjective nature of privacy. What one user considers harmful tracking might be completely reasonable to another. Privacy isn’t a single binary choice but rather a series of individually-considered decisions that each depend on who the tracking party is, how much information can be combined and what the user gets in return for being tracked. This makes the general concept of online Do Not Track—or any blanket opt-out regime—a fairly awkward fit. Users need simplicity, but whether simple controls can adequately capture the nuances of individual privacy preferences is an open question.

Another open question is whether browser vendors can eventually “win” the technical arms race against tracking technologies. If so, regulations might not be necessary, as innovative browsers could fully insulate users from unwanted tracking. While tracking technologies are currently winning this race, I wouldn’t call it a foregone conclusion.

The one thing we do know is this: Do Not Track is not as simple as it sounds. If regulators are serious about putting forth a proposal, and it sounds like they are, we need to start having a more robust conversation about the merits and ramifications of these issues.

A Good Day for Email Privacy: A Court Takes Back its Earlier, Bad Ruling in Rehberg v. Paulk

In March, the U.S. Court of Appeals for the Eleventh Circuit, the court that sets federal law for Alabama, Florida, and Georgia, ruled in an opinion in a case called Rehberg v. Paulk that people lacked a reasonable expectation of privacy in the content of email messages stored with an email provider. This meant that the police in those three states were free to ignore the Fourth Amendment when obtaining email messages from a provider. In this case, the plaintiff alleged that the District Attorney had used a sham subpoena to trick a provider to hand over the plaintiff’s email messages. The Court ruled that the DA was allowed to do this, consistent with the Constitution.

I am happy to report that today, the Court vacated the opinion and replaced it with a much more carefully reasoned, nuanced opinion.

Most importantly, the Eleventh Circuit no longer holds that “A person also loses a reasonable expectation of privacy in emails, at least after the email is sent to and received by a third party.” nor that “Rehberg’s voluntary delivery of emails to third parties constituted a voluntary relinquishment of the right to privacy in that information.” These bad statements of law have effectively been erased from the court reporters.

This is a great victory for Internet privacy, although it could have been even better. The Court no longer strips email messages of protection, but it didn’t go further and affirmatively hold that email users possess a Fourth Amendment right to privacy in email. Instead, the Court ruled that even if such a right exists, it wasn’t “clearly established,” at the time the District Attorney acted, which means the plaintiff can’t continue to pursue this claim.

I am personally invested in this case because I authored a brief asking the Court to reverse its earlier bad ruling. I am glad the Court agreed with us and thank all of the other law professors who signed the brief: Susan Brenner, Susan Freiwald, Stephen Henderson, Jennifer Lynch, Deirdre Mulligan, Joel Reidenberg, Jason Schultz, Chris Slobogin, and Dan Solove. Thanks also to my incredibly hard-working and talented research assistants, Nicole Freiss and Devin Looijien.

Updated: The EFF (which represents the plaintiff) is much more disappointed in the amended opinion than I. They make a lot of good points, but I prefer to see the glass half-full.

Privacy Theater

I have a piece in today’s NY Times “Room for Debate” feature, on whether the government should regulate Facebook. In writing the piece, I was looking for a pithy way to express the problems with today’s notice-and-consent model for online privacy. After some thought, I settled on “privacy theater”.

Bruce Schneier has popularized the term “security theater,” denoting security measures that look impressive but don’t actually protect us—they create the appearance of security but not the reality. When a security guard asks to see your ID but doesn’t do more than glance at it, that’s security theater. Much of what happens at airport checkpoints is security theater too.

Privacy theater is the same concept, applied to privacy. Facebook’s privacy policy runs to almost 6000 words of dense legalese. We are all supposed to have read it and agreed to accept its terms. But that’s just theater. Hardly any of us have actually read privacy policies, and even fewer consider carefully their provisions. As I wrote in the Times piece, we pretend to have read sites’ privacy policies, and the sites pretend that we have understood and consented to all of their terms. It’s privacy theater.

Worse yet, privacy policies are subject to change. When sites change their policies, we get another round of privacy theater, in which sites pretend to notify us of the changes, and we pretend to consider them before continuing our use of the site.

And yet, if we’re going to replace the notice-and-consent model, we need something else to put in its place. At this point, it’s hard to see what that might be. It might help to set up default rules, on the theory that a policy that states how it differs from the default might be shorter and simpler than a stand-alone policy, but that approach will only go so far.

In the end, we may be stuck with privacy theater, just as we’re often stuck with security theater. If we can’t provide the reality of privacy or security, we can settle for theater, which at least makes us feel a bit better about our vulnerability.

Google Publishes Data on Government Data and Takedown Requests

Citizens have long wondered how often their governments ask online service providers for data about users, and how often governments ask providers to take down content. Today Google took a significant step on this issue, unveiling a site reporting numbers on a country-by-country basis.

It’s important to understand what is and isn’t included in the data on the Google site. First, according to Google, the data excludes child porn, which Google tries to block proactively, worldwide.

Second, the site reports requests made by government, not by private individuals. (Court orders arising from private lawsuits are included, because the court issuing the order is an arm of government.) Because private requests are excluded, the number of removal requests is lower than you might expect — presumably removal requests from governments are much less common than those from private parties such as copyright owners.

Third, Google is reporting the number of requests received, and not the number of users affected. A single request might affect many users; or several requests might focus on a single user. So we can’t use this data to estimate the number of citizens affected in any particular country.

Another caveat is that Google reports the country whose government submitted the request to Google, but this may not always be the government that originated the request. Under Mutual Legal Assistance Treaties, signatory countries agree to pass on law enforcement data requests for other signatories under some circumstances. This might account for some of the United States data requests, for example, if other countries asked the U.S. government to make data requests to Google. We would expect there to be some such proxy requests, but we can’t tell from the reported data how many there were. (It’s not clear whether Google would always be able to distinguish these proxy requests from direct requests.)

With these caveats in mind, let’s look at the numbers. Notably, Brazil tops both the data-requests list and the takedown-requests list. The likely cause is the popularity of Orkut, Google’s social network product, in Brazil. India, where Orkut is also somewhat popular, appears relatively high on the list as well. Social networks often breed disputes about impersonation and defamation, which could lead a government to order release of information about who is using a particular account.

The U.S. ranks second on the data-requests list but is lower on the takedown-requests list. This is consistent with the current U.S. trend toward broader data gathering by law enforcement, along with the relatively strong protection of free speech in the U.S.

Finally, China is a big question mark. According to Google, the Chinese government claims that the relevant data is a state secret, so Google cannot release it. The Chinese government stands conspicuously alone in this respect, choosing to deny its citizens even this basic information about their government’s activities.

There’s a lot more information I’d like to see about government requests. How many citizens are affected? How many requests does Google comply with? What kinds of data do governments seek about Google users? And so on.

Despite its limitations, Google’s site is a valuable step toward transparency about governments’ attempts to observe and control their citizens’ online activities. I hope other companies will follow suit, and that Google will keep pushing on this issue.

Pseudonyms: The Natural State of Online Identity

I’ve been writing recently about the problems that arise when you try to use cryptography to verify who is at the other end of a network connection. The cryptographic math works, but that doesn’t mean you get the identity part right.

You might think, from this discussion, that crypto by itself does nothing — that cryptographic security can only be bootstrapped from some kind of real-world identity verification. That’s the way it works for website certificates, where a certificate authority has to check your bona fides before it will issue you a certificate.

But this intuition turns out to be wrong. There is one thing that crypto can do perfectly, without any real-world support: providing pseudonyms. Indeed, crypto is so good at supporting pseudonyms that we can practically say that pseudonyms are the natural state of identity online.

To explain why this is true, I need to offer a gentle introduction to a basic crypto operation: digital signatures. Suppose John Doe (“JD”) wants to use digital signatures. First, JD needs to create a private cryptographic key, which he does by generating some random numbers and combining them according to a special geeky recipe. The result is a unique private key that only JD knows. Next, JD uses a certain procedure to determine the public key that corresponds to his private key. He announces the public key to everyone. The math guarantees that (1) JD’s public key is unique and corresponds to JD’s private key, and (2) a person who knows JD’s public key can’t figure out JD’s private key.

Now JD can make digital signatures. If JD wants to “sign” a certain message M, he combines M with JD’s private key in a special way, and the result is JD’s “signature on M”. Now anybody can verify the signature, using JD’s public key. Only JD can make the signature, because only JD knows JD’s private key; but anybody can verify the signature.

At no point in this process does JD tell anybody who he is — I called him “John Doe” for a reason. Indeed, JD’s public key is a perfect pseudonym: it conveys nothing about JD’s actual identity, yet it has a distinct “owner” whose presence can be verified. (“You’re really the person who created this public key? Then you should be able to make a signature on the message ‘squeamish ossifrage’ for me….”)

Using this method, anybody can make up a fresh pseudonym whenever they want. If you can generate random numbers and do some math (or have your computer do those things for you), then you can make a fresh pseudonym. You can make as many as you want, without needing to coordinate with anybody. This is all easy to do.

These methods, pseudonyms and signatures, are used even in cases where we want to verify somebody’s real-world identity. When you connect to (say) https://mail.google.com, Google’s web server gives you its public key — a pseudonym — along with a digital certificate that attests that that public key — that pseudonym — belongs to Google Inc. Binding public keys — pseudonyms — to real-world identities is tedious and messy, but of course this is often necessary in practice.

Online, identities are hard to manage. Pseudonyms are easy.