September 15, 2019

Deconstructing Google’s excuses on tracking protection

By Jonathan Mayer and Arvind Narayanan.

Blocking cookies is bad for privacy. That’s the new disingenuous argument from Google, trying to justify why Chrome is so far behind Safari and Firefox in offering privacy protections. As researchers who have spent over a decade studying web tracking and online advertising, we want to set the record straight.

Our high-level points are:

1) Cookie blocking does not undermine web privacy. Google’s claim to the contrary is privacy gaslighting.

2) There is little trustworthy evidence on the comparative value of tracking-based advertising.

3) Google has not devised an innovative way to balance privacy and advertising; it is latching onto prior approaches that it previously disclaimed as impractical.

4) Google is attempting a punt to the web standardization process, which will at best result in years of delay.

What follows is a reproduction of excerpts from yesterday’s announcement, annotated with our comments.

Technology that publishers and advertisers use to make advertising even more relevant to people is now being used far beyond its original design intent – to a point where some data practices don’t match up to user expectations for privacy.

Google is trying to thread a needle here, implying that some level of tracking is consistent with both the original design intent for web technology and user privacy expectations. Neither is true.

If the benchmark is original design intent, let’s be clear: cookies were not supposed to enable third-party tracking, and browsers were supposed to block third-party cookies. We know this because the authors of the original cookie technical specification said so (RFC 2109, Section 4.3.5). 
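The mechanism here is worth making concrete. The following toy simulation (the site names, cookie format, and `Tracker` class are all invented for illustration) shows why a third-party cookie enables cross-site tracking: a tracker whose script is embedded on many unrelated sites receives the same cookie from each of them, letting it assemble a single browsing profile.

```python
class Tracker:
    """Toy third-party tracker. Its script is embedded on many
    unrelated sites, and the browser re-sends the same tracker
    cookie from each of them."""

    def __init__(self):
        self.next_id = 0
        self.profiles = {}  # cookie value -> list of sites visited

    def request(self, cookie, site):
        if cookie is None:  # first visit anywhere: assign an identifier
            cookie = f"uid-{self.next_id}"
            self.next_id += 1
        self.profiles.setdefault(cookie, []).append(site)
        return cookie       # the browser stores and re-sends this

tracker = Tracker()
cookie = None  # the browser starts with no cookie for the tracker
for site in ["news.example", "shop.example", "health.example"]:
    cookie = tracker.request(cookie, site)

# A single cookie now links the user's visits across all three sites.
assert tracker.profiles[cookie] == ["news.example", "shop.example", "health.example"]
```

Blocking third-party cookies breaks exactly this linkage, which is why the original specification authors recommended it.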

Similarly, if the benchmark is user privacy expectations, let’s be clear: study after study has demonstrated that users don’t understand and don’t want the pervasive web tracking that occurs today. 

Recently, some other browsers have attempted to address this problem, but without an agreed upon set of standards, attempts to improve user privacy are having unintended consequences.

This is clearly a reference to Safari’s Intelligent Tracking Prevention and Firefox’s Enhanced Tracking Protection, which we think are laudable privacy features. We’ll get to the unintended consequences claim.

First, large scale blocking of cookies undermine people’s privacy by encouraging opaque techniques such as fingerprinting. With fingerprinting, developers have found ways to use tiny bits of information that vary between users, such as what device they have or what fonts they have installed to generate a unique identifier which can then be used to match a user across websites. Unlike cookies, users cannot clear their fingerprint, and therefore cannot control how their information is collected. We think this subverts user choice and is wrong.

To appreciate the absurdity of this argument, imagine the local police saying, “We see that our town has a pickpocketing problem. But if we crack down on pickpocketing, the pickpockets will just switch to muggings. That would be even worse. Surely you don’t want that, do you?”

Concretely, there are several things wrong with Google’s argument. First, while fingerprinting is indeed a privacy invasion, that’s an argument for taking additional steps to protect users from it, rather than throwing up our hands in the air. Indeed, Apple and Mozilla have already taken steps to mitigate fingerprinting, and they are continuing to develop anti-fingerprinting protections.

Second, protecting consumer privacy is not like protecting security—just because a clever circumvention is technically possible does not mean it will be widely deployed. Firms face immense reputational and legal pressure against circumventing cookie blocking. Google’s own privacy fumble in 2012 offers a perfect illustration of our point: Google implemented a workaround for Safari’s cookie blocking; the workaround was spotted (in part by one of us), and Google had to settle enforcement actions with the Federal Trade Commission and state attorneys general. Afterward, Google didn’t double down—it completely backed away from tracking cookies for Safari users. Based on peer-reviewed research, including our own, we’re confident that fingerprinting continues to represent a small proportion of overall web tracking. And there’s no evidence of an increase in the use of fingerprinting in response to other browsers deploying cookie blocking.

Third, even if a large-scale shift to fingerprinting is inevitable (which it isn’t), cookie blocking still provides meaningful protection against third parties that stick with conventional tracking cookies. That’s better than the defeatist approach that Google is proposing.

This isn’t the first time that Google has used disingenuous arguments to suggest that a privacy protection will backfire. We’re calling this move privacy gaslighting, because it’s an attempt to persuade users and policymakers that an obvious privacy protection—already adopted by Google’s competitors—isn’t actually a privacy protection.

Second, blocking cookies without another way to deliver relevant ads significantly reduces publishers’ primary means of funding, which jeopardizes the future of the vibrant web. Many publishers have been able to continue to invest in freely accessible content because they can be confident that their advertising will fund their costs. If this funding is cut, we are concerned that we will see much less accessible content for everyone. Recent studies have shown that when advertising is made less relevant by removing cookies, funding for publishers falls by 52% on average.

The overt paternalism here is disappointing. Google is taking the position that it knows better than users—if users had all the privacy they want, they wouldn’t get the free content they want more. So no privacy for users.

As for the “recent studies” that Google refers to, that would be one paragraph in one blog post presenting an internal measurement conducted by Google. There is a glaring omission of the details of the measurement that are necessary to have any sort of confidence in the claim. And as long as we’re comparing anecdotes, the international edition of the New York Times recently switched from tracking-based behavioral ads to contextual and geographic ads—and it did not experience any decrease in advertising revenue.

Independent research doesn’t support Google’s claim either: the most recent academic study suggests that tracking only adds about 4% to publisher revenue. This is a topic that merits much more research, and it’s disingenuous for Google to cherry pick its own internal measurement. And it’s important to distinguish the economic issue of whether tracking benefits advertising platforms like Google (which it unambiguously does) from the economic issue of whether tracking benefits publishers (which is unclear).

Starting with today’s announcements, we will work with the web community to develop new standards that advance privacy, while continuing to support free access to content. Over the last couple of weeks, we’ve started sharing our preliminary ideas for a Privacy Sandbox – a secure environment for personalization that also protects user privacy. Some ideas include new approaches to ensure that ads continue to be relevant for users, but user data shared with websites and advertisers would be minimized by anonymously aggregating user information, and keeping much more user information on-device only. Our goal is to create a set of standards that is more consistent with users’ expectations of privacy.

There is nothing new about these ideas. Privacy preserving ad targeting has been an active research area for over a decade. One of us (Mayer) repeatedly pushed Google to adopt these methods during the Do Not Track negotiations (about 2011-2013). Google’s response was to consistently insist that these approaches are not technically feasible. For example: “To put it simply, client-side frequency capping does not work at scale.” We are glad that Google is now taking this direction more seriously, but a few belated think pieces aren’t much progress.

We are also disappointed that the announcement implicitly defines privacy as confidentiality. It ignores that, for some users, the privacy concern is behavioral ad targeting—not the web tracking that enables it. If an ad uses deeply personal information to appeal to emotional vulnerabilities or exploits psychological tendencies to generate a purchase, then that is a form of privacy violation—regardless of the technical details. 

We are following the web standards process and seeking industry feedback on our initial ideas for the Privacy Sandbox. While Chrome can take action quickly in some areas (for instance, restrictions on fingerprinting) developing web standards is a complex process, and we know from experience that ecosystem changes of this scope take time. They require significant thought, debate, and input from many stakeholders, and generally take multiple years.

Apple and Mozilla have tracking protection enabled, by default, today. And Apple is already testing privacy-preserving ad measurement. Meanwhile, Google is talking about a multi-year process for a watered-down form of privacy protection. And even that is uncertain—advertising platforms dragged out the Do Not Track standardization process for over six years, without any meaningful output. If history is any indication, launching a standards process is an effective way for Google to appear to be doing something on web privacy, but without actually delivering. 

In closing, we want to emphasize that the Chrome team is full of smart engineers passionate about protecting their users, and it has done incredible work on web security. But it is unlikely that Google can provide meaningful web privacy while protecting its business interests, and Chrome continues to fall far behind Safari and Firefox. We find this passage from Shoshana Zuboff’s The Age of Surveillance Capitalism to be apt:

“Demanding privacy from surveillance capitalists or lobbying for an end to commercial surveillance on the internet is like asking old Henry Ford to make each Model T by hand. It’s like asking a giraffe to shorten its neck, or a cow to give up chewing. These demands are existential threats that violate the basic mechanisms of the entity’s survival.”

It is disappointing—but regrettably unsurprising—that the Chrome team is cloaking Google’s business priorities in disingenuous technical arguments.

Thanks to Ryan Amos, Kevin Borgolte, and Elena Lucherini for providing comments on a draft.

Why PhD experiences are so variable and what you can do about it

People who do PhDs seem to have either strongly positive or strongly negative experiences — for some, it’s the best time of their lives, while others regret the decision to do a PhD. Few career choices involve such a colossal time commitment, so it’s worth thinking carefully about whether a PhD is right for you, and what you can do to maximize your chances of having a good experience. Here are four suggestions. Like all career advice, your mileage may vary.

1. A PhD should be viewed as an end in itself, not a means to an end. Some people find that they are not enjoying their PhD research, but decide to stick with it, seeing it as a necessary route to research success and fulfillment. This is a trap. If you’re not enjoying your PhD research, you’re unlikely to enjoy a research career as a professor. Besides, professors spend the majority of their time on administrative and other unrewarding activities. (And if you don’t plan to be a professor, then you have even less of a reason to stick with an unfulfilling PhD.)

If you feel confident that you’d be happier at some other job than in your PhD, jumping ship is probably the right decision. If possible, structure your program at the outset so that you can leave with a Master’s degree in about two years if the PhD isn’t working out. And consider deferring your PhD for a year or two after college, so that you’ll have a point of comparison for job satisfaction.

2. A PhD is a terrible financial decision. Doing a PhD incurs an enormous financial opportunity cost. If maximizing your earning potential is anywhere near the top of your life goals, you probably want to stay away from a PhD. While earning prospects vary substantially by discipline, a PhD is unlikely to improve your career earnings, regardless of area.

3. The environment matters. PhD programs can be welcoming and nurturing, or toxic and dysfunctional, or anywhere in between. The institution, department, your adviser, and your peers all make a big difference to your experience. But these differences are not reflected in academic rankings. When you’re deciding between programs, you might want to weigh factors like support structures for mental health, the incidence of harassment, location, and extra-curricular activities more strongly than rankings. It is extremely common for graduate researchers to face mental health challenges. During my own PhD, I benefited greatly from professional mental health support.

4. Manage risk. As with viral videos, acting careers, and startups, the distribution of success in research is wildly skewed. Most research papers gather dust while a few get all the credit — and the process that sorts papers involves a degree of luck and circumstance that researchers often don’t like to admit. This contributes to the high variance in PhD outcomes and experiences. Even for the eventual “winners”, the uncertainty is a source of stress.

Perhaps counterintuitively, the role of luck means that you should embrace risky projects, because if a project is low-risk the upside will probably be relatively insignificant as well. How, then, to manage risk? One way is to diversify — maintain a portfolio of independent research agendas. Also, if the success of research projects is not purely meritocratic, it follows that selling your work makes a big difference. Many academics find this distasteful, but it’s simply a necessity. Still, at the end of the day, be mentally prepared for the possibility that your objectively best work languishes while a paper that you cranked out as a hack job ends up being your most highly cited.

Conclusion. Many people embark on a PhD for the wrong reasons, such as their professors talking them into it. But a PhD only makes sense if you strongly value the intrinsic reward of intellectual pursuit and the chance to make an impact through research, with financial considerations being of secondary importance. This is an intensely personal decision. Even if you decide it’s right for you, you might want to leave yourself room to re-evaluate your choice. You should pick your program carefully and have a strategy in place for managing the inherent riskiness of research projects and the somewhat lonely nature of the journey.

A note on terminology. I don’t use the terms grad school and PhD student. The “school” frame is utterly at odds with what PhD programs are about. Its use misleads prospective PhD applicants and does doctoral researchers a disservice. Besides, Master’s and PhD programs have little in common, so the umbrella term “grad school” is doubly unhelpful.

Thanks to Ian Lundberg and Veena Rao for feedback on a draft.

Against privacy defeatism: why browsers can still stop fingerprinting

In this post I’ll discuss how a landmark piece of privacy research was widely misinterpreted, how this misinterpretation deterred the development of privacy technologies rather than spurring it, how a recent paper set the record straight, and what we can learn from all this.

The research in question is about browser fingerprinting. Because of differences in operating systems, browser versions, fonts, plugins, and at least a dozen other factors, different users’ web browsers tend to look different. This can be exploited by websites and third-party trackers to create so-called fingerprints. These fingerprints are much more effective than cookies for tracking users across websites: they leave no trace on the device and cannot easily be reset by the user.
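As a rough sketch of the mechanism (the attribute names and values below are invented for illustration), a fingerprinting script reads whatever attributes it can, serializes them, and hashes them into a stable identifier:

```python
import hashlib

def fingerprint(attributes):
    """Combine readable browser attributes into one stable identifier.
    A real script would read these from the browser (user agent,
    installed fonts, canvas output, ...); here they are plain strings."""
    # Serialize in a fixed order so the same browser always
    # produces the same fingerprint.
    canonical = "|".join(f"{k}={v}" for k, v in sorted(attributes.items()))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

browser = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) Firefox/68.0",
    "screen": "1920x1080x24",
    "timezone": "UTC-5",
    "fonts": "Arial,DejaVu Sans,Liberation Serif",
}

# Two visits to different sites yield the same identifier, with no
# cookie stored on the device and nothing for the user to clear.
assert fingerprint(browser) == fingerprint(browser)

# Change any one attribute and the identifier changes completely.
other = dict(browser, fonts="Arial,Comic Sans MS")
assert fingerprint(other) != fingerprint(browser)
```

The identifier is only useful for tracking if few other browsers share the same combination of attributes — which is exactly the empirical question at issue.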

The question is simply this: how effective is browser fingerprinting? That is, how unique is the typical user’s device fingerprint? The answer has big implications for online privacy. But studying this question scientifically is hard: while there are many tracking companies that have enormous databases of fingerprints, they don’t share them with researchers.

The first large-scale experiment on fingerprinting, called Panopticlick, was done by the Electronic Frontier Foundation starting in 2009. Hundreds of thousands of volunteers visited panopticlick.eff.org and agreed to have their browser fingerprinted for research. What the EFF found was remarkable at the time: 83% of participants had a fingerprint that was unique in the sample. Among those with Flash or Java enabled, fingerprints were even more likely to be unique: 94%. A project by researchers at INRIA in France with an even larger sample found broadly similar results. Meanwhile, researchers, including us, found that an ever-larger number of browser features — Canvas, Battery, Audio, and WebRTC — were being abused by tracking companies for fingerprinting.
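The statistic these studies report is straightforward: the fraction of fingerprints in the sample that appear exactly once. A minimal sketch with a toy sample (the fingerprint values are placeholders):

```python
from collections import Counter

def unique_fraction(fingerprints):
    """Fraction of browsers in the sample whose fingerprint appears
    exactly once -- i.e., browsers uniquely identifiable within
    this sample."""
    counts = Counter(fingerprints)
    return sum(1 for fp in fingerprints if counts[fp] == 1) / len(fingerprints)

# Toy sample: three browsers share one common configuration ("A"),
# while "B" and "C" are each one of a kind.
sample = ["A", "A", "A", "B", "C"]
assert unique_fraction(sample) == 0.4  # 2 of 5 browsers are unique
```

Note that this statistic depends on both the sample and its size, which is precisely what the later study revisited.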

The conclusion was clear: fingerprinting is devastatingly effective. It would be futile for web browsers to try to limit fingerprintability by exposing less information to scripts: there were too many leaks to plug; too many fingerprinting vectors. The implications were profound. Browser vendors concluded that they wouldn’t be able to stop third-party tracking, and so privacy protection was left up to extensions. [1] These extensions didn’t aim to limit fingerprintability either. Instead, most of them worked in a convoluted way: by manually compiling block lists of thousands of third-party tracking scripts, constantly playing catch up as new players entered the tracking game.

But here’s the twist: a team at INRIA (including some of the same researchers responsible for the earlier study) managed to partner with a major French website and test the website’s visitors for fingerprintability. The findings were published a few months ago, and this time the results were quite different: only a third of users had unique fingerprints (compared to 83% and 94% earlier), despite the researchers’ use of a comprehensive set of 17 fingerprinting attributes. For mobile users the number was even lower: less than a fifth. There were two reasons for the difference: the new study had a larger sample, and self-selection of participants appears to have biased the earlier studies. There’s more: since the web is evolving away from plugins such as Flash and Java, we should expect fingerprintability to drop even further. A close look at the paper’s findings suggests that even simple interventions by browsers to limit the highest-entropy attributes would greatly improve the ability of users to hide in the crowd.
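The “hide in the crowd” point can be made concrete with a toy simulation (the attributes and values below are invented): removing a single high-entropy attribute, such as the installed font list, merges previously unique users into larger anonymity sets.

```python
from collections import Counter

def anonymity_sets(users, attrs):
    """Group users by their values on `attrs`. Users in the same
    group are indistinguishable to a fingerprinter using only those
    attributes -- together they form one anonymity set."""
    return Counter(tuple(u[a] for a in attrs) for u in users)

users = [
    {"browser": "Firefox", "screen": "1080p", "fonts": "font-set-1"},
    {"browser": "Firefox", "screen": "1080p", "fonts": "font-set-2"},
    {"browser": "Firefox", "screen": "1080p", "fonts": "font-set-3"},
    {"browser": "Chrome",  "screen": "1080p", "fonts": "font-set-4"},
]

# With the high-entropy font list, every user is unique...
full = anonymity_sets(users, ["browser", "screen", "fonts"])
assert set(full.values()) == {1}

# ...but withholding that one attribute merges three users into
# a single anonymity set.
reduced = anonymity_sets(users, ["browser", "screen"])
assert max(reduced.values()) == 3
```

This is why limiting just the few highest-entropy attributes can pay off disproportionately: each one removed can collapse many singleton sets at once.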

Apple recently announced that Safari would try to limit fingerprinting, and it’s likely that the recent paper influenced this decision. Notably, a minority of web privacy experts never subscribed to the view that fingerprinting protection is futile, and W3C, the main web standards body, has long provided guidance for developers of new standards on how to minimize fingerprintability. It’s still not too late. But if we’d known in 2009 what we know today, browsers would have had a big head start in developing and deploying fingerprinting defenses.

Why did the misinterpretation happen in the first place? One easy lesson is that statistics is hard, and non-representative samples can thoroughly skew research conclusions. But there’s another pill that’s harder to swallow: the recent study was able to test users in the wild only because the researchers didn’t ask or notify the users. [2] With Internet experiments, there is a tension between traditional informed consent and validity of findings, and we need new ethical norms to resolve this.

Another lesson is that privacy defenses don’t need to be perfect. Many researchers and engineers think about privacy in all-or-nothing terms: a single mistake can be devastating, and if a defense won’t be perfect, we shouldn’t deploy it at all. That might make sense for some applications such as the Tor browser, but for everyday users of mainstream browsers, the threat model is death by a thousand cuts, and privacy defenses succeed by interfering with the operation of the surveillance economy.

Finally, the fingerprinting-defense-is-futile argument is an example of privacy defeatism. Faced with an onslaught of bad news about privacy, we tend to acquire a form of learned helplessness, and reach the simplistic conclusion that privacy is dying and there’s nothing we can do about it. But this position is not supported by historical evidence: instead, we find that there is a constant re-negotiation of the privacy equilibrium, and while there are always privacy-infringing developments, they are offset from time to time by legal, technological, and social defenses.

Browser fingerprinting remains on the frontlines of the privacy battle today. The GDPR is making things harder for fingerprinters. It’s time for browser vendors to also get serious about cracking down on this sneaky practice.

Thanks to Günes Acar and Steve Englehardt for comments on a draft.

[1] One notable exception is the Tor browser, but it comes at a serious cost to performance and breakage of features on websites. Another is Brave, which has a self-selected userbase presumably willing to accept some breakage in exchange for privacy.

[2] The researchers limited their experiment to users who had previously consented to the site’s generic cookie notice; they did not specifically inform users about their study.