October 16, 2019

Content Moderation for End-to-End Encrypted Messaging

Thursday evening, the Attorney General, the Acting Homeland Security Secretary, and top law enforcement officials from the U.K. and Australia sent an open letter to Mark Zuckerberg. The letter emphasizes the scourge of child abuse content online, and the officials call on Facebook to press pause on end-to-end encryption for its messaging platforms.

The letter arrived the same week as a widely shared New York Times article, describing how reports of child abuse content are multiplying. The article provides a heartbreaking account of how the National Center for Missing and Exploited Children (NCMEC) and law enforcement agencies are overburdened and under-resourced in addressing horrible crimes against children.

Much of the public discussion about content moderation and end-to-end encryption over the past week has appeared to reflect two common technical assumptions:

  1. Content moderation is fundamentally incompatible with end-to-end encrypted messaging.
  2. Enabling content moderation for end-to-end encrypted messaging fundamentally poses the same challenges as enabling law enforcement access to message content.

In a new discussion paper, I provide a technical clarification for each of these points.

  1. Forms of content moderation may be compatible with end-to-end encrypted messaging, without compromising important security principles or undermining policy values.
  2. Enabling content moderation for end-to-end encrypted messaging is a different problem from enabling law enforcement access to message content. The problems involve different technical properties, different spaces of possible designs, and different information security and public policy implications.

I aim to demonstrate these clarifications by formalizing specific content moderation properties for end-to-end encrypted messaging, then offering at least one possible protocol design for each property.

  • User Reporting: If a user receives a message that he or she believes contains harmful content, can the user report that message to the service provider?
  • Known Content Detection: Can the service provider automatically detect when a user shares content that has previously been labeled as harmful?
  • Classifier-based Content Detection: Can the service provider detect when a user shares new content that has not been previously identified as harmful, but that an automated classifier predicts may be harmful?
  • Content Tracing: If the service provider identifies a message that contains harmful content, and the message has been forwarded by a sequence of users, can the service provider trace which users forwarded the message?
  • Popular Content Collection: Can the service provider curate a set of content that has been shared by a large number of users, without knowing which users shared the content?
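
As a simplified illustration of the known content detection property, consider a service that compares a hash of each shared item against a blocklist of hashes of previously labeled content. The sketch below (the blocklist entry and function names are ours, not from the paper) uses exact cryptographic hashes for brevity; deployed systems use perceptual hashes that survive re-encoding, and designs compatible with end-to-end encryption must additionally hide the content and the blocklist from each other, for example via private set intersection.

```python
import hashlib

# Hypothetical blocklist: hashes of content previously labeled harmful.
BLOCKLIST = {
    hashlib.sha256(b"known-harmful-example").hexdigest(),
}

def matches_known_content(message_bytes: bytes) -> bool:
    """Return True if the message's hash appears in the blocklist.

    Exact hashing is brittle (one changed byte evades it); real systems
    use perceptual hashes, and E2EE-compatible designs avoid revealing
    either party's set in the clear.
    """
    digest = hashlib.sha256(message_bytes).hexdigest()
    return digest in BLOCKLIST

print(matches_known_content(b"known-harmful-example"))  # True
print(matches_known_content(b"benign message"))         # False
```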

The discussion paper is inherently preliminary and an agenda for further interdisciplinary research (including my own). I am not yet prepared to normatively advocate for or against the protocol designs that I describe. I am not claiming that these concepts provide sufficient content moderation capabilities, the same content moderation capabilities as current systems, or sufficient robustness against evasion. I am also not claiming that these designs adequately address information security risks or public policy values, such as free speech, international human rights, or economic competitiveness.

I do not know if there is a viable path forward for content moderation and end-to-end encrypted messaging that will be acceptable to technology platforms, law enforcement, NCMEC, civil society groups, information security experts, and other stakeholders. I do have confidence that, if such a path exists, we will only find it through open research and dialogue.

Watching You Watch: The Tracking Ecosystem of Over-the-Top TV Streaming Devices

By Hooman Mohajeri Moghaddam, Gunes Acar, Ben Burgess, Arunesh Mathur, Danny Y. Huang, Nick Feamster, Ed Felten, Prateek Mittal, and Arvind Narayanan

By 2020, one third of US households are estimated to “cut the cord”, i.e., discontinue their multichannel TV subscriptions and switch to internet-connected streaming services. Over-the-Top (“OTT”) streaming devices such as Roku and Amazon Fire TV, which currently sell for between $30 and $100, are cheap alternatives to smart TVs for cord-cutters. Instead of charging more for the hardware or the membership, Roku and Amazon Fire TV monetize their platforms through advertisements, which rely on tracking users’ viewing habits.

Although tracking of users on the web and on mobile is well studied, tracking on smart TVs and OTT devices has remained unexplored. To address this gap, we conducted the first study of tracking on OTT platforms. In a paper that we will present at the ACM CCS 2019 conference, we found that: 

  • Major online trackers such as Google and Facebook are also highly prominent in the OTT ecosystem. However, OTT channels also contain niche and lesser-known trackers such as adrise.tv and monarchads.com.
  • The information shared with tracker domains includes video titles (see Figure 1), channel names, permanent device identifiers and wireless SSIDs.
  • Countermeasures made available to users are ineffective at preventing tracking.
  • Roku had a vulnerability that allowed malicious web pages visited by Roku users to geolocate users, read device identifiers and install channels without their consent.
Figure 1. AsianCrush channel on Roku sends the device ID and video title to the online video advertising platform spotxchange.com.

Method and Findings:

Similar to how Android or iOS supports third-party apps, Amazon and Roku support third-party applications known as channels, ranging from popular channels like Netflix and CNN to several obscure ones.

Automation is one of the main challenges of studying how these channels track users. Tools that automate interaction with web pages (such as Selenium) do not exist for OTT platforms. To address this challenge, we developed a system that can automatically download OTT channels and interact with them, all while intercepting the network traffic and performing best-effort TLS interception. We describe the different components of our tool in the Appendix. Using this crawler, we collected data from the top 1000 channels on both the Roku and Amazon Fire TV channel stores.
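
For Roku, channel interactions can be driven over the External Control Protocol (ECP), an HTTP API that Roku devices expose on the local network on port 8060. The sketch below is illustrative only: the device address and the example command sequence are hypothetical, and our actual wrappers cover both platforms' remote APIs.

```python
import urllib.request

# Hypothetical local address of the Roku under test; ECP listens on port 8060.
ROKU_BASE = "http://192.168.1.50:8060"

def ecp_command(path: str) -> str:
    """Build the URL for an ECP command such as /keypress/Home or /launch/12."""
    return ROKU_BASE + path

def send(path: str) -> None:
    """POST the command with an empty body, as ECP expects."""
    req = urllib.request.Request(ecp_command(path), data=b"", method="POST")
    urllib.request.urlopen(req, timeout=5)

# A single crawl step might launch a channel and try to start playback:
# send("/launch/12")        # 12 is the public channel ID for Netflix
# send("/keypress/Select")  # press Select to begin playing a video
```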

The distribution of trackers by channel category and rank is shown in Figure 2. The “Games” category of Roku channels contacts the most trackers: nine of the top ten channels (ordered by the number of trackers) are categorized as game channels. On the other hand, five of the ten Fire TV channels with the most trackers are “News” channels, where the top three channels contact close to 60 tracker domains each. Below we summarize our findings:

Figure 2. Distribution of trackers by channel ranks and channel categories.

Google and Facebook are among the most popular trackers

Google and Facebook domains (doubleclick.net, google-analytics.com, googlesyndication.com and facebook.com) are among the most prevalent trackers in the OTT channels on both platforms we studied. Google’s doubleclick.net appeared on 975 of the top 1000 Roku channels, while amazon-adsystem.com appeared on 687 of the top 1000 Amazon Fire TV channels.

Table 1. Most prevalent trackers on top 1000 channels on Roku (left) and Amazon (right).

User and device identifiers shared with trackers

Trackers have access to a wide range of device and user identifiers on OTT platforms. Some of these identifiers can be reset by users (e.g., Advertising IDs), while others are permanent (e.g., serial numbers, MAC addresses). To detect the identifiers shared with trackers, we followed the method described by Englehardt et al. to search for device and user identifiers in the network traffic of the top 1000 channels for each platform. This allowed us to detect leaks even when the identifiers were encoded or hashed. An overview of the leaked IDs on each platform is given in Table 2.
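
The encoding-aware search can be sketched as follows. The identifier value and tracker URL are hypothetical, this sketch checks only a single layer of encoding, and the full analysis covers additional transformations (including nested ones, such as the hash of a hash).

```python
import base64
import hashlib

def candidate_encodings(identifier: str) -> set:
    """Plain, hashed, and base64-encoded forms of a known identifier."""
    raw = identifier.encode()
    return {
        identifier,
        base64.b64encode(raw).decode(),
        hashlib.md5(raw).hexdigest(),
        hashlib.sha1(raw).hexdigest(),
        hashlib.sha256(raw).hexdigest(),
    }

def find_leaks(payload: str, identifiers: list) -> list:
    """Return identifiers whose (possibly encoded) form appears in the payload."""
    leaked = []
    for ident in identifiers:
        if any(enc in payload for enc in candidate_encodings(ident)):
            leaked.append(ident)
    return leaked

serial = "YH00AB123456"  # hypothetical device serial number
payload = "https://tracker.example/pixel?d=" + hashlib.md5(serial.encode()).hexdigest()
print(find_leaks(payload, [serial]))  # ['YH00AB123456']
```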

Table 2. Overview of identifier and information leakage detected in the Roku (left) and the FireTV (right) crawls.

Channels share video titles with third-party trackers

Out of 100 randomly selected channels on Roku and Amazon, we found 9 channels on Roku (e.g., “CBS News” and “News 5 Cleveland WEWS”) and 14 channels on the Fire TV (e.g., “NBC News” and “Travel Channel”) that leaked the title of the video to a tracking domain. On Roku, all video titles were leaked over unencrypted connections, exposing user video history to eavesdroppers. On Fire TV, only two channels (NBC News and WRAL) used an unencrypted connection when sending the title to tracking domains.

The overwhelming majority of channels use unencrypted connections

Of the top 1000 channels we studied on each platform, 794 channels on Roku and 762 on Amazon Fire TV had at least one unencrypted HTTP session, potentially exposing users’ information and identities to network adversaries.

Countermeasures

OTT platforms provide privacy options that purport to limit tracking on their devices: “Limit Ad Tracking” on Roku and “Disable Interest-based Ads” on Amazon Fire TV. Our measurements show that these privacy options fall short of preventing tracking. Turning on these options did not change the number of trackers contacted. Turning on “Limit Ad Tracking” on Roku reduced the number of Ad ID leaks from 390 to zero, but did not change the number of serial number leaks.

Roku Remote Control API Vulnerability

To investigate other ways OTT devices may compromise user privacy and security, we analyzed local API endpoints of Roku and Fire TV. OTT devices expose such interfaces to enable debugging, remote control, and home automation by mobile apps and other automation software. We discovered a vulnerability in Roku’s remote control API that allows an attacker to:

  • send commands to install/uninstall/launch channels and collect unique identifiers from Roku devices – even when the connected display is turned off.
  • geolocate Roku users via the SSID of the wireless network.
  • extract MAC address, serial number, and other unique identifiers to track users or respawn tracking identifiers (similar to evercookies).
  • get the list of installed channels and use it for profiling purposes.

We reported the vulnerability to Roku in December 2018. Roku addressed the issue and finalized rolling out their security fix by March 2019.

Going forward

Our research shows that users, who are already pervasively tracked on the web and mobile, face another set of privacy-intrusive tracking practices when using OTT streaming platforms. A combination of technical and policy solutions can be considered when addressing these privacy and security issues. OTT platforms should offer better privacy controls, similar to the Incognito/Private Browsing modes of modern web browsers. Insecure connections should be disincentivized by platform policies; for example, clear-text connections should be blocked unless an exception is requested by the channel. Regulators and policy makers should ensure that the privacy protections available for brick-and-mortar video rental services, such as the Video Privacy Protection Act (VPPA), are updated to cover emerging OTT platforms.

Appendix

Crawler architecture:

We set out to build a crawler to study tracking and privacy practices of OTT channels at scale. Our crawler installs a channel, launches it, and attempts to view a video on the channel, while collecting network traffic and attempting “best-effort” TLS interception. The crawler consists of a number of different hardware devices:

  • A desktop machine connected to the Internet acts as a wireless access point (AP).
  • An OTT stick connects to the Internet via the WiFi AP provided by the desktop machine. It also connects to a TV through an HDMI Capture and Split Card to sidestep the HDCP protections.

The desktop machine orchestrates our crawls and has the following software components:

  • Automatic interaction engine:
    • Remote Control API: OTT platforms provide an API to enable remote control apps to send commands such as switching or installing channels. We wrote our own wrappers for both Roku and Amazon Fire TV’s remote APIs.
    • Audio/Video processing: We process the audio from the OTT device on the desktop machine and use it to detect video playback, which guides our automatic interaction with channels. Video input is also saved as screenshots for post-processing and validation.
  • Network Capture: We collect network traffic of the OTT devices as pcap files and dump all DNS transactions in a Redis database.
  • TLS interception: We use mitmproxy to perform “best-effort” TLS interception. For each channel and each new TLS endpoint, we attempt to intercept the traffic using a self-signed certificate. If the interception fails, we add the endpoint to a no-intercept list to avoid further interception attempts. On Amazon Fire TV, we manage to root the device using a previously known vulnerability, and install mitmproxy’s self-signed certificate on the device certificate store. In addition, we use Frida to bypass certificate pinning.
Figure 3. Overview of our smart crawler.
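
The bookkeeping behind the no-intercept list can be sketched as follows; the class and method names are ours, and in practice this logic runs inside the proxy (e.g., as a mitmproxy addon) rather than as standalone code.

```python
class BestEffortInterceptor:
    """Track which TLS endpoints tolerate interception.

    The first connection to an endpoint is attempted with a self-signed
    certificate; endpoints that reject it (e.g., via certificate pinning)
    go on a no-intercept list and are passed through untouched on later
    connections.
    """

    def __init__(self):
        self.no_intercept = set()

    def should_intercept(self, host: str) -> bool:
        return host not in self.no_intercept

    def record_failure(self, host: str) -> None:
        # Called when the TLS handshake fails under interception.
        self.no_intercept.add(host)

proxy = BestEffortInterceptor()
assert proxy.should_intercept("pinned.example")      # first contact: try to intercept
proxy.record_failure("pinned.example")               # handshake failed once...
assert not proxy.should_intercept("pinned.example")  # ...so pass through next time
```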

Deconstructing Google’s excuses on tracking protection

By Jonathan Mayer and Arvind Narayanan.

Blocking cookies is bad for privacy. That’s the new disingenuous argument from Google, trying to justify why Chrome is so far behind Safari and Firefox in offering privacy protections. As researchers who have spent over a decade studying web tracking and online advertising, we want to set the record straight.

Our high-level points are:

1) Cookie blocking does not undermine web privacy. Google’s claim to the contrary is privacy gaslighting.

2) There is little trustworthy evidence on the comparative value of tracking-based advertising.

3) Google has not devised an innovative way to balance privacy and advertising; it is latching onto prior approaches that it previously disclaimed as impractical.

4) Google is attempting a punt to the web standardization process, which will at best result in years of delay.

What follows is a reproduction of excerpts from yesterday’s announcement, annotated with our comments.

Technology that publishers and advertisers use to make advertising even more relevant to people is now being used far beyond its original design intent – to a point where some data practices don’t match up to user expectations for privacy.

Google is trying to thread a needle here, implying that some level of tracking is consistent with both the original design intent for web technology and user privacy expectations. Neither is true.

If the benchmark is original design intent, let’s be clear: cookies were not supposed to enable third-party tracking, and browsers were supposed to block third-party cookies. We know this because the authors of the original cookie technical specification said so (RFC 2109, Section 4.3.5). 

Similarly, if the benchmark is user privacy expectations, let’s be clear: study after study has demonstrated that users don’t understand and don’t want the pervasive web tracking that occurs today. 

Recently, some other browsers have attempted to address this problem, but without an agreed upon set of standards, attempts to improve user privacy are having unintended consequences.

This is clearly a reference to Safari’s Intelligent Tracking Prevention and Firefox’s Enhanced Tracking Protection, which we think are laudable privacy features. We’ll get to the unintended consequences claim.

First, large scale blocking of cookies undermine people’s privacy by encouraging opaque techniques such as fingerprinting. With fingerprinting, developers have found ways to use tiny bits of information that vary between users, such as what device they have or what fonts they have installed to generate a unique identifier which can then be used to match a user across websites. Unlike cookies, users cannot clear their fingerprint, and therefore cannot control how their information is collected. We think this subverts user choice and is wrong.

To appreciate the absurdity of this argument, imagine the local police saying, “We see that our town has a pickpocketing problem. But if we crack down on pickpocketing, the pickpocketers will just switch to muggings. That would be even worse. Surely you don’t want that, do you?”

Concretely, there are several things wrong with Google’s argument. First, while fingerprinting is indeed a privacy invasion, that’s an argument for taking additional steps to protect users from it, rather than throwing up our hands in the air. Indeed, Apple and Mozilla have already taken steps to mitigate fingerprinting, and they are continuing to develop anti-fingerprinting protections.

Second, protecting consumer privacy is not like protecting security—just because a clever circumvention is technically possible does not mean it will be widely deployed. Firms face immense reputational and legal pressures against circumventing cookie blocking. Google’s own privacy fumble in 2012 offers a perfect illustration of our point: Google implemented a workaround for Safari’s cookie blocking; it was spotted (in part by one of us), and it had to settle enforcement actions with the Federal Trade Commission and state attorneys general. Afterward, Google didn’t double down—it completely backed away from tracking cookies for Safari users. Based on peer-reviewed research, including our own, we’re confident that fingerprinting continues to represent a small proportion of overall web tracking. And there’s no evidence of an increase in the use of fingerprinting in response to other browsers deploying cookie blocking.

Third, even if a large-scale shift to fingerprinting is inevitable (which it isn’t), cookie blocking still provides meaningful protection against third parties that stick with conventional tracking cookies. That’s better than the defeatist approach that Google is proposing.

This isn’t the first time that Google has used disingenuous arguments to suggest that a privacy protection will backfire. We’re calling this move privacy gaslighting, because it’s an attempt to persuade users and policymakers that an obvious privacy protection—already adopted by Google’s competitors—isn’t actually a privacy protection.

Second, blocking cookies without another way to deliver relevant ads significantly reduces publishers’ primary means of funding, which jeopardizes the future of the vibrant web. Many publishers have been able to continue to invest in freely accessible content because they can be confident that their advertising will fund their costs. If this funding is cut, we are concerned that we will see much less accessible content for everyone. Recent studies have shown that when advertising is made less relevant by removing cookies, funding for publishers falls by 52% on average.

The overt paternalism here is disappointing. Google is taking the position that it knows better than users—if users had all the privacy they want, they wouldn’t get the free content they want more. So no privacy for users.

As for the “recent studies” that Google refers to, that would be one paragraph in one blog post presenting an internal measurement conducted by Google. There is a glaring omission of the details of the measurement that are necessary to have any sort of confidence in the claim. And as long as we’re comparing anecdotes, the international edition of the New York Times recently switched from tracking-based behavioral ads to contextual and geographic ads—and it did not experience any decrease in advertising revenue.

Independent research doesn’t support Google’s claim either: the most recent academic study suggests that tracking only adds about 4% to publisher revenue. This is a topic that merits much more research, and it’s disingenuous for Google to cherry pick its own internal measurement. And it’s important to distinguish the economic issue of whether tracking benefits advertising platforms like Google (which it unambiguously does) from the economic issue of whether tracking benefits publishers (which is unclear).

Starting with today’s announcements, we will work with the web community to develop new standards that advance privacy, while continuing to support free access to content. Over the last couple of weeks, we’ve started sharing our preliminary ideas for a Privacy Sandbox – a secure environment for personalization that also protects user privacy. Some ideas include new approaches to ensure that ads continue to be relevant for users, but user data shared with websites and advertisers would be minimized by anonymously aggregating user information, and keeping much more user information on-device only. Our goal is to create a set of standards that is more consistent with users’ expectations of privacy.

There is nothing new about these ideas. Privacy preserving ad targeting has been an active research area for over a decade. One of us (Mayer) repeatedly pushed Google to adopt these methods during the Do Not Track negotiations (about 2011-2013). Google’s response was to consistently insist that these approaches are not technically feasible. For example: “To put it simply, client-side frequency capping does not work at scale.” We are glad that Google is now taking this direction more seriously, but a few belated think pieces aren’t much progress.

We are also disappointed that the announcement implicitly defines privacy as confidentiality. It ignores that, for some users, the privacy concern is behavioral ad targeting—not the web tracking that enables it. If an ad uses deeply personal information to appeal to emotional vulnerabilities or exploits psychological tendencies to generate a purchase, then that is a form of privacy violation—regardless of the technical details. 

We are following the web standards process and seeking industry feedback on our initial ideas for the Privacy Sandbox. While Chrome can take action quickly in some areas (for instance, restrictions on fingerprinting) developing web standards is a complex process, and we know from experience that ecosystem changes of this scope take time. They require significant thought, debate, and input from many stakeholders, and generally take multiple years.

Apple and Mozilla have tracking protection enabled, by default, today. And Apple is already testing privacy-preserving ad measurement. Meanwhile, Google is talking about a multi-year process for a watered-down form of privacy protection. And even that is uncertain—advertising platforms dragged out the Do Not Track standardization process for over six years, without any meaningful output. If history is any indication, launching a standards process is an effective way for Google to appear to be doing something on web privacy, but without actually delivering. 

In closing, we want to emphasize that the Chrome team is full of smart engineers passionate about protecting their users, and it has done incredible work on web security. But it is unlikely that Google can provide meaningful web privacy while protecting its business interests, and Chrome continues to fall far behind Safari and Firefox. We find this passage from Shoshana Zuboff’s The Age of Surveillance Capitalism to be apt:

“Demanding privacy from surveillance capitalists or lobbying for an end to commercial surveillance on the internet is like asking old Henry Ford to make each Model T by hand. It’s like asking a giraffe to shorten its neck, or a cow to give up chewing. These demands are existential threats that violate the basic mechanisms of the entity’s survival.”

It is disappointing—but regrettably unsurprising—that the Chrome team is cloaking Google’s business priorities in disingenuous technical arguments.

Thanks to Ryan Amos, Kevin Borgolte, and Elena Lucherini for providing comments on a draft.