

What Your ISP (Probably) Knows About You

Earlier this week, I came across a working paper from Professor Peter Swire—a highly respected attorney, professor, and policy expert. Swire’s paper, entitled “Online Privacy and ISPs”, argues that ISPs have limited capability to monitor users’ online activity, for three reasons: (1) users increasingly rely on many devices and connections, so any single ISP is the conduit for only a fraction of a typical user’s activity; (2) end-to-end encryption is becoming more pervasive, which limits ISPs’ ability to glean information about user activity; and (3) users are increasingly shifting to VPNs to send traffic.

An informed reader might surmise that this writeup relates to the reclassification of Internet service providers under Title II of the Telecommunications Act, which gives the FCC a mandate to protect private information that ISPs learn about their customers. This private information includes both personal information and information about a customer’s use of the service that the provider learns in the course of providing that service—sometimes called Customer Proprietary Network Information, or CPNI. One possible conclusion a reader might draw from this white paper is that ISPs have limited capability to learn information about customers’ use of their service and hence should not be subject to additional privacy regulations.

I am not taking a position in this policy debate, nor do I intend to make any normative statements about whether an ISP’s ability to see this type of user information is inherently “good” or “bad” (in fact, one might even argue that an ISP’s ability to see this information could improve network security, network management, or other services). Nevertheless, these debates should be based on a technical picture that is as accurate as possible. In this vein, it is worth examining Professor Swire’s “factual description of today’s online ecosystem” that claims to offer the reader an “up-to-date and accurate understanding of the facts”. The report certainly contains many facts, but it also omits important details about the “online ecosystem”. Below, I fill in what I see as some important missing pieces. Much of what I discuss below I have also sent verbatim in a letter to the FCC Chairman. I hope that the original report will ultimately incorporate some of these points.

[Update (March 9): Swire notes in a response that the report itself doesn’t contain technical inaccuracies. Although there are certainly many points that are arguable, they are hard to disprove without better data, so it is difficult to “prove” the inaccuracies. Even if we take it as a given that there are no inaccuracies, that’s a very different thing than saying that the report tells the whole story.]


An analogy to understand the FBI’s request of Apple

After my previous blog post about the FBI, Apple, and the San Bernardino iPhone, I’ve been reading many other bloggers and news articles on the topic. What seems to be missing is a decent analogy to explain the unusual nature of the FBI’s demand and the importance of Apple’s stance in opposition to it. Before I dive in, it’s worth understanding what the FBI’s larger goals are. Cyrus Vance Jr., the Manhattan DA, states it clearly: “no smartphone lies beyond the reach of a judicial search warrant.” That’s the FBI’s real goal. The San Bernardino case is just a vehicle toward achieving that goal. With this in mind, it’s less important to focus on the specific details of the San Bernardino case, the subtle improvements Apple has made to the iPhone since the 5c, or the apparent mishandling of the iCloud account behind the San Bernardino iPhone.

Our Analogy: TSA Luggage Locks

When you check your bags in the airport, you may well want to lock them, to keep baggage handlers and other interlopers from stealing your stuff. But, of course, baggage inspectors have a legitimate need to look through bags. Your bags don’t have any right of privacy in an airport. To satisfy these needs, we now have “TSA locks”. You get a combination you can enter, and the TSA gets their own secret key that allows airport staff to open any TSA lock. That’s a “backdoor”, engineered into the lock’s design.

What’s the alternative? If you want the TSA to have the technical capacity to search a large percentage of bags, then there really isn’t an alternative. After all, if we used “real” locks, then the TSA would be “forced” to cut them open. But consider the hypothetical case where these sorts of searches were exceptionally rare. At that point, the local TSA could keep hundreds of spare locks, of all makes and models. They could cut off your super-duper strong lock, inspect your bag, and then replace the cut lock with a brand new one of the same variety. They could extract the PIN or key cylinder from the broken lock and install it in the new one. They could even rough up the new one so it looks just like the original. Needless to say, this would be a specialized skill and it would be expensive to use. That’s pretty much where we are in terms of hacking the newest smartphones.

Another area where this analogy holds up is all the people who will “need” access to the backdoor keys. Who gets the backdoor keys? Sure, it might begin with the TSA, but every baggage inspector in every airport, worldwide, will demand access to those keys. And they’ll even justify it, because their inspectors work together with ours to defeat smuggling and other crimes. We’re all in this together! Next thing you know, the backdoor keys are everywhere. Is that a bad thing? Well, the TSA backdoor lock scheme is only as secure as their ability to keep the keys a secret. And what happened? The TSA mistakenly allowed the Washington Post to publish a photo of all the keys, which makes it trivial for anyone to fabricate those keys. (CAD files for them are now online!) Consequently, anybody can take advantage of the TSA locks’ designed-in backdoor, not just all the world’s baggage inspectors.

For San Bernardino, the FBI wants Apple to retrofit a backdoor mechanism where there wasn’t one previously. The legal precedent that the FBI wants creates the capability to convert any luggage lock into a TSA backdoor lock. This would only be necessary if they wanted access to lots of phones, at a scale where their specialized phone-cracking team becomes too expensive to operate. This no doubt becomes all the more pressing for the FBI as modern smartphones get better and better at resisting physical attacks.

Where the analogy breaks down: If you travel with expensive stuff in your luggage, you know well that those locks have very limited resistance to an attacker with bolt cutters. If somebody steals your luggage, they’ll get your stuff, whereas that’s not necessarily the case with a modern iPhone. These phones are akin to luggage with some kind of self-destruct charge inside: force the luggage open and the contents will be destroyed. Another important difference is that much of the data the FBI presumably wants from the San Bernardino phone can be obtained elsewhere, e.g., phone call metadata and cellular tower usage metadata. We have very little reason to believe that the FBI needs anything on that phone whatsoever, relative to the mountain of evidence that it already has.

Why this analogy is important: The capability to access the San Bernardino iPhone, as the court order describes it, is a one-off thing—a magic wand that converts precisely one traditional luggage lock into a TSA backdoor lock, having no effect on any other lock in the world. But as Vance makes clear in his New York Times opinion, the stakes are much higher than that. The FBI wants this magic wand, in the form of judicial orders and a bespoke Apple engineering process, to gain backdoor access to any phone in its possession. If the FBI can go to Apple to demand this, then so can any other government. Apple will quickly want to get itself out of the business of adjudicating these demands, so it will engineer in the backdoor feature once and for all, albeit under duress, and will share the necessary secrets with the FBI and with every other nation-state’s police and intelligence agencies. In other words, Apple will be forced to install a TSA backdoor key in every phone it makes, and so will everybody else.

While this would be lovely for helping the FBI gather the evidence it wants, it would be especially lovely for foreign intelligence officers, operating on our shores, or going after our citizens when they travel abroad. If they pickpocket a phone from a high-value target, our FBI’s policies will enable any intel or police organization, anywhere, to trivially exercise any phone’s TSA backdoor lock and access all the intel within. Needless to say, we already have a hard time defending ourselves from nation-state adversaries’ cyber-exfiltration attacks. Hopefully, sanity will prevail, because it would be a monumental error for the government to require that all our phones be engineered with backdoors.


Apple, the FBI, and the San Bernardino iPhone

Apple just posted a remarkable “customer letter” on its web site. To understand it, let’s take a few steps back.

In a nutshell, one of the San Bernardino shooters had an iPhone. The FBI wants to root through it as part of their investigation, but they can’t do this effectively because of Apple’s security features. How, exactly, does this work?

  • Modern iPhones (and also modern Android devices) encrypt their internal storage. If you were to just cut the Flash chips out of the phone and read them directly, you’d learn nothing.
  • But iPhones need to decrypt that internal storage in order to actually run software. The necessary cryptographic key material is protected by the user’s password or PIN.
  • The FBI wants to be able to exhaustively try all the possible PINs (a “brute force search”), but the iPhone was deliberately engineered with a “rate limit” to make this sort of attack difficult. (A rough sketch of the arithmetic appears after this list.)
  • The only other option, the FBI claims, is to replace the standard copy of iOS with something custom-engineered to defeat these rate limits, but an iPhone will only accept an update to iOS if it’s digitally signed by Apple. Consequently, the FBI convinced a judge to compel Apple to create a custom version of iOS, just for them, solely for this investigation.
  • I’m going to ignore the legal arguments on both sides, and focus on the technical and policy aspects. It’s certainly technically possible for Apple to do this. They could even engineer their customized iOS build to check the serial number of the iPhone on which it’s installed, such that the backdoor would only work on the San Bernardino suspect’s phone, without being a general-purpose skeleton key for all iPhones.
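
To see why that rate limit is the whole ballgame, here’s a back-of-the-envelope sketch in Python. The only number it relies on is the roughly 80 milliseconds of deliberately slow, hardware-bound key derivation per guess that Apple describes in its iOS security documentation; the escalating retry delays and the optional erase-after-ten-failures setting are summarized only in the comments, not modeled.

    # A back-of-the-envelope sketch, not Apple's implementation. Assumption: each
    # passcode guess costs roughly 80 ms of hardware-bound key derivation; the
    # escalating retry delays and the optional erase-after-10-failures setting
    # are the protections the FBI asked Apple to disable.

    PER_GUESS_SECONDS = 0.08  # assumed cost of one key-derivation attempt

    def worst_case_hours(pin_digits):
        """Time to try every possible numeric PIN if only the slow key
        derivation remains (no retry delays, no erase)."""
        return (10 ** pin_digits) * PER_GUESS_SECONDS / 3600

    for digits in (4, 6):
        print(f"{digits}-digit PIN: about {worst_case_hours(digits):.1f} hours, worst case")

    # Roughly 13 minutes for a 4-digit PIN and under a day for a 6-digit PIN.
    # With the escalating delays left in place (eventually an hour between
    # guesses), or with the erase option enabled, the same search is impractical.

The gap between “minutes” and “effectively impossible” is exactly what the requested custom iOS build would eliminate.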

With all that as background, it’s worth considering a variety of questions.


How Does Zero-Rating Affect Mobile Data Usage?

On Monday, the Telecom Regulatory Authority of India (TRAI) released a decision that effectively bans “zero-rated” Internet services in the country. While the notion of zero-rating might be somewhat new to many readers in the United States, the practice is common in many developing economies. Essentially, zero-rating is an arrangement in which a carrier does not charge its customers normal data rates for accessing certain content.

High-profile instances of zero-rating include Facebook’s “Free Basics” (formerly “Internet.org“) and Wikipedia Zero. But many readers might be surprised to learn that the practice is impressively widespread. Although comprehensive documentation is hard to come by, experience and conventional wisdom affirm that mobile data carriers in regions across the world regularly partner with content providers to offer services that are effectively free to the consumer, and these offerings tend to change frequently.

I experienced zero-rating first-hand on a research trip to South Africa last summer, where I learned that Cell C, a mobile telecom provider, had partnered with Internet.org to offer its subscribers free access to a limited set of sites through the Internet.org mobile application. I immediately wondered whether a citizen’s socioeconomic class could affect their Internet usage—and, as a consequence, their access to information.

Zero-rating evokes a wide range of (strong) opinions (emphasis on “opinion”). Mark Zuckerberg would have us believe that Free Basics is a way to bring the Internet to the next billion people, the alternative being that this demographic might not have access to the Internet at all. This, of course, presumes that we equate “access to Facebook” with “access to the Internet”—something which at least one study has shown can occur (and is perhaps even more cause for concern). Others have argued that zero-rated services violate network neutrality principles and could also result in the creation of walled gardens where citizens’ Internet access might be brokered by a few large and powerful organizations.

And yet, while the arguments about zero-rating are loud, emotional, and increasingly high-stakes, the opinions on either side have yet to be supported by any actual data.


The Princeton Bitcoin textbook is now freely available

The first complete draft of the Princeton Bitcoin textbook is now freely available. We’re very happy with how the book turned out: it’s comprehensive, at over 300 pages, but has a conversational style that keeps it readable.

If you’re looking to truly understand how Bitcoin works at a technical level and have a basic familiarity with computer science and programming, this book is for you. Researchers and advanced students will find the book useful as well — starting around Chapter 5, most chapters have novel intellectual contributions.

Princeton University Press is publishing the official, peer-reviewed, polished, and professionally done version of this book. It will be out this summer. If you’d like to be notified when it comes out, you should sign up here.

Several courses have already used an earlier draft of the book in their classes, including Stanford’s CS 251. If you’re an instructor looking to use the book in your class, we welcome you to get in touch, and we’d be happy to share additional teaching materials with you.

Online course and supplementary materials. The Coursera course accompanying this book had 30,000 students in its first version, and it was a success based on engagement and end-of-course feedback. 

We plan to offer a version with some improvements shortly. Specifically, we’ll be integrating the programming assignments developed for the Stanford course with our own, with Dan Boneh’s gracious permission. We also have tentative plans to record a lecture on Ethereum (we’ve added a discussion of Ethereum to the book in Chapter 10).

Finally, graduate students at Princeton have been leading the charge on several exciting research projects in this space. Watch this blog or my Twitter for updates.


Updating the Defend Trade Secrets Act?

Despite statements to the contrary by sponsors and supporters in April 2014, August 2015, and October 2015, backers of the Defend Trade Secrets Act (DTSA) now aver that “cyber espionage is not the primary focus” of the legislation. At last month’s Senate Judiciary Committee hearing, the DTSA was instead justified by two different primary arguments: the rise of trade secret theft by rogue employees and the need for uniformity in trade secret law.

While a change in a policy argument is not inherently bad, the alteration of the core justification for a bill should be considered when assessing it. Perhaps the new position of DTSA proponents acknowledges the arguments by over 40 academics, including me, that the DTSA will not reduce cyberespionage. However, we also disputed these new rationales in that letter: the rogue employee is more than adequately addressed by existing trade secret law, and there will be less uniformity in trade secrecy under the DTSA because of the lack of federal jurisprudence.

The downsides — including weakened industry cybersecurity, abusive litigation against small entities, and resurrection of the anti-employee inevitable disclosure doctrine — remain. As such, I continue to oppose the DTSA as a giant trade secrecy policy experiment with little data to back up its benefits and much evidence of its costs.


Who Will Secure the Internet of Things?

Over the past several months, CITP-affiliated Ph.D. student Sarthak Grover and CITP fellow Roya Ensafi have been investigating various security and privacy vulnerabilities of Internet of Things (IoT) devices in the home network, to get a better sense of the current state of the smart devices that many consumers have begun to install in their homes.

To explore this question, we purchased a collection of popular IoT devices (a Belkin WeMo Switch, the Nest Thermostat, an Ubi Smart Speaker, a Sharx Security Camera, a PixStar Digital Photoframe, and a Smartthings hub), connected them to a laboratory network at CITP, and monitored the traffic that these devices exchanged with the public Internet. We initially expected that end-to-end encryption might foil our attempts to monitor this traffic.

What We Found: Be Afraid!

Many devices fail to encrypt at least some of the traffic that they send and receive. Investigating the traffic to and from these devices turned out to be much easier than expected, as many of the devices exchanged personal or private information with servers on the Internet in the clear, completely unencrypted.

Last week, we presented a summary of our findings to the Federal Trade Commission at PrivacyCon. The video of Sarthak’s talk is available from the FTC website, as well as on YouTube. Some of the more striking findings include:

  • The Nest thermostat was revealing the location of the home and of the user’s weather station, including the user’s zip code, in the clear. (Note: Nest promptly fixed this bug after we notified them.)
  • The Ubi uses unencrypted HTTP to communicate information to its portal, including voice chats and sensor readings (sound, temperature, light, humidity). It also communicates with the user via unencrypted email. Needless to say, much of this information, including the sensor readings, could reveal sensitive details, such as whether the user is home, or even movements within the house.
  • The Sharx security camera transmits video over unencrypted FTP; if the server for the video archive is outside of the home, this traffic could also be intercepted by an eavesdropper.
  • All traffic to and from the PixStar photoframe was sent unencrypted, revealing many user interactions with the device.

Traffic capture from Nest Thermostat in Fall 2015, showing user zip code and other information in cleartext.

Traffic capture from Ubi, which sends sensor values and states in clear text.

Some devices encrypt data traffic, but encryption may not be enough. A natural reaction to some of these findings might be that these devices should encrypt all traffic that they send and receive. Indeed, some devices we investigated (e.g., the Smartthings hub) already do so. Encryption may be a good starting point, but by itself, it appears to be insufficient for preserving user privacy.  For example, user interactions with these devices generate traffic signatures that reveal information, such as when power to an outlet has been switched on or off. It appears that simple traffic features such as traffic volume over time may be sufficient to reveal certain user activities.
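
To make the idea of a traffic signature concrete, here is a minimal sketch of the kind of analysis an observer could run using nothing but per-interval byte counts for a device, even when every payload is encrypted. The byte counts and the threshold below are invented for illustration.

    # A minimal sketch of traffic-signature analysis. Assumption: we only see
    # (interval, total bytes) pairs for one device, e.g. from a flow log; the
    # payloads themselves may be fully encrypted. A crude anomaly check on
    # short-term volume is often enough to flag "events" such as an outlet
    # being toggled or a camera starting to stream.

    from statistics import mean, pstdev

    def flag_activity(byte_counts, window=6, z_threshold=3.0):
        """Return interval indices whose volume is anomalously high relative to
        the preceding window, a crude proxy for user interaction with the device."""
        events = []
        for i in range(window, len(byte_counts)):
            baseline = byte_counts[i - window:i]
            mu, sigma = mean(baseline), pstdev(baseline) or 1.0
            if (byte_counts[i] - mu) / sigma > z_threshold:
                events.append(i)
        return events

    # Hypothetical per-minute byte counts for a smart outlet: mostly idle
    # keepalives, with a burst when the user toggles it from the companion app.
    counts = [200, 180, 210, 190, 205, 195, 5000, 220, 185, 200]
    print(flag_activity(counts))   # -> [6], the interval containing the toggle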

In all cases, DNS queries from the devices clearly indicate the presence of these devices in a user’s home. Indeed, even when the data traffic itself is encrypted, other traffic sent in the clear, such as DNS lookups, may reveal not only the presence of certain devices in your home, but likely also information about both usage and activity patterns.
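
As a rough illustration, here is a sketch of how an observer on the home network could identify devices from DNS lookups alone. It assumes the scapy library and capture privileges on the network, and the domain-to-device hints are illustrative examples rather than a real fingerprint database.

    # A minimal sketch (assuming scapy is installed and the script runs with
    # capture privileges on the home router or an inline tap) of identifying
    # devices from their cleartext DNS lookups. Runs until interrupted.

    from scapy.all import DNSQR, sniff

    DEVICE_HINTS = {            # hypothetical lookup patterns for a few devices
        b"nest.com": "Nest Thermostat",
        b"xbcs.net": "Belkin WeMo",
        b"smartthings.com": "SmartThings hub",
    }

    def classify(pkt):
        if pkt.haslayer(DNSQR):
            qname = pkt[DNSQR].qname        # e.g. b"frontdoor.nest.com."
            for suffix, device in DEVICE_HINTS.items():
                if suffix in qname:
                    print(f"{pkt.src} looks like a {device} (asked for {qname.decode()})")

    sniff(filter="udp port 53", prn=classify, store=False)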

Of course, there is also the concern about how these companies may use and share the data that they collect, even if they manage to collect it securely. And, beyond the obvious and more conventional privacy and security risks, there are also potential physical risks to infrastructure that may result from these privacy and security problems.

Old problems, new constraints. Many of the security and privacy problems that we see with IoT devices sound familiar, but they arise in a new context that presents unique challenges:

  • Fundamentally insecure. Manufacturers of consumer products have little interest in releasing software patches and may even design the device without any interfaces for patching the software in the first place. There are various examples of insecure devices that ordinary users may connect to the network without any attempts to secure them (or any means of doing so). Occasionally, these insecure devices can serve as “stepping stones” into the home, allowing attackers to mount more extensive attacks. A recent study identified more than 500,000 insecure, publicly accessible embedded networked devices.
  • Diverse. Consumer IoT settings bring a diversity of devices, manufacturers, firmware versions, and so forth. This diversity can make it difficult for a consumer (or the consumer’s ISP) to answer even simple questions such as exhaustively identifying the set of devices that are connected to the network in the first place, let alone detecting behavior or network traffic that might reveal an anomaly, compromise, or attack.
  • Constrained. Many of the devices in an IoT network are severely resource-constrained: the devices may have limited processing or storage capabilities, or even limited battery life, and they often lack a screen or intuitive user interface. In some cases, a user may not even be able to log into the device.  

Complicating matters, a user has limited control over the IoT device, particularly as compared to a general-purpose computing platform. When we connect a general purpose device to a network, we typically have at least a modicum of choice and control about what software we run (e.g., browser, operating system), and perhaps some more visibility or control into how that device interacts with other devices on the network and on the public Internet. When we connect a camera, thermostat, or sensor to our network, the hardware and software are much more tightly integrated, and our visibility into and control over that device is much more limited. At this point, we have trouble, for example, even knowing that a device might be sending private data to the Internet, let alone being able to stop it.

Compounding all of these problems, of course, is the access a consumer gives an untrusted IoT device to other data or devices on the home network, simply by connecting it to the network—effectively placing it behind the firewall and giving it full network access, including in many cases the shared key for the Wi-Fi network.

A Way Forward

Ultimately, multiple stakeholders may be involved with ensuring the security of a networked IoT device, including consumers, manufacturers, and Internet service providers. Many questions remain unanswered about who is able to secure these devices (and who is responsible for doing so), but we should start the discussion about how to improve security for networks with IoT devices.

This discussion will include both policy aspects (including who bears the ultimate responsibility for device insecurity, whether devices need to adopt standard practices or behavior, and for how long their manufacturers should continue to support them) and technical aspects (including how we design the network to better monitor and control the behavior of these often-insecure devices).

Devices should be more transparent. The first step towards improving security and privacy for IoT should be to work with manufacturers to improve the transparency of these IoT devices, so that consumers (and possibly ISPs) have more visibility into what software the devices are running, and what traffic they are sending and receiving. This, of course, is a Herculean effort, given the vast quantity and diversity of device manufacturers; an alternative would be to infer what devices are connected to the network based on their traffic behavior, but doing so in a way that is comprehensive, accurate, and reasonably informative seems extremely difficult.

Instead, some IoT device manufacturers might standardize on a manifest protocol that announces basic information, such as the device type, available sensors, firmware version, the set of destinations the device expects to communicate with (and whether the traffic is encrypted), and so forth. (Of course, such a manifest poses its own security risks.)
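
To make this concrete, here is one possible shape such a manifest could take, along with the kind of check a home router might run against it. The field names are hypothetical; no such standard exists today.

    # One possible shape for the device "manifest" described above; the field
    # names are hypothetical, not an existing standard. A home router could
    # compare observed flows against expected_destinations and alert on
    # anything unexpected.

    MANIFEST = {
        "device_type": "thermostat",
        "manufacturer": "ExampleCo",
        "firmware_version": "2.4.1",
        "sensors": ["temperature", "humidity", "motion"],
        "expected_destinations": [
            {"host": "api.example-thermostat.com", "port": 443, "encrypted": True},
            {"host": "time.example-thermostat.com", "port": 123, "encrypted": False},
        ],
    }

    def flow_is_expected(host, port):
        return any(d["host"] == host and d["port"] == port
                   for d in MANIFEST["expected_destinations"])

    print(flow_is_expected("api.example-thermostat.com", 443))   # True
    print(flow_is_expected("203.0.113.7", 25))                   # False: worth flagging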

Network infrastructure can play a role. Given such basic information, anomalous behavior that is suggestive of a compromise or data leak would be more evident to network intrusion detection systems and firewalls—in other words, we could bring more of our standard network security tools to bear on these devices, once we have a way to identify what the devices are and what their expected behavior should be. Such a manifest might also serve as a concise (and machine readable!) privacy statement; a concise manifest might be one way for consumers to evaluate their comfort with a certain device, even though it may be far from a complete privacy statement.

Armed with such basic information about the devices on the network, smart network switches would have a much easier time implementing network security policies. For example, a user could specify that the smart camera should never be able to communicate with the public Internet, or that the thermostat should only be able to interact with the window locks if someone is present.
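
Here is a sketch of how those two example policies might be written down and compiled into switch rules. The rule format and MAC addresses are invented for illustration; a real deployment would translate the output into OpenFlow entries or calls to an SDN controller.

    # Illustrative only: compile simple home-network policies into
    # OpenFlow-style match/action rules. MAC addresses and the rule format
    # are made up for the sketch.

    CAMERA_MAC = "aa:bb:cc:00:00:01"
    THERMOSTAT_MAC = "aa:bb:cc:00:00:02"
    WINDOW_LOCK_MAC = "aa:bb:cc:00:00:03"

    def block_internet(device_mac):
        """Drop traffic from this device that is not destined for the local subnet."""
        return {"match": {"dl_src": device_mac, "nw_dst_not": "192.168.1.0/24"},
                "action": "drop", "priority": 100}

    def allow_if(device_mac, peer_mac, condition):
        """Permit device-to-device traffic only while a predicate holds (e.g. occupancy)."""
        action = "normal" if condition() else "drop"
        return {"match": {"dl_src": device_mac, "dl_dst": peer_mac},
                "action": action, "priority": 200}

    someone_home = lambda: True   # would come from an occupancy sensor in practice

    rules = [
        block_internet(CAMERA_MAC),
        allow_if(THERMOSTAT_MAC, WINDOW_LOCK_MAC, someone_home),
    ]
    for rule in rules:
        print(rule)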

Current network switches don’t provide easy mechanisms for consumers to either express or implement these types of policies. Advances in Software-Defined Networking (SDN), in software switches such as Open vSwitch, may make it possible to implement policies that resolve contention for shared resources and conflicts, or to isolate devices on the network from one another. But even if that is a reasonable engineering direction, this technology will only take us part of the way: users will ultimately need far better interfaces both to monitor network activity and to express policies about how these devices should behave and exchange traffic.

Update [20 Jan 2016]: After significant press coverage, Nest has contacted the media to clarify that the information being leaked in cleartext was not the zip code of the thermostat, but merely the zip code that the user enters when configuring the device. (Clarifying statement here.) Of course, when would a user ever enter a zip code other than that of their home, where the thermostat is located?


The Web Privacy Problem is a Transparency Problem: Introducing the OpenWPM measurement tool

In a previous blog post I explored the success of our study, The Web Never Forgets, in having a positive impact on web privacy. To ensure a lasting impact, we’ve been doing monthly, automated, 1-million-site measurements of tracking and privacy. Soon we’ll be releasing these datasets and our findings. But in this post I’d like to introduce OpenWPM, the web measurement platform we’ve built for this purpose. OpenWPM has been quickly gaining adoption — it has already been used by at least 6 other research groups, as well as by journalists, regulators, and students for class projects. In this post, I’ll explain why we built OpenWPM, describe a previously unmeasured type of tracking we found using the tool, and show you how you can participate and contribute to this community effort.

This post is based on a talk I gave at the FTC’s PrivacyCon. You can watch the video online here.

Why monthly, large-scale measurements are necessary

In my previous post, I showed how measurements from academic studies can help improve online privacy, but I also pointed out how they can fall short. Measurement results often have an immediate impact on online privacy. Unless that impact leads to a technical, policy, or legal solution, the impact will taper off over time as the measurements age.

Technical solutions do not always exist for privacy violations. I discussed how canvas fingerprinting can’t be prevented without sacrificing usability in my previous blog post, but there are others as well. For example, it has proven difficult to find a satisfactory solution to the privacy concerns surrounding WebRTC’s access to local IPs. This is also highlighted in the unofficial W3C draft on Fingerprinting Guidance for Web Specification Authors, which states: “elimination of the capability of browser fingerprinting by a determined adversary through solely technical means that are widely deployed is implausible.”

It seems inevitable that measurement results will go out of date, for two reasons. Most obviously, there is a high engineering cost to running privacy studies. Equally important is the fact that academic papers in this area are published as much for their methodological novelty as for their measurement results. Updating the results of an earlier study is unlikely to lead to a publication, which takes away the incentive to do it at all. [1]

OpenWPM: our platform for automated, large-scale web privacy measurements

We built OpenWPM (Github, technical report), a generic platform for online tracking measurement. It provides the stability and instrumentation necessary to run many online privacy studies. Our goal in developing OpenWPM is to decrease the initial engineering cost of studies and make running a measurement as effortless as possible. It has already been used in several published studies from multiple institutions to detect and reverse engineer online tracking.

OpenWPM also makes it possible to run large-scale measurements with Firefox, a real consumer browser [2]. Large scale measurement lets us compare the privacy practices of the most popular sites to those in the long tail. This is especially important when observing the use of a tracking technique highlighted in a measurement study. For example, we can check if it’s removed from popular sites but added to less popular sites.

Transparency through measurement, on 1 million sites

We are using OpenWPM to run the Princeton Transparency Census, a monthly web-scale measurement of tracking techniques and privacy issues, comprising 1 million sites. With it, we will be able to detect and measure many of the known privacy violations reported by researchers so far: the use of stateful tracking mechanisms, browser fingerprinting, cookie synchronization, and more.

During the measurements, we’ll collect data in three categories: (1) network traffic — all HTTP requests and response headers; (2) client-side state — cookies, Flash cookies, etc.; and (3) execution traces — we trap and record targeted JavaScript API calls that are known to be used for tracking. In addition to releasing all of the raw data collected during the census, we’ll release the results of our own automated analysis.

Alongside the 1 million site measurement, we are also running smaller, targeted measurements with different browser configurations. Examples include crawling deeper into the site or browsing with a privacy extension, such as Ghostery or AdBlock Plus. These smaller crawls will provide additional insight into the privacy threats faced by real users.

Detecting WebRTC local IP discovery

As a case study of the ease of introducing a new measurement into the infrastructure, I’ll walk through the steps I took to measure scripts using WebRTC to discover a machine’s local IP address [3]. For machines behind a home router, this technique may reveal an IP of the form 192.168.1.*. Users of corporate or university networks may return a unique local IP address from within that organization’s IP range.

A user’s local IP address adds additional information to a browser fingerprint. For example, it can be used as a way to differentiate multiple users behind a NAT without requiring browser state. The amount of identifying information it provides for the average user hasn’t been studied. However, both Chrome and Firefox [4] have implemented opt-in solutions to prevent the technique. The first reported use of this technique in the wild that I could find was by a third party on nytimes.com in July 2015.

After examining a demo script, I decided to record all property accesses and all method calls of the RTCPeerConnection interface, the primary interface for WebRTC. The additional instrumentation necessary for this interface is just a single line of JavaScript in OpenWPM’s Firefox extension.
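
Once a crawl finishes, pulling those recorded calls back out is a short query. The sketch below assumes the crawl data lands in a SQLite database with a javascript table containing script_url and symbol columns; the exact schema may differ across OpenWPM versions, so treat this as illustrative.

    # A sketch of pulling WebRTC instrumentation results out of an OpenWPM
    # crawl database. The database file name, table, and columns are assumed
    # for illustration and may not match every OpenWPM version.

    import sqlite3

    conn = sqlite3.connect("crawl-data.sqlite")   # hypothetical output database
    rows = conn.execute(
        """SELECT DISTINCT script_url
           FROM javascript
           WHERE symbol LIKE 'RTCPeerConnection%'"""
    )
    for (script_url,) in rows:
        print(script_url)   # scripts that touched the WebRTC interface during the crawl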

A preliminary analysis [5] of a 50,000 site pilot measurement from October 2015 suggests that WebRTC local IP discovery is used on the homepages of over 100 sites, from over 20 distinct scripts. Only 1 of these scripts would be blocked by EasyList or EasyPrivacy.

How this can be useful for you

We envision several ways researchers and other members of the community can make use of OpenWPM and our measurements. I’ve listed them here from least involved to most involved. They can:

(1) Use our measurement data for their own tools. In my analysis of canvas fingerprinting I mentioned that Disconnect incorporated our research results into their blocklist. We want to make it easy for privacy tools to make use of the analysis we run, by releasing analysis data in a structured, machine readable way.

(2) Use the data collected during our measurements, and build their own analysis on top of it. We know we’ll never be able to take the human element out of these studies. Detection methodologies will change, new browser features will be released, and others will change. The depth of the Transparency measurements should make it easy to test new ideas, with the option of contributing them back to the regular crawls.

(3) Use OpenWPM to collect and release their own data. This is the model we see most web privacy researchers opting for, and a model we plan to use for most of our own studies. The platform can be used and tweaked as necessary for the individual study, and the measurement results and data can be shared publicly after the study is complete.

(4) Contribute to OpenWPM through pull requests. This is the deepest level of involvement we see. Other developers can write new features into the infrastructure for their own studies or to be run as part of our transparency measurements. Contributions here will benefit all users of OpenWPM.

Over the coming months we will release new blog posts and research results on the measurements I’ve discussed in this post. You can follow our progress here on Freedom to Tinker, on Twitter @s_englehardt, and on our Github repository.

 

[1] Notable exceptions include the studies of cookie respawning: 2009, 2011, 2011, 2014; and the statistics on stateful tracking use and third-party inclusion: 2009, 2012, 2012, 2012, 2015.

[2] Crawling with a real browser is important for two reasons: (1) it’s less likely to be detected as a bot, meaning we’re less likely to receive different treatment from a normal user, and (2) a real browser supports all the modern web features (e.g. WebRTC, HTML5 audio and video), plugins (e.g. Flash), and extensions (e.g. Ghostery, HTTPS Everywhere). Many of these additional features play a large role in the average user’s privacy online.

[3] There is an additional concern that WebRTC can be used to determine a VPN user’s actual IP address; however, this attack is distinct from the one described in this post.

[4] uBlock Origin also provides an option to prevent WebRTC local IP discovery on Firefox.

[5] We are in the process of running and verifying this analysis on our 1-million-site measurements, and will release an updated analysis with more details in the future.


Do Privacy Studies Help? A Retrospective Look at Canvas Fingerprinting

It seems like every month we hear of some new online privacy violation in the news, on topics such as fingerprinting or web tracking. Many of these news stories highlight academic research. What we don’t see is whether these studies and the subsequent news stories have any impact on privacy.

Our 2014 canvas fingerprinting measurement offers an opportunity for me to provide that insight, as we ended up receiving a surprising amount of press coverage after releasing the paper. In this post I’ll examine the reaction to the paper and explore which aspects contributed to its positive impact on privacy. I’ll also explore how we can use this knowledge when designing future studies to maximize their impact.

What we found in 2014

The 2014 measurement paper, The Web Never Forgets, is a collaboration with researchers at KU Leuven. In it, we measured the prevalence of three persistent tracking techniques online: canvas fingerprinting, cookie respawning, and cookie syncing [1]. They are persistent in that they are hard to control, hard to detect, and resilient to blocking or removal.

We found that 5% of the top 100,000 sites were utilizing the HTML5 Canvas API as a fingerprinting vector, and that the overwhelming majority of this use, 97%, came from the top two providers. The ability to use the HTML5 Canvas as a fingerprinting vector was first introduced in a 2012 paper by Mowery and Shacham. In the time between that 2012 paper and our 2014 measurement, approximately 20 sites and trackers started using canvas to fingerprint their visitors.

Several examples of the text written to the canvas for fingerprinting purposes. Each of these images would be converted to strings and then hashed to create an identifier.
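
To make that last step concrete, here is a minimal sketch of turning a canvas readout into an identifier. It assumes the tracker’s script has already drawn its text and exported the canvas (for example via toDataURL()); the data-URL string below is truncated and purely illustrative.

    # A minimal sketch of the final step of canvas fingerprinting, assuming the
    # canvas has already been drawn and exported as a data-URL string.

    import hashlib

    def canvas_fingerprint(data_url):
        """Hash the exported canvas image; small rendering differences across
        machines (fonts, anti-aliasing, GPU) yield different hashes."""
        return hashlib.sha256(data_url.encode("utf-8")).hexdigest()[:16]

    # Hypothetical, truncated readout; on a real site this comes from toDataURL().
    print(canvas_fingerprint("data:image/png;base64,iVBORw0KGgo..."))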

The reaction to our study

Shortly after we released our paper publicly, we saw a significant amount of press coverage, including articles on ProPublica, BBC, Der Spiegel, and more. The amount of coverage our paper received was a surprise for us; we weren’t the creators of the method, and we certainly weren’t the first to report on the fingerprintability of browsers [2]. Just two days later, AddThis stopped using canvas fingerprinting. The second largest provider at the time, Ligatus, also stopped using the technique.

As can be expected, we saw many users take their frustrations to Twitter: some wondered why publishers would fingerprint them, others complained about AddThis or expressed their dislike for canvas fingerprinting in general, and one user even questioned why Mozilla does not protect against canvas fingerprinting in Firefox.

However, a general technical solution that preserves the API’s usefulness and usability doesn’t exist [3]. Instead, the best solutions are either blocking the feature or blocking the trackers that use it.

The developer community responded by releasing canvas blocking extensions for Firefox and Chrome, tools which are used by over 18,000 users in total. AdBlockPlus and Disconnect both commented that the large trackers are already on their block lists, with Disconnect mentioning that the additional, lesser-known parties from our study would be added to their lists.

Why was our study so impactful?

Much of the online privacy problem is actually a transparency problem. By default, users have very little information on the privacy practices of the websites they visit, and of the trackers included on those sites. Without this information users are unable to differentiate between sites which take steps to protect their privacy and sites which don’t. This leads to less of an incentive for site owners to protect the privacy of their users, as online privacy often comes at the expense of additional ad revenue or third-party features.

With our study, we were not only able to remove this information asymmetry [4], but were able to do so in a way that was relatable to users. The visual representation of canvas fingerprinting proved particularly helpful in removing that asymmetry of information; it was very intuitive to see how the shapes drawn to a canvas could produce a unique image. The ProPublica article even included a demo where users could see their fingerprint built in real time.

While writing the paper, we made it a point to include not only the trackers responsible for fingerprinting, but also the sites on which the fingerprinting was taking place. Instead of reading that tracker A was responsible for fingerprinting, users could understand that it occurs when they visit publishers X, Y, and Z. If a user is frustrated by a technique, and is only familiar with the tracker responsible, there isn’t much they can do. By knowing the publishers on which the technique is used, they can voice their frustrations or choose to visit alternative sites. Publishers, which have an interest in keeping users, will then have an incentive to change their practices.

The technique wasn’t only news to users; even some site owners were unaware that it was being used on their sites. ProPublica updated their original story with a message from YouPorn stating, “[the website was] completely unaware that AddThis contained a tracking software…”, and had since removed it. This shows that measurement work can even help remove the information asymmetry between trackers and the sites upon which they track.

How are things now?

In a re-run of the measurements in January 2016 [5], I’ve observed that the number of distinct trackers utilizing canvas fingerprinting has more than doubled since our 2014 measurement. While the overall number of publisher sites on which the tracking occurs is still below that of our previous measurement, the use of the technique has at least partially revived since AddThis and Ligatus stopped the practice.

This made me curious if we see similar trends for other tracking techniques. In our 2014 paper we also studied cookie respawning [6]. This technique was well studied in the past, both in 2009 and 2011, making it a good candidate to analyze the longitudinal effects of measurement.  As is the case with our measurement, these studies also received a bit of press coverage when released.

The 2009 study, which found HTTP cookie respawning on 6 of the top 100 sites, resulted in a $2.4 million settlement. The 2011 follow-up study found that the use of respawning decreased to just 2 sites in the top 100, and likewise resulted in a $500 thousand settlement. In 2014 we observed respawning on 7 of the top 100 sites, however none of these sites or trackers were US-based entities. This suggests that lawsuits can have an impact, but that impact may be limited by the global nature of the web.

What we’ve learned

Providing transparency into privacy violations online has the potential for huge impact. We saw users unhappy with the trackers that use canvas fingerprinting, with the sites that include those trackers, and even with the browsers they use to visit those sites. It is important that studies visit a large number of sites, and list those on which the privacy violation occurs.

The pressure of transparency affects the larger players more than the long tail. A tracker which is present on a large number of sites, or on sites which receive more traffic, is more likely to be the focus of news articles or the subject of lawsuits. Indeed, our 2016 measurements support this: we’ve seen a large increase in the number of parties involved, but the increase is limited to parties with a much smaller presence.

In the absence of a lawsuit, policy change, or technical solution, we see that canvas fingerprinting use is beginning to grow again. Without constant monitoring and transparency, the level of privacy violations can easily creep back to where it was. A single, well-connected tracker can re-introduce a tracking technique to a large number of first parties.

The developer community will help; we just need to provide them with the data they need. Our detection methodology served as the foundation for blocking tools, which intercept the same calls we used for detection. The script lists we included in our paper and on our website were incorporated into current blocklists.

In a follow-up post, I’ll discuss the work we’re doing to make regular, large-scale measurements of web tracking a reality. I’ll show how the tools we’ve built make it possible to run automated, million-site crawls every month, and I’ll introduce some new results we’re planning to release.

 

[1] The paper’s website provides a short description of each of these techniques.

[2] See: the EFF’s Panopticlick, and academic papers Cookieless Monster and FPDetective.

[3] For example, adding noise to canvas readouts has the potential to cause problems for non-tracking use cases and can still be defeated by a determined tracker. The Tor Browser’s solution of prompting the user on certain canvas calls does work, however it requires a user’s understanding that the technique can be used for tracking and provides for a less than optimal user experience.

[4] For a nice discussion of information asymmetry and the web: Privacy and the Market for Lemons, or How Websites Are Like Used Cars

[5] These measurements were run using the canvas instrumentation portion of OpenWPM.

[6] For a detailed description of cookie respawning, I suggest reading through Ashkan Soltani’s blog post on the subject.

Thanks to Arvind Narayanan for his helpful comments.


How Will Consumers Use Faster Internet Speeds?

This week saw an exciting announcement about the experimental deployment of DOCSIS 3.1 in limited markets in the United States, including Philadelphia, Atlanta, and parts of northern California, which will bring gigabit-per-second Internet speeds to many homes over the existing cable infrastructure. The potential for gigabit speeds over existing cable networks brings hope that more consumers will ultimately enjoy much higher-speed Internet connectivity, both in the United States and elsewhere.

This development is also a pointed response to the not-so-implicit pressure from the Federal Communications Commission to deploy higher-speed Internet connectivity, which includes other developments such as the redefinition of broadband to a downstream throughput rate of 25 megabits per second, up from the previous (and somewhat laughable) definition of 4 Mbps. Many commissioners have also stated their intention to raise the broadband threshold to a downstream throughput of 100 Mbps, a further indication that ISPs will see increasing pressure to offer higher-speed links to home networks. Yet the National Cable and Telecommunications Association has claimed in an FCC filing that such speeds are far more than a “typical” broadband user would require.

These developments and posturing raise the question: How will consumers change their behavior in response to faster downstream throughput from their Internet service providers?

Ph.D. student Sarthak Grover, postdoc Roya Ensafi, and I set out to study this question with a cohort of about 6,000 Comcast subscribers in Salt Lake City, Utah, from October through December 2014. The study involved a randomized controlled trial, a method commonly used in scientific experiments where a group of users is randomly divided into a control group (whose users experience no change in conditions) and a treatment group (whose users are subject to a change in conditions). Assuming the cohort is large enough and represents a cross-section of the demographic of interest, and that the users for the treatment group are selected at random, it is possible to observe differences between the two groups’ outcomes and conclude how the treatment affects the outcome.

In the case of this specific study, the control group consisted of about 5,000 Comcast subscribers who were paying for (and receiving) 105 Mbps downstream throughput; the treatment group, on the other hand, comprised about 1,500 Comcast subscribers who were paying for 105 Mbps but who, at the beginning of the study period, were silently upgraded to 250 Mbps. In other words, users in the treatment group were receiving faster Internet service but were unaware of the faster downstream throughput of their connections. We explored how this treatment affected user behavior and made a few surprising discoveries:

“Moderate” users tend to adjust their behavior more than the “heavy” users. We expected that subscribers who downloaded the most data in the 250 Mbps service tier would be the ones causing the largest difference in mean demand between the two groups of users (previous studies have observed this phenomenon, and we do observe this behavior for the most aggressive users). To our surprise, however, the median subscribers in the two groups exhibited much more significant differences in traffic demand, particularly at peak times. Notably, the 40% of subscribers with the lowest peak demands more than double their daily peak traffic demand in response to service-tier upgrades (i.e., in the treatment group).

With the exception of the most aggressive peak-time subscribers, the subscribers who are below the 40th percentile in terms of peak demands increase their peak demand more than users who initially had higher peak demands.

This result suggests a surprising trend: it’s not the aggressive data hogs who account for most of the increased use in response to faster speeds, but rather the “typical” Internet user, who tends to use the Internet more as a result of the faster speeds. Our dataset does not contain application information, so it is difficult to say what, exactly, is responsible for the higher data usage of the median user. Yet the result uncovers an oft-forgotten phenomenon of faster links: even existing applications that do not need to “max out” the link capacity (e.g., Web browsing, and even most video streaming) can benefit from a higher-capacity link, simply because they will see better performance overall (e.g., faster load times and more resilience to packet loss, particularly when multiple parallel connections are in use). It might just be that the typical user is using the Internet more with the faster connection simply because the experience is better, not because they’re interested in filling the link to capacity (at least not yet!).
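
For readers who want the shape of that analysis, here is a simplified sketch. It assumes per-subscriber usage is available as byte counts in fifteen-minute intervals, which matches the granularity of our data; the subscriber IDs and numbers themselves are made up.

    # A simplified sketch of the comparison described above. Assumption:
    # per-subscriber usage is available as byte counts per fifteen-minute
    # interval; the subscribers and values below are invented. The idea is to
    # compute each subscriber's daily peak, then compare the peak-demand
    # distributions of the control (105 Mbps) and treatment (250 Mbps) groups.

    from statistics import quantiles

    def daily_peak(intervals):
        """Peak fifteen-minute volume (bytes) for one subscriber-day."""
        return max(intervals)

    def group_percentiles(usage_by_subscriber, n=10):
        peaks = [daily_peak(v) for v in usage_by_subscriber.values()]
        return quantiles(peaks, n=n)   # deciles of the peak-demand distribution

    control = {"c1": [1e6, 3e6, 2e6], "c2": [2e6, 8e6, 4e6], "c3": [1e6, 2e6, 2e6]}
    treatment = {"t1": [2e6, 7e6, 5e6], "t2": [4e6, 20e6, 6e6], "t3": [2e6, 5e6, 3e6]}

    print("control deciles:  ", group_percentiles(control))
    print("treatment deciles:", group_percentiles(treatment))

Comparing the two distributions percentile by percentile, rather than only comparing means, is what surfaces the behavior of the "moderate" users described above.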

Users may use faster speeds for shorter periods of time, not always during “prime time”. There has been much ado about prime-time video streaming usage, and we most certainly see those effects in our data. To our surprise, the average usage per subscriber during prime-time hours was roughly the same between the treatment and control groups, yet outside of prime time the difference was much more pronounced, with the treatment group exhibiting 25% more usage per subscriber than the control group during non-prime-time weekday hours. We also observe that the peak-to-mean ratios for usage in the treatment group are significantly higher than they are in the control group, indicating that users with faster speeds may periodically (and for short times) take advantage of the significantly higher speeds, even though they are not sustaining a high rate that exhausts the higher capacity.

These results are interesting for last-mile Internet service providers because they suggest that the speeds at the edge may not currently be the limiting factor for user traffic demand. Specifically, the changes in peak traffic outside of prime-time hours also suggest that even the (relatively) lower-speed connections (e.g., 105 Mbps) may be sufficient to satisfy the demands of users during prime-time hours. Of course, the constraints on prime-time demand (much of which is largely streaming) likely result from other factors, including both available content and perhaps the well-known phenomenon of congestion in the middle of the network, rather than in the last mile. All of this points to the increasing importance of resolving the performance issues that we see as a result of interconnection. In the best case, faster Internet service moves the bottleneck from the last mile to elsewhere in the network (e.g., interconnection points, long-haul transit links); but, in reality, it seems that the bottlenecks are already there, and we should focus on mitigating those points of congestion.

Further reading and study. You’ll be able to read more about our study in the following paper: A Case Study of Traffic Demand Response to Broadband Service-Plan Upgrades. S. Grover, R. Ensafi, N. Feamster. Passive and Active Measurement Conference (PAM). Heraklion, Crete, Greece. March 2016. (We will post an update when the final paper is published in early 2016.) There is plenty of room for follow-up work, of course; notably, the data we had access to did not have information about application usage, and only reflected byte-level usage at fifteen-minute intervals. Future studies could (and should) continue to study the effects of higher-speed links by exploring how the usage of specific applications (e.g., streaming video, file sharing, Web browsing) changes in response to higher downstream throughput.