February 12, 2016


How Does Zero-Rating Affect Mobile Data Usage?

On Monday, the Telecom Regulatory Authority of India (TRAI) released a decision that effectively bans “zero-rated” Internet services in the country. While the notion of zero-rating might be somewhat new to many readers in the United States, the practice is common in many developing economies. Essentially, it is the practice by which a carrier creates an arrangement whereby its customers are not charged normal data rates for accessing certain content.

High-profile instances of zero-rating include Facebook’s “Free Basics” (formerly “Internet.org”) and Wikipedia Zero. But many readers might be surprised to learn that the practice is impressively widespread. Although comprehensive documentation is hard to come by, experience and conventional wisdom affirm that mobile carriers in regions across the world regularly partner with content providers to offer services that are effectively free to the consumer, and these offerings tend to change frequently.

I experienced zero-rating first-hand on a trip to South Africa last summer. While on a research trip there, I learned that Cell C, a mobile telecom provider, had partnered with Internet.org to offer its subscribers free access to a limited set of sites through the Internet.org mobile application. I immediately wondered whether a citizen’s socioeconomic class could affect Internet usage—and, as a consequence, their access to information.

Zero-rating evokes a wide range of (strong) opinions (emphasis on “opinion”). Mark Zuckerberg would have us believe that Free Basics is a way to bring the Internet to the next billion people, where the alternative might be that this demographic might not have access to the Internet at all. This, of course, presumes that we equate “access to Facebook” with “access to the Internet”—something which at least one study has shown can occur (and is perhaps even more cause for concern). Others have argued that zero-rated services violate network neutrality principles and could also result in the creation of walled gardens where citizens’ Internet access might be brokered by a few large and powerful organizations.

And yet, while the arguments over zero-rating are loud, emotional, and increasingly high-stakes, the opinions on either side have yet to be supported by any actual data.

We Must Bring Data to this Debate

Unfortunately, there is essentially no data concerning the central question of how users adjust their behavior in response to mobile data pricing practices. Erik Stallman’s eloquent post today on the TRAI ruling and the Center for Democracy and Technology’s recent white paper on zero-rating both lament the lack of data on either side of the debate.

I want to change that. To this end, as Internet measurement researchers and policy-interested computer scientists, we are starting to bring some data to this debate—although we still have a long way to go.

As luck would have it, we had already been gathering some data that shed light on this question. In 2013, we developed a mobile performance measurement application, My Speed Test, which not only performs speed test measurements of a user’s mobile network but also gathers information about a user’s application usage, and whether that usage occurs on the cellular data network or on a Wi-Fi network. Over this three-year period, My Speed Test has been installed on thousands of phones in countries around the world. In addition to a significant base of installations in the United States, we had several hundred users running the application in South Africa, owing to a study of mobile network performance that we performed in the country a couple of years ago.

This deployment gave us a unique opportunity to study the application usage patterns of a group of users, across a wide range of carriers and countries, over three years. It allowed us to compare usage patterns in the United States (where many users are on post-paid plans) with those in South Africa (where most users are on pre-paid, pay-as-you-go plans). It also allowed us to look at how users responded to zero-rated services in South Africa. A superstar undergraduate student, Ava Chen, led this research in collaboration with Enrico Calandro at Research ICT Africa and Sarthak Grover, a Ph.D. student here at Princeton. I briefly summarize some of Ava’s results below.

The results of this study are preliminary. More widespread deployment of My Speed Test would ultimately allow us to gather more data and draw more conclusive results. We could use your help spreading the word about our work and about My Speed Test.

Effects of Zero-Rating on Usage

We explored the extent to which the zero-rating offerings of various South African carriers affected usage patterns for different applications. During our data-collection period, these carriers offered several zero-rating promotions:

  • From November 19, 2014 to August 31, 2015, Cell C zero-rated WhatsApp. Since September 1, 2015, Cell C has instead offered a bundle where, for a fee of ZAR 5 (about $0.30), users can use up to 1 GB on WhatsApp for 30 days, including voice calls.
  • On July 1, 2015, Cell C began zero-rating Facebook’s Free Basics service.
  • On two separate occasions—May 1–July 31, 2014 and August 1, 2014–February 13, 2015—MTN zero-rated Twitter.

We aimed to determine whether users adjusted their mobile behavior in response to these various pricing promotions. We found the following trends:

Cell C users increased WhatsApp usage by more than a factor of three on both cellular and Wi-Fi. The average Cell C user increased monthly WhatsApp usage on the cellular network by a factor of three, from about 7 MB to about 22 MB per month. Interestingly, usage increased not only on the cellular network but also on Wi-Fi networks, where it grew by more than a factor of seven, to about 17 MB per month. Even so, users still used WhatsApp more on the cellular data network than on Wi-Fi.

WhatsApp usage on Cell C increased on both Wi-Fi and cellular in response to a zero-rating offering from the carrier.

Twitter usage on MTN increased in response to zero-rating. We limited our analysis of MTN’s zero-rating practices to 2014, because we did not have enough data to draw conclusive results from the second period. Our analysis of the 2014 period, however, found that aside from the holiday season (when Twitter traffic is known to spike due to shopping promotions), the second most significant spike in usage on MTN occurred from May through July 2014, when the zero-rating promotion was in effect. During this period the average Twitter user on MTN exchanged as much as 40 MB per day on Twitter, whereas usage outside of the promotional period was typically closer to 10 MB per day.
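
For readers curious about the mechanics of this kind of analysis, a minimal before-and-after comparison might look like the Python sketch below. The input file, schema, and column names are purely illustrative, not the actual My Speed Test data format.

```python
import pandas as pd

PROMO_START = pd.Timestamp("2014-11-19")  # Cell C begins zero-rating WhatsApp

# usage.csv (hypothetical): one row per user, app, day, and network type
df = pd.read_csv("usage.csv", parse_dates=["date"])
whatsapp = df[(df["app"] == "WhatsApp") & (df["carrier"] == "Cell C")].copy()
whatsapp["month"] = whatsapp["date"].dt.to_period("M")

# Total usage per user per calendar month, split by cellular vs. Wi-Fi
monthly = (whatsapp.groupby(["user_id", "network", "month"])["bytes"]
           .sum().reset_index())
monthly["phase"] = monthly["month"].apply(
    lambda m: "after" if m.to_timestamp() >= PROMO_START else "before")

# Mean monthly usage per user (in MB), before vs. after the promotion
summary = (monthly.groupby(["network", "phase"])["bytes"]
           .mean().div(1e6).round(1))
print(summary)
```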

Other Responses to Mobile Data Pricing

Mobile users in the United States use more mobile data, on both cellular and Wi-Fi. Mobile users in both the United States and South Africa used YouTube and Facebook extensively; other applications were more country-specific. We noticed some interesting trends. First, when looking at the total data usage for these applications in each country, the median user in the United States tended to use more data per month, not only on the cellular data network but also on Wi-Fi networks. It is understandable that South African users would be far more conservative with their use of cellular data; previous studies have noted this effect. It is remarkable, however, that these users were also more conservative with their data usage on Wi-Fi networks. One explanation is that even Wi-Fi and wired Internet connections in South Africa are still considerably more expensive (and more of a luxury good) than they are in the United States. In contrast, users in the United States not only used more data in general, but often used more data on average on cellular networks than on Wi-Fi, perhaps because users in the US were much less sensitive to the cost of mobile data than those in South Africa.

Mobile users in South Africa exchanged significantly more Facebook traffic than streaming video traffic—even when on cellular data plans. Given the high cost of cellular data in South Africa, we expected that users would be conservative with mobile data usage in general. Although our findings mostly confirmed this, Facebook was a notable exception: Not only did the typical user consume significantly more traffic using Facebook than with streaming video, users also exchanged more Facebook traffic over the cellular network than they did on Wi-Fi networks. This behavior suggests that Facebook usage is dominant to the extent that users appear to be more willing to pay for relatively expensive mobile data to use it than they are for other applications.

Summary and Request for Help

Our preliminary evidence suggests that zero-rated pricing structures may have an effect on usage of an application—not only on the cellular network where pricing instruments are implemented, but also in general. However, we need more data to draw stronger conclusions. We are actively seeking collaborations to help us deploy My Speed Test on a larger scale, to facilitate a larger-scale analysis.

To this end, we are excited to announce a collaboration with the Alliance for an Affordable Internet (A4AI) to use My Speed Test to study these effects in other countries on a larger scale. We are interested in gathering more widespread longitudinal data on this topic, through both organic installations of the application and studies with targeted recruitment.

Please let me know if you would like to help us in this important effort!


The Princeton Bitcoin textbook is now freely available

The first complete draft of the Princeton Bitcoin textbook is now freely available. We’re very happy with how the book turned out: it’s comprehensive, at over 300 pages, but has a conversational style that keeps it readable.

If you’re looking to truly understand how Bitcoin works at a technical level and have a basic familiarity with computer science and programming, this book is for you. Researchers and advanced students will find the book useful as well — starting around Chapter 5, most chapters have novel intellectual contributions.

Princeton University Press is publishing the official, peer-reviewed, polished, and professionally done version of this book. It will be out this summer. If you’d like to be notified when it comes out, you should sign up here.

Several courses have already used an earlier draft of the book in their classes, including Stanford’s CS 251. If you’re an instructor looking to use the book in your class, we welcome you to do so, and we’d be happy to share additional teaching materials with you.

Online course and supplementary materials. The Coursera course accompanying this book had 30,000 students in its first version, and it was a success based on engagement and end-of-course feedback. 

We plan to offer a version with some improvements shortly. Specifically, we’ll be integrating the programming assignments developed for the Stanford course with our own, with Dan Boneh’s gracious permission. We also have tentative plans to record a lecture on Ethereum (we’ve added a discussion of Ethereum to the book in Chapter 10).

Finally, graduate students at Princeton have been leading the charge on several exciting research projects in this space. Watch this blog or my Twitter for updates.


Updating the Defend Trade Secrets Act?

Despite statements to the contrary by sponsors and supporters in April 2014, August 2015, and October 2015, backers of the Defend Trade Secrets Act (DTSA) now aver that “cyber espionage is not the primary focus” of the legislation. At last month’s Senate Judiciary Committee hearing, supporters instead offered two primary justifications for the DTSA: the rise of trade secret theft by rogue employees and the need for uniformity in trade secret law.

While a change in a policy argument is not inherently bad, the alteration of the core justification for a bill should be considered when assessing it. Perhaps the new position of DTSA proponents acknowledges the arguments by over 40 academics, including me, that the DTSA will not reduce cyberespionage. However, we also disputed these new rationales in that letter: the rogue employee is more than adequately addressed by existing trade secret law, and there will be less uniformity in trade secrecy under the DTSA because of the lack of federal jurisprudence.

The downsides — including weakened industry cybersecurity, abusive litigation against small entities, and resurrection of the anti-employee inevitable disclosure doctrine — remain. As such, I continue to oppose the DTSA as a giant trade secrecy policy experiment with little data to back up its benefits and much evidence of its costs.


Who Will Secure the Internet of Things?

Over the past several months, CITP-affiliated Ph.D. student Sarthak Grover and postdoctoral fellow Roya Ensafi have been investigating various security and privacy vulnerabilities of Internet of Things (IoT) devices in the home network, to get a better sense of the current state of the smart devices that many consumers have begun to install in their homes.

To explore this question, we purchased a collection of popular IoT devices (a Belkin WeMo Switch, the Nest Thermostat, an Ubi Smart Speaker, a Sharx security camera, a PixStar digital photo frame, and a SmartThings hub), connected them to a laboratory network at CITP, and monitored the traffic that these devices exchanged with the public Internet. We initially expected that end-to-end encryption might foil our attempts to monitor this traffic.

What We Found: Be Afraid!

Many devices fail to encrypt at least some of the traffic that they send and receive. Investigating the traffic to and from these devices turned out to be much easier than expected, as many of the devices exchanged personal or private information with servers on the Internet in the clear, completely unencrypted.

We presented a summary of our findings to the Federal Trade Commission last week at PrivacyCon. The video of Sarthak’s talk is available from the FTC website, as well as on YouTube. Some of the more striking findings include:

  • The Nest thermostat was revealing location information of the home and weather station, including the user’s zip code, in the clear.  (Note: Nest promptly fixed this bug after we notified them.)
  • The Ubi uses unencrypted HTTP to communicate information to its portal, including voice chats and sensor readings (sound, temperature, light, humidity). It also communicates with the user via unencrypted email. Needless to say, much of this information, including the sensor readings, could reveal critical information, such as whether the user is home, or even movements within the house.
  • The Sharx security camera transmits video over unencrypted FTP; if the server for the video archive is outside of the home, this traffic could also be intercepted by an eavesdropper.
  • All traffic to and from the PixStar photoframe was sent unencrypted, revealing many user interactions with the device.

Traffic capture from Nest Thermostat in Fall 2015, showing user zip code and other information in cleartext.

Traffic capture from Ubi, which sends sensor values and states in clear text.
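
As an illustration of just how easy this kind of passive inspection can be, here is a rough sketch (not our exact tooling) that flags cleartext HTTP requests from known devices in a packet capture. The capture file and MAC addresses are placeholders.

```python
from scapy.all import rdpcap, Ether, TCP, Raw

# Placeholder MAC addresses; a real run would use the devices' actual MACs.
IOT_DEVICES = {
    "aa:bb:cc:00:00:01": "Nest Thermostat",
    "aa:bb:cc:00:00:02": "Ubi Smart Speaker",
}

for pkt in rdpcap("iot_lab.pcap"):  # capture file from the lab network
    if not (pkt.haslayer(Ether) and pkt.haslayer(TCP) and pkt.haslayer(Raw)):
        continue
    device = IOT_DEVICES.get(pkt[Ether].src.lower())
    if device is None:
        continue
    payload = bytes(pkt[Raw].load)
    # Cleartext HTTP requests begin with a method token; TLS records do not.
    if payload.split(b" ", 1)[0] in (b"GET", b"POST", b"PUT", b"HEAD"):
        print(device, "sent unencrypted HTTP:", payload.split(b"\r\n", 1)[0])
```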

Some devices encrypt data traffic, but encryption may not be enough. A natural reaction to some of these findings might be that these devices should encrypt all traffic that they send and receive. Indeed, some devices we investigated (e.g., the Smartthings hub) already do so. Encryption may be a good starting point, but by itself, it appears to be insufficient for preserving user privacy.  For example, user interactions with these devices generate traffic signatures that reveal information, such as when power to an outlet has been switched on or off. It appears that simple traffic features such as traffic volume over time may be sufficient to reveal certain user activities.
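
Here is a sketch of that idea: binning a single device’s traffic volume over time and flagging bursts that could correspond to user actions. The device MAC, capture file, and threshold are illustrative only.

```python
from collections import Counter
from scapy.all import rdpcap, Ether

DEVICE_MAC = "aa:bb:cc:00:00:03"  # placeholder: e.g., the WeMo switch
BIN_SECONDS = 10

# Total bytes to or from the device in each 10-second bin
volume = Counter()
for pkt in rdpcap("iot_lab.pcap"):
    if pkt.haslayer(Ether) and DEVICE_MAC in (pkt[Ether].src.lower(),
                                              pkt[Ether].dst.lower()):
        volume[int(float(pkt.time)) // BIN_SECONDS] += len(pkt)

# Flag bins far above the device's median volume: likely user activity,
# visible even if every packet's payload is encrypted
baseline = sorted(volume.values())[len(volume) // 2]
for t, v in sorted(volume.items()):
    if v > 5 * baseline:  # crude burst threshold
        print(f"possible user activity around t={t * BIN_SECONDS}s ({v} bytes)")
```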

In all cases, DNS queries from the devices clearly indicate their presence in a user’s home. Indeed, even when the data traffic itself is encrypted, other traffic sent in the clear, such as DNS lookups, may reveal not only the presence of certain devices in the home, but likely also information about usage and activity patterns.
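
A short sketch of this observation: extracting each device’s DNS queries from a capture is enough to build an inventory of the home. Again, the capture file is a placeholder.

```python
from collections import defaultdict
from scapy.all import rdpcap, Ether, DNS, DNSQR

# Map each device (MAC address) to the set of hostnames it looks up
queries = defaultdict(set)
for pkt in rdpcap("iot_lab.pcap"):
    if (pkt.haslayer(Ether) and pkt.haslayer(DNS)
            and pkt[DNS].qr == 0 and pkt.haslayer(DNSQR)):
        queries[pkt[Ether].src].add(pkt[DNSQR].qname.decode().rstrip("."))

for mac, names in queries.items():
    # Hostnames under a vendor's domain identify the device by themselves
    print(mac, sorted(names)[:5])
```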

Of course, there is also the concern about how these companies may use and share the data that they collect, even if they manage to collect it securely. And, beyond the obvious and more conventional privacy and security risks, there are also potential physical risks to infrastructure that may result from these privacy and security problems.

Old problems, new constraints. Many of the security and privacy problems that we see with IoT devices sound familiar, but they arise in a new context that presents unique challenges:

  • Fundamentally insecure. Manufacturers of consumer products have little interest in releasing software patches and may even design the device without any interface for patching the software in the first place. There are various examples of insecure devices that ordinary users may connect to the network without any attempt to secure them (or any means of doing so). Occasionally, these insecure devices can serve as “stepping stones” into the home, allowing attackers to mount more extensive attacks. A recent study identified more than 500,000 insecure, publicly accessible embedded networked devices.
  • Diverse. Consumer IoT settings bring a diversity of devices, manufacturers, firmware versions, and so forth. This diversity can make it difficult for a consumer (or the consumer’s ISP) to answer even simple questions such as exhaustively identifying the set of devices that are connected to the network in the first place, let alone detecting behavior or network traffic that might reveal an anomaly, compromise, or attack.
  • Constrained. Many of the devices in an IoT network are severely resource-constrained: the devices may have limited processing or storage capabilities, or even limited battery life, and they often lack a screen or intuitive user interface. In some cases, a user may not even be able to log into the device.  

Complicating matters, a user has limited control over the IoT device, particularly as compared to a general-purpose computing platform. When we connect a general purpose device to a network, we typically have at least a modicum of choice and control about what software we run (e.g., browser, operating system), and perhaps some more visibility or control into how that device interacts with other devices on the network and on the public Internet. When we connect a camera, thermostat, or sensor to our network, the hardware and software are much more tightly integrated, and our visibility into and control over that device is much more limited. At this point, we have trouble, for example, even knowing that a device might be sending private data to the Internet, let alone being able to stop it.

Compounding all of these problems, of course, is the access a consumer gives an untrusted IoT device to other data or devices on the home network, simply by connecting it to the network—effectively placing it behind the firewall and giving it full network access, including in many cases the shared key for the Wi-Fi network.

A Way Forward

Ultimately, multiple stakeholders may be involved in ensuring the security of a networked IoT device, including consumers, manufacturers, and Internet service providers. Many questions remain unanswered concerning who is able to secure these devices (and who is responsible for doing so), but we should start the discussion about how to improve the security of networks with IoT devices.

This discussion will include both policy aspects (including who bears the ultimate responsibility for device insecurity, whether devices need to adopt standard practices or behavior, and for how long their manufacturers should continue to support them), as well as technical aspects (including how we design the network to better monitor and control the behavior of these often-insecure devices).

Devices should be more transparent. The first step towards improving security and privacy for IoT should be to work with manufacturers to improve the transparency of these IoT devices, so that consumers (and possibly ISPs) have more visibility into what software the devices are running and what traffic they are sending and receiving. This, of course, is a Herculean effort, given the vast quantity and diversity of device manufacturers; an alternative would be trying to infer what devices are connected to the network based on their traffic behavior, but doing so in a way that is comprehensive, accurate, and reasonably informative seems extremely difficult.

Instead, some IoT device manufacturers might standardize on a manifest protocol that announces basic information, such as the device type, available sensors, firmware version, the set of destinations the device expects to communicate with (and whether the traffic is encrypted), and so forth. (Of course, such a manifest poses its own security risks.)
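
To make the idea concrete, here is a purely hypothetical manifest, sketched as JSON via Python. No such standard exists today; every field shown is our suggestion, not an existing protocol.

```python
import json

# All fields below are illustrative suggestions, not an existing standard.
manifest = {
    "device_type": "thermostat",
    "manufacturer": "ExampleCorp",
    "firmware_version": "2.1.7",
    "sensors": ["temperature", "humidity", "motion"],
    "expected_destinations": [
        {"host": "api.example-thermostat.com", "port": 443, "encrypted": True},
        {"host": "pool.ntp.org", "port": 123, "encrypted": False},
    ],
}

# A home router or firewall could fetch this manifest from the device and
# alert on any traffic that falls outside the declared destinations.
print(json.dumps(manifest, indent=2))
```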

Network infrastructure can play a role. Given such basic information, anomalous behavior that is suggestive of a compromise or data leak would be more evident to network intrusion detection systems and firewalls—in other words, we could bring more of our standard network security tools to bear on these devices, once we have a way to identify what the devices are and what their expected behavior should be. Such a manifest might also serve as a concise (and machine readable!) privacy statement; a concise manifest might be one way for consumers to evaluate their comfort with a certain device, even though it may be far from a complete privacy statement.

Armed with such basic information about the devices on the network, smart network switches would have a much easier way to implement network security policies. For example, a user could specify that the smart camera should never be able to communicate with the public Internet, or that the thermostat should only be able to interact with the window locks if someone is present.

Current network switches don’t provide easy mechanisms for consumers to either express or implement these types of policies. Advances in Software-Defined Networking (SDN) and software switches such as Open vSwitch may make it possible to implement policies that resolve contention for shared resources and conflicts, or to isolate devices on the network from one another. But even if that is a reasonable engineering direction, this technology will only take us part of the way: users will ultimately need far better interfaces both to monitor network activity and to express policies about how these devices should behave and exchange traffic.
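
As a sketch of what enforcing one such policy might look like today, the following uses Open vSwitch’s ovs-ofctl tool to keep a camera’s traffic inside the home subnet. The bridge name and camera MAC are placeholders, and this is just one possible encoding of the policy, not a complete solution.

```python
import subprocess

CAMERA_MAC = "aa:bb:cc:00:00:04"  # placeholder for the smart camera

rules = [
    # Allow the camera to reach local (RFC 1918) addresses...
    f"priority=200,dl_type=0x0800,dl_src={CAMERA_MAC},"
    "nw_dst=192.168.0.0/16,actions=normal",
    # ...but drop any of its IP traffic bound for the public Internet.
    f"priority=100,dl_type=0x0800,dl_src={CAMERA_MAC},actions=drop",
]
for rule in rules:
    # Assumes an OVS bridge named br0 and root privileges
    subprocess.run(["ovs-ofctl", "add-flow", "br0", rule], check=True)
```

In principle, rules like these could be derived automatically from a device manifest of the kind sketched above.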

Update [20 Jan 2016]: After significant press coverage, Nest has contacted the media to clarify that the information being leaked in cleartext was not the zip code of the thermostat, but merely the zip code that the user enters when configuring the device. (Clarifying statement here.) Of course, when would a user ever enter a zip code other than that of their home, where the thermostat is located?


The Web Privacy Problem is a Transparency Problem: Introducing the OpenWPM measurement tool

In a previous blog post I explored the success of our study, The Web Never Forgets, in having a positive impact on web privacy. To ensure a lasting impact, we’ve been doing monthly, automated 1-million-site measurements of tracking and privacy. Soon we’ll be releasing these datasets and our findings. But in this post I’d like to introduce OpenWPM, the web measurement platform we’ve built for this purpose. OpenWPM has been quickly gaining adoption: it has already been used by at least 6 other research groups, as well as by journalists, regulators, and students for class projects. In this post, I’ll explain why we built OpenWPM, describe a previously unmeasured type of tracking we found using the tool, and show how you can participate and contribute to this community effort.

This post is based on a talk I gave at the FTC’s PrivacyCon. You can watch the video online here.

Why monthly, large-scale measurements are necessary

In my previous post, I showed how measurements from academic studies can help improve online privacy, but I also pointed out how they can fall short. Measurement results often have an immediate impact on online privacy. Unless that impact leads to a technical, policy, or legal solution, the impact will taper off over time as the measurements age.

Technical solutions do not always exist for privacy violations. I discussed how canvas fingerprinting can’t be prevented without sacrificing usability in my previous blog post, but there are others as well. For example, it has proven difficult to find a satisfactory solution to the privacy concerns surrounding WebRTC’s access to local IPs. This is also highlighted in the unofficial W3C draft on Fingerprinting Guidance for Web Specification Authors, which states: “elimination of the capability of browser fingerprinting by a determined adversary through solely technical means that are widely deployed is implausible.”

It seems inevitable that measurement results will go out of date, for two reasons. Most obviously, there is a high engineering cost to running privacy studies. Equally important is the fact that academic papers in this area are published as much for their methodological novelty as for their measurement results. Updating the results of an earlier study is unlikely to lead to a publication, which takes away the incentive to do it at all. [1]

OpenWPM: our platform for automated, large-scale web privacy measurements

We built OpenWPM (Github, technical report), a generic platform for online tracking measurement. It provides the stability and instrumentation necessary to run many online privacy studies. Our goal in developing OpenWPM is to decrease the initial engineering cost of studies and make running a measurement as effortless as possible. It has already been used in several published studies from multiple institutions to detect and reverse engineer online tracking.

OpenWPM also makes it possible to run large-scale measurements with Firefox, a real consumer browser [2]. Large scale measurement lets us compare the privacy practices of the most popular sites to those in the long tail. This is especially important when observing the use of a tracking technique highlighted in a measurement study. For example, we can check if it’s removed from popular sites but added to less popular sites.

Transparency through measurement, on 1 million sites

We are using OpenWPM to run the Princeton Transparency Census, a monthly web-scale measurement of tracking techniques and privacy issues, comprising 1 million sites. With it, we will be able to detect and measure many of the known privacy violations reported by researchers so far: the use of stateful tracking mechanisms, browser fingerprinting, cookie synchronization, and more.

During the measurements, we’ll collect data in three categories: (1) network traffic, including all HTTP requests and response headers; (2) client-side state, such as cookies and Flash cookies; and (3) execution traces, in which we trap and record targeted JavaScript API calls that are known to be used for tracking. In addition to releasing all of the raw data collected during the census, we’ll release the results of our own automated analysis.

Alongside the 1 million site measurement, we are also running smaller, targeted measurements with different browser configurations. Examples include crawling deeper into the site or browsing with a privacy extension, such as Ghostery or AdBlock Plus. These smaller crawls will provide additional insight into the privacy threats faced by real users.

Detecting WebRTC local IP discovery

As a case study of the ease of introducing a new measurement into the infrastructure, I’ll walk through the steps I took to measure scripts using WebRTC to discover a machine’s local IP address [3]. For machines behind a home router, this technique may reveal an IP of the form 192.168.1.*. Users of corporate or university networks may return a unique local IP address from within that organization’s IP range.

A user’s local IP address adds additional information to a browser fingerprint. For example, it can be used as a way to differentiate multiple users behind a NAT without requiring browser state. The amount of identifying information it provides for the average user hasn’t been studied. However, both Chrome and Firefox [4] have implemented opt-in solutions to prevent the technique. The first reported use of this technique in the wild that I could find was by a third party on nytimes.com in July 2015.

After examining a demo script, I decided to record all property accesses and all method calls of the RTCPeerConnection interface, the primary interface for WebRTC. The additional instrumentation necessary for this interface is just a single line of JavaScript in OpenWPM’s Firefox extension.

A preliminary analysis [5] of a 50,000 site pilot measurement from October 2015 suggests that WebRTC local IP discovery is used on the homepages of over 100 sites, from over 20 distinct scripts. Only 1 of these scripts would be blocked by EasyList or EasyPrivacy.
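
On the analysis side, a measurement like this reduces to a simple query over the crawl database. The snippet below assumes OpenWPM’s SQLite output with a table of instrumented JavaScript calls; the table and column names may differ across OpenWPM versions, so treat this as illustrative.

```python
import sqlite3

conn = sqlite3.connect("crawl-data.sqlite")
# 'javascript' holds one row per instrumented JS call; 'symbol' names the
# API touched (column names may vary by OpenWPM version)
rows = conn.execute("""
    SELECT DISTINCT script_url, top_level_url
    FROM javascript
    WHERE symbol LIKE 'RTCPeerConnection%'
""").fetchall()

scripts = {script for script, _ in rows}
sites = {site for _, site in rows}
print(f"{len(scripts)} distinct scripts on {len(sites)} sites touch WebRTC")
```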

How this can be useful for you

We envision several ways researchers and other members of the community can make use of OpenWPM and our measurements. I’ve listed them here from least involved to most involved.

(1) Use our measurement data for their own tools. In my analysis of canvas fingerprinting I mentioned that Disconnect incorporated our research results into their blocklist. We want to make it easy for privacy tools to make use of the analysis we run, by releasing analysis data in a structured, machine readable way.

(2) Use the data collected during our measurements, and build their own analysis on top of it. We know we’ll never be able to take the human element out of these studies. Detection methodologies will change, new browser features will be released, and others will change. The depth of the Transparency Census measurements should make it easy to test new ideas, with the option of contributing them back to the regular crawls.

(3) Use OpenWPM to collect and release their own data. This is the model we see most web privacy researchers opting for, and a model we plan to use for most of our own studies. The platform can be used and tweaked as necessary for the individual study, and the measurement results and data can be shared publicly after the study is complete.

(4) Contribute to OpenWPM through pull requests. This is the deepest level of involvement we see. Other developers can write new features into the infrastructure for their own studies or to be run as part of our transparency measurements. Contributions here will benefit all users of OpenWPM.

Over the coming months we will release new blog posts and research results on the measurements I’ve discussed in this post. You can follow our progress here on Freedom to Tinker, on Twitter @s_englehardt, and on our Github repository.

 

[1] Notable exceptions include the studies of cookie respawning (2009, 2011, 2011, 2014) and the statistics on stateful tracking use and third-party inclusion (2009, 2012, 2012, 2012, 2015).

[2] Crawling with a real browser is important for two reasons: (1) it’s less likely to be detected as a bot, meaning we’re less likely to receive different treatment from a normal user, and (2) a real browser supports all the modern web features (e.g. WebRTC, HTML5 audio and video), plugins (e.g. Flash), and extensions (e.g. Ghostery, HTTPS Everywhere). Many of these additional features play a large role in the average user’s privacy online.

[3] There is an additional concern that WebRTC can be used to determine a VPN user’s actual IP address; however, this attack is distinct from the one described in this post.

[4] uBlock Origin also provides an option to prevent WebRTC local IP discovery on Firefox.

[5] We are in the process of running and verifying this analysis on our 1-million-site measurements, and will release an updated analysis with more details in the future.


Do privacy studies help? A Retrospective Look at Canvas Fingerprinting

It seems like every month we hear of some new online privacy violation in the news, on topics such as fingerprinting or web tracking. Many of these news stories highlight academic research. What we don’t see is whether these studies and the subsequent news stories have any impact on privacy.

Our 2014 canvas fingerprinting measurement offers an opportunity for me to provide that insight, as we ended up receiving a surprising amount of press coverage after releasing the paper. In this post I’ll examine the reaction to the paper and explore which aspects contributed to its positive impact on privacy. I’ll also explore how we can use this knowledge when designing future studies to maximize their impact.

What we found in 2014

The 2014 measurement paper, The Web Never Forgets, is a collaboration with researchers at KU Leuven. In it, we measured the prevalence of three persistent tracking techniques online: canvas fingerprinting, cookie respawning, and cookie syncing [1]. They are persistent in that they are hard to control, hard to detect, and resilient to blocking or removing.

We found that 5% of the top 100,000 sites were utilizing the HTML5 Canvas API as a fingerprinting vector, and that the overwhelming majority of this activity, 97%, was attributable to the top two providers. The ability to use the HTML5 Canvas as a fingerprinting vector was first introduced in a 2012 paper by Mowery and Shacham. In the time between that 2012 paper and our 2014 measurement, approximately 20 sites and trackers started using canvas to fingerprint their visitors.

Several examples of the text written to the canvas for fingerprinting purposes. Each of these images would be converted to strings and then hashed to create an identifier.
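
To illustrate the final step the caption describes: a canvas readout (for example, the string a script reads back via toDataURL()) is simply hashed into a compact identifier. The fingerprinting itself happens in browser JavaScript; this Python fragment only sketches the hashing idea, with a truncated placeholder string.

```python
import hashlib

# Stand-in for the string a script reads back from the canvas
canvas_readout = "data:image/png;base64,iVBORw0KGgoAAAANSUhEUg..."  # truncated

# The same hardware and software render identically, so the hash is stable
# across visits and usable as an identifier.
fingerprint = hashlib.md5(canvas_readout.encode()).hexdigest()
print(fingerprint)
```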

The reaction to our study

Shortly after we released our paper publicly, we saw a significant amount of press coverage, including articles on ProPublica, BBC, Der Spiegel, and more. The amount of coverage our paper received was a surprise for us; we weren’t the creators of the method, and we certainly weren’t the first to report on the fingerprintability of browsers [2]. Just two days later, AddThis stopped using canvas fingerprinting. The second largest provider at the time, Ligatus, also stopped using the technique.

As can be expected, many users took their frustrations to Twitter: some wondered why publishers would fingerprint them, others complained about AddThis or expressed their dislike for canvas fingerprinting in general, and one user even questioned why Mozilla does not protect against canvas fingerprinting in Firefox.

However, a general technical solution that preserves the API’s usefulness and usability doesn’t exist [3]. Instead, the best solutions are either blocking the feature or blocking the trackers that use it.

The developer community responded by releasing canvas blocking extensions for Firefox and Chrome, tools which are used by over 18,000 users in total. AdBlockPlus and Disconnect both commented that the large trackers are already on their block lists, with Disconnect mentioning that the additional, lesser-known parties from our study would be added to their lists.

Why was our study so impactful?

Much of the online privacy problem is actually a transparency problem. By default, users have very little information on the privacy practices of the websites they visit, and of the trackers included on those sites. Without this information users are unable to differentiate between sites which take steps to protect their privacy and sites which don’t. This leads to less of an incentive for site owners to protect the privacy of their users, as online privacy often comes at the expense of additional ad revenue or third-party features.

With our study, we were not only able to remove this information asymmetry [4], but were able to do so in a way that was relatable to users. The visual representation of canvas fingerprinting proved particularly helpful in removing that asymmetry of information; it was very intuitive to see how the shapes drawn to a canvas could produce a unique image. The ProPublica article even included a demo where users could see their fingerprint built in real time.

While writing the paper we made it a point to include not only the trackers responsible for fingerprinting, but also the sites on which the fingerprinting was taking place. Instead of reading that tracker A was responsible for fingerprinting, users could understand that it occurs when they visit publishers X, Y, and Z. If a user is frustrated by a technique, and is only familiar with the tracker responsible, there isn’t much they can do. By knowing the publishers on which the technique is used, they can voice their frustrations or choose to visit alternative sites. Publishers, which have an interest in keeping users, will then have an incentive to change their practices.

The technique wasn’t only news to users, even some site owners were unaware that it was being used on their sites. ProPublica updated their original story with a message from YouPorn stating, “[the website was] completely unaware that AddThis contained a tracking software…”, and had since removed it. This shows that measurement work can even help remove the information asymmetry between trackers and the sites upon which they track.

How are things now?

In a re-run of the measurements in January 2016 [5], I’ve observed that the number of distinct trackers utilizing canvas fingerprinting has more than doubled since our 2014 measurement. While the overall number of publisher sites on which the tracking occurs is still below that of our previous measurement, the use of the technique has at least partially revived since AddThis and Ligatus stopped the practice.

This made me curious whether we would see similar trends for other tracking techniques. In our 2014 paper we also studied cookie respawning [6]. This technique was well studied in the past, both in 2009 and 2011, making it a good candidate for analyzing the longitudinal effects of measurement. As with our measurement, these studies also received a fair amount of press coverage when released.

The 2009 study, which found HTTP cookie respawning on 6 of the top 100 sites, resulted in a $2.4 million settlement. The 2011 follow-up study found that the use of respawning had decreased to just 2 sites in the top 100, and likewise resulted in a $500,000 settlement. In 2014 we observed respawning on 7 of the top 100 sites; however, none of these sites or trackers were US-based entities. This suggests that lawsuits can have an impact, but that impact may be limited by the global nature of the web.

What we’ve learned

Providing transparency into privacy violations online has the potential for huge impact. We saw users unhappy with the trackers that use canvas fingerprinting, with the sites that include those trackers, and even with the browsers they use to visit those sites. It is important that studies visit a large number of sites, and list those on which the privacy violation occurs.

The pressure of transparency affects the larger players more than the long tail. A tracker that is present on a large number of sites, or on sites that receive more traffic, is more likely to be the focus of news articles or the subject of lawsuits. Indeed, our 2016 measurements support this: we’ve seen a large increase in the number of parties involved, but the increase is limited to parties with a much smaller presence.

In the absence of a lawsuit, policy change, or technical solution, we see that canvas fingerprinting use is beginning to grow again. Without constant monitoring and transparency, the level of privacy violations can easily creep back to where it was. A single, well-connected tracker can re-introduce a tracking technique to a large number of first parties.

The developer community will help; we just need to provide them with the data they need. Our detection methodology served as the foundation for blocking tools, which intercept the same calls we used for detection. The script lists we included in our paper and on our website were incorporated into current blocklists.

In a follow-up post, I’ll discuss the work we’re doing to make regular, large-scale measurements of web tracking a reality. I’ll show how the tools we’ve built make it possible to run automated, million-site crawls every month, and I’ll introduce some new results we’re planning to release.

 

[1] The paper’s website provides a short description of each of these techniques.

[2] See: the EFF’s Panopticlick, and academic papers Cookieless Monster and FPDetective.

[3] For example, adding noise to canvas readouts has the potential to cause problems for non-tracking use cases and can still be defeated by a determined tracker. The Tor Browser’s solution of prompting the user on certain canvas calls does work, however it requires a user’s understanding that the technique can be used for tracking and provides for a less than optimal user experience.

[4] For a nice discussion of information asymmetry and the web: Privacy and the Market for Lemons, or How Websites Are Like Used Cars

[5] These measurements were run using the canvas instrumentation portion of OpenWPM.

[6] For a detailed description of cookie respawning, I suggest reading through Ashkan Soltani’s blog post on the subject.

Thanks to Arvind Narayanan for his helpful comments.


How Will Consumers Use Faster Internet Speeds?

This week saw an exciting announcement about the experimental deployment of DOCSIS 3.1 in limited markets in the United States, including Philadelphia, Atlanta, and parts of northern California, which will bring gigabit-per-second Internet speeds to many homes over the existing cable infrastructure. The potential for gigabit speeds over existing cable networks brings hope that more consumers will ultimately enjoy much higher-speed Internet connectivity, both in the United States and elsewhere.

This development is also a pointed response to the not-so-implicit pressure from the Federal Communications Commission to deploy higher-speed Internet connectivity. That pressure includes the redefinition of broadband to a downstream throughput of 25 megabits per second, up from the previous (and somewhat laughable) definition of 4 Mbps; many commissioners have also stated their intention to raise the threshold for the definition of a broadband network to a downstream throughput of 100 Mbps, a further indication that ISPs will see increasing pressure to provide higher-speed links to home networks. Yet the National Cable and Telecommunications Association has claimed in an FCC filing that such speeds are far more than a “typical” broadband user would require.

These developments and posturing raise the question: How will consumers change their behavior in response to faster downstream throughput from their Internet service providers?

Ph.D. student Sarthak Grover, postdoc Roya Ensafi, and I set out to study this question with a cohort of about 6,000 Comcast subscribers in Salt Lake City, Utah, from October through December 2014. The study involved what is called a randomized controlled trial, an experimental method commonly used in scientific experiments where a group of users is randomly divided into a control group (whose users experience no change in conditions) and a treatment group (whose users are subject to a change in conditions). Assuming the cohort is large enough and represents a cross-section of the demographic of interest, and that the users for the treatment group are selected at random, it is possible to observe differences between the two groups’ outcomes and conclude how the treatment affects the outcome.

In the case of this specific study, the control group consisted of about 5,000 Comcast subscribers who were paying for (and receiving) 105 Mbps downstream throughput; the treatment group, on the other hand, comprised about 1,500 Comcast subscribers who were paying for 105 Mbps but at the beginning of the study period were silently upgraded to 250 Mbps. In other words, users in the treatment group were receiving faster Internet service but were unaware of the faster downstream throughput of their connections. We explored how this treatment affected user behavior and made a few surprising discoveries:

“Moderate” users tend to adjust their behavior more than “heavy” users. We expected that the subscribers who downloaded the most data in the 250 Mbps service tier would be the ones causing the largest difference in mean demand between the two groups (previous studies have observed this phenomenon, and we do observe this behavior for the most aggressive users). To our surprise, however, the median subscribers in the two groups exhibited much more significant differences in traffic demand, particularly at peak times. Notably, the 40% of subscribers with the lowest peak demands more than doubled their daily peak traffic demand in response to the service-tier upgrade (i.e., in the treatment group).

With the exception of the most aggressive peak-time subscribers, the subscribers who are below the 40th percentile in terms of peak demands increase their peak demand more than users who initially had higher peak demands.
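
For readers curious about the mechanics, here is a hedged sketch of the kind of percentile comparison behind this result, assuming 15-minute usage records labeled by group. The file and column names are illustrative, not our actual dataset.

```python
import pandas as pd

# usage_15min.csv (hypothetical): byte counts per subscriber per 15-minute
# interval, labeled 'treatment' or 'control' in a 'group' column
df = pd.read_csv("usage_15min.csv", parse_dates=["timestamp"])

# Daily peak demand per subscriber: the busiest 15-minute interval each day
daily_peak = (df.groupby(["group", "subscriber_id",
                          df["timestamp"].dt.date])["bytes"]
              .max().reset_index(name="peak_bytes"))

# Summarize each subscriber by their median daily peak, then compare the
# distribution of peaks between the two groups
per_sub = daily_peak.groupby(["group", "subscriber_id"])["peak_bytes"].median()
for group, peaks in per_sub.groupby(level="group"):
    print(group, peaks.quantile([0.1, 0.4, 0.5, 0.9]).round())
```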

This result suggests a surprising trend: it’s not the aggressive data hogs who account for most of the increased use in response to faster speeds, but rather the “typical” Internet user, who tends to use the Internet more as a result of the faster speeds. Our dataset does not contain application information, so it is difficult to say what, exactly, is responsible for the higher data usage of the median user. Yet the result uncovers an oft-forgotten phenomenon of faster links: even existing applications that do not need to “max out” the link capacity (e.g., Web browsing, and even most video streaming) can benefit from a higher-capacity link, simply because they will see better performance overall (e.g., faster load times and more resilience to packet loss, particularly when multiple parallel connections are in use). It might just be that the typical user is using the Internet more with the faster connection simply because the experience is better, not because they’re interested in filling the link to capacity (at least not yet!).

Users may use faster speeds for shorter periods of time, not always during “prime time”. There has been much ado about prime-time video streaming usage, and we most certainly see those effects in our data. To our surprise, the average usage per subscriber during prime-time hours was roughly the same between the treatment and control groups, yet outside of prime time the difference was much more pronounced, with the average subscriber in the treatment group using 25% more than the control group during non-prime-time weekday hours. We also observe that the peak-to-mean ratios for usage in the treatment group are significantly higher than in the control group, indicating that users with faster speeds may periodically (and for short times) take advantage of the significantly higher speeds, even though they are not sustaining a high rate that exhausts the higher capacity.

These results are interesting for last-mile Internet service providers because they suggest that the speeds at the edge may not currently be the limiting factor for user traffic demand. Specifically, the changes in peak traffic outside of prime-time hours suggest that even the (relatively) lower-speed connections (e.g., 105 Mbps) may be sufficient to satisfy the demands of users during prime-time hours. Of course, the constraints on prime-time demand (much of which is largely streaming) likely result from other factors, including both available content and the well-known phenomenon of congestion in the middle of the network, rather than in the last mile. All of this points to the increasing importance of resolving the performance issues that we see as a result of interconnection. In the best case, faster Internet service moves the bottleneck from the last mile to elsewhere in the network (e.g., interconnection points, long-haul transit links); but, in reality, it seems that the bottlenecks are already there, and we should focus on mitigating those points of congestion.

Further reading and study. You’ll be able to read more about our study in the following paper: A Case Study of Traffic Demand Response to Broadband Service-Plan Upgrades. S. Grover, R. Ensafi, N. Feamster. Passive and Active Measurement Conference (PAM). Heraklion, Crete, Greece. March 2016. (We will post an update when the final paper is published in early 2016.) There is plenty of room for follow-up work, of course; notably, the data we had access to did not have information about application usage, and only reflected byte-level usage at fifteen-minute intervals. Future studies could (and should) continue to study the effects of higher-speed links by exploring how the usage of specific applications (e.g., streaming video, file sharing, Web browsing) changes in response to higher downstream throughput.


When coding style survives compilation: De-anonymizing programmers from executable binaries

In a recent paper, we showed that coding style is present in source code and can be used to de-anonymize programmers. But what if only compiled binaries are available, rather than source code? Today we are releasing a new paper showing that coding style can survive compilation. Consequently, we can utilize these stylistic fingerprints via machine learning and de-anonymize programmers of executable binaries with high accuracy. This finding is of concern for privacy-aware programmers who would like to remain anonymous.

 

Update: Video of the talk at the Chaos Communication Congress

 

How to represent coding style in an executable binary.

Executable binaries of compiled source code are difficult to analyze on their own because they lack human-readable information. Nevertheless, reverse engineering methods make it possible to disassemble and decompile executable binaries. After applying such reverse engineering methods, we can generate numeric representations of authorial style from features preserved in the binaries.

We use a dataset consisting of source code samples from 600 programmers, available on the website of the annual programming competition Google Code Jam (GCJ). The dataset records how many rounds each contestant advanced, which indicates programming skill, and all contestants implemented algorithmic solutions to the same programming tasks. Since all the contestants implement the same functionality, the main difference between their samples is coding style. Such ground truth provides a controlled environment in which to analyze coding style.

We compile the source code samples of GCJ programmers to executable binaries. We disassemble these binaries to obtain their assembly instructions. We also decompile the binaries to generate approximations of the original source code and the respective control flow graphs. We subsequently parse the decompiled code using a fuzzy parser to obtain abstract syntax trees from which structural properties of code can be derived.

These data sources provide a combination of high level program flow information as well as low level assembly instructions and structural syntactic properties. We convert these properties to numeric values to represent the prevalent coding style in each executable binary.

We cast programmer de-anonymization as a machine learning problem.

We apply the machine learning workflow depicted in the figure below to de-anonymize programmers. There are three main stages in our programmer de-anonymization method. First, we extract a large variety of features from each executable binary to represent coding style with feature vectors, which is possible after applying state-of-the-art reverse engineering methods. Second, we train a random forest classifier on these vectors to generate accurate author models. Third, the random forest attributes authorship to the vectorial representations of previously unseen executable binaries.
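
As a minimal sketch of the classification stage, the following uses scikit-learn’s random forest on placeholder feature vectors. It stands in for our pipeline only after features have already been extracted from the disassembled and decompiled binaries; the data shapes below are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Placeholder data: 20 programmers x 9 binaries each, 500 features per
# binary. In the real pipeline these vectors come from disassembly,
# decompilation, and fuzzy parsing of each binary.
rng = np.random.default_rng(0)
X = rng.random((180, 500))
y = np.repeat(np.arange(20), 9)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
# 9-fold CV holds out one binary per programmer per fold, mirroring the
# train-on-8-binaries setup described in the results below
scores = cross_val_score(clf, X, y, cv=9)
print("accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```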

The contribution of our work is that we show coding style is embedded in executable binaries as a fingerprint, making de-anonymization possible. The novelty of our method is that our feature set incorporates reverse engineering methods to generate a rich and accurate representation of properties hidden in binaries.

The machine learning workflow for programmer de-anonymization.

Results.

We are able to de-anonymize executable binaries of 20 programmers with 96% correct classification accuracy. In the de-anonymization process, the machine learning classifier trains on 8 executable binaries per programmer to generate numeric representations of their coding styles. Such high accuracy with this small amount of training data has not been reached in previous attempts. After scaling up the approach by increasing the dataset size, we de-anonymize 600 programmers with 52% accuracy. There has been no previous attempt to de-anonymize such a large binary dataset. The executable binaries mentioned above were compiled without compiler optimizations, which are options that make binaries smaller and faster but transform the source code more aggressively than plain compilation, further normalizing authorial style. For the first time in programmer de-anonymization, we show that we can still identify programmers of optimized executable binaries: while we can de-anonymize 100 programmers from unoptimized executable binaries with 78% accuracy, we can de-anonymize them from optimized executable binaries with 64% accuracy. We also show that stripping symbol information from the executable binaries reduces the accuracy to 66%, a surprisingly small drop. This suggests that coding style survives complicated transformations.

Other interesting contributions of the paper.

  • By comparing advanced and less advanced programmers, we found that more advanced programmers are easier to de-anonymize and have a more distinct coding style.
  • We also de-anonymize GitHub users in the wild, which we explain in detail in the paper. These promising results encourage us to extend our method to large real-world datasets of various natures in future work.
  • Why does de-anonymization work so well? It’s not because the decompiled source code looks anything like the original. Rather, the feature vector obtained from disassembly and decompilation can be used to predict, using machine learning, the features in the original source code — with over 80% accuracy. This shows that executable binaries preserve transformed versions of the original source code features. 

I would like to thank Arvind Narayanan for his valuable comments.

 


CITP Call for Visitors and Affiliates for 2016-17

The Center for Information Technology Policy is an interdisciplinary research center at Princeton that sits at the crossroads of engineering, the social sciences, law, and policy.

We are seeking applicants for various residential visiting positions and for non-residential affiliates. For more information about these positions, please see our general information page and our lists of current and past visitors.

We are happy to hear from anyone working at the intersection of digital technology and public life, including experts in computer science, sociology, economics, law, political science, public policy, information studies, communication, and other related disciplines.

We have a particular interest this year in candidates working on issues related to the Internet of Things (IoT).

Visitors

Note that you should apply to only one of two positions: the Visiting IT Policy Fellow position (if you would be on leave from a full-time position, as with a professor on sabbatical) or the IT Policy Researcher position (if Princeton University would be your primary affiliation during your visit, as with a postdoctoral researcher or a professional between jobs). Applicants to either of those positions may also apply to the Microsoft Visiting Professor posting.

All applicants should submit a current curriculum vitae, a research plan (including a description of potential courses to be taught if applying for the Visiting Professorship), and a cover letter describing background, interest in the program, and any funding support for the visit. CITP has secured limited resources from a range of sources to support visitors. However, many of our visitors are on paid sabbatical from their own institutions or otherwise provide some or all of their own outside funding.

For full consideration, completed applications must be received by January 29, 2016.

Microsoft Visiting Professor of Information Technology Policy
To apply, please go to Jobs at Princeton, click on “Search Open Positions,” and enter requisition number 1501013.

Visiting IT Policy Fellow
To apply, please go to Jobs at Princeton, click on “Search Open Positions,” and enter requisition number 1501014.

IT Policy Researcher
To apply, please go to Jobs at Princeton, click on “Search Open Positions,” and enter requisition number 1501016.

Affiliates

Technology policy researchers and experts who wish to have a formal affiliation with CITP, but cannot be in residence in Princeton, may apply to become a CITP Affiliate. The affiliation typically will last for two years. Affiliates do not have any formal appointment at Princeton University.

Applicants should email their applications to CITP. Please send a current curriculum vitae and a cover letter describing background and interest in the program. For full consideration, completed applications must be received by January 29, 2016.


New Professors’ Letter Opposing The Defend Trade Secrets Act of 2015

As Freedom to Tinker readers may recall, I’ve been very concerned about the problems associated with the proposed Defend Trade Secrets Act. Ostensibly designed to combat cyberespionage against United States corporations, it is not a solution to that problem and is fraught with downsides. Today, over 40 colleagues in the academic world joined Eric Goldman, Chris Seaman, Sharon Sandeen, and me in raising a variety of concerns about the DTSA in the following letter:

Professors’ Letter in Opposition to the Defend Trade Secrets Act of 2015.

Importantly, this new letter incorporates our 2014 opposition letter. As we explained,

While we agree that effective legal protection for U.S. businesses’ legitimate trade secrets is important to American innovation, we believe that the DTSA—which would represent the most significant expansion of federal law in intellectual property since the Lanham Act in 1946—will not solve the problems identified by its sponsors. Instead of addressing cyberespionage head-on, passage of the DTSA is likely to create new problems that could adversely impact domestic innovation, increase the duration and cost of trade secret litigation, and ultimately negatively affect economic growth. Therefore, the undersigned call on Congress to reject the DTSA.

We also call on Congress to hold hearings “that focus on the costs of the legislation and whether the DTSA addresses the cyberespionage problem that it is allegedly designed to combat. Specifically, Congress should evaluate the DTSA through the lens of employees, small businesses, and startup companies that are most likely to be adversely affected by the legislation.”

I will continue to blog on the DTSA as events warrant, and encourage Freedom to Tinker readers to contact their members of Congress and urge them to vote against the DTSA.