October 6, 2022

Verizon’s tracking header: Can they do better?

Verizon’s practice of injecting a unique ID into the HTTP headers of traffic originating on their wireless network has alarmed privacy advocates and researchers. Jonathan Mayer detailed how this header is already being used by third parties to create zombie cookies. In this post, I summarize just how much information Verizon collects and shares under their marketing programs. I’ll show how the header’s implementation makes previously difficult tracking methods trivial, and explore the possibility of a more secure design.
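To see why an always-present, carrier-injected header makes zombie cookies trivial, here is a minimal sketch. The header name `X-UIDH` matches what was reported for Verizon’s program; the server-side logic, function names, and ID values are hypothetical and only illustrate the mechanism:

```python
# Hypothetical sketch of a third-party server "respawning" a deleted
# tracking cookie from a carrier-injected header. The header name
# X-UIDH follows reports of Verizon's program; everything else here
# is an illustrative assumption, not Verizon's or any tracker's code.

def respawn_cookie(request_headers, stored_cookies):
    """Return a tracking ID for this request, restoring it from the
    injected header if the user has cleared their cookies."""
    uid = request_headers.get("X-UIDH")  # injected by the carrier, not the browser
    if uid is None:
        # Off the carrier's network: only an ordinary cookie is available.
        return stored_cookies.get("tracker_id")
    # The header value is identical on every request, regardless of
    # cookie clearing or private browsing, so it links the old
    # identity to the new one.
    if "tracker_id" not in stored_cookies:
        stored_cookies["tracker_id"] = uid  # the cookie "respawns"
    return stored_cookies["tracker_id"]

cookies = {}
uid_before = respawn_cookie({"X-UIDH": "abc123"}, cookies)
cookies.clear()  # user deletes all cookies...
uid_after = respawn_cookie({"X-UIDH": "abc123"}, cookies)
assert uid_before == uid_after  # ...but tracking survives the deletion
```

Because the browser never sees or controls the header, none of the usual defenses (clearing cookies, private browsing) interrupt the linkage.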

[Read more…]

How cookies can be used for global surveillance

Today we present an updated version of our paper examining how the ubiquitous use of online tracking cookies can allow an adversary conducting network surveillance to target a user or surveil users en masse. In the initial version of the study, summarized below, we examined the technical feasibility of the attack. We’ve now made the attack model more complete and nuanced, and analyzed the effectiveness of several browser privacy tools in preventing the attack. Finally, inspired by Jonathan Mayer and Ed Felten’s The Web is Flat study, we incorporate the geographic topology of the Internet into our measurements of simulated web traffic and our adversary model, providing a more realistic view of how effective this attack is in practice. [Read more…]

Cookies that give you away: The surveillance implications of web tracking

[Today we have another announcement of an exciting new research paper. Undergraduate Dillon Reisman, for his senior thesis, applied our web measurement platform to study some timely questions. -Arvind Narayanan]

Over the past three months we’ve learnt that the NSA uses third-party tracking cookies for surveillance (1, 2). These cookies, provided by a third-party advertising or analytics network (e.g. doubleclick.com, scorecardresearch.com), are ubiquitous on the web, and tag users’ browsers with unique pseudonymous IDs. In a new paper, we study just how big a privacy problem this is. We quantify what an observer can learn about a user’s web traffic by passively eavesdropping on the network, and arrive at surprising answers.
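The core linking step an eavesdropper could perform can be sketched in a few lines. This is an illustrative reconstruction, not the paper’s actual code: assume the observer has parsed unencrypted traffic into (visit, first-party page, tracker domain, cookie ID) tuples, where a “visit” groups the requests of one page load. Visits carrying the same (tracker, cookie ID) pair come from the same browser, and merging transitively across trackers stitches together one user’s browsing history:

```python
# Illustrative sketch (not the paper's code): cluster observed page
# visits by shared pseudonymous third-party cookie IDs, merging
# transitively across trackers with a small union-find.
from collections import defaultdict

def link_visits(observations):
    """observations: list of (visit_id, first_party_page, tracker, cookie_id)
    tuples as a passive eavesdropper might reconstruct them.
    Returns sets of first-party pages attributable to one browser each."""
    parent = {}

    def find(x):
        # Union-find with path halving.
        while parent.setdefault(x, x) != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    first_seen = {}
    for visit, _page, tracker, cid in observations:
        key = (tracker, cid)  # same pair => same browser
        if key in first_seen:
            parent[find(visit)] = find(first_seen[key])
        else:
            first_seen[key] = visit

    clusters = defaultdict(set)
    for visit, page, _tracker, _cid in observations:
        clusters[find(visit)].add(page)
    return list(clusters.values())

# Visit 2 carries cookies from both trackers, so it transitively links
# visits 1 and 3 into a single user's history; visit 4 stays separate.
obs = [
    (1, "news.example", "doubleclick.com", "X"),
    (2, "shop.example", "doubleclick.com", "X"),
    (2, "shop.example", "scorecardresearch.com", "Y"),
    (3, "blog.example", "scorecardresearch.com", "Y"),
    (4, "other.example", "doubleclick.com", "Z"),
]
```

The transitive merge is what makes ubiquity dangerous: no single tracker needs to appear on every page, as long as trackers’ footprints overlap somewhere.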
[Read more…]

Web measurement for fairness and transparency

[This is the first in a series of posts giving some examples of security-related research in the Princeton computer science department. We’re actively recruiting top-notch students to enter our Ph.D. program, as well as postdocs and visiting scholars. We don’t have enough bandwidth here on the blog to feature everything we do, so we’ll be highlighting a few examples over the next couple of weeks.]

Everything we do on the web is tracked, profiled, and analyzed. But what do companies do with that information? To what extent do they use it in ways that benefit us, and to what extent in ways that discriminate? While many concerns have been raised, not much is known quantitatively. That’s why at Princeton we’re building an infrastructure to detect, measure, and reverse-engineer differential treatment of web users.
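One simple form such a measurement can take is sketched below. This is an illustrative example, not our platform’s code: assume a crawler has already fetched each URL under several simulated user profiles and recorded the price each profile was quoted, and we want to flag pages where profiles were treated differently:

```python
# Illustrative sketch (not our measurement platform): given prices
# collected by crawling each URL under several simulated profiles,
# flag URLs where different profiles saw different prices.

def measure_differential_treatment(results):
    """results: {url: {profile_name: price}}.
    Returns {url: {"min": ..., "max": ..., "spread": ...}} for every
    URL whose quoted price varied across profiles."""
    flagged = {}
    for url, by_profile in results.items():
        lo, hi = min(by_profile.values()), max(by_profile.values())
        if hi > lo:  # at least two profiles saw different prices
            flagged[url] = {"min": lo, "max": hi, "spread": hi - lo}
    return flagged

# Hypothetical crawl output: "mobile" vs "desktop" profile prices.
results = {
    "store.example/item1": {"mobile": 19.99, "desktop": 24.99},
    "store.example/item2": {"mobile": 5.00, "desktop": 5.00},
}
```

Real measurement is harder than this diff suggests (pages vary for benign reasons like A/B tests and stock changes), which is why detecting treatment differences requires repeated, controlled crawls rather than a single comparison.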

[Read more…]