November 26, 2020

Can the exfiltration of personal data by web trackers be stopped?

by Günes Acar, Steven Englehardt, and Arvind Narayanan.

In a series of posts on this blog in 2017/18, we revealed how web trackers exfiltrate personal information from web pages, browser password managers, form inputs, and the Facebook Login API. Our findings resulted in many fixes and privacy improvements to browsers, websites, third parties, and privacy protection tools. However, the root causes of these privacy failures remain unchanged, because they stem from the design of the web itself. In a paper at the 2020 Privacy Enhancing Technologies Symposium, we recap our findings and propose two potential paths forward.

Root causes of failures

In the two years since our research, no fundamental fixes have been implemented that will stop third parties from exfiltrating personal information. Thus, it remains possible that similar privacy vulnerabilities have been introduced even as the specific ones we identified have mostly been fixed. Here’s why.

The web’s security model allows a website to either fully trust a third party (by including the third-party script in a first-party context) or not at all (by isolating the third-party resource in an iframe). Unfortunately, this model does not capture the range of trust relationships that we see on the web. Since many third parties cannot provide their services (such as analytics) if they are isolated, first parties have no choice but to give them full privileges even though they do not trust them fully.

The model also assumes transitivity of trust. But in reality, a user may trust a website and the website may trust a third party, but the user may not trust the third party. 

Another root cause is the economic reality of the web that drives publishers to unthinkingly adopt code from third parties. Here’s a representative quote from the marketing materials of FullStory, one of the third parties we found exfiltrating personal information:

Set-up is a thing of beauty. To get started, place one small snippet of code on your site. That’s it. Forever. Seriously.

A key reason for the popularity of these third parties is that the small publishers that rely on them lack the budget to hire technical experts internally to replicate their functionality. Unfortunately, while it’s possible to effortlessly embed a third party, auditing or even understanding the privacy implications requires time and expertise.

This may also help explain the limited adoption of technical solutions such as Caja or ConScript that better capture the partial trust relationship between first and third parties. Unfortunately, such solutions require the publisher to reason carefully about the privileges and capabilities provided to each individual third party. 

Two potential paths forward

In the absence of a fundamental fix, there are two potential approaches to minimize the risk of large-scale exploitation of these vulnerabilities. One approach is to regularly scan the web. After all, our work suggests that this kind of large-scale detection is possible and that vulnerabilities will be fixed when identified.

The catch is that it’s not clear who will do the thankless, time-consuming, and technically complex work of running regular scans, detecting vulnerabilities and leaks, and informing thousands of affected parties. As researchers, our role is to develop the methods and tools to do so, but running them on a regular basis does not constitute publishable research. While we have undertaken significant efforts to maintain our scanning tool OpenWPM as a resource for the community and make scan data available, regular and comprehensive vulnerability detection requires far more resources than we can afford.
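To make concrete what such a scan involves, here is a minimal sketch, written with Playwright rather than with OpenWPM’s actual API, and using a placeholder URL and honeytoken: plant a unique value in a form field and flag any outgoing request that carries it.

```python
# Minimal leak-scan sketch (illustrative; not OpenWPM). Plant a unique
# "honeytoken" in a form field and flag any network request that carries it.
from playwright.sync_api import sync_playwright

HONEYTOKEN = "leak-test-1a2b3c@example.com"  # hypothetical unique value

def scan(url: str) -> None:
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()

        def check_request(request):
            body = request.post_data or ""
            if HONEYTOKEN in request.url or HONEYTOKEN in body:
                print(f"possible exfiltration from {url} to {request.url}")

        page.on("request", check_request)
        page.goto(url, wait_until="networkidle")

        # Type the honeytoken into an email field, if the page has one.
        email_field = page.locator('input[type="email"]').first
        if email_field.count() > 0:
            email_field.fill(HONEYTOKEN)
        page.wait_for_timeout(5000)  # give scripts time to phone home
        browser.close()

if __name__ == "__main__":
    scan("https://example.com")  # placeholder URL
```

A real scan would also crawl many pages per site, exercise login and checkout forms, decode encoded or hashed payloads, and attribute each request to the third party behind it, which is where most of the engineering effort lies.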

In fact, in addition to researchers, many other groups have missions that are aligned with the goal of large-scale web privacy investigations: makers of privacy-focused browsers and mobile apps, blocklist maintainers, advocacy groups, and investigative journalists. Some examples of efforts that we’re aware of include Mozilla’s maintenance of OpenWPM since 2018, Brave’s research on tracking protection, and DuckDuckGo’s dataset about trackers. However, so far, these efforts fall short of addressing the full range of web privacy vulnerabilities, and data exfiltration in particular. Perhaps a coalition of these groups can create the necessary momentum.

The second approach is legal, namely, regulation that incentivizes first parties to take responsibility for the privacy violations on their websites. This is in contrast to the reactive approach above which essentially shifts the cost of privacy to public-interest groups or other external entities.

Indeed, our findings represent potential violations of existing laws, notably the GDPR in the EU and sectoral laws such as HIPAA (healthcare) and FERPA (education) in the United States. However, it is unclear whether the first parties or the third parties would be liable. According to several session replay companies, they are merely data processors and not joint controllers under the GDPR. In addition, since there are a large number of first parties exposing user data and each violation is relatively small in scope, regulators have not paid much attention to these types of privacy failures. This lack of enforcement risks effectively normalizing such privacy violations. Thus, either stepped-up enforcement of existing laws or new laws that establish stricter rules could shift incentives, resulting in stronger preventive measures and regular vulnerability scanning by web developers themselves.

Either approach will be an uphill battle. Regularly scanning the web requires a source of funding, whereas stronger regulation and enforcement will raise the cost of running websites and will face opposition from third-party service providers reliant on the current lax approach. But fighting uphill battles is nothing new for privacy advocates and researchers.

Update: Here’s the video of Günes Acar’s talk about this research at PETS 2020.

Do Mobile News Alerts Undermine Media’s Role in Democracy? Madelyn Sanfilippo at CITP

Why do different people sometimes see different articles about the same event, even from the same news provider? What might that mean for democracy?

Speaking at CITP today is Dr. Madelyn Rose Sanfilippo, a postdoctoral research associate here. Madelyn empirically studies the governance of sociotechnical systems, as well as outcomes, inequality, and consequences within these systems, through mixed-methods research designs.

Today, Madelyn tells us about a large-scale project with Yafit Lev-Aretz examining how push notifications and the personalized distribution and consumption of news might influence readers and democracy. The project is funded by the Tow Center for Digital Journalism at Columbia University and the Knight Foundation.

Why Do Push Notifications Matter for Democracy?

Americans’ trust in media has been diverging in recent years, even as society worries about the risks to democracy from echo chambers. Madelyn also tells us about changes in how Americans get their news.

Push notifications are one of those changes: alerts that news organizations send to our computers and mobile phones about news they think is important. And we get a lot of them. In 2017, Tow Center researcher Pete Brown found that people get almost one push notification per minute on their phones, interrupting us with news.

In 2017, 85% of Americans were getting news via their mobile devices, and while it’s not clear how much of that came from push notifications, mobile phones tend to come with news apps that have push notifications enabled by default.

When Madelyn and Yafit started to analyze push notifications, they noticed something fascinating: the same publisher often pushes different headlines to different platforms. They also found that news publishers use less objective and more subjective, emotional language in those notifications.

Madelyn and Yafit especially wanted to know if media outlets covered breaking news differently based on the political affiliation of their readers. Comparing notifications about disasters, gun violence, and terrorism, they found differences in the number of push notifications sent by publishers whose readers have higher and lower political affiliation. They also found differences in the machine-coded subjectivity and objectivity of how these publishers covered those stories.

Composite subjectivity of different sources (higher is more subjective)
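To give a sense of what machine-coded subjectivity looks like in practice, here is a minimal sketch using TextBlob’s lexicon-based subjectivity score on two made-up headlines; this illustrates the general technique, not necessarily the exact tooling used in the study.

```python
# Illustrative only: score made-up push-notification headlines for subjectivity
# with TextBlob's lexicon-based analyzer (0 = objective, 1 = subjective).
from textblob import TextBlob

headlines = [
    "City council approves 2019 budget after public hearing",
    "Horrific scenes as chaos erupts downtown; officials slammed",
]

for headline in headlines:
    score = TextBlob(headline).sentiment.subjectivity
    print(f"{score:.2f}  {headline}")
```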

Do Push Notifications Create Political Filter Bubbles?

Finally, Madelyn and Yafit wanted to know if the personalization of push notifications shaped what people might be aware of. First, Madelyn explains to us that personalization takes multiple forms:

  • Curation: sometimes the set of articles we see is curated by personalized algorithms (like Google News)
  • Content: sometimes the content itself is personalized, so two people see very different text even though they’re reading the same article

Together, they found that location-based personalization is common. Madelyn tells us about three different notifications that NBC News sent to people the morning after the Democratic primary. Not only did national audiences get different notifications, but different cities received notifications that mentioned Democratic and Republican candidates differently. Aside from elections such as the midterms, Madelyn and her colleagues found that sports news is often location-personalized.

Behavioral Personalization

Madelyn tells us that many news publishers also personalize news articles based on information about their readers, including their reading behavior and surveys. They found that some news publishers personalize messages based on what they consider to be a person’s reading level. They also found evidence that publishers tailor news based on personal information that readers never provided to the publisher.

Governing News Personalization

How can we ensure that news publishers are serving democracy in the decisions they make and the knowledge they contribute to society? At many publishers, decisions about the structure of news personalization are made by the business side of the organization.

Madelyn tells us about future research she hopes to do. She’s looking at the means available to news readers to manage these notifications as well as policy avenues for governing news personalization.

Madelyn also thanks her funders for supporting this collaboration with Yafit Lev-Aretz: the Knight Foundation and the Tow Center for Digital Journalism.

Privacy, ethics, and data access: A case study of the Fragile Families Challenge

This blog post summarizes a paper describing the privacy and ethics process by which we organized the Fragile Families Challenge. The paper will appear in a special issue of the journal Socius.

Academic researchers, companies, and governments holding data face a fundamental tension between risk to respondents and benefits to science. On one hand, these data custodians might like to share data with a wide and diverse set of researchers in order to maximize possible benefits to science. On the other hand, the data custodians might like to keep data locked away in order to protect the privacy of those whose information is in the data. Our paper is about the process we used to handle this fundamental tension in one particular setting: the Fragile Families Challenge, a scientific mass collaboration designed to yield insights that could improve the lives of disadvantaged children in the United States. We wrote this paper not because we believe we eliminated privacy risk, but because others might benefit from our process and improve upon it.

One scientific objective of the Fragile Families Challenge was to maximize predictive performance on adolescent outcomes (e.g., high school GPA) measured at approximately age 15, given a set of background variables measured from birth through age 9. We aimed to do so using the Common Task Framework (see Donoho 2017, Section 6): we would share data with researchers who would build predictive models using observed outcomes for half of the cases (the training set). These researchers would receive instant feedback on out-of-sample performance in ⅛ of the cases (the leaderboard set) and ultimately be evaluated by performance in the ⅜ of the cases that we would keep hidden until the end of the Challenge (the holdout set). If scientific benefit were the only goal, the optimal design might be to include every possible variable in the background set and share the data with anyone who wanted access, with no restrictions.
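As a rough sketch of that design, with made-up case identifiers, the training, leaderboard, and holdout sets could be produced as follows; this is illustrative, not the Challenge’s actual code.

```python
# Sketch of the 1/2 / 1/8 / 3/8 split described above (illustrative only).
import random

case_ids = list(range(4242))        # hypothetical case identifiers
random.Random(0).shuffle(case_ids)  # fixed seed for a reproducible split

n = len(case_ids)
n_train = n // 2        # 1/2: outcomes shared with participants
n_leaderboard = n // 8  # 1/8: instant out-of-sample feedback

train = case_ids[:n_train]
leaderboard = case_ids[n_train:n_train + n_leaderboard]
holdout = case_ids[n_train + n_leaderboard:]  # remaining 3/8: hidden until the end

print(len(train), len(leaderboard), len(holdout))
```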

Scientific benefit may be maximized by sharing data widely, but risk to respondents is also maximized by doing so. Although methods of data sharing with provable privacy guarantees are an active area of research, we believed that solutions that could offer provable guarantees were not possible in our setting without a substantial loss of scientific benefit (see Section 2.4 of our paper). Instead, we engaged in a privacy and ethics process that involved threat modeling, threat mitigation, and third-party guidance, all undertaken within an ethical framework.


Threat modeling

Our primary concern was a risk of re-identification. Although our data did not contain obvious identifiers, we worried that an adversary could find an auxiliary dataset containing identifiers as well as key variables also present in our data. If so, they could link our dataset to the identifiers (either perfectly or probabilistically) to re-identify at least some rows in the data. For instance, Sweeney was able to re-identify Massachusetts medical records data by linking to identified voter records using the shared variables zip code, date of birth, and sex. Given the vast number of auxiliary datasets (red) that exist now or may exist in the future, it is likely that some research datasets (blue) could be re-identified. It is difficult to know in advance which key variables (purple) may aid the adversary in this task.
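The mechanics of such a linkage attack are straightforward. Here is a hypothetical sketch, with entirely made-up records, that joins a de-identified research file to an identified auxiliary file on the shared quasi-identifiers from Sweeney’s example:

```python
# Hypothetical linkage attack: join a de-identified research dataset to an
# identified auxiliary dataset on shared quasi-identifiers. All data made up.
import pandas as pd

research = pd.DataFrame({   # de-identified: no names, but quasi-identifiers remain
    "zip": ["08540", "02139"],
    "birth_date": ["1980-07-31", "1975-01-15"],
    "sex": ["F", "M"],
    "sensitive_outcome": ["outcome_a", "outcome_b"],
})

auxiliary = pd.DataFrame({  # identified: e.g., a public voter file
    "name": ["Alice Example", "Bob Example"],
    "zip": ["08540", "02139"],
    "birth_date": ["1980-07-31", "1975-01-15"],
    "sex": ["F", "M"],
})

# Rows that match on all three quasi-identifiers are candidate re-identifications.
reidentified = research.merge(auxiliary, on=["zip", "birth_date", "sex"])
print(reidentified[["name", "sensitive_outcome"]])
```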

To make our worries concrete, we engaged in threat modeling: we reasoned about who might have both (a) the capability to conduct such an attack and (b) the incentive to do so. We even tried to attack our own data.  Through this process we identified five main threats (the rows in the figure below). A privacy researcher, for instance, would likely have the skills to re-identify the data if they could find auxiliary data to do so, and might also have an incentive to re-identify, perhaps to publish a paper arguing that we had been too cavalier about privacy concerns. A nosy neighbor who knew someone in the data might be able to find that individual’s case in order to learn information about their friend which they did not already know. We also worried about other threats that are detailed in the full paper.


Threat mitigation

To mitigate threats, we took several steps to (a) reduce the likelihood of re-identification and to (b) reduce the risk of harm in the event of re-identification. While some of these defenses were statistical (i.e. modifications to data designed to support aims [a] and [b]), many instead focused on social norms and other aspects of the project that are more difficult to quantify. For instance, we structured the Challenge with no monetary prize, to reduce an incentive to re-identify the data in order to produce remarkably good predictions. We used careful language and avoided making extreme claims to have “anonymized” the data, thereby reducing the incentive for a privacy researcher to correct us. We used an application process to only share the data with those likely to contribute to the scientific goals of the project, and we included an ethical appeal in which potential participants learned about the importance of respecting the privacy of respondents and agreed to use the data ethically. None of these mitigations eliminated the risk, but they all helped to shift the balance of risks and benefits of data sharing in a way consistent with ethical use of the data. The figure below lists our main mitigations (columns), with check marks to indicate the threats (rows) against which they might be effective.  The circled check mark indicates the mitigation that we thought would be most effective against that particular adversary.


Third-party guidance

A small group of researchers highly committed to a project can easily convince themselves that they are behaving ethically, even if an outsider would recognize flaws in their logic. To avoid groupthink, we conducted the Challenge under the guidance of third parties. The entire process was conducted under the oversight and approval of the Institutional Review Board of Princeton University, a requirement for social science research involving human subjects. To go beyond what was required, we additionally formed a Board of Advisers to review our plan and offer advice. This Board included experts from a wide range of fields.

Beyond the Board, we solicited informal advice from a diverse set of outsiders, essentially anyone we could talk to who might have thoughts about the process, and this proved valuable. For example, on the advice of someone with experience planning high-risk operations in the military, we developed a response plan in case something went wrong. Having this plan in place meant that we could respond quickly and forcefully if something unexpected occurred.


Ethics

After the process outlined above, we still faced an ethical question: should we share the data and proceed with the Fragile Families Challenge? This was a deep and complex question to which a fully satisfactory answer was likely to be elusive. Much of our final decision drew on the principles of the Belmont Report, a standard set of principles used in social science research ethics. While not perfect, the Belmont Report serves as a reasonable benchmark because it is the standard that has developed in the scientific community regarding human subjects research. The first principle in the Belmont Report is respect for persons. Because families in the Fragile Families Study had consented for their data to be used for research, sharing the data with researchers in a scientific project was consistent with this principle. The second principle is beneficence, which requires that the risks of research be balanced against potential benefits. The threat mitigations we carried out were designed with beneficence in mind. The third principle is justice: the benefits of research should flow to a population similar to the one that bears the risks. Our sample included many disadvantaged urban American families, and the scientific benefits of the research might ultimately inform better policies to help those in similar situations. It would be wrong to exclude this population from the potential to benefit from research, so we reasoned that the project was in line with the principle of justice. For all of these reasons, we decided with our Board of Advisers that proceeding with the project would be ethical.


Conclusion

To unlock the power of data while also respecting respondent privacy, those providing access to data must navigate the fundamental tension between scientific benefits and risk to respondents. Our process did not offer provable privacy guarantees, and it was not perfect. Nevertheless, our process for addressing this tension may be useful to others serving as data stewards in similar situations. We believe the core principles of threat modeling, threat mitigation, and third-party guidance within an ethical framework will be essential to such a task, and we look forward to learning from others in the future who build on what we have done to improve the process of navigating this tension.

You can read more about our process in our pre-print: Lundberg, Narayanan, Levy, and Salganik (2018), “Privacy, ethics, and data access: A case study of the Fragile Families Challenge.”