December 11, 2019

Archives for 2019

CITP Call for Visitors 2020-21

The Center for Information Technology Policy is an interdisciplinary research center at Princeton University that sits at the crossroads of engineering, the social sciences, law, and policy.

CITP seeks applicants for various visiting positions each year. Visitors are expected to live in or near Princeton and to be in residence at CITP on a daily basis. They will conduct research and participate actively in CITP’s programs.

For all visitors, we are happy to hear from anyone working at the intersection of digital technology and public life, including experts in computer science, sociology, economics, law, political science, public policy, information studies, communication, and other related disciplines.

We have a particular interest this year in candidates working on issues related to artificial intelligence (AI), blockchain, and cryptocurrencies.

There are three job postings for CITP visitors: 1) Microsoft Visiting Research Scholar/Visiting Professor of Information Technology Policy, 2) Visiting IT Policy Fellow, and 3) Postdoctoral Research Associate or more senior IT policy researcher. For more information about these positions and to apply, please see our hiring page.

For full consideration, all applications should be received by December 31, 2019.

Enhancing the Security of Data Breach Notifications and Settlement Notices

[This post was jointly written by Ryan Amos, Mihir Kshirsagar, Ed Felten, and Arvind Narayanan.]

We couldn’t help noticing that the recent Yahoo and Equifax data breach settlement notifications look a lot like phishing emails. The notifications make it hard for users to distinguish real settlement notifications from scams. For example, they direct users to URLs on unfamiliar domains that are not clearly owned by the breached company or any other trusted entity. Practices like this lower the bar for scammers to craft convincing phishing emails, potentially victimizing users twice. To illustrate the severity of this problem, Equifax mixed up domain names and posted a link to a phishing website to their Twitter account. Our discussion paper presents two recommendations to stakeholders to address this issue.

First, we recommend creating a centralized database of settlements and breaches, with an authoritative URL for each one, so that users have a way to verify the notices they receive. Such a database has precedent in the Consumer Product Safety Commission (CPSC) consumer recall list. When users receive notice of a data breach, this database would serve as a reliable authority to verify the information included in the notice. A centralized database has additional value outside the data breach context as courts and government agencies increasingly turn to electronic notices to inform the public, and scammers (predictably) respond by creating false notices.
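To make the first recommendation concrete, here is a minimal sketch of how a notice could be checked against such a registry. No such database currently exists; the registry contents, the breach identifiers, and the `verify_notice_url` function are all hypothetical.

```python
from urllib.parse import urlparse

# Assumed registry: breach/settlement identifier -> authoritative URL.
# In practice this would be a centralized, publicly queryable database,
# analogous to the CPSC consumer recall list.
BREACH_REGISTRY = {
    "equifax-2017": "https://www.equifaxbreachsettlement.com",
    "yahoo-2013": "https://yahoodatabreachsettlement.com",
}

def verify_notice_url(breach_id: str, notice_url: str) -> bool:
    """Return True only if the URL in a received notice points to the
    authoritative domain recorded in the registry for that breach."""
    authoritative = BREACH_REGISTRY.get(breach_id)
    if authoritative is None:
        # Unknown breach: treat the notice as unverified.
        return False
    return urlparse(notice_url).netloc == urlparse(authoritative).netloc
```

A user (or mail client) receiving a settlement email could then look up the claimed breach and reject any notice whose link does not match the registered domain.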

Second, we recommend that no settlement or breach notice include a URL to a new domain. Instead, such notices should include a URL to a page on a trusted, recognizable domain, such as a government-run domain or the breached party’s domain. That page, in turn, can redirect users to a dedicated domain for breach information, if desired. This helps users avoid phishing by allowing them to safely ignore links to unrecognized domains. After the settlement period is over, any redirections should be automatically removed to prevent abandoned domains from being reused by scammers.
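The second recommendation amounts to an allowlist check: a notice link is followed only if it points to a domain the user already trusts, and any dedicated settlement site is reached via a redirect from that domain. The sketch below illustrates the check; the allowlist entries are illustrative, not authoritative.

```python
from urllib.parse import urlparse

# Illustrative allowlist: domains the user already recognizes and
# trusts (a government domain and the breached party's own domain).
TRUSTED_DOMAINS = {"ftc.gov", "equifax.com"}

def is_trusted_link(url: str) -> bool:
    """Accept a link only if its host is a trusted domain or a
    subdomain of one (e.g. www.ftc.gov), so that lookalike hosts
    such as ftc.gov.evil.example are rejected."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)
```

Note that the suffix check is anchored at a dot boundary; a naive `"ftc.gov" in host` test would wrongly accept `ftc.gov.evil.example`.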

Content Moderation for End-to-End Encrypted Messaging

Thursday evening, the Attorney General, the Acting Homeland Security Secretary, and top law enforcement officials from the U.K. and Australia sent an open letter to Mark Zuckerberg. The letter emphasizes the scourge of child abuse content online, and the officials call on Facebook to press pause on end-to-end encryption for its messaging platforms.

The letter arrived the same week as a widely shared New York Times article, describing how reports of child abuse content are multiplying. The article provides a heartbreaking account of how the National Center for Missing and Exploited Children (NCMEC) and law enforcement agencies are overburdened and under-resourced in addressing horrible crimes against children.

Much of the public discussion about content moderation and end-to-end encryption over the past week has appeared to reflect two common technical assumptions:

  1. Content moderation is fundamentally incompatible with end-to-end encrypted messaging.
  2. Enabling content moderation for end-to-end encrypted messaging fundamentally poses the same challenges as enabling law enforcement access to message content.

In a new discussion paper, I provide a technical clarification for each of these points.

  1. Forms of content moderation may be compatible with end-to-end encrypted messaging, without compromising important security principles or undermining policy values.
  2. Enabling content moderation for end-to-end encrypted messaging is a different problem from enabling law enforcement access to message content. The problems involve different technical properties, different spaces of possible designs, and different information security and public policy implications.

I aim to demonstrate these clarifications by formalizing specific content moderation properties for end-to-end encrypted messaging, then offering at least one possible protocol design for each property.

  • User Reporting: If a user receives a message that he or she believes contains harmful content, can the user report that message to the service provider?
  • Known Content Detection: Can the service provider automatically detect when a user shares content that has previously been labeled as harmful?
  • Classifier-based Content Detection: Can the service provider detect when a user shares new content that has not been previously identified as harmful, but that an automated classifier predicts may be harmful?
  • Content Tracing: If the service provider identifies a message that contains harmful content, and the message has been forwarded by a sequence of users, can the service provider trace which users forwarded the message?
  • Popular Content Collection: Can the service provider curate a set of content that has been shared by a large number of users, without knowing which users shared the content?
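As one example of how such a property can coexist with end-to-end encryption, known content detection can run on the client before a message is encrypted: the outgoing attachment is hashed and compared against a provider-distributed list of hashes of previously labeled content. The sketch below is purely illustrative; real deployments use perceptual hashes (e.g. PhotoDNA) rather than SHA-256, which only matches bit-exact copies, and the names and data here are hypothetical.

```python
import hashlib

# Hash digests of previously labeled harmful content, assumed to be
# distributed to the client by the service provider.
KNOWN_LABELED_HASHES = {
    hashlib.sha256(b"example labeled content").hexdigest(),
}

def matches_known_content(attachment: bytes) -> bool:
    """Client-side check run before encryption: return True if the
    outgoing attachment matches previously labeled content, in which
    case the client could notify the provider instead of (or in
    addition to) sending the encrypted message."""
    return hashlib.sha256(attachment).hexdigest() in KNOWN_LABELED_HASHES
```

Because the check happens on plaintext the client already holds, the provider never decrypts message content; the security and policy trade-offs of this design choice are exactly what the discussion paper examines.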

The discussion paper is inherently preliminary and is intended as an agenda for further interdisciplinary research (including my own). I am not yet prepared to normatively advocate for or against the protocol designs that I describe. I am not claiming that these concepts provide sufficient content moderation capabilities, the same content moderation capabilities as current systems, or sufficient robustness against evasion. I am also not claiming that these designs adequately address information security risks or public policy values, such as free speech, international human rights, or economic competitiveness.

I do not know if there is a viable path forward for content moderation and end-to-end encrypted messaging that will be acceptable to technology platforms, law enforcement, NCMEC, civil society groups, information security experts, and other stakeholders. I do have confidence that, if such a path exists, we will only find it through open research and dialogue.