February 22, 2017

New Workshop on Technology and Consumer Protection

[Joe Calandrino is a veteran of Freedom to Tinker and CITP. As longtime readers will remember, he did his Ph.D. here, advised by Ed Felten. He recently joined the FTC as research director of OTech, the Office of Technology Research and Investigation. Today we have an exciting announcement. — Arvind Narayanan.]

Arvind Narayanan and I are thrilled to announce a new Workshop on Technology and Consumer Protection (ConPro ’17) to be co-hosted with the IEEE Symposium on Security and Privacy (Oakland) in May 2017:

Advances in technology come with countless benefits for society, but these advances sometimes introduce new risks as well. Various characteristics of technology, including its increasing complexity, may present novel challenges in understanding its impact and addressing its risks. Regulatory agencies have broad jurisdiction to protect consumers against certain harmful practices (typically called “deceptive and unfair” practices in the United States), but sophisticated technical analysis may be necessary to assess practices, risks, and more. Moreover, consumer protection covers an incredibly broad range of issues, from substantiation of claims that a smartphone app provides advertised health benefits to the adequacy of practices for securing sensitive customer data.

The Workshop on Technology and Consumer Protection (ConPro ’17) will explore computer science topics with an impact on consumers. This workshop has a strong security and privacy slant, with an overall focus on ways in which computer science can prevent, detect, or address the potential for technology to deceive or unfairly harm consumers. Attendees will skew towards academic and industry researchers but will include researchers from government agencies with a consumer protection mission, including the Federal Trade Commission—the U.S. government’s primary consumer protection body. Research advances presented at the workshop may help improve the lives of consumers, and discussions at the event may help researchers understand how their work can best promote consumer welfare given laws and norms surrounding consumer protection.

We have an outstanding program committee representing an incredibly wide range of computer science disciplines—from security, privacy, and e-crime to usability and algorithmic fairness—and touching on fields across the social sciences. The workshop will be an opportunity for these different disciplinary perspectives to contribute to a shared goal. Our call for papers discusses relevant topics, and we encourage anyone conducting research in these areas to submit their work by the January 10 deadline.

Computer science research—and computer security research in particular—excels at advancing innovative technical strategies to mitigate potential negative effects of digital technologies on society, but measures beyond strictly technical fixes also exist to protect consumers. How can our research goals, methods, and tools best complement laws, regulations, and enforcement? We hope this workshop will provide an excellent opportunity for computer scientists to consider these questions and find even better ways for our field to serve society.

"You Might Also Like:" Privacy Risks of Collaborative Filtering

Ann Kilzer, Arvind Narayanan, Ed Felten, Vitaly Shmatikov, and I have released a new research paper detailing the privacy risks posed by collaborative filtering recommender systems. To examine the risk, we use public data available from Hunch, LibraryThing, Last.fm, and Amazon in addition to evaluating a synthetic system using data from the Netflix Prize dataset. The results demonstrate that temporal changes in recommendations can reveal purchases or other transactions of individual users.

To help users find items of interest, sites routinely recommend items similar to a given item. For example, product pages on Amazon contain a “Customers Who Bought This Item Also Bought” list. These recommendations are typically public, and they are the product of patterns learned from all users of the system. If customers often purchase both item A and item B, a collaborative filtering system will judge them to be highly similar. Most sites generate ordered lists of similar items for any given item, but some also provide numeric similarity scores.
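
To make the mechanics concrete, here is a minimal Python sketch of how a site might derive such lists from co-purchase data, using cosine similarity over binary user-item vectors. It is purely illustrative: the purchase histories are invented, and real systems use far larger data and more refined similarity measures.

```python
from math import sqrt

# Hypothetical purchase histories: user -> set of items bought.
purchases = {
    "alice": {"A", "B", "C"},
    "bob":   {"A", "B"},
    "carol": {"A", "C"},
}

def cosine_similarity(item_x, item_y, histories):
    """Cosine similarity between two items over binary user-item vectors."""
    buyers_x = {u for u, items in histories.items() if item_x in items}
    buyers_y = {u for u, items in histories.items() if item_y in items}
    if not buyers_x or not buyers_y:
        return 0.0
    return len(buyers_x & buyers_y) / sqrt(len(buyers_x) * len(buyers_y))

def similar_items(item, histories, k=10):
    """Ordered 'Customers Who Bought This Item Also Bought'-style list."""
    others = {i for items in histories.values() for i in items} - {item}
    scored = [(cosine_similarity(item, other, histories), other) for other in others]
    return sorted(scored, reverse=True)[:k]

print(similar_items("A", purchases))
```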

Although item similarity is only indirectly related to individual transactions, we determined that temporal changes in item similarity lists or scores can reveal details of those transactions. If you’re a Mozart fan and you listen to a Justin Bieber song, this choice increases the perceived similarity between Justin Bieber and Mozart. Because similarity lists and scores reflect this perceived similarity, your action may cause visible changes to them.
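
As a toy illustration with invented counts, even a single new co-listener nudges a cosine-style score, and that nudge is exactly what an observer of the public output could notice:

```python
from math import sqrt

# Toy listener counts (all numbers invented for illustration).
mozart_fans, bieber_fans, both = 1000, 5000, 2

def cosine(nx, ny, nxy):
    # Cosine similarity over binary listener vectors.
    return nxy / sqrt(nx * ny)

before = cosine(mozart_fans, bieber_fans, both)
# One Mozart fan plays a Justin Bieber song: both counts tick up.
after = cosine(mozart_fans, bieber_fans + 1, both + 1)

print(f"before={before:.6f} after={after:.6f} delta={after - before:.6f}")
```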

Suppose that an attacker knows some of your past purchases on a site; past item reviews, social networking profiles, and real-world interactions are all rich sources of such information. New purchases will affect the perceived similarity between the new items and your past purchases, possibly causing visible changes to the recommendations provided for your previously purchased items. We demonstrate that an attacker can leverage these observable changes to infer your purchases. These attacks are complicated by, among other things, the fact that multiple users interact with a system simultaneously and that updates do not immediately follow a transaction.
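
Our actual inference algorithms are considerably more involved, but the following bare-bones sketch conveys the intuition; all data structures here are hypothetical. The attacker compares two snapshots of the public similarity lists for the target’s known items and flags candidates that newly appear in several of them at once:

```python
from collections import Counter

def infer_purchases(known_items, lists_before, lists_after, threshold=2):
    """Guess a target's new purchases from changes in public similarity lists.

    known_items: items we already know the target bought.
    lists_before/lists_after: item -> ordered list of publicly similar items,
    observed at two points in time.
    """
    scores = Counter()
    for item in known_items:
        newly_listed = set(lists_after.get(item, [])) - set(lists_before.get(item, []))
        for candidate in newly_listed:
            scores[candidate] += 1
    # An item entering many of the known items' lists at once is suspicious.
    return [c for c, n in scores.most_common() if n >= threshold]

# Toy snapshots (invented): "X" newly appears next to two known items.
before = {"A": ["B", "C"], "B": ["A", "D"]}
after  = {"A": ["B", "X"], "B": ["X", "A"]}
print(infer_purchases({"A", "B"}, before, after))  # -> ['X']
```

A real attack must also contend with the noise from other users’ simultaneous activity and with delayed updates, as noted above.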

To evaluate our attacks, we use data from Hunch, LibraryThing, Last.fm, and Amazon. Our goal is not to claim privacy flaws in these specific sites (in fact, we often use data voluntarily disclosed by their users to verify our inferences), but to demonstrate the general feasibility of inferring individual transactions from the outputs of collaborative filtering systems. Among their many differences, these sites vary dramatically in the information that they reveal. For example, Hunch reveals raw item-to-item correlation scores, but Amazon reveals only lists of similar items. In addition, we examine a simulated system created using the Netflix Prize dataset. Our paper outlines the experimental results.

While inference of a Justin Bieber interest may be innocuous, inferences could expose anything from dissatisfaction with a job to health issues. Our attacks assume that a victim reveals certain past transactions, but users may publicly reveal certain transactions while preferring to keep others private. Ultimately, users are best equipped to determine which transactions would be embarrassing or otherwise problematic. We demonstrate that the public outputs of recommender systems can reveal transactions without user knowledge or consent.

Unfortunately, existing privacy technologies appear inadequate here, failing to simultaneously guarantee acceptable recommendation quality and user privacy. Mitigation strategies are a rich area for future work, and we hope to work towards solutions with others in the community.

It is worth noting that this work suggests a risk posed by any feature that adapts in response to potentially sensitive user actions. Unless sites explicitly consider the data exposed, such features may inadvertently leak details of these underlying actions.

Our paper contains additional details. This work was presented earlier today at the 2011 IEEE Symposium on Security and Privacy. Arvind has also blogged about this work.

Ninth Circuit Ruling in MDY v. Blizzard

The Ninth Circuit has ruled on the MDY v. Blizzard case, which involves contract, copyright, and DMCA claims. As with the district court ruling, I’ll withhold comment due to my involvement as an expert in the case, but the decision may be of interest to FTT readers.

[Editor: The EFF has initial reactions here. Techdirt also has an overview.]