Christo Wilson and I are pleased to announce that the Workshop on Technology and Consumer Protection (ConPro ’20) is returning for a fourth year, co-located with the IEEE Symposium on Security and Privacy in May 2020. As in past years, ConPro seeks a diverse range of technical research with implications for consumer protection. Past talks […]
The Third Workshop on Technology and Consumer Protection
Arvind Narayanan and I are pleased to announce that the Workshop on Technology and Consumer Protection (ConPro ’19) will return for a third year! The workshop will once again be co-located with the IEEE Symposium on Security and Privacy, occurring in May 2019. ConPro is a forum for a diverse range of computer science research […]
The Second Workshop on Technology and Consumer Protection
Arvind Narayanan and I are excited to announce that the Workshop on Technology and Consumer Protection (ConPro ’18) will return in May 2018, once again co-located with the IEEE Symposium on Security and Privacy. The first ConPro brought together researchers from a wide range of disciplines, united by a shared goal of promoting consumer welfare […]
New Workshop on Technology and Consumer Protection
[Joe Calandrino is a veteran of Freedom to Tinker and CITP. As long time readers will remember, he did his Ph.D. here, advised by Ed Felten. He recently joined the FTC as research director of OTech, the Office of Technology Research and Investigation. Today we have an exciting announcement. — Arvind Narayanan.] Arvind Narayanan and […]
"You Might Also Like:" Privacy Risks of Collaborative Filtering
Ann Kilzer, Arvind Narayanan, Ed Felten, Vitaly Shmatikov, and I have released a new research paper detailing the privacy risks posed by collaborative filtering recommender systems. To examine the risk, we use public data available from Hunch, LibraryThing, Last.fm, and Amazon in addition to evaluating a synthetic system using data from the Netflix Prize dataset. The results demonstrate that temporal changes in recommendations can reveal purchases or other transactions of individual users.
To help users find items of interest, sites routinely recommend items similar to a given item. For example, product pages on Amazon contain a “Customers Who Bought This Item Also Bought” list. These recommendations are typically public, and they are the product of patterns learned from all users of the system. If customers often purchase both item A and item B, a collaborative filtering system will judge the two items to be highly similar. Most sites generate ordered lists of similar items for any given item, but some also provide numeric similarity scores.
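To make this concrete, here is a minimal sketch of the kind of item-to-item system described above. The toy `transactions` data and the cosine co-purchase scoring are illustrative choices of mine; real sites use a variety of similarity measures.

```python
# Minimal item-to-item collaborative filtering sketch (illustrative only).
from collections import defaultdict
from itertools import combinations
import math

# Toy data: user -> set of purchased items.
transactions = {
    "alice": {"A", "B"},
    "bob":   {"A", "B", "C"},
    "carol": {"B", "C"},
}

def cosine_similarity(transactions):
    """Score each item pair by cosine similarity of co-purchases."""
    counts = defaultdict(int)     # item -> number of purchasers
    co_counts = defaultdict(int)  # (item_a, item_b) -> co-purchase count
    for items in transactions.values():
        for item in items:
            counts[item] += 1
        for a, b in combinations(sorted(items), 2):
            co_counts[(a, b)] += 1
    return {pair: co / math.sqrt(counts[pair[0]] * counts[pair[1]])
            for pair, co in co_counts.items()}

def similar_items(item, scores):
    """The public 'related items' list for one item, most similar first."""
    related = [(a if b == item else b, s)
               for (a, b), s in scores.items() if item in (a, b)]
    return sorted(related, key=lambda r: -r[1])

print(similar_items("A", cosine_similarity(transactions)))
# [('B', 0.816...), ('C', 0.5)]
```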
Although item similarity is only indirectly related to individual transactions, we determined that temporal changes in item similarity lists or scores can reveal details of those transactions. If you’re a Mozart fan and you listen to a Justin Bieber song, this choice increases the perceived similarity between Justin Bieber and Mozart. Because public similarity lists and scores are derived from that perceived similarity, your action may visibly change them.
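Continuing the sketch above (and reusing its `transactions` and `cosine_similarity`), a single new action by the hypothetical user "alice" is enough to shift the public pair scores:

```python
# One user's new action visibly shifts the public similarity scores.
before = cosine_similarity(transactions)
transactions["alice"].add("C")           # alice's new, private purchase
after = cosine_similarity(transactions)

for pair in sorted(after):
    delta = after[pair] - before.get(pair, 0.0)
    if abs(delta) > 1e-9:
        print(pair, f"{delta:+.3f}")     # only pairs involving C move
```

In this toy run, only the pairs involving alice's new item rise, because her purchase changes C's co-purchase counts with the items she already owned. That asymmetry is exactly the kind of observable signal the rest of this post is about.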
Suppose that an attacker knows some of your past purchases on a site; past item reviews, social networking profiles, and real-world interactions are rich sources of such information, for example. New purchases will affect the perceived similarity between the new items and your past purchases, possibly causing visible changes to the recommendations provided for your previously purchased items. We demonstrate that an attacker can leverage these observable changes to infer your purchases. Among other complications, these attacks must contend with the fact that many users interact with a system simultaneously and that updates do not immediately follow a transaction.
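The following is a simplified illustration of the inference idea, not the exact algorithm in the paper: the attacker snapshots the public related-items lists for items the victim is known to own, then flags any item that newly appears or rises across several of those lists in the same window. The item names and the threshold are hypothetical.

```python
# Hedged sketch of the inference attack (simplified, illustrative only).
def infer_purchases(known_items, lists_before, lists_after, threshold=2):
    """Flag items that newly appear or move up across many watched lists."""
    votes = {}
    for anchor in known_items:
        before = {it: r for r, it in enumerate(lists_before[anchor])}
        after = {it: r for r, it in enumerate(lists_after[anchor])}
        for item, rank in after.items():
            if item not in before or rank < before[item]:  # new or moved up
                votes[item] = votes.get(item, 0) + 1
    # A simultaneous rise in several unrelated lists is unlikely to be chance.
    return [item for item, v in votes.items() if v >= threshold]

# Toy usage: 'bieber_baby' rises in the lists of two items the victim owns.
known = ["mozart_sym40", "bach_cello"]
before = {"mozart_sym40": ["beethoven5", "haydn94"],
          "bach_cello":   ["vivaldi_seasons", "beethoven5"]}
after = {"mozart_sym40": ["bieber_baby", "beethoven5", "haydn94"],
         "bach_cello":   ["bieber_baby", "vivaldi_seasons", "beethoven5"]}
print(infer_purchases(known, before, after))   # ['bieber_baby']
```

Requiring agreement across multiple watched lists is what separates the victim's action from the background churn caused by other users.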
To evaluate our attacks, we use data from Hunch, LibraryThing, Last.fm, and Amazon. Our goal is not to claim privacy flaws in these specific sites (in fact, we often use data voluntarily disclosed by their users to verify our inferences), but to demonstrate the general feasibility of inferring individual transactions from the outputs of collaborative filtering systems. Among their many differences, these sites vary dramatically in the information that they reveal. For example, Hunch reveals raw item-to-item correlation scores, but Amazon reveals only lists of similar items. In addition, we examine a simulated system created using the Netflix Prize dataset. Our paper outlines the experimental results.
While inference of a Justin Bieber interest may be innocuous, inferences could expose anything from dissatisfaction with a job to health issues. Our attacks assume that a victim reveals some past transactions, but users may willingly publicize certain transactions while preferring to keep others private. Ultimately, users are best equipped to determine which transactions would be embarrassing or otherwise problematic. We demonstrate that the public outputs of recommender systems can reveal transactions without user knowledge or consent.
Unfortunately, existing privacy technologies appear inadequate here, failing to simultaneously guarantee acceptable recommendation quality and user privacy. Mitigation strategies are a rich area for future work, and we hope to work towards solutions with others in the community.
It is worth noting that this work suggests a risk posed by any feature that adapts in response to potentially sensitive user actions. Unless sites explicitly consider the data exposed, such features may inadvertently leak details of the underlying actions.
Our paper contains additional details. This work was presented earlier today at the 2011 IEEE Symposium on Security and Privacy. Arvind has also blogged about this work.