April 23, 2024

Goodbye, Stanford. Hello, Princeton!

[Editor’s note: The Center for Information Technology Policy (CITP) is delighted to welcome Arvind Narayanan as an Assistant Professor in Computer Science, and an affiliated faculty member in CITP. Narayanan is a leading researcher in digital privacy, data anonymization, and technology policy. His work has been widely published, and includes a paper with CITP co-authors […]

"You Might Also Like:" Privacy Risks of Collaborative Filtering

Ann Kilzer, Arvind Narayanan, Ed Felten, Vitaly Shmatikov, and I have released a new research paper detailing the privacy risks posed by collaborative filtering recommender systems. To examine the risk, we use public data available from Hunch, LibraryThing, Last.fm, and Amazon in addition to evaluating a synthetic system using data from the Netflix Prize dataset. The results demonstrate that temporal changes in recommendations can reveal purchases or other transactions of individual users.

To help users find items of interest, sites routinely recommend items similar to a given item. For example, product pages on Amazon contain a “Customers Who Bought This Item Also Bought” list. These recommendations are typically public, and they are the product of patterns learned from all users of the system. If customers often purchase both item A and item B, a collaborative filtering system will judge them to be highly similar. Most sites generate ordered lists of similar items for any given item, but some also provide numeric similarity scores.
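To make the similarity computation concrete, here is a minimal sketch (in Python) of item-to-item collaborative filtering of the sort described above. The purchase data, item names, and cosine scoring are illustrative assumptions, not the algorithm of any particular site.

    # Minimal sketch of item-to-item collaborative filtering (illustrative only).
    # Items are judged similar when many of the same users interact with both.
    from math import sqrt

    # Hypothetical purchase history: user -> set of purchased items
    purchases = {
        "alice": {"A", "B", "C"},
        "bob":   {"A", "B"},
        "carol": {"B", "C"},
        "dave":  {"A", "C"},
    }

    def cosine_similarity(item_x, item_y):
        """Cosine similarity between the sets of users who bought each item."""
        buyers_x = {u for u, items in purchases.items() if item_x in items}
        buyers_y = {u for u, items in purchases.items() if item_y in items}
        if not buyers_x or not buyers_y:
            return 0.0
        return len(buyers_x & buyers_y) / sqrt(len(buyers_x) * len(buyers_y))

    def similar_items(item, top_n=10):
        """Ordered 'customers also bought' list for one item."""
        others = {i for items in purchases.values() for i in items} - {item}
        scored = [(other, cosine_similarity(item, other)) for other in others]
        return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_n]

    print(similar_items("A"))   # ordered list of (item, score) pairs

A site that exposes only ordered lists reveals the ranking produced by something like similar_items; a site that exposes raw scores reveals the underlying numbers as well.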

Although item similarity is only indirectly related to individual transactions, we determined that temporal changes in item similarity lists or scores can reveal details of those transactions. If you’re a Mozart fan and you listen to a Justin Bieber song, that choice increases the perceived similarity between Justin Bieber and Mozart, and because the public lists and scores reflect this perceived similarity, your action may produce visible changes in them.

Suppose that an attacker knows some of your past purchases on a site: for example, past item reviews, social networking profiles, or real-world interactions are a rich source of information. New purchases will affect the perceived similarity between the new items and your past purchases, possibly causing visible changes to the recommendations provided for your previously purchased items. We demonstrate that an attacker can leverage these observable changes to infer your purchases. These attacks are complicated by, among other things, the fact that multiple users interact with a system simultaneously and that updates do not immediately follow a transaction.
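As a hedged illustration of the inference step (a simplified stand-in, not the algorithm from our paper), the sketch below compares the public related-item lists for a victim’s known items at two observation times and ranks candidate purchases by how many of those lists they newly appear in.

    # Illustrative sketch of the inference idea: watch the public "similar items"
    # lists attached to a victim's known purchases, and score items that newly
    # appear across several of those lists. Item names and thresholds are made up.

    def infer_candidates(known_items, lists_before, lists_after, min_lists=2):
        """Rank candidate new transactions by how many known-item lists they joined."""
        support = {}
        for item in known_items:
            before = set(lists_before.get(item, []))
            for candidate in lists_after.get(item, []):
                if candidate not in before:          # newly appeared in this list
                    support[candidate] = support.get(candidate, 0) + 1
        ranked = sorted(support.items(), key=lambda kv: kv[1], reverse=True)
        return [(c, n) for c, n in ranked if n >= min_lists]

    # Toy example: the same new title shows up in the lists of two known purchases.
    before = {"book1": ["x", "y"], "book2": ["y", "z"]}
    after  = {"book1": ["x", "new_title"], "book2": ["new_title", "z"]}
    print(infer_candidates({"book1", "book2"}, before, after))   # [('new_title', 2)]

Requiring support from several lists is one simple way to cope with the noise mentioned above: changes caused by other users, or by delayed updates, are less likely to line up across many of the victim’s items at once.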

To evaluate our attacks, we use data from Hunch, LibraryThing, Last.fm, and Amazon. Our goal is not to claim privacy flaws in these specific sites (in fact, we often use data voluntarily disclosed by their users to verify our inferences), but to demonstrate the general feasibility of inferring individual transactions from the outputs of collaborative filtering systems. Among their many differences, these sites vary dramatically in the information that they reveal. For example, Hunch reveals raw item-to-item correlation scores, but Amazon reveals only lists of similar items. In addition, we examine a simulated system created using the Netflix Prize dataset. Our paper outlines the experimental results.

While inference of a Justin Bieber interest may be innocuous, inferences could expose anything from dissatisfaction with a job to health issues. Our attacks assume that a victim has revealed some past transactions, but users may publicly reveal certain transactions while preferring to keep others private. Ultimately, users are best equipped to determine which transactions would be embarrassing or otherwise problematic. We demonstrate that the public outputs of recommender systems can reveal transactions without user knowledge or consent.

Unfortunately, existing privacy technologies appear inadequate here, failing to simultaneously guarantee acceptable recommendation quality and user privacy. Mitigation strategies are a rich area for future work, and we hope to work towards solutions with others in the community.

Worth noting is that this work suggests a risk posed by any feature that adapts in response to potentially sensitive user actions. Unless sites explicitly consider the data exposed, such features may inadvertently leak details of these underlying actions.

Our paper contains additional details. This work was presented earlier today at the 2011 IEEE Symposium on Security and Privacy. Arvind has also blogged about this work.

CITP Seeks Visiting Faculty, Scholars or Policy Experts for 2010-2011

The Center for Information Technology Policy (CITP) at Princeton University seeks candidates for positions as visiting faculty members or researchers, or postdoctoral research associates for the 2010-2011 academic year.

About CITP

Digital technologies and public life are constantly reshaping each other—from net neutrality and broadband adoption, to copyright and file sharing, to electronic voting and beyond.

Realizing digital technology’s promise requires a constant sharing of ideas, competencies and norms among the technical, social, economic and political domains.

The Center for Information Technology Policy is Princeton University’s effort to meet this challenge. Its new home, which opened in September 2008, is a state-of-the-art facility designed from the ground up for openness and collaboration. Located at the intellectual and physical crossroads of Princeton’s engineering and social science communities, the Center’s research, teaching and public programs are building the intellectual and human capital that our technological future demands.

To see what this mission can mean in practice, take a look at our website, at http://citp.princeton.edu.

About the Search

The Center has secured limited resources from a range of sources to support visiting faculty, scholars or policy experts for up to one-year appointments during the 2010-2011 academic year. We are interested in applications from academic faculty and researchers as well as from individuals who have practical experience in the policy arena. The rank and status of the successful applicant(s) will be determined on a case-by-case basis. We are particularly interested in hearing from faculty members at other universities and from individuals who have first-hand experience in public service in the technology policy area.

The successful applicant(s) will conduct research, engage in public programs, and may teach a seminar during their appointment, subject to review and approval by the Dean of the Faculty. They’ll play an important role at a pivotal time in the development of this new center. They may be appointed to a visiting faculty or visiting fellow position, a term-limited research position, or a postdoctoral appointment, depending on qualifications.

We are happy to hear from anyone who works at the intersection of digital technology and public life. In addition to our existing strengths in computer science and sociology, we are particularly interested in identifying engineers, economists, lawyers, civil servants and policy analysts whose research interests are complementary to our existing activities.

If you are interested, please submit a CV and cover letter, stating background, intended research, and salary requirements, to https://jobs.princeton.edu.

Princeton University is an equal opportunity employer and complies with applicable EEO and affirmative action regulations. For information about applying to Princeton and voluntarily self-identifying, please see http://www.princeton.edu/dof/about_us/dof_job_openings/

Deadline: March 1, 2010.

Election Day; More Unguarded Voting Machines

It’s Election Day in New Jersey. As usual, I visited several polling places in Princeton over the last few days, looking for unguarded voting machines. It’s been well demonstrated that a bad actor who can get physical access to a New Jersey voting machine can modify its behavior to steal votes, so an unguarded voting machine is a vulnerable voting machine.

This time I visited six polling places. What did I find?

The good news — and there was a little — is that in one of the six polling places, the machines were properly secured. I’m not sure where the machines were, but I know that they were not visible anywhere in the accessible areas of the building. Maybe the machines were locked in a storage room, or maybe they hadn’t been delivered yet, but anyway they were probably safe. This is the first time I have ever found a local polling place, the night before the election, with properly secured voting machines.

At the other five polling places, things weren’t so good. At three places, the machines were unguarded in an area open to the public. I walked right up to them and had private time with them. In two other places, the machines were visible from outside the building and protected only by an outside door with an easily defeated lock. I didn’t defeat the locks myself — I wasn’t going to cross that line — but I’ll bet you could have opened them quickly with tools you probably have in your car.

The final scorecard: ten machines totally unprotected, eight machines poorly protected, two machines well-protected. That’s an improvement, but then again any protection at all would have been an improvement. We still have a long way to go.

Breaking Vanish: A Story of Security Research in Action

Today, seven colleagues and I released a new paper, “Defeating Vanish with Low-Cost Sybil Attacks Against Large DHTs”. The paper’s authors are Scott Wolchok (Michigan), Owen Hofmann (Texas), Nadia Heninger (Princeton), me, Alex Halderman (Michigan), Christopher Rossbach (Texas), Brent Waters (Texas), and Emmett Witchel (Texas).

Our paper is the next chapter in an interesting story about the making, breaking, and possible fixing of security systems.

The story started with a system called Vanish, designed by a team at the University of Washington (Roxana Geambasu, Yoshi Kohno, Amit Levy, and Hank Levy). Vanish tries to provide “vanishing data objects” (VDOs) that can be created at any time but will only be usable within a short time window (typically eight hours) after their creation. This is an unusual kind of security guarantee: the VDO can be read by anybody who sees it in the first eight hours, but after that period expires the VDO is supposed to be unrecoverable.

Vanish uses a clever design to do this. It takes your data and encrypts it, using a fresh random encryption key. It then splits the key into shares, so that a quorum of shares (say, seven out of ten shares) is required to reconstruct the key. It takes the shares and stores them at random locations in a giant worldwide system called the Vuze DHT. The Vuze DHT throws away items after eight hours. After that the shares are gone, so the key cannot be reconstructed, so the VDO cannot be decrypted — at least in theory.
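As a rough sketch of that construction (under stated assumptions, not the actual Vanish implementation), the example below generates a fresh key, splits it with Shamir’s threshold secret sharing so that any seven of ten shares suffice, and scatters the shares under random indices in a dictionary standing in for the Vuze DHT; expiry is simulated by simply clearing the store.

    # Rough sketch of a Vanish-style "vanishing data object" (not the real system).
    # The data-encryption step is omitted; the point is the key splitting and the
    # share scattering, with a plain dict standing in for the Vuze DHT.
    import os, random, secrets

    PRIME = 2**127 - 1   # prime field large enough for a 16-byte key

    def split_secret(secret, n_shares, threshold):
        """Shamir secret sharing: any `threshold` of `n_shares` points recover `secret`."""
        coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]
        return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
                for x in range(1, n_shares + 1)]

    def recover_secret(shares):
        """Lagrange interpolation at x = 0 over the prime field."""
        secret = 0
        for xi, yi in shares:
            num = den = 1
            for xj, _ in shares:
                if xj != xi:
                    num = (num * -xj) % PRIME
                    den = (den * (xi - xj)) % PRIME
            secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
        return secret

    dht = {}                                    # stand-in for the Vuze DHT
    key = secrets.randbelow(2**126)             # fresh random encryption key
    for share in split_secret(key, n_shares=10, threshold=7):
        dht[os.urandom(20).hex()] = share       # store each share at a random index

    # Within the time window: fetch any seven shares and reconstruct the key.
    assert recover_secret(random.sample(list(dht.values()), 7)) == key

    dht.clear()   # after ~8 hours the DHT discards the shares and the key is gone

Using a quorum rather than requiring all ten shares presumably tolerates normal DHT churn, but it also means an attacker who archives most of the DHT does not need to capture every share.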

What is this Vuze DHT? It’s a worldwide peer-to-peer network, containing a million or so computers, that was set up by Vuze, a company that uses the BitTorrent protocol to distribute (licensed) video content. Vuze needs a giant data store for its own purposes, to help peers find the videos they want, and this data store happens to be open so that Vanish can use it. The million-computer extent of the Vuze data store was important, because it gave the Vanish designers a big haystack in which to hide their needles.

Vanish debuted on July 20 with a splashy New York Times article. Reading the article, Alex Halderman and I realized that some of our past thinking about how to extract information from large distributed data structures might be applied to attack Vanish. Alex’s student Scott Wolchok grabbed the project and started doing experiments to see how much information could be extracted from the Vuze DHT. If we could monitor Vuze and continuously record almost all of its contents, then we could build a Wayback Machine for Vuze that would let us decrypt VDOs that were supposedly expired, thereby defeating Vanish’s security guarantees.
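As a hedged sketch of that archiving idea (a simplified stand-in, not our actual measurement infrastructure), the code below records every store operation an observer manages to see, so a share can still be looked up from the archive after the live DHT has expired it.

    # Illustrative sketch of a "Wayback Machine" for a DHT (not the real crawler).
    # Every store operation the observer sees is archived with a timestamp, so
    # values the live DHT has since discarded can still be looked up later.
    import time

    class DHTArchive:
        def __init__(self):
            self.records = {}   # dht_key -> list of (timestamp, value)

        def observe_store(self, dht_key, value):
            """Called for each store operation observed (e.g. by many Sybil nodes)."""
            self.records.setdefault(dht_key, []).append((time.time(), value))

        def lookup(self, dht_key):
            """Every value ever seen at this key, even if the DHT has expired it."""
            return [value for _, value in self.records.get(dht_key, [])]

    archive = DHTArchive()
    archive.observe_store("a1b2c3", ("share", 1))
    # ... hours later, after the live DHT has dropped the entry ...
    print(archive.lookup("a1b2c3"))   # [('share', 1)]

If the archive captures at least a quorum of a VDO’s shares during its time window, the key, and hence the supposedly expired VDO, can be reconstructed afterwards.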

Scott’s experiments progressed rapidly, and by early August we were pretty sure that we were close to demonstrating a break of Vanish. The Vanish authors were due to present their work in a few days, at the Usenix Security conference in Montreal, and we hoped to demonstrate a break by then. The question was whether Scott’s already heroic sleep-deprived experimental odyssey would reach its destination in time.

We didn’t want to ambush the Vanish authors with our break, so we took them aside at the conference and told them about our preliminary results. This led to some interesting discussions with the Vanish team about the technical details of Vuze and Vanish, and about alternative designs that might better resist attacks. We agreed to keep them up to date on any new results, so they could address the issue in their talk.

As it turned out, we didn’t establish a break before the Vanish team’s conference presentation, so they did not have to modify their presentation much, and Scott finally got to catch up on his sleep. Later, we realized that evidence to establish a break had actually been in our experimental logs before the Vanish talk, but we hadn’t been clever enough to spot it at the time. Science is hard.

Some time later, I ran into my ex-student Brent Waters, who is now on the faculty at the University of Texas. I mentioned to Brent that Scott, Alex, and I had been studying attacks on Vanish and we thought we were pretty close to making an attack work. Amazingly, Brent and some Texas colleagues (Owen Hofmann, Christopher Rossbach, and Emmett Witchel) had also been studying Vanish and had independently devised attacks that were pretty similar to what Scott, Alex, and I had.

We decided that it made sense to join up with the Texas team, work together on finishing and testing the attacks, and then write a joint paper. Nadia Heninger at Princeton did some valuable modeling to help us understand our experimental results, so we added her to the team.

Today we are releasing our joint paper. It describes our attacks and demonstrates that the attacks do indeed defeat Vanish. We have a working system that can decrypt Vanishing data objects (made with the original version of Vanish) after they are supposedly unrecoverable.

Our paper also discusses what went wrong in the original Vanish design. The people who designed Vanish are smart and experienced, but they obviously made some kind of mistake in their original work that led them to believe that Vanish was secure — a belief that we now know is incorrect. Our paper talks about where we think the Vanish authors went wrong, and what security practitioners can learn from the Vanish experience so far.

Meanwhile, the Vanish authors went back to the drawing board and came up with a bunch of improvements to Vanish and Vuze that make our attacks much more expensive. They wrote their own paper about their experience with Vanish and their new modifications to it.

Where does this leave us?

For now, Vanish should be considered too risky to rely on. The standard for security is not “no currently demonstrated attacks”; it is “strong evidence that the system resists all reasonable attacks”. By updating Vanish to resist our attacks, the Vanish authors showed that their system is not a dead letter. But in my view they are still some distance from showing that Vanish is secure. Given the complexity of underlying technologies such as Vuze, I wouldn’t be surprised if more attacks turn out to be possible. The latest version of Vanish might turn out to be sound, or to be unsound, or the whole approach might turn out to be flawed. It’s too early to tell.

Vanish is an interesting approach to a real problem. Whether this approach will turn out to work is still an open question. It’s good to explore this question — and I’m glad that the Vanish authors and others are doing so. At this point, Vanish is of real scientific interest, but I wouldn’t rely on it to secure my data.

[Update (Sept. 30, 2009): I rewrote the paragraphs describing our discussions with the Vanish team at the conference. The original version may have given the wrong impression about our intentions.]