
2020 Workshop on Technology and Consumer Protection

Christo Wilson and I are pleased to announce that the Workshop on Technology and Consumer Protection (ConPro ’20) is returning for a fourth year, co-located with the IEEE Symposium on Security and Privacy in May 2020.

As in past years, ConPro seeks a diverse range of technical research with implications for consumer protection. Past talks have covered dating fraud, ad targeting, mobile app data practices, privacy policy readability, algorithmic fairness, social media phishing, unwanted calls, cryptocurrency security, and much more.

Unlike past years, ConPro 2020 will accept talk proposals for early-stage research ideas in addition to short papers. Do you have a new project or idea that you’d like to refine? Are you curious about which project directions could yield the greatest impact? Pitch a talk for ConPro, and get feedback and suggestions from its diverse, engaged audience.

Each year of ConPro, I’ve been heartened by the enthusiasm for research that can help improve consumer welfare. If this goal matters to you too, we hope you’ll submit a paper or talk proposal. We’re always excited to expand our community! The submission deadline is January 23, 2020.

The Third Workshop on Technology and Consumer Protection

Arvind Narayanan and I are pleased to announce that the Workshop on Technology and Consumer Protection (ConPro ’19) will return for a third year! The workshop will once again be co-located with the IEEE Symposium on Security and Privacy, occurring in May 2019.

ConPro is a forum for a diverse range of computer science research with consumer protection implications. Last year, papers covered topics ranging from online dating fraud to the readability of security guidance. Panelists and invited speakers explored topics from preventing caller-ID spoofing to protecting unique communities.

We see ConPro as a workshop in the classic sense, one that provides substantive feedback and new ideas. Presentations have sparked suggestions for follow-up work and collaboration opportunities. Attendees represent a wide range of research areas, spurring creative ideas and interesting conversation. For example, comments about crowdworker concerns this year led to a discussion of best practices for research that relies on crowdworkers.

Although our community has grown, we aim to keep discussion and feedback a central part of the workshop. Our friends in the legal community have had some success with larger events focused on feedback and discussion, such as PLSC. We plan to take lessons from those cases.

The success of ConPro in past years—amazing research, attendees, discussion, and program committees—makes us excited for next year. The call for papers lists some relevant topics, but any computer science research with consumer protection implications is welcome; just be sure those implications are clear. The submission deadline is January 23, 2019. We hope you’ll submit a paper and join us in San Francisco!

The Second Workshop on Technology and Consumer Protection

Arvind Narayanan and I are excited to announce that the Workshop on Technology and Consumer Protection (ConPro ’18) will return in May 2018, once again co-located with the IEEE Symposium on Security and Privacy.

The first ConPro brought together researchers from a wide range of disciplines, united by a shared goal of promoting consumer welfare through empirical computer science research. The topics ranged from potentially misleading online transactions to emerging biomedical technologies. Discussions were consistently insightful. For example, one talk explored the observed efficacy of various technical and non-technical civil interventions against online crime. Several—including a panel with technical and policy experts—considered steps that researchers can take to make their work more usable by policymakers, such as examining and documenting whether a company’s observed practices match its public statements.

We think the first workshop was a success. Participants were passionate about the social impact of their own research, and just as passionate in encouraging similarly thoughtful but dramatically different work. We aim to foster and build this engaged and supportive community.

As a result, we are thrilled to be organizing a second ConPro. Our interests lie wherever computer science intersects with consumer protection, including security, e-crime, algorithmic fairness, privacy, usability, and much more. Our stellar program committee reflects this range of interests. Check out the call for papers for more information. The submission deadline is January 23, 2018, and we look forward to reading this year’s great work!

New Workshop on Technology and Consumer Protection

[Joe Calandrino is a veteran of Freedom to Tinker and CITP. As longtime readers will remember, he did his Ph.D. here, advised by Ed Felten. He recently joined the FTC as research director of OTech, the Office of Technology Research and Investigation. Today we have an exciting announcement. — Arvind Narayanan.]

Arvind Narayanan and I are thrilled to announce a new Workshop on Technology and Consumer Protection (ConPro ’17) to be co-hosted with the IEEE Symposium on Security and Privacy (Oakland) in May 2017:

Advances in technology come with countless benefits for society, but these advances sometimes introduce new risks as well. Various characteristics of technology, including its increasing complexity, may present novel challenges in understanding its impact and addressing its risks. Regulatory agencies have broad jurisdiction to protect consumers against certain harmful practices (typically called “deceptive and unfair” practices in the United States), but sophisticated technical analysis may be necessary to assess practices, risks, and more. Moreover, consumer protection covers an incredibly broad range of issues, from substantiation of claims that a smartphone app provides advertised health benefits to the adequacy of practices for securing sensitive customer data.

The Workshop on Technology and Consumer Protection (ConPro ’17) will explore computer science topics with an impact on consumers. This workshop has a strong security and privacy slant, with an overall focus on ways in which computer science can prevent, detect, or address the potential for technology to deceive or unfairly harm consumers. Attendees will skew towards academic and industry researchers but will include researchers from government agencies with a consumer protection mission, including the Federal Trade Commission—the U.S. government’s primary consumer protection body. Research advances presented at the workshop may help improve the lives of consumers, and discussions at the event may help researchers understand how their work can best promote consumer welfare given laws and norms surrounding consumer protection.

We have an outstanding program committee representing an incredibly wide range of computer science disciplines—from security, privacy, and e-crime to usability and algorithmic fairness—and touching on fields across the social sciences. The workshop will be an opportunity for these different disciplinary perspectives to contribute to a shared goal. Our call for papers discusses relevant topics, and we encourage anyone conducting research in these areas to submit their work by the January 10 deadline.

Computer science research—and computer security research in particular—excels at advancing innovative technical strategies to mitigate potential negative effects of digital technologies on society, but measures beyond strictly technical fixes also exist to protect consumers. How can our research goals, methods, and tools best complement laws, regulations, and enforcement? We hope this workshop will provide an excellent opportunity for computer scientists to consider these questions and find even better ways for our field to serve society.

"You Might Also Like:" Privacy Risks of Collaborative Filtering

Ann Kilzer, Arvind Narayanan, Ed Felten, Vitaly Shmatikov, and I have released a new research paper detailing the privacy risks posed by collaborative filtering recommender systems. To examine the risk, we use public data available from Hunch, LibraryThing, Last.fm, and Amazon in addition to evaluating a synthetic system using data from the Netflix Prize dataset. The results demonstrate that temporal changes in recommendations can reveal purchases or other transactions of individual users.

To help users find items of interest, sites routinely recommend items similar to a given item. For example, product pages on Amazon contain a “Customers Who Bought This Item Also Bought” list. These recommendations are typically public, and they are the product of patterns learned from all users of the system. If customers often purchase both item A and item B, a collaborative filtering system will judge them to be highly similar. Most sites generate ordered lists of similar items for any given item, but some also provide numeric similarity scores.
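
To make the mechanism concrete, here is a minimal sketch of item-to-item collaborative filtering using cosine similarity over co-purchase data. The toy data and the choice of cosine similarity are our own illustrative assumptions; the sites we studied do not disclose their exact algorithms.

```python
# A minimal item-to-item collaborative filtering sketch. The cosine-similarity
# formula and the toy purchase data are illustrative assumptions; production
# systems use proprietary variants of this idea.
from collections import defaultdict
from math import sqrt

# Hypothetical (user, item) purchase pairs.
purchases = [
    ("alice", "A"), ("alice", "B"),
    ("bob", "A"), ("bob", "B"), ("bob", "C"),
    ("carol", "B"), ("carol", "C"),
]

# For each item, collect the set of users who bought it.
users_by_item = defaultdict(set)
for user, item in purchases:
    users_by_item[item].add(user)

def similarity(x, y):
    """Cosine similarity between the user sets of items x and y."""
    shared = len(users_by_item[x] & users_by_item[y])
    return shared / sqrt(len(users_by_item[x]) * len(users_by_item[y]))

def also_bought(item, k=10):
    """Ordered 'customers also bought' list for an item."""
    others = [o for o in users_by_item if o != item]
    return sorted(others, key=lambda o: similarity(item, o), reverse=True)[:k]

print(also_bought("A"))  # ['B', 'C']: B shares more buyers with A than C does
```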

Although item similarity is only indirectly related to individual transactions, we determined that temporal changes in item similarity lists or scores can reveal details of those transactions. If you’re a Mozart fan and you listen to a Justin Bieber song, this choice increases the perceived similarity between Justin Bieber and Mozart. Because the public lists and scores are derived from this perceived similarity, your action may visibly change them.

Suppose that an attacker knows some of your past purchases on a site: past item reviews, social networking profiles, and real-world interactions are all rich sources of this information. New purchases will affect the perceived similarity between the new items and your past purchases, possibly causing visible changes to the recommendations provided for your previously purchased items. We demonstrate that an attacker can leverage these observable changes to infer your purchases. These attacks are complicated by, among other things, the facts that many users interact with a system simultaneously and that updates do not immediately follow a transaction.
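
The following sketch shows one simple way an attacker might act on this observation; it is a simplified illustration under our own assumptions, not the paper’s full inference algorithm. Items that enter or climb the similarity lists of several of the victim’s known items at roughly the same time become candidates for new purchases.

```python
# A simplified illustration of the temporal inference idea (not the paper's
# exact algorithm): score each item by how many of the victim's known items
# saw it enter or climb their public similarity list between two snapshots.
from collections import Counter

def infer_candidates(known_items, lists_before, lists_after, threshold=2):
    """lists_before/lists_after map item -> ordered similarity list."""
    scores = Counter()
    for known in known_items:
        before = lists_before.get(known, [])
        after = lists_after.get(known, [])
        for pos, item in enumerate(after):
            old_pos = before.index(item) if item in before else None
            if old_pos is None or pos < old_pos:  # new entry, or moved up
                scores[item] += 1
    # An item that moves in many of the victim's lists at once is suspicious.
    return [item for item, hits in scores.most_common() if hits >= threshold]

# Hypothetical snapshots: "X" climbs in the lists of both known items.
before = {"A": ["B", "C", "X"], "B": ["A", "C"]}
after = {"A": ["B", "X", "C"], "B": ["A", "X", "C"]}
print(infer_candidates({"A", "B"}, before, after))  # ['X']
```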

To evaluate our attacks, we use data from Hunch, LibraryThing, Last.fm, and Amazon. Our goal is not to claim privacy flaws in these specific sites (in fact, we often use data voluntarily disclosed by their users to verify our inferences), but to demonstrate the general feasibility of inferring individual transactions from the outputs of collaborative filtering systems. Among their many differences, these sites vary dramatically in the information that they reveal. For example, Hunch reveals raw item-to-item correlation scores, but Amazon reveals only lists of similar items. In addition, we examine a simulated system created using the Netflix Prize dataset. Our paper outlines the experimental results.

While inference of an interest in Justin Bieber may be innocuous, inferences could expose anything from dissatisfaction with a job to health issues. Our attacks assume that a victim reveals some past transactions, but users may be willing to reveal certain transactions publicly while preferring to keep others private. Ultimately, users are best equipped to determine which transactions would be embarrassing or otherwise problematic. We demonstrate that the public outputs of recommender systems can reveal transactions without users’ knowledge or consent.

Unfortunately, existing privacy technologies appear inadequate here, failing to simultaneously guarantee acceptable recommendation quality and user privacy. Mitigation strategies are a rich area for future work, and we hope to work towards solutions with others in the community.
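
As a toy illustration of this tension (our own example; the paper does not prescribe a fix), consider perturbing similarity scores with Laplace noise in the spirit of differential privacy: enough noise to mask one user’s transaction also scrambles the rankings shown to everyone.

```python
# Toy illustration of the privacy/quality tension: Laplace-perturbed scores
# hide small temporal shifts but also reorder the recommendations users see.
import numpy as np

def noisy_top_k(scores, k=3, scale=0.0):
    """Rank items by score after Laplace perturbation.

    scores: dict mapping item -> true similarity score.
    Larger `scale` means more privacy but lower recommendation quality.
    """
    perturbed = {item: s + np.random.laplace(0.0, scale) if scale > 0 else s
                 for item, s in scores.items()}
    return sorted(perturbed, key=perturbed.get, reverse=True)[:k]

scores = {"B": 0.82, "C": 0.50, "D": 0.48, "E": 0.10}
print(noisy_top_k(scores))             # true ranking: ['B', 'C', 'D']
print(noisy_top_k(scores, scale=0.3))  # close items may swap or drop out
```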

This work also suggests a broader risk: any feature that adapts in response to potentially sensitive user actions may pose similar dangers. Unless sites explicitly consider the data exposed, such features may inadvertently leak details of the underlying actions.

Our paper contains additional details. This work was presented earlier today at the 2011 IEEE Symposium on Security and Privacy. Arvind has also blogged about this work.