June 21, 2018

Workshop on Technical Applications of Contextual Integrity

The theory of contextual integrity (CI) has inspired work across the legal, privacy, computer science, and HCI research communities. Recognizing common interests and common challenges, the time seemed ripe for a meeting to discuss what we have learned from projects using CI and how to move forward in leveraging CI to enhance privacy-preserving systems and policies. On December 11, 2017, the Center for Information Technology Policy hosted an inaugural workshop on Technical Applications of Contextual Integrity. The workshop gathered over twenty researchers from Princeton University, New York University, Cornell Tech, the University of Maryland, Data & Society, and AI Now to present their ongoing and completed projects, discuss and share ideas, and explore successes and challenges in using the CI framework. The meeting, which included faculty, postdocs, and graduate students, was kicked off with a welcome and introduction by Ed Felten, CITP Director.

The agenda comprised two main parts. In the first half of the workshop, representatives of the various projects gave short presentations on the status of their work, the challenges they encountered, and the lessons learned in the process. The second half included a planning session for a full-day event to take place in the spring, which would allow for a bigger discussion and exchange of ideas.

The workshop presentations touched on a wide variety of topics, including: ways of operationalizing CI, discovering the contextual norms behind children’s online activities, capturing users’ expectations of smart toys and smart-home devices, demonstrating how CI can be used to analyze legislation, applying CI to establish research ethics guidelines, and conceptualizing privacy within commons governance arrangements.

More specifically:

Yan Shvartzshnaider discussed the Verifiable and ACtionable Contextual Integrity Norms Engine (VACCINE), a framework for building adaptable and modular Data Leakage Prevention (DLP) systems.

Darakshan Mir discussed a community-based participatory framework for the discovery of contextual informational norms in small and vulnerable communities.

Sebastian Benthall shared key takeaways from a survey of the existing computer science literature that uses contextual integrity.

Paula Kift discussed how the theory of contextual integrity can be used to analyze the recently passed Cybersecurity Information Sharing Act (CISA), revealing some fundamental gaps in the way it conceptualizes privacy.

Ben Zevenbergen talked about his work on applying the theory of contextual integrity to help establish research ethics guidelines.

Madelyn Sanfilippo discussed conceptualizing privacy within a commons governance arrangement using the Governing Knowledge Commons (GKC) framework.

Priya Kumar presented recent work on using contextual integrity to identify gaps in children’s online privacy knowledge.

Sarah Varghese and Noah Apthorpe discussed their work on discovering privacy norms for IoT devices using contextual integrity.

The roundtable discussion covered a wide range of open questions, such as the limitations of CI as a theory, possible extensions, integration with other frameworks, conflicting interpretations of the CI parameters, promising research directions, and ideas for collaboration.

This was a first attempt to gauge how much interest there is from the wider research community in a CI-focused event, and we were overwhelmed by the incredible response! The participants expressed strong interest in the bigger event in Spring 2018 and put forward a number of suggestions for the format of the workshop. The initial idea is to organize the bigger workshop as a joint event with an established conference; another suggestion was to hold it as a hands-on workshop that brings together industry and academia. We are really excited about an event that will bring together CI-related research that is broad both academically and geographically, allowing for a much wider discussion.

The ultimate goal of this and future initiatives is to foster communication among the various communities of researchers and practitioners who use the theory of CI as a framework to reason about privacy and as a language for sharing ideas.

In the meantime, please check out the http://privaci.info website, which will serve as a central repository of news and up-to-date related work for the community. We will be updating it in the coming months.

We look forward to your feedback and suggestions. If you are interested in hearing about the Spring workshop or presenting your work, want to help, or have any suggestions, please get in touch!

Twitter: @privaci_way

Email:

LinkedIn reveals your personal email to your connections

[Huge thanks to Dillon Reisman, Arvind Narayanan, and Joanna Huey for providing great feedback on early drafts.]

LinkedIn makes the primary email address associated with an account visible to all direct connections, as well as to people who have your email address in their contact lists. By default, the primary email address is the one that was used to sign up for LinkedIn. While the primary address may be changed to another email in your account settings, there is no way to prevent your connections from visiting your profile and viewing whichever email you have chosen as primary. In addition, LinkedIn’s current data archive export feature allows users to download their connections’ email addresses in bulk. It seems that the archive export includes all email addresses associated with an account, not just the one designated as primary.

It appears that many of these addresses are personal, rather than professional. This post uses the contextual integrity (CI) privacy framework to consider whether the access given by LinkedIn violates the privacy norms of using a professional online social network.
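As a sketch of how such a CI analysis can be set up (our own illustration; the parameter choices are ours, not necessarily those of the full post), the flows described above can be written out as five-parameter CI tuples and inspected against the norms of the professional-networking context:

```python
# Each information flow is described by the five CI parameters:
# (sender, recipient, subject, attribute, transmission principle).
# The values below are illustrative paraphrases of the flows described above.
flows = [
    ("LinkedIn", "direct connections", "account holder",
     "primary email address", "visible by default on the profile"),
    ("LinkedIn", "connections using archive export", "account holder",
     "all associated email addresses", "downloadable in bulk"),
]

# A CI analysis asks, for each flow, whether it conforms to the entrenched
# informational norms of the professional-networking context.
for sender, recipient, subject, attribute, principle in flows:
    print(f"{sender} -> {recipient}: {subject}'s {attribute} ({principle})")
```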

Learning Privacy Expectations by Crowdsourcing Contextual Informational Norms

[This post reports on joint work with Schrasing Tong, Thomas Wies (NYU), Paula Kift (NYU), Helen Nissenbaum (NYU), Lakshminarayanan Subramanian (NYU), Prateek Mittal (Princeton) — Yan]

To appear in the proceedings of the Fourth AAAI Conference on Human Computation and Crowdsourcing (HCOMP 2016)

We would like to thank Joanna Huey for helpful comments and feedback.

Motivation

The advent of social apps, smartphones, and ubiquitous computing has brought a great transformation to our day-to-day lives. The incredible pace at which new and disruptive services continue to emerge challenges our perception of privacy. To keep pace with this rapidly evolving cyber reality, we need to devise agile methods and frameworks for developing privacy-preserving systems that align with users’ evolving privacy expectations.

Previous efforts [1,2,3] have tackled this under the assumption that privacy norms are provided by existing sources such as laws, privacy regulations, and legal precedents. They have focused on formally expressing privacy norms and devising a corresponding logic to enable automatic inconsistency checks and efficient enforcement.
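As a toy illustration of what such a formal encoding might look like (our own sketch, not the formalism used in [1,2,3]), a norm can be represented as the five CI parameters plus a verdict, and an observed flow can be checked against the encoded norms:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Flow:
    """An information flow described by the five CI parameters."""
    sender: str
    recipient: str
    subject: str     # whom the information is about
    attribute: str   # the type of information
    principle: str   # the transmission principle, e.g. "with consent"

@dataclass(frozen=True)
class Norm:
    """A flow description plus a verdict: allowed or disallowed."""
    flow: Flow
    allowed: bool

def check(flow: Flow, norms: list[Norm]) -> Optional[bool]:
    """Return True/False if some norm covers the flow, None if none applies."""
    for norm in norms:
        if norm.flow == flow:
            return norm.allowed
    return None  # the flow is not covered by any known norm

# Hypothetical FERPA-style norms about a school sharing student grades.
norms = [
    Norm(Flow("school", "parent", "student", "grades", "with consent"), True),
    Norm(Flow("school", "advertiser", "student", "grades", "any"), False),
]
print(check(Flow("school", "advertiser", "student", "grades", "any"), norms))  # False
```

An inconsistency check in this toy setting would amount to detecting two norms with the same flow description but contradictory verdicts.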

However, because many of the existing regulations and privacy handbooks were enacted well before the Internet revolution took place, they often lag behind and do not adequately reflect the information flows of modern systems. For example, the Family Educational Rights and Privacy Act (FERPA) was enacted in 1974, long before Facebook, Google, and many other online applications were used in an educational context. More recent legislation faces similar challenges, as novel services introduce new ways to exchange information and consequently shape new, previously unconsidered information flows that can change our collective perception of privacy.

Crowdsourcing Contextual Privacy Norms

Armed with the theory of contextual integrity (CI), our work explores ways to uncover societal norms by leveraging advances in crowdsourcing technology.

In our recent paper, we present a methodology that we believe can be used to extract a societal notion of privacy expectations. The results can be used to fine-tune existing privacy guidelines, as well as to gain a better perspective on users’ expectations of privacy.
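The crowdsourcing idea can be sketched as follows (a simplified illustration under our own assumptions; the parameter values and question wording here are hypothetical, not those used in the study): survey questions are generated by combining values of the CI parameters, and respondents rate the acceptability of each resulting information flow.

```python
from itertools import product

# Illustrative CI parameter values (hypothetical, for demonstration only).
senders = ["a fitness tracker", "a smart thermostat"]
attributes = ["heart rate", "home occupancy"]
recipients = ["the device manufacturer", "an insurance company"]
principles = ["if the user consents", "to improve the service"]

# Generate one acceptability question per combination of CI parameters.
questions = [
    f"Is it acceptable for {s} to share its owner's {a} with {r} {p}?"
    for s, a, r, p in product(senders, attributes, recipients, principles)
]

for q in questions[:3]:
    print(q)

# Acceptability ratings aggregated over many respondents approximate
# the contextual norm for each flow.
```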