April 19, 2024

Learning Privacy Expectations by Crowdsourcing Contextual Informational Norms

[This post reports on joint work with Schrasing Tong, Thomas Wies (NYU), Paula Kift (NYU), Helen Nissenbaum (NYU), Lakshminarayanan Subramanian (NYU), Prateek Mittal (Princeton) — Yan]

To appear in the proceedings of the Fourth AAAI Conference on Human Computation and Crowdsourcing (HCOMP 2016)

We would like to thank Joanna Huey for helpful comments and feedback.

Motivation

The advent of social apps, smartphones, and ubiquitous computing has transformed our day-to-day lives. The incredible pace at which new and disruptive services continue to emerge challenges our perception of privacy. To keep pace with this rapidly evolving cyber reality, we need agile methods and frameworks for developing privacy-preserving systems that align with users' evolving privacy expectations.

Previous efforts [1, 2, 3] have tackled this problem under the assumption that privacy norms are provided by existing sources such as law, privacy regulations, and legal precedents. They have focused on formally expressing privacy norms and devising a corresponding logic to enable automatic inconsistency checks and efficient enforcement.

However, because many existing regulations and privacy handbooks were enacted well before the Internet revolution, they often lag behind and do not adequately capture the information flows of modern systems. For example, the Family Educational Rights and Privacy Act (FERPA) was enacted in 1974, long before Facebook, Google, and many other online applications were used in an educational context. More recent legislation faces similar challenges, as novel services introduce new ways to exchange information and consequently shape new, unconsidered information flows that can change our collective perception of privacy.

Crowdsourcing Contextual Privacy Norms

Armed with the theory of Contextual Integrity (CI), our work explores ways to uncover societal norms by leveraging advances in crowdsourcing technology.

In our recent paper, we present a methodology that we believe can be used to extract a societal notion of privacy expectations. The results can be used to fine-tune existing privacy guidelines as well as to gain a better perspective on users' expectations of privacy.

CI defines privacy as a collection of norms (privacy rules) that reflect appropriate information flows between different actors. Norms capture who shares what, with whom, in what role, and under which conditions. For example, while you are comfortable sharing your medical information with your doctor, you might be less inclined to do so with your colleagues.

We use CI as a proxy for reasoning about privacy in the digital world and as a gateway to understanding, in a systematic way, how people perceive privacy. Crowdsourcing is well suited to this approach: we can ask hundreds of people how they feel about a particular information flow and map their input directly onto the CI parameters. We used a simple template to write Yes-or-No questions for our crowdsourcing participants:

“Is it acceptable for the [sender] to share the [subject’s] [attribute] with [recipient] [transmission principle]?”

For example:

“Is it acceptable for the student’s professor to share the student’s record of attendance with the department chair if the student is performing poorly?”

In our experiments, we leveraged Amazon’s Mechanical Turk (AMT) to ask 450 turkers over 1,400 such questions. Each question represents a specific contextual information flow that users can approve, disapprove, or mark as “Doesn’t Make Sense”; the last category can be used when 1) the sender is unlikely to have the information, 2) the recipient would already have the information, or 3) the question is ambiguous.
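As a rough illustration of how such questions can be generated mechanically from CI parameter values, here is a minimal sketch in Python; the parameter lists below are hypothetical stand-ins, not the actual survey vocabulary:

```python
from itertools import product

# Hypothetical CI parameter values for the education context; the real
# survey used a larger, hand-curated set of values.
senders = ["the student's professor", "the student's TA"]
attributes = ["record of attendance", "grades"]
recipients = ["the department chair", "the student's classmates"]
transmission_principles = [
    "with the requirement of confidentiality",
    "if the student is performing poorly",
    "with the student's consent",
]

TEMPLATE = ("Is it acceptable for {sender} to share the student's "
            "{attribute} with {recipient} {tp}?")

# Every combination of parameter values yields one Yes/No survey question,
# i.e., one candidate information flow for the crowd to evaluate.
questions = [
    TEMPLATE.format(sender=s, attribute=a, recipient=r, tp=tp)
    for s, a, r, tp in product(senders, attributes, recipients,
                               transmission_principles)
]

for q in questions[:3]:
    print(q)
```

Enumerating the cross product of parameter values is what makes the number of candidate flows grow quickly, which is why crowdsourcing many respondents is needed to cover them.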

Approximation of Users’ Privacy Expectations

In our evaluation, we show that by converting the answers into a CI-based privacy logic, we can effectively analyze the responses and detect the privacy norms that users are most likely to approve of and care about.

More specifically, we introduced three indicators that provide an estimate of users’ “acceptance” of the entrenched or enforced norms: the norm approval score, the user approval score and the divergence score.

We show that, using these indicators, we can identify norms that were approved or disapproved by a majority of users, while also pinpointing contentious norms that require further attention.

The norm approval score (NA) is the ratio of the number of users approving the norm (by providing a positive answer to the corresponding question) to the total number of responses to that norm. We can filter approved norms based on a chosen NA threshold, e.g., a simple majority (>50%) or a two-thirds majority (>67%).
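A minimal sketch of this computation, assuming responses are stored as a mapping from a norm identifier to the list of answers it received:

```python
# Assumed input format: {norm_id: ["approve", "disapprove", "doesnt_make_sense", ...]}
def norm_approval(responses):
    """NA = approving responses / total responses, per norm."""
    return {
        norm: answers.count("approve") / len(answers)
        for norm, answers in responses.items()
        if answers  # skip norms that received no responses
    }

def approved_norms(responses, threshold=0.5):
    """Norms whose NA exceeds the chosen threshold (e.g., 0.5 or 0.67)."""
    return {norm for norm, na in norm_approval(responses).items()
            if na > threshold}

# Tiny usage example with made-up data:
responses = {
    "norm_1": ["approve", "approve", "disapprove"],
    "norm_2": ["disapprove", "doesnt_make_sense", "disapprove"],
}
print(norm_approval(responses))        # {'norm_1': 0.666..., 'norm_2': 0.0}
print(approved_norms(responses, 0.5))  # {'norm_1'}
```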

The table below lists the five norms with the highest NA values and the five with the highest norm disapproval values (i.e., the ratio of the number of users disapproving the norm to the total number of responses).

CI parameters corresponding to top five approved and disapproved norms.

The numbers in the Transmission Principle (TP) column represent the following transmission principles: 1) with the requirement of confidentiality; 2) if the subject is performing poorly; 3) with a request from the subject; 4) with the subject’s knowledge; and 5) with the subject’s consent. We see, for example, that the surveyed community strongly approves an information flow in which a professor provides a graduate school with a student’s attendance record with the student’s permission. However, it strongly opposes a TA sharing a student’s grades with classmates if the student is performing poorly.

The user approval score (UA) is the ratio of the number of norms a user approved to the total number of norms she evaluated; it reflects each individual user’s record of approving norms. The divergence score (DS) compares individual preferences to the opinion of the overall community: it counts how many times the user’s preferences differed from the community preferences, where the community preference is defined by a chosen NA threshold. A lower DS therefore means higher agreement with the community as a whole.
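A sketch of both scores, assuming individual votes are stored as a mapping from (user, norm) to a Boolean, where True means the user approved the norm and False means she disapproved:

```python
def user_approval(votes, user):
    """UA = norms the user approved / norms the user evaluated."""
    answered = [vote for (u, _), vote in votes.items() if u == user]
    return sum(answered) / len(answered)

def divergence(votes, user, community_approved):
    """DS = number of norms on which the user's vote differs from the
    community decision; `community_approved` is the set of norms whose NA
    exceeds the chosen threshold."""
    return sum(
        1
        for (u, norm), vote in votes.items()
        if u == user and vote != (norm in community_approved)
    )
```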

The figure below plots DS vs. UA with the NA threshold set at 66%. Users with a high UA score also tend to have a higher DS, meaning that users who approved a high proportion of norms diverged more from the community consensus. This correlation can be explained by the fact that fewer norms clear a high NA threshold, so users with a high UA will, on average, differ more from the community.

User approval score vs. divergence score for each user, at the 66% NA threshold.

The next figure depicts total DS across all possible thresholds: for each NA threshold, we calculated the DS of each user and aggregated the scores, normalizing the final number by the number of users. We can use this plot to choose an appropriate NA threshold by picking the value that yields the lowest aggregate DS for the community; in this case, thresholds in the 40% to 60% range yield the lowest overall disagreement within the community.

Total DS across all possible NA thresholds, normalized by the number of users.
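The threshold selection described above can be sketched as follows, reusing the hypothetical `approved_norms` and `divergence` helpers from the earlier snippets:

```python
def aggregate_ds(responses, votes, users, threshold):
    """Total DS across all users at one NA threshold, normalized by the
    number of users."""
    approved = approved_norms(responses, threshold)
    return sum(divergence(votes, u, approved) for u in users) / len(users)

def best_threshold(responses, votes, users):
    """Pick the NA threshold (0%, 5%, ..., 100%) that minimizes the
    community's aggregate DS."""
    thresholds = [i / 100 for i in range(0, 101, 5)]
    return min(thresholds,
               key=lambda t: aggregate_ds(responses, votes, users, t))
```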

Verification of Extracted Rules

We then used formal verification methods to check the consistency of the crowdsourced privacy logic. We relied on automated theorem provers to detect potential logical inconsistencies in the rules, e.g., to check that approved information flows are not blocked by disapproved ones. More specifically, we encoded the resulting CI rules in a fragment of first-order logic that maps onto Effectively Propositional Logic (EPR) [4], where each CI rule is represented as a conjunction involving auxiliary predicates and (dis)equalities over the given CI parameters.
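To make the idea concrete, here is a toy consistency check in the same spirit, sketched with the Z3 SMT solver (the z3-solver package); the sorts, constants, and the `allow` predicate are our own illustrative assumptions, not the paper’s actual encoding:

```python
from z3 import DeclareSort, Consts, Function, BoolSort, Solver, Not, sat

# Uninterpreted sorts for the CI parameters (illustrative only).
Role = DeclareSort("Role")
Attribute = DeclareSort("Attribute")
TP = DeclareSort("TP")

professor, ta, grad_school, classmate = Consts(
    "professor ta grad_school classmate", Role)
attendance, grades = Consts("attendance grades", Attribute)
consent, poor_performance = Consts("consent poor_performance", TP)

# allow(sender, attribute, recipient, tp) holds iff the flow is permitted.
allow = Function("allow", Role, Attribute, Role, TP, BoolSort())

s = Solver()
# An approved norm: a professor may share attendance with a graduate school
# with the student's consent.
s.add(allow(professor, attendance, grad_school, consent))
# A disapproved norm: a TA must not share grades with classmates when the
# student is performing poorly.
s.add(Not(allow(ta, grades, classmate, poor_performance)))

# If the solver reports sat, the encoded rules admit a model and are mutually
# consistent; unsat would indicate a logical conflict among them.
print("consistent" if s.check() == sat else "inconsistent")
```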

Future Directions

This is a first step in a project that explores how modern technology can better match its practices to individual users’ privacy expectations and adopt privacy norms supported by the community as a whole. We hope that this framework will be useful in developing systems that incorporate CI principles and reflect individual and communal norms. Crowdsourcing offers an efficient way to seed such systems with initial data on community norms, and those norms can then be further refined and evolved through ongoing user feedback.