Archives for 2016

Learning Privacy Expectations by Crowdsourcing Contextual Informational Norms

[This post reports on joint work with Schrasing Tong, Thomas Wies (NYU), Paula Kift (NYU), Helen Nissenbaum (NYU), Lakshminarayanan Subramanian (NYU), Prateek Mittal (Princeton) — Yan]

To appear in the proceedings of the Fourth AAAI Conference on Human Computation and Crowdsourcing (HCOMP 2016)

We would like to thank Joanna Huey for helpful comments and feedback.

Motivation

The advent of social apps, smartphones and ubiquitous computing has brought a great transformation to our day-to-day life. The incredible pace at which new and disruptive services emerge challenges our perception of privacy. To keep pace with this rapidly evolving cyber reality, we need to devise agile methods and frameworks for developing privacy-preserving systems that align with evolving users' privacy expectations.

Previous efforts [1,2,3] have tackled this with the assumption that privacy norms are provided through existing sources such as law, privacy regulations and legal precedents. They have focused on formally expressing privacy norms and devising a corresponding logic to enable automatic inconsistency checks and efficient enforcement of the logic.

However, because many of the existing regulations and privacy handbooks were enacted well before the Internet revolution took place, they often lag behind and do not adequately reflect the information flows in modern systems. For example, the Family Educational Rights and Privacy Act (FERPA) was enacted in 1974, long before Facebook, Google and many other online applications were used in an educational context. More recent legislation faces similar challenges, as novel services introduce new ways to exchange information and consequently shape new, unconsidered information flows that can change our collective perception of privacy.

Crowdsourcing Contextual Privacy Norms

Armed with the theory of Contextual Integrity (CI), we are exploring ways to uncover societal privacy norms by leveraging advances in crowdsourcing technology.

In our recent paper, we present a methodology that we believe can be used to extract a societal notion of privacy expectations. The results can be used to fine-tune existing privacy guidelines as well as gain a better perspective on users' expectations of privacy.

Sign up now for the first workshop on Data and Algorithmic Transparency

I’m excited to announce that registration for the first workshop on Data and Algorithmic Transparency is now open. The workshop will take place at NYU on Nov 19. It convenes an emerging interdisciplinary community that seeks transparency and oversight of data-driven algorithmic systems through empirical research.

Despite the short notice of the workshop’s announcement (about six weeks before the submission deadline), we were pleasantly surprised by the number and quality of the submissions that we received. We ended up accepting 15 papers, more than we’d originally planned to, and still had to turn away good papers. The program includes both previously published work and original papers submitted to the workshop, and has just the kind of multidisciplinary mix we were looking for.

We settled on a format that’s different from the norm but probably familiar to many of you. We have five panels, one on each of the five main themes that emerged from the papers. The panels will begin with brief presentations, with the majority of the time devoted to in-depth discussions led by one or two commenters who will have read the papers beforehand and will engage with the authors. We welcome the audience to participate; to enable productive discussion, we encourage you to read or skim the papers beforehand. The previously published papers are available to read; the original papers will be made available in a few days.

I’m very grateful to everyone on our program committee for their hard work in reviewing and selecting papers. We received very positive feedback from authors on the quality of reviews of the original papers, and I was impressed by the work that the committee put in.

Finally, note that the workshop will take place at NYU rather than Columbia as originally announced. We learnt some lessons on the difficulty of finding optimal venues in New York City on a limited budget. Thanks to Solon Barocas and Augustin Chaintreau for their efforts in helping us find a suitable venue!

See you in three weeks, and don't forget the related and co-located DTL and FAT-ML events.

The AT&T Deal Is About the Data

Most of the mainstream media coverage of the proposed AT&T acquisition of Time Warner has missed an important risk. Much of the discussion has focused on the potential market power the combined entity would have to raise prices, limit choice or otherwise disadvantage consumers.

A primary motivation for the deal, however, as readers of Freedom to Tinker well understand, is the desire to access more and deeper data about consumer behavior. The motivation to combine companies is not monopolistic control, but rather a timely effort to become a player in the lucrative, $77 billion world of targeted digital advertising, now controlled by Google and Facebook.

Some media, especially those covering the FCC and FTC, have begun detailing the data privacy issues raised by the deal. Hopefully, mainstream media will soon follow suit.

Here are some links:

BNA: FCC Privacy Rules Could Hamper AT&T-Time Warner Data Mining

Inside Sources: Mega Mergers Like AT&T-Time Warner are Becoming a Problem for Privacy Regulation

Bloomberg: Privacy Rule Imperils Data Riches as AT&T Pursues Time Warner

Fortune: Media Companies Want U.S. to Force AT&T-Time Warner to Share Customer Data