June 24, 2017

NYC to Collect GPS Data on Car Service Passengers—Good Intentions Gone Awry or Something Else?

During the holiday season, New York City, through its Taxi & Limousine Commission (the “TLC”), proposed a new rule expanding data reporting obligations for car service platform companies, including Uber and Lyft. If the rule is adopted, car services would have to report the GPS coordinates of both passenger pick-up and drop-off locations to the city government. Under New York’s Freedom of Information Law, that data in bulk would also be subject to full public release.

This proposal is either a classic case of good intentions gone awry or a clandestine effort to track millions of car service riders while riding roughshod over passenger privacy.

The stated justification for the new rule is to combat “driver fatigue” and improve car service safety. While the goal is laudable and important, the proposed data collection does not match the stated purpose. Does anyone really think GPS coordinates measure a driver’s hours on the job, or are needed to calculate a trip’s duration? If the data collection were really designed to address driver fatigue, the relevant data would be shift length (driver start/stop times, ride durations, possibly trip origination), not pick-up/drop-off locations.

Reporting this GPS data to the city government, though, poses a real and serious threat to passenger privacy. Ride patterns can be mined to identify specific individuals and where they travel. In 2014, for instance, The Guardian reported that the TLC released “anonymized” taxi ride data that was readily reverse engineered to identify drivers. A 2015 paper shows that mobility patterns can also be used to infer gender and ethnicity. Numerous examples, from the Netflix release of subscriber film ratings that was reverse engineered to identify subscribers, to the re-identification of patients from supposedly anonymous health records, show that bulk data can often be linked back to specific individuals. Disturbingly, the TLC proposal makes only one innocuous reference to protecting “privacy and confidentiality,” yet includes neither privacy safeguards against identification of individual passengers from ride patterns nor any exemption from the New York State Freedom of Information Law.
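The 2014 incident shows how easily weak de-identification fails. The released data reportedly hid medallion and hack-license numbers behind unsalted MD5 hashes; because those identifiers come from small, publicly known formats, the entire space can be enumerated and the hashes reversed. A minimal sketch, using one real medallion format but otherwise illustrative details:

```python
import hashlib
import itertools
import string

# Sketch of reversing "anonymized" identifiers hashed without a salt.
# One real NYC medallion format is digit-letter-digit-digit (e.g. "5D22"),
# giving only 10 * 26 * 10 * 10 = 26,000 possible values.

def build_lookup_table():
    table = {}
    for d1, letter, d2, d3 in itertools.product(
            string.digits, string.ascii_uppercase, string.digits, string.digits):
        medallion = f"{d1}{letter}{d2}{d3}"
        table[hashlib.md5(medallion.encode()).hexdigest()] = medallion
    return table  # ~26,000 entries, computed in well under a second

lookup = build_lookup_table()

# Any hashed medallion in the released data maps straight back to plaintext:
anonymized_id = hashlib.md5(b"5D22").hexdigest()
print(lookup[anonymized_id])  # -> 5D22
```

Once the identifier is recovered, every trip taken under that medallion, and with it the travel patterns of the passengers involved, becomes linkable.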

If this weren’t worrisome enough for privacy, here’s the flashing red light. The TLC proposal mentions in passing that the data might be useful for “other enforcement actions.” But the examples given for “other enforcement actions” do not map to the data being collected. For instance, the proposal says the GPS data “will facilitate investigating passenger complaints or complaints from a pedestrian or other motorist about unsafe driving, including for incidents alleged to have occurred during or between trips, by allowing TLC to determine the location of a vehicle at a particular time.” Pick-up and drop-off locations alone cannot serve this goal; they reveal nothing about where a vehicle was between those two points. Likewise, the proposal says that “[b]y understanding when for-hire trips to and from the airports occur TLC can better target resources to ensure that passengers are picked up at the airport only by drivers authorized to do so.” This too is a strange justification for collecting individual passenger records for every ride throughout the city. The goal would be served far more effectively by collecting aggregate drop-off counts for the particular areas of concern to the TLC, as the sketch below illustrates.
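To make the aggregate alternative concrete, here is a hypothetical sketch of thresholded, zone-level reporting that would serve the stated airport-enforcement goal without any individual trip records. The field names and suppression threshold are our assumptions, not anything in the proposal.

```python
from collections import Counter

MIN_CELL_SIZE = 10  # suppress counts small enough to single out individuals

def aggregate_dropoffs(trips, zones_of_interest):
    """Return drop-off counts per zone of interest, suppressing small cells."""
    counts = Counter(t["dropoff_zone"] for t in trips
                     if t["dropoff_zone"] in zones_of_interest)
    return {zone: n for zone, n in counts.items() if n >= MIN_CELL_SIZE}

# The city receives two counts, not thousands of individual GPS records:
trips = [{"dropoff_zone": "JFK"}] * 5120 + [{"dropoff_zone": "LGA"}] * 3077
print(aggregate_dropoffs(trips, {"JFK", "LGA"}))  # {'JFK': 5120, 'LGA': 3077}
```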

This vague enforcement language and the mismatch between the proposal and its articulated goals strongly suggest that the rule may be a smokescreen for a new mass surveillance program targeting individuals traveling within New York City. Only two years ago, the NY Police Department was caught deploying a controversial program to track cars throughout the city using E-ZPass readers on traffic lights. This proposed rule looks like a surreptitious expansion of that program to car service passengers. If adopted, the TLC rule would provide a surveillance data trove that makes an end run around judicial oversight, subpoenas, and warrants.

It’s time to put the brakes on the city’s collection of trip location data for car service rides.

Privacy: A Personality, Not Property, Right

The European Court of Justice’s decision in Google v. Costeja González appears to compel search engines to remove links to certain impugned search results at the request of individual Europeans (and potentially others beyond Europe’s borders). What is more, Costeja may inadvertently and ironically have the effect of appointing American companies as private censors and arbiters of the European public interest.

Google and other private entities are thus saddled with the gargantuan task of determining how to “balance the need for transparency with the need to protect people’s identities,” and Costeja’s failure to provide adequate interpretive guidelines leads these companies toward ad hoc approaches. In addition, transparency and accountability are notoriously difficult to cultivate when balancing delicate constitutional values, such as freedom of expression and privacy. Indeed, even the constitutional courts and policy makers who typically perform this balancing struggle with it; consider the controversy associated with so-called “judicial activism.” The difficulty skyrockets when the balancers are instead inexperienced and reluctant corporate actors, who presumably lack the requisite public legitimacy for such matters, especially when dealing with foreign (non-U.S.) nationals.

The Costeja decision attempts to paper over the growing divergence between Anglo-American and continental approaches to privacy. Its poor results highlight internal normative contradictions within the continental tradition and illustrate the urgency of re-conceptualizing digital privacy in a more transystemically viable fashion.

Learning Privacy Expectations by Crowdsourcing Contextual Informational Norms

[This post reports on joint work with Schrasing Tong, Thomas Wies (NYU), Paula Kift (NYU), Helen Nissenbaum (NYU), Lakshminarayanan Subramanian (NYU), Prateek Mittal (Princeton) — Yan]

To appear in the proceedings of the Fourth AAAI Conference on Human Computation and Crowdsourcing (HCOMP 2016)

We would like to thank Joanna Huey for helpful comments and feedback.

Motivation

The advent of social apps, smartphones, and ubiquitous computing has transformed our day-to-day lives. The incredible pace at which new and disruptive services emerge challenges our perception of privacy. To keep pace with this rapidly evolving reality, we need agile methods and frameworks for developing privacy-preserving systems that align with users’ evolving privacy expectations.

Previous efforts [1,2,3] have tackled this problem under the assumption that privacy norms are provided by existing sources such as laws, privacy regulations, and legal precedents. They have focused on formally expressing privacy norms and devising a corresponding logic that enables automatic inconsistency checks and efficient enforcement.
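To make “formally expressing privacy norms” concrete, here is a minimal sketch of our own, not the encoding used in [1,2,3]: a norm is a pattern over contextual parameters (sender, receiver, data subject, attribute, transmission principle), and checking a concrete information flow against the norm base is mechanical.

```python
from dataclasses import dataclass, astuple

@dataclass(frozen=True)
class Flow:
    sender: str
    receiver: str
    subject: str
    attribute: str
    transmission_principle: str

@dataclass(frozen=True)
class Norm:
    allowed: bool
    pattern: Flow  # "*" in any field acts as a wildcard

    def matches(self, flow: Flow) -> bool:
        return all(p in ("*", f)
                   for p, f in zip(astuple(self.pattern), astuple(flow)))

def permitted(flow: Flow, norms: list[Norm]) -> bool:
    # A flow is permitted if some norm allows it and no matching norm forbids it.
    verdicts = [n.allowed for n in norms if n.matches(flow)]
    return bool(verdicts) and all(verdicts)

# FERPA-style educational context: grades may flow teacher -> parent with
# consent, but never teacher -> advertiser.
norms = [
    Norm(True,  Flow("teacher", "parent", "student", "grades", "with consent")),
    Norm(False, Flow("teacher", "advertiser", "*", "*", "*")),
]
print(permitted(Flow("teacher", "parent", "student", "grades", "with consent"), norms))      # True
print(permitted(Flow("teacher", "advertiser", "student", "grades", "with consent"), norms))  # False
```

Real systems in this line of work use richer temporal logics; the point here is only that norms, once formalized, support automated checking.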

However, because many existing regulations and privacy handbooks were enacted well before the Internet revolution, they often lag behind and do not adequately reflect how information actually flows in modern systems. For example, the Family Educational Rights and Privacy Act (FERPA) was enacted in 1974, long before Facebook, Google, and many other online applications were used in an educational context. More recent legislation faces similar challenges, as novel services introduce new ways to exchange information and consequently shape new, previously unconsidered information flows that can change our collective perception of privacy.

Crowdsourcing Contextual Privacy Norms

Armed with the theory of Contextual Integrity (CI), our work explores ways to uncover societal privacy norms by leveraging advances in crowdsourcing technology.

In our recent paper, we present a methodology that we believe can be used to extract a societal notion of privacy expectations. The results can be used to fine-tune existing privacy guidelines, as well as to gain a better perspective on users’ privacy expectations.
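As a rough illustration of how crowdsourcing can probe such norms, one can enumerate combinations of contextual parameters into survey questions and aggregate workers’ accept/reject judgments per combination. The parameter lists and question template below are illustrative assumptions, not the paper’s actual survey materials.

```python
from itertools import product

# Generate one survey question per combination of CI parameters; aggregated
# crowd responses then approximate a societal norm for each combination.
senders = ["a fitness-tracking app", "a school"]
attributes = ["location history", "grades"]
receivers = ["advertisers", "the student's parents"]
principles = ["with explicit consent", "if the data is anonymized"]

TEMPLATE = ("Is it acceptable for {s} to share a person's {a} "
            "with {r} {p}?")

questions = [TEMPLATE.format(s=s, a=a, r=r, p=p)
             for s, a, r, p in product(senders, attributes, receivers, principles)]

print(len(questions))   # 16 questions for 2 x 2 x 2 x 2 parameter values
print(questions[0])
```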