November 20, 2018

Disaster Information Flows: A Privacy Disaster?

By Madelyn R. Sanfilippo and Yan Shvartzshnaider

Last week, the test of the Presidential Alert system, which many objected to on partisan grounds, brought the Wireless Emergency Alert (WEA) system under renewed public scrutiny. WEA, which distributes mobile push notifications about emergencies, crises, natural disasters, and AMBER Alerts based on geographic relevance, became operational in 2012 through a public-private partnership between the FCC, FEMA, and various telecommunications companies. All customers of participating wireless providers are automatically enrolled, though it is possible to opt out of all but Presidential Alerts.

Presidential Alerts were just one of a set of updates designed to address recent events that have connected this trusted communication channel to the politics of fear around fake news and misinformation, such as the January 2018 false alarm, when a ballistic missile warning was mistakenly disseminated to the state of Hawaii as a mobile emergency alert. The resulting chaos and outrage led the FCC to revise protocols for system tests, distribution, and emergency alert formats, among other improvements.

In updating WEA, three priorities are addressed: (1) conducting routine “live code testing” to ensure function and minimize confusion; (2) incorporating additional and local participants into new and existing official channels; and (3) preventing misinformation and false alarms through authentication, a unified format, and overriding opt-out preferences when distributing Presidential Alerts. The objective is to provide trustworthy information during crises. Yet the specific changes have triggered concerns that allowing partisan officials to send alerts and mimicking format conventions like character limits undermine the stated objectives, by facilitating imitation for disinformation rather than engendering confidence in official alerts, as stated in a legal complaint about Presidential Alerts.

With the increased scrutiny around these changes, additional concerns about privacy and surveillance in disaster information communication practices have arisen. WEA structures information flows from multiple federal agencies, along with agency-specific apps, based on aggregated personally identifiable information, including geo-location data, all of which is governed by privacy regulations, including the Privacy Act of 1974, and by policies that focus on protecting against accidental or malicious disclosure of Personally Identifiable Information (PII) and Sensitive Personally Identifiable Information (SPII). Policies and regulations enumerate a list of trusted partners with which the data will be shared and from whom it may be gathered during emergencies.

The specific types of information that can be gathered about individuals by FEMA, despite the diversity of sources and contexts involved, are precisely defined, such as: name; social media account information; address or geo-location; job title; phone numbers, email addresses, or other contact information; date and time of post; and additional relevant details, including individuals’ physical condition.

Furthermore, the governance of information-sharing policies is less precise with regard to flows than to types. This is particularly important because expectations change drastically when disaster hits. Everyday information flows are governed by established norms within a particular context. Yet disasters change our priorities and norms, as our survival instincts kick in. Our norms can oscillate between two extremes: we do not want to be tracked in our daily activities, but during disasters, many feel comfortable broadcasting their locations and even medical conditions to everyone in the area in order to be found and survive. Previous research has shown that users tend to be more lenient about sharing information they normally wouldn’t with emergency services and other agencies involved in recovery efforts.

While governance restricts disclosure of personally identifiable information without users’ explicit consent, disclosures are exempt from explicit consent requirements if they fall under a “Routine Use,” such as “Disaster Missions.” The “Routine Use” exemption has broad implications, given its permissive definitions, including: allowing “information sharing with external partners to allow them to provide benefits and services” (Routine Use H); allowing “FEMA to share information with external partners so FEMA can learn what our external partners have already provided to disaster survivors,” as well as disclosing “applicant information to a 3rd party” in order “To prevent a duplication of benefits” (Routine Use I); and requiring 3rd parties to disclose personal information to FEMA relative to assistance provided.

The advent of the web along with the popularity of social media present a unique opportunity for agencies like FEMA, as they attempt to leverage new technologies and available user information to assist in preparation and recovery efforts. Increasingly, emergency agencies rely on disaster information flows from and to various opt-in apps–including Nextdoor, which allows calls for help when 911 is down; Life 360, which is helpful in tracking evacuations; and those from the Red Cross–during crises.

Additional categories of supplementary third party services and applications include:

Social networks: FEMA uses public data available on social media to support its operations. Twitter, Google, and Facebook are also investing further resources to deliver disaster-specific features for users and emergency services. Apple and Google have also promoted various other emergency and disaster response mobile apps during this ongoing hurricane season.

3rd party applications: Numerous diverse 3rd parties exist in this increasingly sociotechnical domain of FEMA partnerships relative to disaster communication, response, and recovery. Red Cross apps provide one of the most popular supplements to WEA notifications and FEMA apps, sharing critical response data with other emergency response organizations and agencies. Ostensibly this standardizes critical information flows between stakeholders. However, it highlights individual users’ privacy concessions and challenges the regulatory schema on the books, particularly given that users of many of these emergency apps who opt in to self-reporting are then tracked persistently until they opt out or uninstall, rather than until the end of the emergency.

IoT devices and drones: Increasingly, drones and IoT devices, in concert with third party applications, are being deployed to monitor disasters and complement the field services of FEMA and other agencies. The information flows between the stakeholders involved might not always align with users’ expectations.

In order to better balance pressing public safety concerns with long-term consequences, we need to understand information flows in practice around disasters. The following questions will be considered in our future work, structured through the contextual integrity framework:

What do disaster information flows look like in practice? There are many diverse official and third party channels. Despite good intentions, few have thoroughly considered whether the information flows they facilitate conform to users’ privacy expectations, or if not, whether they might lead to a privacy disaster, pun intended. This is especially critical in crisis situations, during which safety concerns tend to overshadow individuals’ privacy preferences.

How do rules-in-use about information flows between stakeholders compare to governance on the books? Loopholes exist in requiring partners of agencies like FEMA to fully disclose the information they communicate around disasters, including PII and SPII used to personalize communications. Despite the imposed restrictions on gathering personal information and routine uses, it is important to ask how broadly permissive social acceptance of reduced privacy under crisis conditions might conflict with actual understanding of information flows in practice.

Where do we store information and for how long? Temporal aspects of privacy and the persistent location-monitoring associated with emergency channels raise real questions about perceptions on appropriate information flows around disasters and emergencies.

Building Respectful Products using Crypto: Lea Kissner at CITP

How can we build respect into products and systems? What role does cryptography play in respectful design?

Speaking today at CITP is Lea Kissner (@LeaKissner), global lead of Privacy Technology at Google. Lea has spent the last 11 years designing and building security and privacy for Google projects from the grittiest layers of infrastructure to the shiniest user features — and cleaning up when something goes awry. She earned a Ph.D. in cryptography at Carnegie Mellon and a B.S. in CS from UC Berkeley.

As head of privacy at Google, Lea crafts privacy reviews, defines what privacy means at Google, and leads a team that supports privacy across the company. Her team also creates tools and infrastructure that manage privacy company-wide. If you’ve reviewed your privacy settings on Google, deleted your data, or shared any information with Google, Lea and her team have shaped your experience.

How does Lea think about privacy? When working to build products that respect users, Lea reminds us that it’s important for people to feel safe. This is a full-stack problem, all the way from humans and societies down to the level of hardware. Since societies vary widely, people have very different expectations around privacy and security, but not always in the ways you would anticipate. Lea talks about many assumptions that don’t apply globally: not all languages have a word for privacy, people don’t always have control over their physical devices, and they often operate in settings of conflict.

Lea next talks about the case of online harassment. She describes hate speech as a distributed denial of service attack, a way for attackers to suppress speech they don’t like. Many platforms enable this kind of harassment, allowing anyone to send messages to anyone and enabling mass harassment. Sometimes it’s possible for platforms to develop policies to manage these problems, but platforms are often unable to intervene in cases of conflicting values.

Lea tells us about one project she worked on during the Arab uprisings. When people’s faces appeared in videos of protests, those people sometimes faced substantial risks when videos became widely viewed. Lea’s team worked with YouTube to implement software that allowed content creators to blur the faces of people appearing in videos.

Next, Lea describes the ways that her team links research with practical benefits to people. Her team’s ethnographers study differences in situations and norms. These observations shape how her team designs systems. As they create more systems, they then create design patterns, then do user testing on those patterns. Research with humans is important at both ends of the work: when understanding the meaning and nature of the challenges, and when testing systems.

Finally, Lea argues that we need to make privacy and security easy for people to do. Right now, cryptography processes are hard for people to use, and hard for people to implement. Her team focuses on creating systems to minimize the number of things that humans need to do in order to stay secure.

How Cryptography Projects can Fail

Lea next tells us about common failures in privacy and security.

The first way to fail is to create your own cryptography system. That’s a dangerous thing to do, says Lea. Why do people do this? Some think they’re smart and know just enough to be dangerous. Some think it’s cool to roll their own. Some don’t understand how cryptography works. Sometimes it seems too expensive (in terms of computation and network) for them to use a third-party system. To make good crypto easier, Lea’s team has created Tink, a multi-language, cross-platform library that provides cryptographic APIs that are secure, easy to use correctly, and hard(er) to misuse.
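One classic roll-your-own pitfall can be sketched in a few lines of Python (our illustration, not an example from the talk): generating security tokens with a seedable, fully predictable PRNG instead of an OS-backed CSPRNG. Anyone who learns or guesses the seed can reproduce every token.

```python
import random
import secrets

def homemade_token(seed=None):
    # A "homemade" token built on random.Random (a Mersenne Twister PRNG).
    # It is fully deterministic: the seed reproduces every token it emits.
    rng = random.Random(seed)
    return "".join(rng.choice("0123456789abcdef") for _ in range(32))

def proper_token():
    # The boring, correct alternative: an OS-backed CSPRNG.
    return secrets.token_hex(16)

# An attacker who guesses the seed (say, a login timestamp) recovers the token.
assert homemade_token(seed=1530000000) == homemade_token(seed=1530000000)
```

Libraries like Tink exist precisely so that the safe choice (the equivalent of `secrets` here) is also the easy one.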

Lea urges us, “Do me a solid. Don’t give people excuses to roll their own crypto.”

Another area where people fail is in privacy-preserving computation. Lea tells us the story of a feature within Google where people wanted to send messages to someone whose phone number they have. Simple, right? Lea unpacks how complex such features can be, how easy it is to enable privacy breaches, and how expensive it can be to offer privacy. She describes a system that stores a large number of phone numbers associated with user IDs. By storing information with encrypted user IDs, it’s possible to enable people to manage their privacy. When Lea’s team estimated the impact of this privacy feature, they realized that it would require more than all of Google’s total computational power. They’re still working on that one.
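To make the shape of such a design concrete, here is a minimal stdlib sketch of one ingredient: storing only keyed pseudonyms of user IDs in the phone-number table, so the table alone doesn’t reveal who is reachable. This is a hypothetical toy (class and method names are ours), vastly simpler than anything Google would actually deploy.

```python
import hashlib
import hmac
import os

class ContactDirectory:
    """Toy sketch: map phone numbers to pseudonymous user IDs so the
    lookup table never stores raw IDs. Hypothetical, not Google's design."""

    def __init__(self):
        self._key = os.urandom(32)   # server-side secret for pseudonymization
        self._table = {}             # phone number -> pseudonymous ID

    def _pseudonymize(self, user_id: str) -> str:
        # Keyed hash: stable per user, but unlinkable without the key.
        return hmac.new(self._key, user_id.encode(), hashlib.sha256).hexdigest()

    def register(self, phone: str, user_id: str) -> None:
        self._table[phone] = self._pseudonymize(user_id)

    def reachable(self, phone: str, claimed_user_id: str) -> bool:
        # Verify a claimed ID against the table without storing raw IDs.
        return self._table.get(phone) == self._pseudonymize(claimed_user_id)
```

Even this toy hints at the cost problem Lea describes: every lookup requires a fresh keyed-hash computation rather than a plain table read.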

Privacy is easier to implement in structured analysis of databases such as advertising metrics, says Lea. Google has had more success adopting privacy practices in areas like advertising dashboards that don’t involve real-time user experiences.

Hardware failures are a major source of privacy and security failures. Lea tells us about the squirrels and sharks that have contributed to Amazon and Yahoo data failures by nibbling on cables. She then talks to us about sources of failures from software errors, as well as key errors. Lea tells us about Google’s Key Management Server, which knows about data objects and the keys that pertain to those objects. Keys in this service need to be accessed quickly and globally.

How do generalized key management servers fail? First, encrypted data compresses poorly. If a million people send each other the same image, a typical storage system can compress it efficiently, storing it only once. An encrypted storage system has to encrypt and store each image individually. Second, people who store information often want to index and search it. Exact matches are easy, but if you need to retrieve a range of things from a period of time, you need an index, and to create an index, the software needs to know what’s inside the encrypted data. Sharding, backing up, and caching data are also very difficult when information is encrypted.
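The compression point is easy to demonstrate with the stdlib, using random bytes as a stand-in for ciphertext (good ciphertext is indistinguishable from random data):

```python
import os
import zlib

# One 1 KB "image" stored by a thousand users.
image = b"\x89PNG" + b"\x00" * 1020

# Plaintext store: identical copies are redundant and compress away.
plaintext_store = image * 1000

# Encrypted store: each copy is ciphertext under a different key, which
# looks like random bytes (simulated here with os.urandom).
encrypted_store = b"".join(os.urandom(len(image)) for _ in range(1000))

ratio_plain = len(zlib.compress(plaintext_store)) / len(plaintext_store)
ratio_enc = len(zlib.compress(encrypted_store)) / len(encrypted_store)

assert ratio_plain < 0.01   # redundancy compresses to almost nothing
assert ratio_enc > 0.99     # random-looking ciphertext does not compress
```

The same storage bill that compression used to absorb comes back in full once every copy is encrypted under its own key.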

Next, Lea tells us about the problem of key rotation. People need to be able to change their keys in any usable encryption system. When rotating keys, for every single object, you need to decrypt it using the old key and then re-encrypt it using a new key. During this process, you can’t shut down an entire service in order to re-do the encryption. Within a large organization like Google, key rotation should be regular, but it needs to be coordinated across a large number of people. Lea’s team tried something like this, but it ended up being too complex for the company’s needs. They then moved key management to the storage level, where it is possible to manage and rotate keys independently of software teams.
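The storage-level idea can be sketched as follows: each object records the key version it was encrypted under, so rotation can proceed object by object while the service keeps running. This is our illustrative toy, not Google’s system, and the XOR "cipher" is a deliberate placeholder for real encryption.

```python
import os

def xor(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for real encryption. Never use XOR like this in practice.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class KeyedStore:
    """Sketch of storage-level key management with versioned keys."""

    def __init__(self):
        self.keys = {1: os.urandom(32)}   # key version -> key material
        self.current = 1
        self.objects = {}                 # name -> (key version, ciphertext)

    def put(self, name: str, plaintext: bytes) -> None:
        self.objects[name] = (self.current, xor(plaintext, self.keys[self.current]))

    def get(self, name: str) -> bytes:
        # Old key versions stay readable until rotation reaches the object.
        version, ciphertext = self.objects[name]
        return xor(ciphertext, self.keys[version])

    def rotate(self) -> None:
        # Mint a new key version, then re-encrypt object by object: decrypt
        # with each object's recorded version, re-encrypt with the new one.
        self.current += 1
        self.keys[self.current] = os.urandom(32)
        for name, (version, ciphertext) in list(self.objects.items()):
            plaintext = xor(ciphertext, self.keys[version])
            self.objects[name] = (self.current, xor(plaintext, self.keys[self.current]))
```

Because the per-object version tag makes every object self-describing, no application team has to coordinate its own rotation schedule.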

What do we learn from this? Lea tells us that cryptography is a tool for turning things into key management problems. She encourages us to avoid rolling our own cryptography, to design scalable privacy-preserving systems, to plan for key management up front, and to evaluate the success of a design in the full stack, working from humans all the way down to the hardware.

PrivaCI Challenge: Context Matters

by  Yan Shvartzshnaider and Marshini Chetty

In this post, we describe the Privacy through Contextual Integrity (PrivaCI) challenge that took place as part of the symposium on applications of contextual integrity, sponsored by the Center for Information Technology Policy and the Digital Life Initiative at Princeton University. We summarize the key takeaways from the ensuing discussion.

We welcome your feedback on any aspect of the challenge, as we seek to improve it as a pedagogical and methodological tool for eliciting discussion around privacy in a systematic and structured way.

See the Additional Material and Resources section below for links to learn more about the theory of Contextual Integrity and for the challenge instructions.

What Is the PrivaCI Challenge?

The PrivaCI challenge is designed to evaluate information technologies and discuss legitimate responses. It puts into practice the approach formulated by the theory of Contextual Integrity for providing “a rigorous, substantive account of factors determining when people will perceive new information technologies and systems as threats to privacy” (Nissenbaum, 2009).

At the symposium, we used the challenge to discuss and evaluate recent privacy-relevant events. The challenge included 8 teams and 4 contextual scenarios. Each team was presented with a use case/context scenario, which it then discussed using the theory of CI. This way, each contextual scenario was discussed by two teams.

 

PrivaCI challenge at the symposium on applications of Contextual Integrity

 

To facilitate a structured discussion, we asked the groups to fill in the following template:

Context Scenario: The template included a brief summary of a context scenario, which in our case was based on one of four privacy-related news stories, with a link to the original story.

Contextual Informational Norms and privacy expectations: During the discussion, the teams had to identify the relevant contextual informational norms and privacy expectations and provide examples of information flows violating those norms.

Examples of flows violating the norms: We asked that each flow be broken down into the relevant CI parameters, i.e., the actors involved (sender, recipient, subject), the attribute, and the transmission principle.

Possible solutions: Finally, the teams were asked to think of possible solutions to the problem that incorporate previous or ongoing research projects of their teammates.
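The five-parameter breakdown in the template can be captured in a tiny data structure. The following Python sketch (field names are ours, not part of the challenge materials) records one plausible violating flow from the Uber scenario:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class InformationFlow:
    """One information flow, expressed via the five CI parameters.
    Field names are illustrative, not taken from the challenge template."""
    sender: str
    recipient: str
    subject: str
    attribute: str
    transmission_principle: str

# Roughly how a group might record the Uber scenario's violating flow:
flow = InformationFlow(
    sender="Uber driver",
    recipient="public livestream viewers",
    subject="passenger",
    attribute="in-car video",
    transmission_principle="broadcast without knowledge or consent",
)
```

Writing flows down this explicitly makes it easy to compare a violating flow against a norm-conforming one by changing a single parameter at a time.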

What Were The Privacy-Related Scenarios Discussed?

We briefly summarize the four case studies/privacy-related scenarios and discuss some of the takeaways from the group discussions.

  1. St. Louis Uber driver has put a video of hundreds of his passengers online without letting them know.
    https://www.stltoday.com/news/local/metro/st-louis-uber-driver-has-put-video-of-hundreds-of/article_9060fd2f-f683-5321-8c67-ebba5559c753.html
  2. “Saint Louis University will put 2,300 Echo Dots in student residences. The school has unveiled plans to provide all 2,300 student residences on campus (both dorms and apartments).”
    https://www.engadget.com/2018/08/16/saint-louis-university-to-install-2300-echo-dots/
  3. Google tracks your movements, even if you set your settings to prevent it. https://apnews.com/828aefab64d4411bac257a07c1af0ecb
  4. Facebook asked large U.S. banks to share financial information on their customers.
    https://www.wsj.com/articles/facebook-to-banks-give-us-your-data-well-give-you-our-users-1533564049

 

Identifying Governing Norms

Much of the discussion focused on the relevant governing norms. For some groups, identifying norms was a relatively straightforward task. For example, in the Uber driver scenario, a group listed: “We do not expect to be filmed in private (?) spaces like Uber/Lyft vehicles.” In the Facebook case, one of the groups articulated a norm as “Financial information should only be shared between financial institutions and individuals, by default, AND Facebook is a social space where personal financial information is not shared.”

Other groups could not always identify norms that were violated. For example, in the “Google tracks your movements, like it or not” scenario, one of the teams could not formulate which norms were breached. Nevertheless, they felt uncomfortable with the overall notion of being tracked. Similarly, a group analyzing the scenario in which “Facebook has asked large U.S. banks to share detailed financial information about their customers” found the notion of an information flow traversing the social and financial spheres unacceptable. Nevertheless, they were not sure about the governing norms.

The ensuing discussion considered whether norms usually correspond to “best” practices or due diligence. It might even be possible for Facebook to claim that everything was legal and no laws were breached in the process, but that by itself does not mean no norm was violated.

We emphasized that norms are not always grounded in law. An information flow can still violate a norm despite being specified in a privacy policy, or even if it is considered legal or a “best” practice. Norms are influenced by many other factors. If we feel uneasy about an information flow, it probably violates some deeper norm that we might not be consciously aware of; this requires deeper analysis.

Norms and privacy expectations vary among members of groups and across groups

The challenge showcased that norms and privacy expectations may vary: members within a group, and across groups, had different privacy expectations for the same context scenario. For example, in the Uber scenario, some members of the group expected drivers to film their passengers for security purposes, while others did not expect to be filmed at all. In this case, we followed the CI decision heuristic, which “recommends assessing [alternative flows’] respective merits as a function of their meaning and significance in relation to the aims, purposes, and values of the context.” It was interesting to see how, by explaining the values served by a “violating” information flow, it was possible to get the members of the team to consider its validity in a certain context under very specific conditions. For example, it might be acceptable for a taxi driver to record their passengers onto a secure server (without Internet access) for safety reasons.

Contextual Integrity offers a framework to capture contextual informational norms

The challenge revealed additional aspects of the way groups approach the norm identification task. Two separate teams listed the following statements as norms: “Consistency between presentation of service and actual functioning” and “Privacy controls actually do something.” These outline general expectations and fall under the deceptive practices prong of the Federal Trade Commission (FTC) Act; nevertheless, these expectations are difficult to capture and assess using the CI framework because they are not articulated in terms of appropriate information flows. This might also be a limitation of the task itself: due to time limitations, the groups were asked to articulate the norms in general sentences, rather than to specify them using the five CI parameters.

Norm-violating information flows

Once norms were identified, the groups were asked to specify possible information flows that violate them. It was encouraging to see that most teams were able to articulate the violating information flows correctly, i.e., specifying the parameters that correspond to the flow. A team working on Google’s location-tracking scenario pinpointed the violating information flow: Google should not generate the flow without users’ awareness or consent, i.e., the flow can happen only under specific conditions. Similar violations were identified in other scenarios. For example, in the case where an Uber driver was streaming live videos of his passengers to an internet site, the changes in transmission principle and recipient likewise prompted a feeling of privacy violation among the group.

Finally, we asked the groups to propose possible solutions to mitigate the problem. Most of the solutions involved asking users for permission, notifying them, or designing an opt-in-only system. The most critical takeaway from the discussion was that norms and users’ privacy expectations evolve as new information flows are introduced; the merits of those flows need to be discussed in terms of the functions they serve.

Summary

The PrivaCI Challenge was a success! It served as an icebreaker for the participants to get to know each other a little better and also offered a structured way to brainstorm and discuss specific cases. The goal of the challenge exercise was to introduce a systematic way of using the CI framework to evaluate a system in a given scenario. We believe similar challenges can be used as a methodology to introduce and discuss Contextual Integrity in an educational setting, or even during the design stage of a product to reveal possible privacy violations.

Additional material and resources

You can access the challenge description and the template here: http://privaci.info/ci_symposium/challenge

The symposium program is available here.

To learn more about the theory of Contextual Integrity and how it differs from other existing privacy frameworks we recommend reading “Privacy in Context: Technology, Policy, and the Integrity of Social Life” by Helen Nissenbaum.

To participate in the discussion on CI, follow @privaci_way on Twitter.
Visit the website: http://privaci.info
Join the privaci_research mailing list.

References

Nissenbaum, H., 2009. Privacy in context: Technology, policy, and the integrity of social life. Stanford University Press.