February 22, 2018

(Mis)conceptions About the Impact of Surveillance

Does surveillance impact behavior? Or is its effect, if real, only temporary or trivial? Government surveillance is back in the news thanks to the so-called “Nunes memo”, making this a perfect time to examine new research on the impact of surveillance. This includes my own recent work: my doctoral research at the Oxford Internet Institute, University of Oxford, examined “chilling effects” online, that is, how online surveillance and other regulatory activities may impact, chill, or deter people’s activities online.

Though the controversy surrounding the Nunes memo critiquing FBI surveillance under the Foreign Intelligence Surveillance Act (FISA) is primarily political, it takes place against the backdrop of the wider debate about Congressional reauthorization of FISA’s Section 702, which allows the U.S. Government to intercept and collect emails, phone records, and other communications of foreigners residing abroad, without a warrant. On that count, civil society groups have expressed concerns about the impact of government surveillance like that available under FISA, including “chilling effects” on rights and freedoms. Indeed, civil liberties and rights activists have long argued, and surveillance experts like David Lyon have long explained, that surveillance and similar threats can have these corrosive impacts.

Yet, skepticism about such claims is common and persistent. As Kaminski and Witnov recently noted, many “evince skepticism over the effects of surveillance”, with deep disagreements over the “effects of surveillance” on “intellectual queries” and “development”. But why? The answer is complicated, but it likely lies partly in the present (thin) state of research on these issues, and partly in common conceptions, and misconceptions, about surveillance and its impact on people and broader society.

Skepticism and assumptions about impact
Skepticism about surveillance impacts like chilling effects is, as noted, persistent, with commentators like Stanford Law’s David Sklansky insisting there is “little empirical support” for chilling effects associated with surveillance, or Leslie Kendrick, of UVA Law, labeling the evidence supporting such claims “flimsy” and calling for more systematic research on this point. Part of the problem is precisely this: the impact of surveillance—both mass and targeted forms—is difficult to document, measure, and explore, especially chilling effects or self-censorship. This is because demonstrating self-censorship or chill requires showing a counterfactual state of affairs: that a person would have said or done something but for some surveillance threat or awareness.

But another challenge, just as important to address, concerns common assumptions and perceptions as to what surveillance impact or chilling effects might look like. Here, members of the general public and experts, judges, and lawyers alike often assume or expect surveillance to have an obvious, apparent, and pervasive impact on our most fundamental democratic rights and freedoms—like clear suppression of political speech or the right to peaceful assembly.

A great example of this assumption, and of the skepticism it breeds about whether surveillance may promote self-censorship or have broader societal chilling effects, comes from University of Chicago Law’s Eric Posner. Posner, a leading legal scholar who also incorporates empirical methods in his work, conveys his skepticism about the “threat” posed by National Security Agency (NSA) surveillance in a New York Times “Room for Debate” discussion, writing:

This brings me to another valuable point you made, which is that when people believe that the government exercises surveillance, they become reluctant to exercise democratic freedoms. This is a textbook objection to surveillance, I agree, but it also is another objection that I would place under “theoretical” rather than real.  Is there any evidence that over the 12 years, during the flowering of the so-called surveillance state, Americans have become less politically active? More worried about government suppression of dissent? Less willing to listen to opposing voices? All the evidence points in the opposite direction… It is hard to think of another period so full of robust political debate since the late 1960s—another era of government surveillance.

For Posner, the mere existence of “robust” political debate and activities in society is compelling evidence against claims about surveillance chill.

Similarly, Sklansky argues not only that there is “little empirical support” for the claim that surveillance would “chill independent thought, robust debate, personal growth, and intimate friendship”—what he terms “the stultification thesis”—but, like Posner, he finds persuasive evidence against the claim “all around us”. He cites, for example, the widespread “sharing of personal information” online (which presumably would not happen if surveillance were having a dampening effect); how employer monitoring has not deterred employee emailing, nor freedom of information laws deterred “intra-governmental communications”; and how young people, the “digital natives” who have grown up with the internet, social media, and surveillance, are far from stultified and conforming but arguably even more personally expressive and experimental than previous generations. In light of all that, Sklansky dismisses surveillance chill as simply not “worth worrying about”.

I sometimes call this the “Orwell effect”: the common assumption, likely thanks to the immense impact Orwell’s classic novel 1984 has had on popular culture, that surveillance will have a dystopian societal impact, with widespread suppression of personal sharing, expression, and political dissent. When Posner and Sklansky (and others who share these common expectations) do not see these more obvious and far-reaching impacts, they discount the more subtle and less apparent effects that may, over the long term, be just as concerning for democratic rights and freedoms. Of course, theorists and scholars like Daniel Solove have long interrogated and critiqued Orwell’s impact on our understanding of privacy, and Sklansky is himself wary of Orwell’s influence, so it is no surprise that Orwell’s work also shapes common beliefs and conceptions about the impact of surveillance. That influence is compounded by the earlier noted lack of systematic empirical research providing more grounded insights and understanding.

This is not only an academic issue. Government surveillance powers and practices are often justified with reference to national security concerns and threats like terrorism, as this House brief on the FISA re-authorization illustrates. If concerns about chilling effects and other negative impacts of surveillance are minimized or discounted based on misconceptions or thin empirical grounding, then challenging surveillance powers and their expansion becomes much more difficult, with concrete implications for rights and freedoms.

So, the challenge of documenting, exploring, and understanding the impact of surveillance is really two-fold. The first part is one of research methodology and design: how to document and measure an impact that is, at bottom, counterfactual. The second concerns common assumptions and perceptions as to what surveillance chilling effects might look like, with even experts like Posner and Sklansky assuming that widespread speech suppression and conformity would be the telltale signs.

New research, new insights
Today, new systematic empirical research on the impact of surveillance is being done, with several recent studies documenting surveillance chilling effects in different contexts, including studies by Stoycheff [1], Marthews and Tucker [2], as well as my own recent research. This includes an empirical legal study [3] on how the Snowden revelations about NSA surveillance impacted Wikipedia use—which received extensive media coverage in the U.S. and internationally—and a more recent study [4], which I wrote about recently in Slate, that examined, among other things, how state and corporate surveillance impact or “chill” certain people or groups differently. Much of this work was not possible before, as it draws on new forms of data being made available to researchers and on insights gleaned from analyzing public leaks and disclosures concerning surveillance, like the Snowden revelations.
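To give a sense of the method behind studies like the Wikipedia one: such work typically compares activity levels before and after an exogenous shock, like the June 2013 Snowden revelations, using an interrupted time-series (segmented regression) design. The sketch below is a minimal, hypothetical illustration of that design using simulated data; it is not the actual code or data from any of the studies cited here.

```python
# Minimal interrupted time-series sketch with simulated data (not any
# study's actual data): monthly page views with a drop after an event.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
months = np.arange(48)                  # 48 months of observations
post = (months >= 24).astype(float)     # indicator: months after the event
# Simulated series: upward trend, then an immediate level drop of ~15
views = 100 + 0.5 * months - 15 * post + rng.normal(0, 3, months.size)

# Segmented regression: intercept, pre-trend, level change, trend change
X = sm.add_constant(np.column_stack([months, post, post * (months - 24)]))
fit = sm.OLS(views, X).fit()
print(fit.params)  # the coefficient on `post` estimates the immediate drop
```

The key inference is counterfactual: the pre-event trend projects what activity would have looked like absent the shock, and the “level change” coefficient captures the chill.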

The story these and other new studies tell when it comes to the impact of surveillance is more complicated and subtle, suggesting the common assumptions of Posner and Sklansky are actually misconceptions. Though more subtle, these impacts are no less concerning and corrosive to democratic rights and freedoms, a point consistent with the work of surveillance studies theorists like David Lyon[5] and warnings from researchers at places like the Citizen Lab[6], Berkman Klein Center[7], and here at the CITP[8].  In subsequent posts, I will discuss these studies more fully, to paint a broader picture of surveillance effects today and, in light of increasingly sophisticated targeting and emerging automation technologies, tomorrow. Stay tuned.

* Jonathon Penney is a Research Affiliate of Princeton’s CITP, a Research Fellow at the Citizen Lab, located at the University of Toronto’s Munk School of Global Affairs, and teaches law as an Assistant Professor at Dalhousie University. He is also a research collaborator with CivilServant at the MIT Media Lab. Find him on Twitter at @jon_penney.

[1] Stoycheff, E. (2016). Under Surveillance: Examining Facebook’s Spiral of Silence Effects in the Wake of NSA Internet Monitoring. Journalism & Mass Communication Quarterly. doi: 10.1177/1077699016630255

[2] Marthews, A., & Tucker, C. (2014). Government Surveillance and Internet Search Behavior. MIT Sloan Working Paper No. 14380.

[3] Penney, J. (2016). Chilling Effects: Online Surveillance and Wikipedia Use. Berkeley Tech. L.J., 31, 117-182.

[4] Penney, J. (2017). Internet surveillance, regulation, and chilling effects online: A comparative case study. Internet Policy Review, forthcoming.

[5] See for example: Lyon, D. (2015). Surveillance After Snowden. Cambridge, MA: Polity Press; Lyon, D. (2006). Theorizing surveillance: The panopticon and beyond. Cullompton, Devon: Willan Publishing; Lyon, D. (2003). Surveillance After September 11. Cambridge, MA: Polity. See also Marx, G.T. (2002). What’s New About the ‘New Surveillance’? Classifying for Change and Continuity. Surveillance & Society, 1(1), 9-29; Graham, S. & Wood, D. (2003). Digitising Surveillance: Categorisation, Space, Inequality. Critical Social Policy, 23(2), 227-248.

[6] See for example, recent works: Parsons, C., Israel, T., Deibert, R., Gill, L., and Robinson, B. (2018). Citizen Lab and CIPPIC Release Analysis of the Communications Security Establishment Act. Citizen Lab Research Brief No. 104, January 2018; Parsons, C. (2015). Beyond Privacy: Articulating the Broader Harms of Pervasive Mass Surveillance. Media and Communication, 3(3), 1-11; Deibert, R. (2015). The Geopolitics of Cyberspace After Snowden. Current History, 114(768), 9-15; Deibert, R. (2013). Black Code: Inside the Battle for Cyberspace. Toronto: McClelland & Stewart.

[7] See for example, recent work on the Surveillance Project, Berkman Klein Center for Internet and Society, Harvard University.

[8] See for example, recent work: Su, J., Shukla, A., Goel, S., & Narayanan, A. (2017). De-anonymizing Web Browsing Data with Social Networks. World Wide Web Conference 2017; Zeide, E. (2017). The Structural Consequences of Big Data-Driven Education. Big Data, 5(2), 164-172, https://doi.org/10.1089/big.2016.0061; MacKinnon, R. (2012). Consent of the Networked: The Worldwide Struggle for Internet Freedom. New York: Basic Books; Narayanan, A. & Shmatikov, V. (2009). De-anonymizing Social Networks. IEEE Symposium on Security and Privacy. See also multiple previous Freedom to Tinker posts discussing related research and issues.

 

How Data Science and Open Science are Transforming Research Ethics: Edward Freeland at CITP

How are data science and the open science movement transforming how researchers manage research ethics? And how are these changes influencing public trust in social research?

 

I’m here at the Center for IT Policy to hear a talk by Edward P. Freeland. Edward is the associate director of the Princeton University Survey Research Center and a lecturer at the Woodrow Wilson School of Public and International Affairs. Edward has been a member of Princeton’s Institutional Review Board since 2005 and currently serves as chair.

Edward starts out by telling us about his family’s annual Christmas card. Every year, his family loses track of a few people, and he ends up having to track someone down. For several years, they sent the card intended for Ed’s wife’s cousin Billy to an address in Hartford, CT, but it turned out the address belonged not to cousin Billy but to a retired neurosurgeon. To resolve this problem this year, Edward and his wife entered more information about their family members into an app. Along the way, he learned just how much information about people is available on the internet. While technology makes it possible to keep track of family members more easily, some of that data might be more than people want to be known.

How does this relate to research ethics? Edward tells us about the principles that currently shape research ethics in the United States. These principles come from the 1978 Belmont Report, which was prompted in part by the Tuskegee Syphilis Study, a horrifying medical study that ran for forty years. In the US, university research must now be guided by respect for persons, beneficence, and justice.

In practice, what do university ethics boards (IRBs) care about? Edward and his colleagues compiled the issues that ethics boards attend to into a single slide.

When it comes to privacy, what do university ethics boards care about? Federal regulations focus on any disclosure of human subjects’ responses outside of the research and the risk such disclosure would expose people to. In practice, ethics boards expect researchers to adopt procedural safeguards around who can access data and how it is protected.

In the past, studies would basically conclude once the researchers published the research. But the practice of research has been changing. Advocates of open science have worked to reduce fraud, prevent the burying of unexpected results, enhance funder and taxpayer impact, strengthen the integrity of scientific work, open research to crowdsourcing and citizen science, and collaborate in new ways. Edward tells us about the Open Science Collaboration, which tried in 2015 to replicate a hundred studies from across psychology and often failed to do so. Now others are asking similar questions in other fields, including cancer research.

In just a few years, the Center for Open Science has helped many researchers and journals pre-register and publish the details of their research. Other organizations are developing similar initiatives, such as clinicaltrials.gov.

Many in the open science movement suggest that researchers archive and share data, even after submitting a manuscript. Some people use a data sharing agreement to protect data used by others. Others prepare data files from their research for public use. But publishing data introduces privacy risks for participants in research. While HIPAA covers medical data in the US, there aren’t comparable authoritative norms or guidelines around sharing other research data.

Many people turn to anonymization as a way to protect the information of people who participate in research. But does it really work? The landscape of data re-identification is changing from year to year, but the consensus is that anonymization doesn’t tend to work. As Matt Salganik points out in his book Bit By Bit, we should assume that all data are potentially identifiable and potentially sensitive. Where might we need to be concerned about potential problems?

  • People are sometimes recruited to join survey panels where they answer many questions over the years. Because this data is high-dimensional, it may be very easy to re-identify people.
  • Distributed anonymous workforces like Amazon Mechanical Turk also represent a privacy risk. The ID codes aren’t anonymous: you can google people’s IDs and find their comments on various Amazon products.
  • Re-identification attacks, which draw together data from many sources to find someone, are becoming more common. (A toy example of such a linkage attack is sketched below.)
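To make the linkage-attack idea concrete, here is a minimal sketch in Python. All names and records are made up for illustration; the point is only that joining an “anonymized” release with public auxiliary data on a few quasi-identifiers can restore identities.

```python
# Toy linkage attack: re-identifying an "anonymized" survey by joining
# on quasi-identifiers. All data is hypothetical, for illustration only.
import pandas as pd

# "Anonymized" survey release: names removed, quasi-identifiers kept.
survey = pd.DataFrame({
    "zip": ["08540", "08540", "10027"],
    "birth_year": [1971, 1984, 1971],
    "sex": ["F", "M", "F"],
    "sensitive_answer": ["yes", "no", "yes"],
})

# Public auxiliary data (e.g., a voter roll) with names attached.
voters = pd.DataFrame({
    "name": ["Alice Smith", "Bob Jones", "Carol Lee"],
    "zip": ["08540", "08540", "10027"],
    "birth_year": [1971, 1984, 1971],
    "sex": ["F", "M", "F"],
})

# Joining on the quasi-identifiers links names to sensitive answers.
reidentified = survey.merge(voters, on=["zip", "birth_year", "sex"])
print(reidentified[["name", "sensitive_answer"]])
```

The more dimensions a dataset has, the more likely each combination of attributes is unique, which is why high-dimensional panel data is especially risky.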

Public Confidence in Science

How we treat people’s data affects public confidence in science: not only how people interpret what we learn, but also their willingness to participate in research. Edward tells us that survey response rates have been dropping, even for surveys conducted by the government. American society has always had a fringe movement of people who resist government data collection. If those people gain access to the levers of power, they may reduce the government’s willingness to collect data that could inform the public on important issues.

Edward tells us that very few people expect their data to be kept private and secure, according to research by Pew. When combined with declining trust in institutions, concerns about privacy may be one reason that fewer people are responding to surveys.

At the same time, many people are organizing to resist surveying by the US government. Some political and activist groups have been filming their interactions with survey collectors, harassing them, and claiming that researchers or the government have secret agendas. As researchers try to uphold public trust by doing trustworthy, beneficial research, we need to be aware of the social and political forces that influence how people think about research.

Workshop on Technical Applications of Contextual Integrity

The theory of contextual integrity (CI) has inspired work across the legal, privacy, computer science, and HCI research communities. Recognizing common interests and common challenges, the time seemed ripe for a meeting to discuss what we have learned from projects using CI and how to leverage CI to enhance privacy-preserving systems and policies. On December 11, 2017, the Center for Information Technology Policy hosted an inaugural workshop on Technical Applications of Contextual Integrity. The workshop gathered over twenty researchers from Princeton University, New York University, Cornell Tech, University of Maryland, Data & Society, and AI Now to present their ongoing and completed projects, discuss and share ideas, and explore successes and challenges when using the CI framework. The meeting, which included faculty, postdocs, and graduate students, was kicked off with a welcome and introduction by Ed Felten, CITP Director.

The agenda comprised two main parts. In the first half of the workshop, representatives of various projects gave short presentations on the status of their work, the challenges they encountered, and the lessons learned in the process. The second half was a planning session for a full-day event in the spring that would allow for a bigger discussion and exchange of ideas.

The workshop presentations touched on a wide variety of topics, including: ways of operationalizing CI, discovering the contextual norms behind children’s online activities, capturing users’ expectations of smart toys and smart-home devices, demonstrating how CI can be used to analyze regulation, applying CI to establish research ethics guidelines, and conceptualizing privacy within commons governance arrangements.

More specifically:

Yan Shvartzshnaider discussed the Verifiable and ACtionable Contextual Integrity Norms Engine (VACCINE), a framework for building adaptable and modular Data Leakage Prevention (DLP) systems. (A toy sketch of CI-style norm checking appears after this list.)

Darakshan Mir discussed a community-based participatory framework for discovering contextual informational norms in small and vulnerable communities.

Sebastian Benthall shared key takeaways from a survey of the existing computer science literature that uses contextual integrity.

Paula Kift discussed how the theory of contextual integrity can be used to analyze the recently passed Cybersecurity Information Sharing Act (CISA), revealing some fundamental gaps in the way it conceptualizes privacy.

Ben Zevenbergen talked about his work applying the theory of contextual integrity to help establish research ethics guidelines.

Madelyn Sanfilippo discussed conceptualizing privacy within commons governance arrangements using the Governing Knowledge Commons (GKC) framework.

Priya Kumar presented recent work using contextual integrity to identify gaps in children’s online privacy knowledge.

Sarah Varghese and Noah Apthorpe discussed their work on discovering privacy norms for IoT devices using contextual integrity.
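As a concrete illustration of the kind of reasoning several of these projects automate, here is a minimal, hypothetical sketch of checking an information flow against CI-style norms. The data structures and names are my own illustration, not the actual VACCINE implementation or any presenter’s API; what it shows is CI’s five-part description of a flow: sender, recipient, information subject, attribute, and transmission principle.

```python
# Minimal sketch of CI-style norm checking (hypothetical, illustrative only).
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    sender: str
    recipient: str
    subject: str
    attribute: str
    transmission_principle: str

# Example norms for an education context; "*" in a norm matches anything.
NORMS = [
    Flow("teacher", "parent", "student", "grades", "with-consent"),
    Flow("school", "district", "student", "attendance", "required-by-law"),
]

FIELDS = ("sender", "recipient", "subject", "attribute",
          "transmission_principle")

def matches(norm: Flow, flow: Flow) -> bool:
    # A norm matches when every parameter is a wildcard or equal.
    return all(getattr(norm, f) in ("*", getattr(flow, f)) for f in FIELDS)

def permitted(flow: Flow) -> bool:
    # A flow is permitted only if some norm explicitly allows it.
    return any(matches(n, flow) for n in NORMS)

ok = Flow("teacher", "parent", "student", "grades", "with-consent")
leak = Flow("teacher", "advertiser", "student", "grades", "sold")
print(permitted(ok))    # True: consistent with the context's norms
print(permitted(leak))  # False: violates contextual integrity
```

A DLP system built in this spirit would evaluate observed flows against such norm sets and flag or block the violations.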

The roundtable discussion covered a wide range of open questions, such as the limitations of CI as a theory, possible extensions, integration with other frameworks, conflicting interpretations of the CI parameters, possible research directions, and interesting collaboration ideas.

This was a first attempt to gauge how much interest there is from the wider research community in a CI-focused event, and we were overwhelmed with the incredible response! The participants expressed strong interest in the bigger event in Spring 2018 and put forward a number of suggestions for its format. One idea is to organize the bigger workshop as a joint event with an established conference; another suggestion was to run it as a hands-on workshop that brings together industry and academia. We are excited about an event that will bring together a large sample of CI-related research, both academically and geographically, allowing a much broader discussion.

The ultimate goal of this and other future initiatives is to foster communication between the various communities of researchers and practitioners using the theory of CI as a framework to reason about privacy and a language for sharing of ideas.

In the meantime, please check out the http://privaci.info website, which will serve as a central repository of news and up-to-date related work for the community. We will be updating it in the coming months.

We look forward to your feedback and suggestions. If you’re interested in hearing about the Spring workshop or presenting your work, want to help, or have any suggestions, please get in touch!

Twitter: @privaci_way

Email: