October 16, 2019

Judge Declares Some PACER Fees Illegal but Does Not Go Far Enough

Five years ago, in a post called “Making Excuses for Fees on Electronic Public Records,” I described my attempts to persuade the federal Judiciary to stop charging for access to their web-based system, PACER (“Public Access to Court Electronic Records”). Nearly every search, page view, and PDF download from the system incurs a fee ranging from 10 cents to $3 (or, in some cases, much more). I chronicled the many excuses that the courts have provided for charging what amounts to $150 million in fees every year for something that should—by all reasonable accounts—not cost much to provide.

I thought the courts were violating the law. I suggested that someone file suit. Two years later, the good folks at Gupta Wessler did (in partnership with Motley Rice). Yesterday, Judge Huvelle of the US District Court for the District of Columbia agreed—in part. You can read her opinion here, and see all documents in the case here. Under her ruling, approximately $200 million will likely be returned to people who paid PACER fees from 2010 to 2016. This is good, but not good enough.

It also does not address the larger constitutional issues that I raise in my forthcoming paper, “The Price of Ignorance: The Constitutional Cost of Fees for Access to Electronic Public Court Records.”

Judge Huvelle is a good and fair judge. She rejected the reasoning of both the plaintiffs and the defendants (the Judiciary). Instead, she substituted her own analysis. Unfortunately, her analysis was both legally and technically flawed. Under her ruling, PACER fee-payers will not recover another $750 million (or so) of fees that I think are unlawful. The rest of this post explains why, and what might be next.


(Mis)conceptions About the Impact of Surveillance

Does surveillance impact behavior? Or is its effect, if real, only temporary or trivial? Government surveillance is back in the news thanks to the so-called “Nunes memo”, making this a perfect time to examine new research on the impact of surveillance. This includes my own recent work: my doctoral research at the Oxford Internet Institute, University of Oxford, examined “chilling effects” online, that is, how online surveillance and other regulatory activities may impact, chill, or deter people’s activities online.

Though the controversy surrounding the Nunes memo critiquing FBI surveillance under the Foreign Intelligence Surveillance Act (FISA) is primarily political, it takes place against the backdrop of the wider debate about Congressional reauthorization of FISA’s Section 702, which allows the U.S. Government to intercept and collect emails, phone records, and other communications of foreigners residing abroad, without a warrant. On that count, civil society groups have expressed concerns about the impact of government surveillance like that available under FISA, including “chilling effects” on rights and freedoms. Indeed, civil liberties and rights activists have long argued, and surveillance experts like David Lyon have long explained, that surveillance and similar threats can have these corrosive impacts.

Yet, skepticism about such claims is common and persistent. As Kaminski and Witnov recently noted, many “evince skepticism over the effects of surveillance”, with deep disagreements over the “effects of surveillance” on “intellectual queries” and “development”. But why? The answer is complicated, but it likely lies in the present (thin) state of research on these issues, as well as in common conceptions, and misconceptions, about surveillance and its impact on people and broader society.

Skepticism and assumptions about impact
Skepticism about surveillance impacts like chilling effects is, as noted, persistent, with commentators like Stanford Law’s David Sklansky insisting there is “little empirical support” for chilling effects associated with surveillance, or Leslie Kendrick, of UVA Law, labeling the evidence supporting such claims “flimsy” and calling for more systematic research on point. Part of the problem is precisely this: the impact of surveillance—both mass and targeted forms—is difficult to document, measure, and explore, especially chilling effects or self-censorship. This is because demonstrating self-censorship or chill requires showing a counterfactual state of affairs: that a person would have said or done something but for some surveillance threat or awareness.

But another challenge, just as important to address, concerns common assumptions and perceptions as to what surveillance impact or chilling effects might look like. Here, members of the general public as well as experts, judges, and lawyers often assume or expect surveillance to have an obvious, apparent, and pervasive impact on our most fundamental democratic rights and freedoms—like clear suppression of political speech or the right to peaceful assembly.

A great example of this assumption, and the skepticism it breeds about whether surveillance may promote self-censorship or have broader societal chilling effects, comes from University of Chicago Law’s Eric Posner. Posner, a leading legal scholar who also incorporates empirical methods in his work, conveys his skepticism about the “threat” posed by National Security Agency (NSA) surveillance in a New York Times “Room for Debate” discussion, writing:

This brings me to another valuable point you made, which is that when people believe that the government exercises surveillance, they become reluctant to exercise democratic freedoms. This is a textbook objection to surveillance, I agree, but it also is another objection that I would place under “theoretical” rather than real.  Is there any evidence that over the 12 years, during the flowering of the so-called surveillance state, Americans have become less politically active? More worried about government suppression of dissent? Less willing to listen to opposing voices? All the evidence points in the opposite direction… It is hard to think of another period so full of robust political debate since the late 1960s—another era of government surveillance.

For Posner, the mere existence of “robust” political debate and activities in society is compelling evidence against claims about surveillance chill.

Similarly, Sklansky argues not only that there is “little empirical support” for the claim that surveillance would “chill independent thought, robust debate, personal growth, and intimate friendship”—what he terms “the stultification thesis”—but, like Posner, he finds persuasive evidence against the claim “all around us”. He cites, for example, the widespread “sharing of personal information” online (which presumably would not happen if surveillance were having a dampening effect); how employer monitoring has not deterred employee emailing, nor have freedom of information laws deterred “intra-governmental communications”; and how young people, the “digital natives” who have grown up with the internet, social media, and surveillance, are far from stultified and conforming but are arguably even more personally expressive and experimental than previous generations. In light of all that, Sklansky dismisses surveillance chill as simply not “worth worrying about”.

I sometimes call this the “Orwell effect”—the common assumption, likely thanks to the immense impact Orwell’s classic novel 1984 has had on popular culture, that surveillance will have a dystopian societal impact, with widespread suppression of personal sharing, expression, and political dissent. When Posner and Sklansky (and others who share these common expectations) do not see these more obvious and far-reaching impacts, they discount the more subtle and less apparent effects that may, over the long term, be just as concerning for democratic rights and freedoms. Of course, theorists and scholars like Daniel Solove have long interrogated and critiqued Orwell’s impact on our understanding of privacy, and Sklansky is himself wary of Orwell’s influence, so it is no surprise that the novel also shapes common beliefs and conceptions about the impact of surveillance. That influence is compounded by the earlier noted lack of systematic empirical research providing more grounded insights and understanding.

This is not only an academic issue. Government surveillance powers and practices are often justified with reference to national security concerns and threats like terrorism, as this House brief on the FISA re-authorization illustrates. If concerns about chilling effects associated with surveillance, and other negative impacts, are minimized or discounted based on misconceptions or thin empirical grounding, then challenging surveillance powers and their expansion becomes much more difficult, with real and concrete implications for rights and freedoms.

So, the challenge for documenting, exploring, and understanding the impact of surveillance is really two-fold. The first part is one of research methodology and design: designing research that can document impacts which are inherently hard to measure. The second concerns common assumptions and perceptions as to what surveillance chilling effects might look like—with even experts like Posner and Sklansky expecting widespread speech suppression and conformity as the signature of surveillance.

New research, new insights
Today, new systematic empirical research on the impact of surveillance is being done, with several recent studies having documented surveillance chilling effects in different contexts, including studies by Stoycheff [1] and Marthews and Tucker [2], as well as my own recent research. This includes an empirical legal study [3] on how the Snowden revelations about NSA surveillance impacted Wikipedia use—which received extensive media coverage in the U.S. and internationally—and a more recent study [4], which I wrote about recently in Slate, that examined, among other things, how state and corporate surveillance impact or “chill” certain people or groups differently. Much of this new work was not possible until recently, as it is based on new forms of data being made available to researchers and on insights gleaned from analyzing public leaks and disclosures concerning surveillance, like the Snowden revelations.

The story these and other new studies tell when it comes to the impact of surveillance is more complicated and subtle, suggesting the common assumptions of Posner and Sklansky are actually misconceptions. Though more subtle, these impacts are no less concerning and corrosive to democratic rights and freedoms, a point consistent with the work of surveillance studies theorists like David Lyon[5] and warnings from researchers at places like the Citizen Lab[6], Berkman Klein Center[7], and here at the CITP[8].  In subsequent posts, I will discuss these studies more fully, to paint a broader picture of surveillance effects today and, in light of increasingly sophisticated targeting and emerging automation technologies, tomorrow. Stay tuned.

* Jonathon Penney is a Research Affiliate of Princeton’s CITP, a Research Fellow at the Citizen Lab, located at the University of Toronto’s Munk School of Global Affairs, and teaches law as an Assistant Professor at Dalhousie University. He is also a research collaborator with CivilServant at the MIT Media Lab. Find him on Twitter at @jon_penney.

[1] Stoycheff, E. (2016). Under Surveillance: Examining Facebook’s Spiral of Silence Effects in the Wake of NSA Internet Monitoring. Journalism & Mass Communication Quarterly. doi: 10.1177/1077699016630255

[2] Marthews, A., & Tucker, C. (2014). Government Surveillance and Internet Search Behavior. MIT Sloan Working Paper No. 14380.

[3] Penney, J. (2016). Chilling Effects: Online Surveillance and Wikipedia Use. Berkeley Tech. L.J., 31, 117-182.

[4] Penney, J. (2017). Internet surveillance, regulation, and chilling effects online: A comparative case study. Internet Policy Review, forthcoming

[5] See for example: Lyon, D. (2015). Surveillance After Snowden. Cambridge, MA: Polity Press; Lyon, D. (2006). Theorizing surveillance: The panopticon and beyond. Cullompton, Devon: Willan Publishing; Lyon, D. (2003). Surveillance After September 11. Cambridge, MA: Polity. See also Marx, G.T., (2002). What’s New About the ‘New Surveillance’? Classifying for Change and Continuity. Surveillance & Society, 1(1), pp. 9-29;  Graham, S. & D. Wood. (2003). Digitising Surveillance: Categorisation, Space, Inequality, Critical Social Policy, 23(2): 227-248.

[6] See for example, recent works: Parsons, C., Israel, T., Deibert, R., Gill, L., and Robinson, B. (2018). Citizen Lab and CIPPIC Release Analysis of the Communications Security Establishment Act. Citizen Lab Research Brief No. 104, January 2018; Parsons, C. (2015). Beyond Privacy: Articulating the Broader Harms of Pervasive Mass Surveillance. Media and Communication, 3(3), 1-11; Deibert, R. (2015). The Geopolitics of Cyberspace After Snowden. Current History, 114(768), 9-15; Deibert, R. (2013). Black Code: Inside the Battle for Cyberspace. Toronto: McClelland & Stewart.

[7] See for example, recent work on the Surveillance Project, Berkman Klein Center for Internet and Society, Harvard University.

[8] See for example, recent work: Su, J., Shukla, A., Goel, S., & Narayanan, A. (2017). De-anonymizing Web Browsing Data with Social Networks. World Wide Web Conference 2017; Zeide, E. (2017). The Structural Consequences of Big Data-Driven Education. Big Data, 5(2), 164-172, https://doi.org/10.1089/big.2016.0061; MacKinnon, R. (2012). Consent of the Networked: The Worldwide Struggle for Internet Freedom. New York: Basic Books; Narayanan, A. & Shmatikov, V. (2009). See also multiple previous Freedom to Tinker posts discussing this research and related issues.


On Encryption, Archiving, and Accountability

“As Elites Switch to Texting, Watchdogs Fear Loss of Accountability,” says a headline in today’s New York Times. The story describes a rising concern among rule enforcers and compliance officers:

Secure messaging apps like WhatsApp, Signal and Confide are making inroads among lawmakers, corporate executives and other prominent communicators. Spooked by surveillance and wary of being exposed by hackers, they are switching from phone calls and emails to apps that allow them to send encrypted and self-destructing texts. These apps have obvious benefits, but their use is causing problems in heavily regulated industries, where careful record-keeping is standard procedure.

Among those “industries” is the government, where laws often require that officials’ work-related communications be retained, archived, and available to the public under the Freedom of Information Act. The move to secure messaging apps frustrates these goals.

The switch to more secure messaging is happening, and for good reason: old-school messages are increasingly vulnerable to compromise–the DNC and the Clinton campaign are among the many organizations that have paid a price for underestimating these risks.

The tradeoffs here are real. But this is not just a case of choosing between insecure-and-compliant or secure-and-noncompliant. The new secure apps have three properties that differ from old-school email: they encrypt messages end-to-end from the sender to the receiver; they sometimes delete messages quickly after they are transmitted and read; and they are set up and controlled by the end user rather than the employer.

If the concern is lack of archiving, then the last property–user control of the account, rather than employer control–is the main problem. And of course that has been a persistent problem even with email. Public officials using their personal email accounts for public business is typically not allowed (and when it happens by accident, messages are supposed to be forwarded to official accounts so they will be archived), but unreported use of personal accounts has been all too common.

Much of the reporting on this issue (but not the Times article) makes the mistake of conflating the personal-account problem with the fact that these apps use encryption. There is nothing about end-to-end encryption of data in transit that is inconsistent with archiving. The app could record messages and then upload them to an archive–with this upload also protected by end-to-end encryption as a best practice.

The second property of these apps–deleting messages shortly after use–has more complicated security implications. Again, the message becoming unavailable to the user shortly after use need not conflict with archiving. The message could be uploaded securely to an archive before deleting it from the endpoint device.
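To make this concrete, here is a minimal sketch, in Python, of an endpoint that encrypts each outgoing message end-to-end for the recipient while also encrypting a second copy under a separate archive key before discarding the local plaintext. The XOR-with-keystream cipher and the `ArchivingClient` class are illustrative stand-ins I made up for this post (a real app would use an audited cipher such as AES-GCM, or a protocol like Signal's), not any actual messaging API:

```python
import os
import hashlib
from dataclasses import dataclass, field

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream built from SHA-256 in counter mode -- a stand-in for a
    # real cipher, NOT suitable for production use.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(16)
    stream = _keystream(key, nonce, len(plaintext))
    return nonce + bytes(a ^ b for a, b in zip(plaintext, stream))

def decrypt(key: bytes, ciphertext: bytes) -> bytes:
    nonce, body = ciphertext[:16], ciphertext[16:]
    stream = _keystream(key, nonce, len(body))
    return bytes(a ^ b for a, b in zip(body, stream))

@dataclass
class ArchivingClient:
    """An endpoint that sends end-to-end encrypted messages and deposits a
    separately encrypted copy with the archive before deleting locally."""
    recipient_key: bytes
    archive_key: bytes
    archive_store: list = field(default_factory=list)  # only ciphertext lands here
    outbox: list = field(default_factory=list)

    def send(self, plaintext: bytes) -> bytes:
        wire = encrypt(self.recipient_key, plaintext)          # end-to-end copy
        self.archive_store.append(encrypt(self.archive_key, plaintext))
        self.outbox.append(wire)
        # "Self-destruct": no plaintext is retained on the endpoint after send.
        return wire

recipient_key, archive_key = os.urandom(32), os.urandom(32)
client = ArchivingClient(recipient_key, archive_key)
wire = client.send(b"meet at noon")
```

The point of the sketch is the key separation: only the recipient can read the wire copy, only the archive's custodian can read the archived copy, and the sending device keeps neither plaintext, so compromising the endpoint later reveals nothing.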

You might ask why the user should lose access to a message when that message is still stored in an archive. But this makes some sense as a security precaution. Most compromises of communications happen through the user’s access, for example because an attacker can get the user’s login credentials by phishing. Taking away the user’s access, while retaining access in a more carefully guarded archive, is a reasonable security precaution for sensitive messages.

But of course the archive still poses a security risk. Although an archive ought to be more carefully protected than a user account would be, the archive is also a big, high-value target for attackers. The decision to create an archive should not be taken lightly, but it may be justified if the need for accountability is strong enough and the communications are not overly sensitive.

The upshot of all of this is that the most modern, secure approaches to secure communication are not entirely incompatible with the kind of accountability needed for government and some other users. Accountable versions of these types of services could be created. These would be less secure than the current versions, but more secure than old-school communications. The barriers to creating these are institutional, not technical.