September 28, 2022

Judge Declares Some PACER Fees Illegal but Does Not Go Far Enough

Five years ago, in a post called “Making Excuses for Fees on Electronic Public Records,” I described my attempts to persuade the federal Judiciary to stop charging for access to their web-based system, PACER (“Public Access to Court Electronic Records”). Nearly every search, page view, and PDF download from the system incurs a fee ranging from 10 cents to $3 (or, in some cases, much more). I chronicled the many excuses that the courts have provided for charging what amounts to $150 million in fees every year for something that should—by all reasonable accounts—not cost much to provide.
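
To make the fee schedule concrete, here is a minimal sketch of the per-document charge as it is usually described: ten cents per page, capped at $3.00 for most documents, with the cap generally said not to apply to items such as transcripts and name-search reports (which is how individual charges can climb well past $3.00). The numbers below are illustrative assumptions, not an official or complete fee schedule.

# Illustrative sketch of PACER's per-document billing as commonly described:
# $0.10 per page, capped at $3.00 for most documents. The cap is generally
# said not to apply to items like transcripts or name-search reports, which
# is how a single charge can run well past $3.00. These rates are assumptions
# for illustration, not an official fee schedule.

def pacer_document_fee(pages: int, capped: bool = True) -> float:
    """Return the fee in dollars for viewing one document or report."""
    PER_PAGE = 0.10
    CAP = 3.00
    fee = pages * PER_PAGE
    return min(fee, CAP) if capped else fee

print(pacer_document_fee(8))                   # 0.8  (short filing)
print(pacer_document_fee(150))                 # 3.0  (long filing, cap applies)
print(pacer_document_fee(150, capped=False))   # 15.0 (e.g., an uncapped report)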

I thought the courts were violating the law. I suggested that someone file suit. Two years later, the good folks at Gupta/Wessler did (in partnership with Motley Rice). Yesterday, Judge Huvelle of the US District Court for the District of Columbia agreed—in part. You can read her opinion here, and see all documents in the case here. Under her ruling, approximately $200 million will likely be returned to people who paid PACER fees from 2010 to 2016. This is good, but not good enough.

It also does not address the larger constitutional issues that I raise in my forthcoming paper, “The Price of Ignorance: The Constitutional Cost of Fees for Access to Electronic Public Court Records.”

Judge Huvelle is a good and fair judge. She rejected the reasoning of both the plaintiffs and the defendants (the Judiciary). Instead, she substituted her own analysis. Unfortunately, her analysis was both legally and technically flawed. Under her ruling, PACER fee-payers will not recover another $750 million (or so) of fees that I think are unlawful. The rest of this post explains why, and what might be next.


(Mis)conceptions About the Impact of Surveillance

Does surveillance impact behavior? Or is its effect, if real, only temporary or trivial? Government surveillance is back in the news thanks to the so-called “Nunes memo”, making this a perfect time to examine new research on the impact of surveillance. That includes my own recent work: my doctoral research at the Oxford Internet Institute, University of Oxford, examined “chilling effects” online, that is, how online surveillance and other regulatory activities may impact, chill, or deter people’s activities online.

Though the controversy surrounding the Nunes memo critiquing FBI surveillance under the Foreign Intelligence Surveillance Act (FISA) is primarily political, it takes place against the backdrop of the wider debate about Congressional reauthorization of FISA’s Section 702, which allows the U.S. Government to intercept and collect emails, phone records, and other communications of foreigners residing abroad, without a warrant. On that count, civil society groups have expressed concerns about the impact of government surveillance like that available under FISA, including “chilling effects” on rights and freedoms. Indeed, civil liberties and rights activists have long argued, and surveillance experts like David Lyon have long explained, that surveillance and similar threats can have these corrosive impacts.

Yet, skepticism about such claims is common and persistent. As Kaminski and Witnov recently noted, many “evince skepticism over the effects of surveillance”, with deep disagreements over the “effects of surveillance” on “intellectual queries” and “development”. But why? The answer is complicated, but it likely lies partly in the present (thin) state of research on these issues and partly in common conceptions, and misconceptions, about surveillance and its impact on people and broader society.

Skepticism and assumptions about impact
Skepticism about surveillance impacts like chilling effects is, as noted, persistent, with commentators like Stanford Law’s David Sklansky insisting there is “little empirical support” for chilling effects associated with surveillance, and Leslie Kendrick, of UVA Law, labeling the evidence supporting such claims “flimsy” and calling for more systematic research on point. Part of the problem is precisely this: the impact of surveillance, in both its mass and targeted forms, is difficult to document, measure, and explore, especially chilling effects or self-censorship. This is because demonstrating self-censorship or chill requires showing a counterfactual state of affairs: that a person would have said or done something but for some surveillance threat or awareness.
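
To see why that counterfactual is so hard to establish, consider a toy sketch of the kind of comparison empirical studies in this area attempt: activity before and after a surveillance revelation, with the pre-revelation trend standing in for the “but for” baseline. The data below are synthetic and the model is deliberately bare-bones; it is not the design, data, or code of any study cited in this post.

# Toy interrupted time-series sketch on synthetic data: estimate the drop in
# some sensitive activity after a surveillance revelation, relative to the
# counterfactual implied by the pre-revelation trend. Illustration only; not
# the design or data of any study discussed here.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
weeks = np.arange(104)                     # two years of weekly observations
post = (weeks >= 52).astype(int)           # revelation happens at week 52
views = 1000 + 2 * weeks - 120 * post + rng.normal(0, 25, weeks.size)

df = pd.DataFrame({"views": views, "week": weeks, "post": post})
model = smf.ols("views ~ week + post", data=df).fit()

# A significantly negative "post" coefficient is the chilling-effect signal:
# activity falls below what the pre-revelation trend would have predicted.
print(model.params["post"], model.pvalues["post"])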

But another challenge, just as important to address, concerns common assumptions and perceptions about what surveillance impacts or chilling effects might look like. Here, members of the general public as well as experts, judges, and lawyers often assume or expect surveillance to have an obvious, apparent, and pervasive impact on our most fundamental democratic rights and freedoms, like the clear suppression of political speech or of the right to peaceful assembly.

A great example of this assumption, and of the skepticism it breeds about whether surveillance may promote self-censorship or have broader societal chilling effects, comes from University of Chicago Law’s Eric Posner. Posner, a leading legal scholar who also incorporates empirical methods in his work, conveys his skepticism about the “threat” posed by National Security Agency (NSA) surveillance in a New York Times “Room for Debate” discussion, writing:

This brings me to another valuable point you made, which is that when people believe that the government exercises surveillance, they become reluctant to exercise democratic freedoms. This is a textbook objection to surveillance, I agree, but it also is another objection that I would place under “theoretical” rather than real.  Is there any evidence that over the 12 years, during the flowering of the so-called surveillance state, Americans have become less politically active? More worried about government suppression of dissent? Less willing to listen to opposing voices? All the evidence points in the opposite direction… It is hard to think of another period so full of robust political debate since the late 1960s—another era of government surveillance.

For Posner, the mere existence of “robust” political debate and activities in society is compelling evidence against claims about surveillance chill.

Similarly, Sklansky argues not only that there is “little empirical support” for the claim that surveillance would “chill independent thought, robust debate, personal growth, and intimate friendship”, what he terms “the stultification thesis”, but also, like Posner, that there is persuasive evidence against the claim “all around us”. He cites, for example, the widespread “sharing of personal information” online (which presumably would not happen if surveillance were having a dampening effect); the fact that employer monitoring has not deterred employee emailing, nor freedom of information laws deterred “intra-governmental communications”; and the way young people, the “digital natives” who have grown up with the internet, social media, and surveillance, are far from stultified and conforming but are arguably even more personally expressive and experimental than previous generations. In light of all that, Sklansky dismisses surveillance chill as simply not “worth worrying about”.

I sometimes call this the “Orwell effect”: the common assumption, likely owed to the immense impact Orwell’s classic novel 1984 has had on popular culture, that surveillance will have a dystopian societal impact, with widespread suppression of personal sharing, expression, and political dissent. When Posner, Sklansky, and others who share these expectations do not see such obvious and far-reaching impacts, they discount the more subtle and less apparent effects that may, over the long term, be just as concerning for democratic rights and freedoms. Of course, theorists and scholars like Daniel Solove have long interrogated and critiqued Orwell’s impact on our understanding of privacy, and Sklansky is himself wary of Orwell’s influence, so it is no surprise that Orwell’s work also shapes common beliefs and conceptions about the impact of surveillance. That influence is compounded by the lack, noted earlier, of systematic empirical research providing more grounded insights and understanding.

This is not only an academic issue. Government surveillance powers and practices are often justified with reference to national security concerns and threats like terrorism, as this House brief on the FISA re-authorization illustrates. If concerns about chilling effects and other negative impacts of surveillance are minimized or discounted based on misconceptions or thin empirical grounding, then challenging surveillance powers and their expansion becomes much more difficult, with concrete implications for rights and freedoms.

So, the challenge of documenting, exploring, and understanding the impact of surveillance is really two-fold. The first part is one of research methodology and design: designing studies that can document the impact of surveillance. The second concerns common assumptions and perceptions about what surveillance chilling effects should look like, with even experts like Posner and Sklansky assuming that surveillance would produce widespread speech suppression and conformity.

New research, new insights
Today, new systematic empirical research on the impact of surveillance is being done, and several recent studies have documented surveillance chilling effects in different contexts, including work by Stoycheff [1], Marthews and Tucker [2], and my own research. The latter includes an empirical legal study [3] of how the Snowden revelations about NSA surveillance affected Wikipedia use, which received extensive media coverage in the U.S. and internationally, and a more recent study [4], which I wrote about in Slate, that examined among other things how state and corporate surveillance impact or “chill” certain people or groups differently. Much of this new work was not possible before, as it relies on new forms of data being made available to researchers and on insights gleaned from analyzing public leaks and disclosures about surveillance, such as the Snowden revelations.

The story these and other new studies tell when it comes to the impact of surveillance is more complicated and subtle, suggesting the common assumptions of Posner and Sklansky are actually misconceptions. Though more subtle, these impacts are no less concerning and corrosive to democratic rights and freedoms, a point consistent with the work of surveillance studies theorists like David Lyon[5] and warnings from researchers at places like the Citizen Lab[6], Berkman Klein Center[7], and here at the CITP[8].  In subsequent posts, I will discuss these studies more fully, to paint a broader picture of surveillance effects today and, in light of increasingly sophisticated targeting and emerging automation technologies, tomorrow. Stay tuned.

* Jonathon Penney is a Research Affiliate of Princeton’s CITP, a Research Fellow at the Citizen Lab at the University of Toronto’s Munk School of Global Affairs, and teaches law as an Assistant Professor at Dalhousie University. He is also a research collaborator with CivilServant at the MIT Media Lab. Find him on Twitter at @jon_penney.

[1] Stoycheff, E. (2016). Under Surveillance: Examining Facebook’s Spiral of Silence Effects in the Wake of NSA Internet Monitoring. Journalism & Mass Communication Quarterly. doi: 10.1177/1077699016630255

[2] Marthews, A., & Tucker, C. (2014). Government Surveillance and Internet Search Behavior. MIT Sloan Working Paper No. 14380.

[3] Penney, J. (2016). Chilling Effects: Online Surveillance and Wikipedia Use. Berkeley Tech. L.J., 31, 117-182.

[4] Penney, J. (2017). Internet surveillance, regulation, and chilling effects online: A comparative case study. Internet Policy Review, forthcoming

[5] See for example: Lyon, D. (2015). Surveillance After Snowden. Cambridge, MA: Polity Press; Lyon, D. (2006). Theorizing surveillance: The panopticon and beyond. Cullompton, Devon: Willan Publishing; Lyon, D. (2003). Surveillance After September 11. Cambridge, MA: Polity. See also Marx, G.T., (2002). What’s New About the ‘New Surveillance’? Classifying for Change and Continuity. Surveillance & Society, 1(1), pp. 9-29;  Graham, S. & D. Wood. (2003). Digitising Surveillance: Categorisation, Space, Inequality, Critical Social Policy, 23(2): 227-248.

[6] See for example, recent works: Parsons, C., Israel, T., Deibert, R., Gill, L., and Robinson, B. (2018). Citizen Lab and CIPPIC Release Analysis of the Communications Security Establishment Act. Citizen Lab Research Brief No. 104, January 2018; Parsons, C. (2015). Beyond Privacy: Articulating the Broader Harms of Pervasive Mass Surveillance. Media and Communication, 3(3), 1-11; Deibert, R. (2015). The Geopolitics of Cyberspace After Snowden. Current History, 114(768), 9-15; Deibert, R. (2013). Black Code: Inside the Battle for Cyberspace. Toronto: McClelland & Stewart.

[7] See for example, recent work on the Surveillance Project, Berkman Klein Center for Internet and Society, Harvard University.

[8] See for example, recent work: Su, J., Shukla, A., Goel, S., & Narayanan, A. (2017). De-anonymizing Web Browsing Data with Social Networks. World Wide Web Conference 2017; Zeide, E. (2017). The Structural Consequences of Big Data-Driven Education. Big Data, 5(2), 164-172, https://doi.org/10.1089/big.2016.0061; MacKinnon, R. (2012). Consent of the Networked: The Worldwide Struggle for Internet Freedom. New York: Basic Books; Narayanan, A. & Shmatikov, V. (2009). See also multiple previous Freedom to Tinker posts discussing related research and issues.


On Encryption, Archiving, and Accountability

“As Elites Switch to Texting, Watchdogs Fear Loss of Accountability”, says a headline in today’s New York Times. The story describes a rising concern among rule enforcers and compliance officers:

Secure messaging apps like WhatsApp, Signal and Confide are making inroads among lawmakers, corporate executives and other prominent communicators. Spooked by surveillance and wary of being exposed by hackers, they are switching from phone calls and emails to apps that allow them to send encrypted and self-destructing texts. These apps have obvious benefits, but their use is causing problems in heavily regulated industries, where careful record-keeping is standard procedure.

Among those “industries” is the government, where laws often require that officials’ work-related communications be retained, archived, and available to the public under the Freedom of Information Act. The move to secure messaging apps frustrates these goals.

The switch to more secure messaging is happening, and for good reason: old-school messages are increasingly vulnerable to compromise, and the DNC and the Clinton campaign are among the many organizations that have paid a price for underestimating these risks.

The tradeoffs here are real. But this is not just a case of choosing between insecure-and-compliant and secure-and-noncompliant. The new secure apps have three properties that differ from old-school email: they encrypt messages end-to-end from the sender to the receiver; they sometimes delete messages quickly after they are transmitted and read; and they are set up and controlled by the end user rather than the employer.

If the concern is lack of archiving, then the last property (user control of the account, rather than employer control) is the main problem. And of course that has been a persistent problem even with email. Public officials using their personal email accounts for public business is typically not allowed (and when it happens by accident, messages are supposed to be forwarded to official accounts so they will be archived), but unreported use of personal accounts has been all too common.

Much of the reporting on this issue (but not the Times article) makes the mistake of conflating the personal-account problem with the fact that these apps use encryption. There is nothing about end-to-end encryption of data in transit that is inconsistent with archiving. The app could record messages and then upload them to an archive, with this upload also protected by end-to-end encryption as a best practice.
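
As a minimal sketch of that best practice, the sender can encrypt each message twice: once to the recipient and once to a public key whose private half is held only by the archive, so the archived copy is never exposed in transit. The example below uses the PyNaCl library; the dedicated archive keypair and the key names are assumptions for illustration, not a description of how any particular messaging app actually works.

# Sketch: end-to-end encrypting a message to the recipient *and* to an
# organizational archive, so archiving never requires a plaintext copy in
# transit. PyNaCl is a real library; the "archive keypair" arrangement is an
# assumption for illustration, not any specific app's design.
from nacl.public import PrivateKey, SealedBox

# In practice these keys would be generated once and stored securely.
recipient_key = PrivateKey.generate()
archive_key = PrivateKey.generate()      # private half held only by the archive

message = b"Meeting moved to 3pm."

# One ciphertext per destination; neither party can read the other's copy.
to_recipient = SealedBox(recipient_key.public_key).encrypt(message)
to_archive = SealedBox(archive_key.public_key).encrypt(message)

# Only the holder of the matching private key can decrypt its copy.
assert SealedBox(recipient_key).decrypt(to_recipient) == message
assert SealedBox(archive_key).decrypt(to_archive) == message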

The second property of these apps, deleting messages shortly after use, has more complicated security implications. Again, a message becoming unavailable to the user shortly after use need not conflict with archiving: the message could be uploaded securely to an archive before it is deleted from the endpoint device.
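
A sketch of that ordering is below, with a hypothetical archive client standing in for whatever upload mechanism an app would actually use: the local copy is expired only after the archive acknowledges receipt of the encrypted copy.

# Sketch of "archive before delete": the ephemeral message is removed from the
# device only after the archive confirms it holds the (encrypted) copy.
# The archive and local_store objects and their methods are hypothetical
# stand-ins, not a real API.

def archive_then_delete(message_id: str, ciphertext: bytes,
                        archive, local_store) -> None:
    receipt = archive.put(message_id, ciphertext)   # upload the encrypted copy
    if not receipt.acknowledged:
        # Keep the local copy and retry later rather than risk losing the record.
        local_store.mark_for_retry(message_id)
        return
    local_store.delete(message_id)                  # now safe to expire locally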

You might ask why the user should lose access to a message when that message is still stored in an archive. But this makes some sense as a security precaution. Most compromises of communications happen through the user’s access, for example because an attacker can get the user’s login credentials by phishing. Taking away the user’s access, while retaining access in a more carefully guarded archive, is a reasonable security precaution for sensitive messages.

But of course the archive still poses a security risk. Although an archive ought to be more carefully protected than a user account would be, the archive is also a big, high-value target for attackers. The decision to create an archive should not be taken lightly, but it may be justified if the need for accountability is strong enough and the communications are not overly sensitive.

The upshot of all of this is that the most modern approaches to secure communication are not entirely incompatible with the kind of accountability needed for government and some other users. Accountable versions of these types of services could be created. They would be less secure than the current versions, but more secure than old-school communications. The barriers to creating them are institutional, not technical.

Questions for the FBI on Encryption Mandates

I wrote on Monday about how to analyze a proposal to mandate access to encrypted data. FBI Director James Comey, at the University of Texas last week, talked about encryption policy and his hope that some kind of exceptional access for law enforcement will become available. (Here’s a video.)  Let’s look at what Director Comey said about how a mandate might work.

Here is an extended quote from Director Comey’s answer to an audience question (starting at 51:02 in the video, emphasis added):

The technical thing, look, I really do think we haven’t given this the shot it deserves. President Obama commissioned some work at the end of his Administration because he’d heard a lot from people on device encryption, [that] it’s too hard.  [No], it’s not too hard. It’s not too hard. It requires a change in business model but it is, according to experts inside the U.S. government and a lot of people who will meet with us privately in the private sector, no one actually wants to be seen with us but we meet them out behind the 7/11, they tell us, look, it’s a business model decision.

Take the FBI’s business model. We equip our agents with mobile devices that I think are great mobile devices and we’ve worked hard to make them secure. We have designed it so that we have the ability to access the content. And so I don’t think we have a fatally flawed mobile system in the FBI, and I think nearly every enterprise that is represented here probably has the same. You retain the ability to access the content. So look, one of the worlds I could imagine, I don’t know whether this makes sense, one of the worlds I could imagine is a requirement that if you’re going to sell a device or market a device in the United States, you must be able to comply with judicial process. You figure out how to do it.

And maybe that doesn’t make sense, absent an international component to it, but I just don’t think we, and look, I get it, the makers of devices and the makers of fabulous apps that are riding on top of our devices, on top of our networks, really don’t have an incentive to deal with, to internalize the public safety harm. And I get that. My job is to worry about public safety. Their job is to worry about innovating and selling more units, I totally get that. Somehow we have to bring together, and see if we can’t optimize those two things. And really, given my role, I should not be the one to say, here’s what the technology should look like, nor should they say, no I don’t really care about that public safety aspect.

And what I don’t want to have happen, and I know you agree with me no matter what you think about this, now I think you’re going to agree with what I’m about to say, is we can’t have this conversation after something really bad happens. And look, I don’t want to be a pessimist, but bad things are going to happen. And even I, the Director of the FBI, do not believe that we can have thoughtful conversations about optimizing things we care about in the wake of a serious, serious attack of any kind.

The bolded text is the closest Director Comey came to describing how he imagines a mandate working. He doesn’t suggest that it’s anything like a complete proposal, and anyway that would be too much to ask from an off-the-cuff answer to an audience question. But let’s look at what would be required to turn it into a proposal that can be analyzed. In other words, let’s extrapolate from Director Comey’s answer and try to figure out how he and his team might build out a specific proposal based on what he suggested.

The notional mandate would apply at least to retailers (“if you’re going to sell … or market a device”) who sell smartphones to the public “in the United States.” That would include Apple (for sales in Apple Stores), big box retailers like Best Buy, mobile phone carriers’ shops, online retailers like Amazon, and the smaller convenience stores and kiosks that sell cheap smartphones.

Retailers would be required to “comply with judicial process.” At a minimum, that would presumably mean that if presented with a smartphone that they had sold, they could extract from it any data encrypted by the user. Which data, and under what circumstances? That would have to be specified, but it’s worth noting that there is a limited amount the retailer can do to control how a user encrypts data on the device. So unless we require retailers to prevent the installation of new software onto the device (and thereby put app stores, and most app sellers, out of business), there would need to be major carve-outs limiting the mandate’s reach to cases where the retailer had some control. For example, the mandate might apply only to data encrypted by the software present on the device at the time of sale. That could create an easy loophole for users who wanted to prevent extraction of their encrypted data (by installing encryption software post-sale), but at least it would avoid imposing an impossible requirement on the retailer. (Veterans of the 1990s crypto wars will remember how U.S. software products often shipped without strong crypto, to comply with export controls, but post-sale plug-ins adding crypto were widely available.)

Other classes of devices, such as laptops, tablets, smart devices, and server computers, would either have to be covered, with careful consideration of how they are sold and configured, or be excluded, limiting the coverage of the rule. There would need to be rules about devices brought into the United States by their user-owners; if those devices were not covered, then some law enforcement value would be lost. And the treatment of used devices would have to be specified, including both devices made before the mandate took effect (which would probably need to be exempted, creating another loophole) and post-mandate devices re-sold by a user or merchant: would the original seller or the re-seller be responsible, and what if the re-seller is an individual?

Notice that we had to make all of these decisions, and face the attendant unpleasant tradeoffs, before we even reached the question of how to design the technical mechanism to implement key escrow, and how that would affect the security and privacy interests of law-abiding users. The crypto policy discussion often gets hung up on this one issue (the security implications of key escrow), but it is far from the only challenge that needs to be addressed, and the security implications of a key escrow mechanism are far from the only potential drawbacks to be considered.
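
For readers who have not followed those debates, “key escrow” in this context usually means something like the sketch below: the key that actually encrypts the device’s data is wrapped twice, once under a key derived from the user’s passcode and once under an escrow public key whose private half is held by some authority. This is a generic illustration assembled with the standard Python cryptography library; it is not anyone’s actual design or a proposal, and the single, high-value escrow key is exactly where the security concerns concentrate.

# Generic sketch of "key escrow" for device encryption: the data-encryption
# key (DEK) is wrapped twice, once under a key derived from the user's
# passcode and once under an escrow authority's public key. Illustration only,
# not any vendor's or government's actual design.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.scrypt import Scrypt

# Escrow authority's keypair; the private half would live with the authority.
escrow_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
escrow_public = escrow_private.public_key()

# The DEK is what actually encrypts the user's data on the device.
dek = AESGCM.generate_key(bit_length=256)

# Wrap 1: under a key derived from the user's passcode.
salt = os.urandom(16)
user_key = Scrypt(salt=salt, length=32, n=2**14, r=8, p=1).derive(b"user-passcode")
nonce = os.urandom(12)                    # stored alongside the wrapped key
wrapped_for_user = AESGCM(user_key).encrypt(nonce, dek, None)

# Wrap 2: under the escrow authority's public key. With lawful process, the
# authority could recover the DEK without the user's passcode, which is
# precisely where the security risk concentrates.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_for_escrow = escrow_public.encrypt(dek, oaep)

assert escrow_private.decrypt(wrapped_for_escrow, oaep) == dek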

Director Comey didn’t go to Austin to present an encryption mandate proposal.  But if he or others do decide to push seriously for a mandate, they ought to be able to lay out the details of how they would do it.


How to Analyze An Encryption Access Proposal

It looks like the idea of requiring law enforcement access to encrypted data is back in the news, with the UK government apparently pushing for access in the wake of the recent London attack. With that in mind, let’s talk about how one can go about analyzing a proposed access mandate.

The first thing to recognize is that although law enforcement is often clear about what result they want (getting access to encrypted data), they are often far from clear about how they propose to get that result. There is no magic wand that can give encrypted data to law enforcement and nobody else, while leaving everything else about the world unchanged. If a mandate were to be imposed, this would happen via regulation of companies’ products or behavior.

The operation of a mandate would necessarily be a three-stage process: the government imposes specific mandate language, which induces changes in product design and behavior by companies and users, thereby leading to consequences that affect the public good.

Expanding this a bit, we can lay out some questions that a mandate proposal should be prepared to answer:

  1. mandate language: What requirements are imposed, and on whom? Which types of devices and products are covered and which are not? What specifically is required of a device maker? Of an operating system developer? Of a network provider? Of a retailer selling devices? Of an importer of devices? Of a user?
  2. changes in product design and behavior:  How will companies and users react to the mandate? For example, how will companies change the design of their products to comply with the mandate while maintaining their competitive position and serving their customers? How will criminals and terrorists change their behavior? How will law-abiding users adapt? What might foreign governments do to take advantage of these changes?
  3. consequences: What consequences will result from the design and behavioral changes that are predicted? How will the changes affect public safety? Cybersecurity? Personal privacy? The competitiveness of domestic companies? Human rights and free expression?

These questions are important because they expose the kinds of tradeoffs that would have to be made in imposing a mandate. As an example, covering a broad range of devices might allow recovery of more encrypted data (with a warrant), but it might be difficult to write requirements that make sense across a broad spectrum of device types. As another example, all of the company types that you might regulate come with challenges: some are mostly located outside your national borders, others lack technical sophistication, others touch only a subset of the devices of interest, and so on. Difficult choices abound, and if you haven’t thought about how you would make those choices, then you aren’t in a position to assert that the benefits of a mandate are worth the downsides.

To date, the FBI has not put forward any specific approach. Nor has the UK government, to my knowledge. All they have offered in their public statements are vague assertions that a good approach must exist.

If our law enforcement agencies want to have a grown-up conversation about encryption mandates, they can start by offering a specific proposal, at least for purposes of discussion. Then the serious policy discussion can begin.