March 22, 2018

Blockchain: What is it good for?

Blockchain and cryptocurrencies are surrounded by world-historic levels of hype and snake oil. For people like me who take the old-fashioned view that technical claims should be backed by sound arguments and evidence, it’s easy to fall into the trap of concluding that there is no there there–and that blockchain and cryptocurrencies are fundamentally useless. This post is my attempt to argue that if we strip away the fluff, some valuable computer science ideas remain.

Let’s start by setting aside the currency part, for now, and focusing on blockchains. The core idea goes back to at least the 1990s: replicate a system’s state across a set of machines; use some kind of distributed consensus algorithm to agree on an append-only log of events that change the state; and use cryptographic hash-chaining to make the log tamper-evident. Much of the legitimate excitement about “blockchain” is driven by the use of this approach to enhance transparency and accountability, by making certain types of actions in a system visible. If an action is recorded in your blockchain, everyone can see it. If it is not in your blockchain, it is ignored as invalid.
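
To make the hash-chaining idea concrete, here is a minimal Python sketch (the names and structure are mine, purely for illustration): each log entry commits to the hash of the entry before it, so rewriting any past event invalidates every later hash.

    import hashlib
    import json

    def entry_hash(prev_hash, event):
        # Hash the event together with the hash of the previous entry.
        payload = json.dumps({"prev": prev_hash, "event": event}, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

    def append(log, event):
        # Chain the new event to the current head of the log.
        prev = log[-1]["hash"] if log else "0" * 64
        log.append({"event": event, "hash": entry_hash(prev, event)})

    def verify(log):
        # Recompute the whole chain; any tampering with history shows up here.
        prev = "0" * 64
        for entry in log:
            if entry["hash"] != entry_hash(prev, entry["event"]):
                return False
            prev = entry["hash"]
        return True

    log = []
    append(log, {"op": "credit", "amount": 5})
    append(log, {"op": "debit", "amount": 7})
    assert verify(log)
    log[0]["event"]["amount"] = 500   # tamper with history...
    assert not verify(log)            # ...and verification fails

Note that anyone who holds the latest entry’s hash can detect a rewrite of any earlier entry, so the consensus algorithm only has to agree on that single head value.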

An example of this basic approach is certificate transparency. Certificate authorities (“CAs”), which vouch for digital certificates connecting a cryptographic key to the owner of a DNS name, must publish the certificates they issue on a public list, and systems refuse to accept certificates that are not on the list. This ensures that if a CA issues a certificate without permission from a name’s legitimate owner, the bogus certificate cannot be used without first being published, which enables the legitimate owner to raise an alarm, potentially leading to public consequences for the misbehaving CA.
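
Certificate Transparency logs are in fact structured as Merkle trees (per RFC 6962), so a client can verify that a certificate is in the log without downloading the whole thing. The Python sketch below is a simplification of that check (real CT proofs derive each sibling’s position from the leaf index rather than carrying explicit side markers, and they add domain-separation prefixes to the hashes):

    import hashlib

    def h(data):
        return hashlib.sha256(data).digest()

    def verify_inclusion(leaf, proof, root):
        # proof: (sibling_hash, side) pairs on the path from leaf to root.
        node = h(leaf)
        for sibling, side in proof:
            node = h(sibling + node) if side == "left" else h(node + sibling)
        return node == root

    # Tiny two-certificate log: the root commits to both leaves.
    leaf_a, leaf_b = b"cert-A", b"cert-B"
    root = h(h(leaf_a) + h(leaf_b))
    assert verify_inclusion(leaf_a, [(h(leaf_b), "right")], root)
    assert not verify_inclusion(b"forged-cert", [(h(leaf_b), "right")], root)

A browser that insists on such a proof before accepting a certificate forces any bogus certificate into public view.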

In today’s world, with so much talk about the policy advantages of technological transparency, the use of blockchains for transparency can be an important tool.

What about cryptocurrencies? There is a lot of debate about whether systems like Bitcoin are genuinely useful as a money transfer technology. Bitcoin has many limitations: transactions take a long time to confirm, and the mining-based consensus mechanism burns a lot of energy. Whether and how these limitations can be overcome is a subject of current research.

Cryptocurrencies are most useful when coupled with “smart contracts,” which allow parties to define the behavior of a virtual actor in code, and have the cryptocurrency’s consensus system enforce that the virtual actor behaves according to its code. The name “smart contract” is misleading, because these mechanisms differ significantly from legal contracts.  (A legal contract is an explicit agreement among an enumerated set of parties that constrains the behavior of those parties and is enforced by ex post remedies. A “smart contract” doesn’t require explicit agreement from parties, doesn’t enumerate participating parties, doesn’t constrain behavior of existing parties but instead creates a new virtual party whose behavior is constrained, and is enforced by ex ante prevention of deviations.) It is precisely these differences that make “smart contracts” useful.

From a computer science standpoint, what is exciting about “smart contracts” is that they let us make conditional payments an integral part of the toolbox for designing distributed protocols. A party can be required to escrow a deposit as a condition of participating in some process, and the return of that deposit, in part or in whole, can be conditioned on the party performing arbitrary required steps, as long as compliance can be checked by a computation.
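
As a toy illustration of that escrow pattern, here is a hedged Python sketch (my own stand-in for what a real system would express in a contract language such as Solidity, with the consensus system executing the code): the “contract” holds a deposit and returns it only if the depositor performs the required step, here revealing the preimage of a published hash, before a deadline.

    import hashlib

    class EscrowContract:
        # Toy stand-in for a smart contract: holds a deposit and releases
        # it only on a programmatically checkable condition.
        def __init__(self, depositor, beneficiary, amount, deadline, check):
            self.depositor, self.beneficiary = depositor, beneficiary
            self.amount, self.deadline, self.check = amount, deadline, check
            self.settled = False

        def claim_refund(self, now, evidence):
            # Depositor recovers the deposit by meeting the condition in time.
            if not self.settled and now <= self.deadline and self.check(evidence):
                self.settled = True
                return "pay %d back to %s" % (self.amount, self.depositor)
            return "no payout"

        def forfeit(self, now):
            # After the deadline, the deposit goes to the beneficiary instead.
            if not self.settled and now > self.deadline:
                self.settled = True
                return "pay %d to %s" % (self.amount, self.beneficiary)
            return "no payout"

    commitment = hashlib.sha256(b"secret").hexdigest()
    contract = EscrowContract(
        depositor="alice", beneficiary="protocol-pool", amount=10, deadline=100,
        check=lambda ev: hashlib.sha256(ev).hexdigest() == commitment)
    print(contract.claim_refund(now=42, evidence=b"secret"))  # deposit returned

Because the check is just a computation, the same skeleton works for any required step whose compliance a program can verify.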

Another way of viewing the value of “smart contracts” is by observing that we often define correctness for a new distributed protocol by postulating a hypothetical trusted third party who “referees” the protocol, and then proving some kind of equivalence between the new referee-free protocol we have designed and the notional refereed protocol. It sure would be nice if we could just turn the notional referee into a smart contract and let the consensus system enforce correctness.
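
A fair coin flip between two mutually distrusting parties is a classic example: its correctness is usually specified via a referee who collects both parties’ secret bits and announces the XOR. A commit-reveal “contract” can play that referee. The Python sketch below is simplified (a real version would also take deposits and impose timeouts, using the escrow pattern above, so that a party who dislikes the outcome cannot simply refuse to reveal):

    import hashlib, os

    def commit(bit):
        # Commit to a bit by hashing it with a random nonce.
        nonce = os.urandom(16)
        return hashlib.sha256(nonce + bytes([bit])).hexdigest(), (nonce, bit)

    class CoinFlipReferee:
        # The notional referee, expressed as code a consensus system could run.
        def __init__(self):
            self.commitments = {}

        def submit(self, party, digest):
            self.commitments[party] = digest

        def reveal(self, party, nonce, bit):
            digest = hashlib.sha256(nonce + bytes([bit])).hexdigest()
            assert self.commitments[party] == digest, "reveal does not match"
            return bit

    ref = CoinFlipReferee()
    digest_a, opening_a = commit(1)
    digest_b, opening_b = commit(0)
    ref.submit("alice", digest_a)   # neither party can change its bit
    ref.submit("bob", digest_b)     # after seeing the other's commitment
    outcome = ref.reveal("alice", *opening_a) ^ ref.reveal("bob", *opening_b)
    print("coin flip:", outcome)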

But all of this requires a “smart contract” system that is efficient and scalable–otherwise the cost of using “smart contracts” will be excessive. Existing systems like Ethereum scale poorly. This too is a problem that will need to be overcome by new research. (Spoiler alert: We’ll be writing here about a research solution in the coming months.)

These are not the only things that blockchain and cryptocurrencies are good for. But I hope they are convincing examples. It’s sad that the hype and snake oil have gotten so extreme that it can be hard to see the benefits. The benefits do exist.


Are voting-machine modems truly divorced from the Internet?

(This article is written jointly with my colleague Kyle Jamieson, who specializes in wireless networks.)

[See also: The myth of the hacker-proof voting machine]

The ES&S model DS200 optical-scan voting machine has a cell-phone modem that it uses to upload election-night results from the voting machine to the “county central” canvassing computer.  We know it’s a bad idea to connect voting machines (and canvassing computers) to the Internet, because this allows their vulnerabilities to be exploited by hackers anywhere in the world.  (In fact, a judge in New Jersey ruled in 2009 that the state must not connect its voting machines and canvassing computers to the Internet, for that very reason.)  So the question is: does the DS200’s cell-phone modem, in effect, connect the voting machine to the Internet?

The vendor (ES&S) and the counties that bought the machine say, “no, it’s an analog modem.”  That’s not true; it appears to be a Multitech MTSMC-C2-N3-R.1 (Verizon C2 series modem), a fairly complex digital device.  But maybe what they mean is “it’s just a phone call, not really the Internet.”  So let’s review how phone calls work:

The voting machine’s cell-phone modem places a call to the county-central computer through the nearest cell tower; from there the call is carried across Verizon’s “Autonomous System” (AS), part of the packet-switched Internet, to a cell tower (or land-line station) near the canvassing computer.

Verizon attempts to control access to the routers internal to its own AS, using firewall rules on the border routers.  Each border router runs (probably) millions of lines of software; as such, it is subject to bugs and vulnerabilities.  If a hacker finds one of these vulnerabilities, he can modify messages as they transit the AS network.

Do border routers actually have vulnerabilities in practice?  Of course they do!  US-CERT has highlighted this as an issue of importance.  It would be surprising if the Russian mafia or the FBI were not equipped to exploit such vulnerabilities.

Even easier than hacking through router bugs is just setting up an imposter cell-phone “tower” near the voting machine; one commonly used brand of these, used by many police departments, is called “Stingray.”

Such a hacker is a “MitM,” a “man-in-the-middle,” well positioned to alter vote totals as they are uploaded.  Of course, he will do better to put his Stingray near the county-central canvassing computer, so he can hack all the voting machines in the county, not just one voting machine near his Stingray.

So, in summary: phone calls are not unconnected to the Internet; the hacking of phone calls is easy (police departments with Stingray devices do it all the time); and even between the cell-towers (or land-line stations), your calls go over parts of the Internet.  If your state laws, or a court with jurisdiction, say not to connect your voting machines to the Internet, then you probably shouldn’t use telephone modems either.

(Mis)conceptions About the Impact of Surveillance

Does surveillance impact behavior? Or is its effect, if real, only temporary or trivial? Government surveillance is back in the news thanks to the so-called “Nunes memo”, making this a perfect time to examine new research on the impact of surveillance. This includes my own recent work: my doctoral research at the Oxford Internet Institute, University of Oxford, examined “chilling effects” online, that is, how online surveillance and other regulatory activities may impact, chill, or deter people’s activities online.

Though the controversy surrounding the Nunes memo critiquing FBI surveillance under the Foreign Intelligence Surveillance Act (FISA) is primarily political, it takes place against the backdrop of the wider debate about Congressional reauthorization of FISA’s Section 702, which allows the U.S. Government to intercept and collect emails, phone records, and other communications of foreigners residing abroad, without a warrant. On that count, civil society groups have expressed concerns about the impact of government surveillance like that available under FISA, including “chilling effects” on rights and freedoms. Indeed, civil liberties and rights activists have long argued, and surveillance experts like David Lyon have long explained, that surveillance and similar threats can have these corrosive impacts.

Yet, skepticism about such claims is common and persistent. As Kaminski and Witnov recently noted, many “evince skepticism over the effects of surveillance”, with deep disagreements over the “effects of surveillance” on “intellectual queries” and “development”.  But why?  The answer is complicated, but it likely lies partly in the present (thin) state of research on these issues and partly in common conceptions, and misconceptions, about surveillance and its impact on people and broader society.

Skepticism and assumptions about impact
Skepticism about surveillance impacts like chilling effects is, as noted, persistent, with commentators like Stanford Law’s David Sklansky insisting there is “little empirical support” for chilling effects associated with surveillance, or Leslie Kendrick, of UVA Law, labeling the evidence supporting such claims “flimsy” and calling for more systematic research on point. Part of the problem is precisely this: the impact of surveillance—both mass and targeted forms—is difficult to document, measure, and explore, especially chilling effects or self-censorship. This is because demonstrating self-censorship or chill requires showing a counterfactual state of affairs: that a person would have said or done something but for some surveillance threat or awareness.

But another challenge, just as important to address, concerns common assumptions and perceptions as to what surveillance impact or chilling effects might look like. Here, both members of the general public as well as experts, judges, and lawyers often assume or expect surveillance to have obvious, apparent, and pervasive impact on our most fundamental democratic rights and freedoms—like clear suppression of political speech or the right to peaceful assembly.

A great example of this assumption, and of the skepticism it leads to about whether surveillance may promote self-censorship or have broader societal chilling effects, comes from University of Chicago Law’s Eric Posner. Posner, a leading legal scholar who also incorporates empirical methods in his work, conveys his skepticism about the “threat” posed by National Security Agency (NSA) surveillance in a New York Times “Room for Debate” discussion, writing:

This brings me to another valuable point you made, which is that when people believe that the government exercises surveillance, they become reluctant to exercise democratic freedoms. This is a textbook objection to surveillance, I agree, but it also is another objection that I would place under “theoretical” rather than real.  Is there any evidence that over the 12 years, during the flowering of the so-called surveillance state, Americans have become less politically active? More worried about government suppression of dissent? Less willing to listen to opposing voices? All the evidence points in the opposite direction… It is hard to think of another period so full of robust political debate since the late 1960s—another era of government surveillance.

For Posner, the mere existence of “robust” political debate and activities in society is compelling evidence against claims about surveillance chill.

Similarly, Sklansky argues not only that there is “little empirical support” for the claim that surveillance would “chill independent thought, robust debate, personal growth, and intimate friendship”— what he terms “the stultification thesis”—but like Posner, he finds persuasive evidence against the claim “all around us”. He cites, for example, the widespread “sharing of personal information” online (which presumably would not happen if surveillance was having a dampening effect); how employer monitoring has not deterred employee emailing nor freedom of information laws deterred “intra-governmental communications”; and how young people, the “digital natives” that have grown up with the internet, social media, and surveillance, are far from stultified and conforming but arguably even more personally expressive and experimental than previous generations.  In light of all that, Sklansky dismisses surveillance chill as simply not “worth worrying about”.

I sometimes call this the “Orwell effect”: the common assumption, likely thanks to the immense impact Orwell’s classic novel 1984 has had on popular culture, that surveillance will have a dystopian societal impact, with widespread suppression of personal sharing, expression, and political dissent. When Posner and Sklansky (and others who share these common expectations) do not see these more obvious and far-reaching impacts, they discount the subtler and less apparent effects that may, over the long term, be just as concerning for democratic rights and freedoms. Of course, theorists and scholars like Daniel Solove have long interrogated and critiqued Orwell’s impact on our understanding of privacy, and Sklansky is himself wary of Orwell’s influence, so it is no surprise that Orwell’s work also shapes common beliefs and conceptions about the impact of surveillance. That influence is compounded by the earlier noted lack of systematic empirical research providing more grounded insights and understanding.

This is not only an academic issue. Government surveillance powers and practices are often justified with reference to national security concerns and threats like terrorism, as this House brief on the FISA re-authorization illustrates. If concerns about chilling effects and other negative impacts of surveillance are minimized or discounted based on misconceptions or thin empirical grounding, then challenging surveillance powers and their expansion becomes much more difficult, with concrete implications for rights and freedoms.

So, the challenge of documenting, exploring, and understanding the impact of surveillance is really two-fold. The first part is one of research methodology and design: how to design research that can document the impact of surveillance. The second concerns common assumptions and perceptions as to what surveillance chilling effects might look like, with even experts like Posner and Sklansky assuming they would take the form of widespread speech suppression and conformity.

New research, new insights
Today, new systematic empirical research on the impact of surveillance is being done, and several recent studies have documented surveillance chilling effects in different contexts, including studies by Stoycheff [1] and by Marthews and Tucker [2], as well as my own recent research. This includes an empirical legal study [3] on how the Snowden revelations about NSA surveillance affected Wikipedia use, which received extensive media coverage in the U.S. and internationally, and a more recent study [4], which I wrote about recently in Slate, that examined among other things how state and corporate surveillance impact or “chill” certain people or groups differently. Much of this new work was not possible earlier, as it rests on new forms of data being made available to researchers and on insights gleaned from analyzing public leaks and disclosures concerning surveillance, like the Snowden revelations.

The story these and other new studies tell when it comes to the impact of surveillance is more complicated and subtle, suggesting the common assumptions of Posner and Sklansky are actually misconceptions. Though more subtle, these impacts are no less concerning and corrosive to democratic rights and freedoms, a point consistent with the work of surveillance studies theorists like David Lyon[5] and warnings from researchers at places like the Citizen Lab[6], Berkman Klein Center[7], and here at the CITP[8].  In subsequent posts, I will discuss these studies more fully, to paint a broader picture of surveillance effects today and, in light of increasingly sophisticated targeting and emerging automation technologies, tomorrow. Stay tuned.

* Jonathon Penney is a Research Affiliate of Princeton’s CITP, a Research Fellow at the Citizen Lab, located at the University of Toronto’s Munk School of Global Affairs, and teaches law as an Assistant Professor at Dalhousie University. He is also a research collaborator with CivilServant at the MIT Media Lab. Find him on Twitter at @jon_penney

[1] Stoycheff, E. (2016). Under Surveillance: Examining Facebook’s Spiral of Silence Effects in the Wake of NSA Internet Monitoring. Journalism & Mass Communication Quarterly. doi: 10.1177/1077699016630255

[2] Marthews, A., & Tucker, C. (2014). Government Surveillance and Internet Search Behavior. MIT Sloan Working Paper No. 14380.

[3] Penney, J. (2016). Chilling Effects: Online Surveillance and Wikipedia Use. Berkeley Tech. L.J., 31, 117-182.

[4] Penney, J. (2017). Internet surveillance, regulation, and chilling effects online: A comparative case study. Internet Policy Review, forthcoming

[5] See for example: Lyon, D. (2015). Surveillance After Snowden. Cambridge, MA: Polity Press; Lyon, D. (2006). Theorizing Surveillance: The Panopticon and Beyond. Cullompton, Devon: Willan Publishing; Lyon, D. (2003). Surveillance After September 11. Cambridge, MA: Polity. See also Marx, G. T. (2002). What’s New About the ‘New Surveillance’? Classifying for Change and Continuity. Surveillance & Society, 1(1), 9-29; Graham, S. & Wood, D. (2003). Digitising Surveillance: Categorisation, Space, Inequality. Critical Social Policy, 23(2), 227-248.

[6] See for example, recent works: Parsons, C., Israel, T., Deibert, R., Gill, L., & Robinson, B. (2018). Citizen Lab and CIPPIC Release Analysis of the Communications Security Establishment Act. Citizen Lab Research Brief No. 104, January 2018; Parsons, C. (2015). Beyond Privacy: Articulating the Broader Harms of Pervasive Mass Surveillance. Media and Communication, 3(3), 1-11; Deibert, R. (2015). The Geopolitics of Cyberspace After Snowden. Current History, 114(768), 9-15; Deibert, R. (2013). Black Code: Inside the Battle for Cyberspace. Toronto: McClelland & Stewart.

[7] See for example, recent work on the Surveillance Project, Berkman Klein Center for Internet and Society, Harvard University.

[8] See for example, recent work: Su, J., Shukla, A., Goel, S., & Narayanan, A. (2017). De-anonymizing Web Browsing Data with Social Networks. World Wide Web Conference 2017; Zeide, E. (2017). The Structural Consequences of Big Data-Driven Education. Big Data, 5(2), 164-172; MacKinnon, R. (2012). Consent of the Networked: The Worldwide Struggle for Internet Freedom. New York: Basic Books; Narayanan, A. & Shmatikov, V. (2009). De-anonymizing Social Networks. IEEE Symposium on Security and Privacy. See also multiple previous Freedom to Tinker posts discussing research and issues on point.