September 29, 2022

Cross-Layer Security: A Holistic View of Internet Security 

By Henry Birge-Lee, Liang Wang, Grace Cimaszewski, Jennifer Rexford, and Prateek Mittal

On February 3, 2022, attackers launched a highly effective attack against the Korean cryptocurrency exchange KLAYswap. We discussed the details of this attack in our earlier blog post “Attackers exploit fundamental flaw in the web’s security to steal $2 million in cryptocurrency.” However, in that post we only scratched the surface of potential countermeasures that could prevent such attacks. In this new post, we discuss how we can defend the web ecosystem against attacks like these. The attack was composed of multiple exploits at different layers of the network stack. We term attacks like this “cross-layer attacks” and offer our perspective on why they are so effective. Furthermore, we propose a practical defense strategy against them that we call “cross-layer security.”

As we discuss below, cross-layer security involves security technologies at different layers of the network stack working in harmony to defend against vulnerabilities that are difficult to catch at any single layer alone.

At a high level, the adversary’s attack affected many layers of the networking stack:

  • The network layer is responsible for providing reachability between hosts on the Internet. The first part of the adversary’s attack involved targeting the network layer with a Border Gateway Protocol (BGP) attack that manipulated routes to hijack traffic intended for the victim.
  • The session layer is responsible for secure end-to-end communication over the network. To attack the session layer, the adversary leveraged its attack on the network layer to obtain a digital certificate for the victim’s domain from a trusted Certificate Authority (CA). With this digital certificate, the adversary established encrypted, seemingly secure TLS sessions with KLAYswap users.
  • The application layer is responsible for interpreting and processing data that is sent over the network. The adversary used the hijacked TLS sessions with KLAYswap customers to serve malicious JavaScript code that compromised the KLAYswap web application and caused users to unknowingly transfer their funds to the adversary.

The difficulty of fully protecting against cross-layer vulnerabilities like these is that they exploit the interactions between the layers involved: a vulnerability in the routing system can be used to exploit a weak link in the PKI, and even the web-development ecosystem played a role in this attack because of the way JavaScript is loaded. The cross-layer nature of these vulnerabilities often leads developers working at each layer to dismiss the vulnerability as a problem with the other layers.

There have been several attempts to secure the web against these kinds of attacks at the HTTP layer. Interestingly, these technologies often ended up dead in the water (as was the case with HTTP public key pinning and Extended Validation certificates). This is because the HTTP layer alone does not have the routing information needed to properly detect these attacks and can rely only on information that is available to end-user applications. As a result, HTTP-only defenses risk blocking connections when benign events take place, such as when a domain moves to a new hosting provider or changes its certificate configuration, because these events look very similar to routing attacks at the HTTP layer.

Due to the cross-layer nature of these vulnerabilities, we need a different mindset to fix the problem: people at each layer need to fully deploy the security solutions that are realistic at that layer. As we explain below, there is no silver bullet that can be quickly deployed at any single layer; instead, our best hope is more modest (but easier to deploy) security improvements at all the layers involved. Working under a “the other layer will fix the problem” attitude simply perpetuates these vulnerabilities.

Below are some short-term and ideal long-term expectations for each layer of the stack involved in these attacks. While in theory any layer implementing one of these “long-term” security improvements could drastically reduce the attack surface, these technologies have still not seen the type of deployment needed for us to rely on them in the short term. On the other hand, all the technologies in the short-term list have seen some degree of production-level, real-world deployment, and members of these communities can start using them today without much difficulty.

| Layer | Short-Term Changes | Long-Term Goals |
| --- | --- | --- |
| Web apps (application layer) | Reduce the use of code loaded from external domains | Sign and authenticate all code being executed |
| The PKI/TLS (session layer) | Universally deploy multiple-vantage-point validation | Adopt a technology to verify identity based on cryptographically protected DNSSEC, which provides security in the presence of powerful network attacks |
| Routing (network layer) | Sign and verify routes with RPKI and follow the security practices outlined by MANRS | Deploy BGPsec for near-complete elimination of routing attacks |

To elaborate:

At the application layer: Web apps are downloaded over the Internet and are completely decentralized. For the time being, there is no mechanism in place to universally vouch for the authenticity of code or content that is contained in a web app. If an adversary can obtain a TLS certificate for google.com and intercept your connection to Google, your browser (right now) has no way of knowing that it is being served content that did not actually come from Google’s servers. However, developers can remember that any third-party dependency (particularly one loaded from a different domain) can be a third-party vulnerability, and limit the use of third-party code on their website (or host third-party code locally to reduce the attack surface). Furthermore, both locally hosted and third-party-hosted content can be secured with subresource integrity, where a cryptographic hash (included on the webpage) vouches for the integrity of each dependency. Doing this vastly reduces the attack surface, forcing attacks to target the single connection with the victim’s web server as opposed to the many different connections involved in retrieving different dependencies.
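
To make the subresource integrity step concrete, here is a minimal sketch of how a developer might compute the integrity value for a dependency. The file name is hypothetical; the resulting string goes in the integrity attribute of the corresponding script or link tag.

```python
# Minimal sketch: compute a Subresource Integrity (SRI) value for a file.
# The output goes in the "integrity" attribute of a <script>/<link> tag,
# e.g. <script src="vendor.js" integrity="sha384-..." crossorigin="anonymous">.
import base64
import hashlib

def sri_hash(path: str) -> str:
    """Return an SRI integrity value (sha384, base64-encoded) for a file."""
    with open(path, "rb") as f:
        digest = hashlib.sha384(f.read()).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")

if __name__ == "__main__":
    # "vendor.js" is a hypothetical dependency bundled with the web app.
    print(sri_hash("vendor.js"))
```

The browser recomputes the hash when it fetches the resource and refuses to execute it on a mismatch, so a hijacked connection to a dependency’s host cannot silently swap in malicious code.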

At the session layer: CAs need to establish the identity of customers requesting certificates and, while there are proposals to use cryptographic DNSSEC to verify identity (like DANE), the status quo is to verify identity via network communications with the domains listed in certificate requests. Thus, global routing attacks are likely to remain very effective against CAs unless we make more substantial changes to the way certificates are issued. But this does not mean all hope is lost. Many network attacks are not global but are instead localized to a specific part of the Internet. CAs can mitigate these attacks by verifying domains from several vantage points spread throughout the Internet, which allows some of the CA’s vantage points to be unaffected by the attack and to communicate with the legitimate domain owner. Our group at Princeton designed multiple-vantage-point validation and worked with the world’s largest web PKI CA, Let’s Encrypt, to develop the first-ever production deployment of it. CAs can and should use multiple vantage points to verify domains, making them immune to localized network attacks and ensuring that they see a global perspective on routing.
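
As an illustration, the quorum logic at the heart of multiple-vantage-point validation can be sketched in a few lines. This is a simplified model, not Let’s Encrypt’s actual implementation; the fetch function and vantage-point names are stand-ins for CA-controlled machines in different networks.

```python
# Simplified sketch of multiple-vantage-point domain validation:
# issue a certificate only if a quorum of independent vantage points
# observes the expected challenge token for the domain.
from typing import Callable

# In a real deployment, fetch_challenge(vantage, domain) would perform
# an HTTP request from a CA-controlled machine at that vantage point.
FetchChallenge = Callable[[str, str], str]

def quorum_validate(domain: str, expected_token: str,
                    vantage_points: list[str],
                    fetch_challenge: FetchChallenge,
                    quorum: int) -> bool:
    """Return True only if at least `quorum` vantage points see the token."""
    successes = 0
    for vantage in vantage_points:
        try:
            if fetch_challenge(vantage, domain) == expected_token:
                successes += 1
        except OSError:
            pass  # an unreachable vantage point counts as a failure
    return successes >= quorum
```

A localized BGP hijack may fool the vantage point closest to the attacker, but the remaining vantage points still reach the legitimate server, so the attacker only wins if it can hijack traffic from most of the Internet at once.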

At the network layer: In routing, protecting against all BGP attacks is difficult. It requires expensive public-key operations on every BGP update using a protocol called BGPsec that current routers do not support. However, there has recently been significantly increased adoption of a technology called the Resource Public Key Infrastructure (RPKI) that prevents global attacks by establishing a cryptographic database of which networks on the Internet control which IP address blocks. Importantly, when properly configured, RPKI also specifies what size IP prefix should be announced, which prevents global and highly effective sub-prefix attacks. In a sub-prefix attack, the adversary announces a longer, more-specific IP prefix than the victim and benefits from longest-prefix-match routing to have its announcement preferred by the vast majority of the Internet. RPKI is fully compatible with current router hardware. The only downside is that RPKI can still be evaded with certain local BGP attacks where, instead of claiming to own the victim’s IP address (which is checked against the database), an adversary simply claims to be an Internet provider of the victim. The full map of which networks are connected to which other networks is not currently secured by the RPKI. This leaves a window for some types of BGP attacks, which we have seen in the wild. However, the impact of these attacks is significantly reduced and often affects only a part of the Internet. In addition, the MANRS project provides recommendations for best operational practices, including RPKI, that help prevent and mitigate BGP hijacks.
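
To see how RPKI route-origin validation blocks both wrong-origin and sub-prefix announcements, here is a minimal sketch; the ROA entries and AS numbers are hypothetical.

```python
# Minimal sketch of RPKI route-origin validation (ROV): an announcement
# is valid only if some ROA authorizes its origin AS for a covering
# prefix AND the announced prefix is no longer than the ROA's maxLength.
# The maxLength check is what stops sub-prefix hijacks.
from ipaddress import ip_network

# Hypothetical ROA set: (authorized prefix, maxLength, authorized origin AS)
ROAS = [
    (ip_network("203.0.113.0/24"), 24, 64500),
]

def rov_state(prefix_str: str, origin_as: int) -> str:
    prefix = ip_network(prefix_str)
    covered = False
    for roa_prefix, max_len, roa_as in ROAS:
        if prefix.subnet_of(roa_prefix):
            covered = True
            if origin_as == roa_as and prefix.prefixlen <= max_len:
                return "valid"
    return "invalid" if covered else "not-found"

print(rov_state("203.0.113.0/24", 64500))  # valid: matches the ROA
print(rov_state("203.0.113.0/24", 64666))  # invalid: wrong origin AS
print(rov_state("203.0.113.0/25", 64500))  # invalid: sub-prefix exceeds maxLength
```

Note what this cannot catch: an attacker who announces the victim’s exact prefix with the victim’s AS as the apparent origin (pretending to be the victim’s provider) still yields a “valid” route, which is why the local attacks described above survive RPKI.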

Using Cross-Layer Security to Defend Against Cross-Layer Attacks

Looking across these layers we see a common trend: in every layer there are proposed security technologies that could potentially stop attacks like the KLAYswap attack. However, these technologies all face deployment challenges. In addition, there are more modest technologies that are seeing extensive real-world deployment today. But each of these deployed technologies alone can be evaded by an adaptive adversary. For example, RPKI can be evaded by local attacks, multiple-vantage-point validation can be evaded by global attacks, etc. However, if we instead look at the benefit offered by all of these technologies together deployed at different layers, things look more promising. Below is a table summarizing this:

| Security Technology (Layer) | Detects routing attacks that affect the entire Internet | Detects routing attacks that affect part of the Internet | Limits the number of potential targets for routing attacks |
| --- | --- | --- | --- |
| RPKI (network layer) | Yes | No | No |
| Multiple-vantage-point validation (session layer) | No | Yes | No |
| Subresource integrity and locally hosted content (application layer) | No | No | Yes |

This synergy of security technologies deployed at different layers is what we call cross-layer security. RPKI alone can be evaded by clever adversaries (using attack techniques we are seeing more and more in the wild). However, the attacks that evade RPKI tend to be local (i.e., not affecting the entire Internet). This synergizes with multiple-vantage-point validation, which is best at catching local attacks. Furthermore, because even these two technologies working together do not fully eliminate the attack surface, improvements at the web layer that reduce reliance on code loaded from external domains shrink the attack surface even further. At the end of the day, the entire web ecosystem can benefit tremendously from each layer deploying security technologies that leverage the information and tools available exclusively to that layer. When working in unison, these technologies can do something that none of them could do alone: stop cross-layer attacks.

Cross-layer attacks are surprisingly effective because no one layer has enough information about the attack to completely prevent it. Fortunately, each layer does have the ability to protect against a different portion of the attack surface. If developers across these different communities know what type of security is realistic and expected of their layer in the stack, we will see meaningful improvements.

Even though the ideal endgame is to deploy a security technology that is capable of fully defending against cross-layer attacks, we have not yet seen wide-scale adoption of any such technology. In the meantime, if we continue to focus our defenses against cross-layer attacks at a single layer, these attacks will take significantly longer to protect against. Changing our mindset and seeing the strengths and weaknesses of each layer lets us protect against these attacks much more quickly, by increasing the use of synergistic technologies at different layers that have already seen real-world deployment.

Dcentral vs. Consensus: Are institutions “frens” or enemies of crypto?

As a part of an ethnographic study on blockchain organizations, I recently attended two major conferences – Dcentral Con and Consensus – held back-to-back in Austin, Texas during a blistering heatwave. My collaborator, Johannes Lenhard, and I had conducted a handful of interviews with angel investors, founders, and venture capitalists, but we had yet to conduct any fieldwork to observe these types of operators in the wild. Dcentral, held at Austin’s Long Center for the Performing Arts, and Consensus, held at the Austin Convention Center and other venues throughout downtown, provided the perfect opportunity. The speaker and panel topics at both conferences varied widely, from non-fungible tokens (NFTs) to the metaverse to decentralized finance (DeFi). At both conferences, an underlying debate regarding the role of established institutions repeatedly bubbled to the surface. The differences between the two conferences themselves offered a stark contrast between those who envision a new frontier of crypto cowboys dismantling existing social and economic hierarchies and those who envision that same industry gaining traction and legitimacy through collaboration with regulators and the traditional financial (aka “TradFi”) sector.

Dcentral was populated by scrappy developers of emerging protocols, avid gamers, and advocates for edgy decentralized autonomous organizations (DAOs), such as Treat DAO, which allows adult content creators to sell “NSFW” (i.e., not safe for work) NFTs. Attendees at Dcentral sported promotional t-shirts and sneakers, and a few even showed up in Comic Con style garb, flaunting flowing white togas and head-to-toe blue body paint. Over the course of Dcentral, many speakers and attendees crafted passionate arguments around common libertarian talking points: self-sovereignty, individualism, opposition to the Federal Reserve, and skepticism about government oversight more broadly. Yet governments were not the only institutions drawing the ire of the Dcentral crowd. Speakers and attendees alike took aim at corporate actors from traditional finance systems as well as venture capital (VC) firms and accredited investors.

Perhaps the most acerbic critique of institutionalization in the crypto sector was issued by Stefan Rust, founder and CEO of Laguna. Wearing a white cowboy hat, he opened his presentation [see 3:19] with a criticism of protocols that impose undesirable “middlemen” between the user and their intended transactions:

“This is what we want to avoid. We invited these institutions into our ecosystem and we now have layers, on layers, on layers that have been created in order to take a decentralized peer-to-peer electronic cash ecosystem to fit a traditional, TradFi world, the system that we’ve been fighting so hard since 2008 to combat […]. Do we want this? I don’t know. I didn’t sign up to get into crypto and Bitcoin and a peer-to-peer electronic cash system for multiple layers of multiple middlemen and multiple fees.”

Stefan Rust, Laguna

In his view, increasing involvement of institutional actors could lead to “SSDD”: that is, “same shit, different day,” which, according to Rust, is exactly what the ecosystem should be dismantling.

Consensus, held directly after Dcentral, had an entirely different feel. In contrast to the casual dress of Dcentral, many attendees at Consensus wore conservative silk dresses, high-heeled pumps, or well-tailored suits, despite temperatures that topped 100 degrees just outside the conference center doors. In a panel aptly entitled “Wall Street Suits Meet Hoodies,” Ryan VanGrack, a former advisor at the Securities and Exchange Commission (SEC), opened with a comment about how he felt uncomfortably informal in his crisp button-down shirt, slacks, and pristine gray sneakers. According to one marketer at a well-known technology company, the cost of hosting a booth on the exhibit floor was in the neighborhood of $75,000. This was not the ragtag gang of artists and emerging protocols from Dcentral; these people were established crypto players who saw the pathway to revolution as running straight through the front door of institutions rather than burning them to the ground.

Like Dcentral, speakers and panelists at Consensus called for the reform of the financial industry, often similarly drawing from libertarian values and arguments; however, unlike at Dcentral, many at Consensus emphasized that regulation of the crypto industry is not only warranted but necessary to expand its scope and market adoption. According to them, the lack of regulation has imposed an artificial ceiling on what the crypto sector can achieve because retail investors, would-be protocol founders, and institutional players are still “waiting on the sidelines” for regulatory clarity. This position was not merely abstract rhetoric. Current and former government actors, such as Rostin Behnam, Chairman of the Commodity Futures Trading Commission (CFTC), as well as Senators Kirsten Gillibrand, Cynthia Lummis, and Pat Toomey, participated in panels. These panels focused on the role of regulation in the crypto ecosystem, such as measures that preserve innovation while also preventing catastrophic failures like the recent collapse of Terra, which financially decimated many retail investors.

At Consensus, advocates of institutionalization were no less enthusiastic in their endorsement of the mission of crypto and web3 than the anti-institutionalists at Dcentral. In other words, they too were true believers, just with a different theory of change. On Friday night I was invited to attend an event hosted by Pantera Capital, a top-tier crypto VC fund. I mentioned to one of the other attendees that I had attended Dcentral. His face pulled into a grimace. “Why the look of disgust?” I asked. He clarified that while “disgust” was too strong of a word, he felt that events like Dcentral delegitimize what the industry seeks to accomplish. Rather than being the true embodiment of the web3 ethos, he felt these crypto cowboys and their antagonistic rhetoric risked undermining the very efforts that were likely to have the biggest impact.

At the conference, panelists and attendees referred to Terra as the “elephant in the room.” But it struck me that the tension between personal wealth and the crypto vision was a much bigger and far less acknowledged elephant. Possibly the only speaker to directly and unambiguously call attention to this was Assistant Professor of Law Rohan Grey. In a panel entitled “Who Should be Allowed to Issue Digital Dollars,” Grey noted that as the “resident pet skeptic” he would act as a rare detractor amid the “self-congratulatory industry love-fest” or “circle jerk” that would unfold at Consensus. Establishing common ground with the crypto community, he noted that he too supported efforts to resist “Big Brother as well as Wall Street and Silicon Valley.” But then he offered a withering critique of crypto industry actors, especially those with ties to the established financial sector:

“We should be very clear about the difference between private, for-profit actors providing public goods for their own material benefit and actual public goods. So, who are the people who want to issue digital dollars if not the government? We’re talking about licensed limited liability companies backed by venture capitalists, many of whom are standard Wall Street actors. We’re talking about people with a fiduciary responsibility to a particular group of shareholders. We’re talking about decisions being made on behalf of the public by private individuals who are there only because of their capacity to hold wealth initially, and those actors will then be lobbying for laws favorable to themselves in government and creating the same revolving door that we’ve seen with Wall Street for decades.” 

Rohan Grey, Assistant Professor at Willamette University College of Law

The idea that private sector actors who made their fortunes in the traditional financial sector could serve as the vanguard of a financial revolution certainly merits scrutiny. Yet, even if somewhat dubious, it is at least possible that these actors, having seen from the inside the corruption and ill-effects of existing financial institutions, could leverage their insight to import better, more democratic values into an emerging crypto financial system. Along these lines, one man I chatted with at an after party said it was his experience witnessing what he felt were morally reprehensible, exploitative lending policies while working at a bank that ultimately pushed him to adopt the crypto vision. Still, more than a little skepticism is warranted given that institutional or even anti-institutional actors stand to materially benefit from greater adoption of crypto and its associated technologies, a point that Grey himself underscored.

Following such skepticism, a cynical take is that people will always behave in alignment with their own incentives, even when doing so causes harm to others. I have heard people espouse exactly this sentiment when excoriating scams, NFT “rug pulls,” or even failed DeFi applications. Yet such a bleak view of humanity is overly simplistic given the body of empirical data about human prosocial behavior (e.g., Fehr, Fischbacher & Kosfeld, 2005). People can and often do behave in ways that are altruistic or in the service of others, even at a cost to themselves. Many advocates both for and against institutionalization of the web3 and cryptocurrency sector are likely motivated by a sincere desire to benefit their fellow man. But intentions aren’t the only thing that matters. The positive and negative real-world impacts of blockchain applications, both direct and indirect, are critical. Whether this increasingly institutionalized sector will spark a real revolution or further entrench SSDD remains to be seen.

The Trust Architecture of Blockchain: Kevin Werbach at CITP

In 2009, bitcoin inventor Satoshi Nakamoto argued that it was “a system for electronic transactions without relying on trust.”

That’s not true, according to today’s CITP speaker Kevin Werbach (@kwerb), a professor of Legal Studies and Business Ethics at the Wharton School at UPenn. Kevin is the author of a new book with MIT Press, The Blockchain and the New Architecture of Trust.

A world-renowned expert on emerging technology, Kevin examines business and policy implications of developments such as broadband, big data, gamification, and blockchain. Kevin served on the Obama Administration’s Presidential Transition Team, founded the Supernova Group (a technology conference and consulting firm) and helped develop the U.S. approach to internet policy during the Clinton Administration.

Blockchain does actually rely on trust, says Kevin. He tells us the story of the cryptocurrency exchange QuadrigaCX, which claimed that millions of dollars in cryptocurrency were lost when its CEO passed away. While the whole story was more complex, Kevin says, it reveals how much bitcoin transactions rely on many kinds of trust.

Rather than removing the need for trust, blockchain offers a new architecture of trust compared to previous models. Peer-to-peer trust is based on personal relationships. Leviathan trust, described by Hobbes, is a social contract with the state, which then has the power to enforce private agreements between people. The power of the state makes us more trusting in private relationships, provided you trust the state and the legal system works. Intermediary trust involves a central entity that manages transactions between people.

Blockchain is a new kind of trust, says Kevin. With blockchain trust, you can trust the ledger without (so it seems) trusting any actor to validate it. For this to work, transactions need to be very hard to change without central control – if anyone had the power to make changes, you would have to trust them.

Why would anyone value the blockchain? Blockchain minimizes the need for certain kinds of trust: it removes single points of failure, reduces the risk of monopoly, and reduces friction from intermediation. Blockchain also expands trust by minimizing reconciliation, automating execution, and increasing the auditability of records.

What could possibly go wrong? Even if the blockchain ledger is auditable and trustworthy, the transaction record isn’t the whole system. Kevin points out that 80% of all bitcoin users rely on centralized key storage. He also cites figures suggesting that 20-80% of all Initial Coin Offerings were fraudulent.

Kevin tells us about “Vlad’s conundrum”: there’s a direct conflict between the design of the blockchain system and any regulatory model. The blockchain doesn’t know the difference between transactions, and there’s no entity that can say “no, that’s not okay.” Kevin tells us about the use of the blockchain for money laundering and financing terrorism. He also tells us about the challenge of moderating child pornography data that has been distributed across the blockchain, exposing every bitcoin node to legal risk.

None of these risks are as simple as they seem. Legal enforcement is carried out by humans who often consider intent. Simply possessing digital bits that represent child pornography data will not doom bitcoin. Furthermore, systems are less decentralized or anonymous than they appear. Regulations about parts of the system at the edges and endpoints of the blockchain can promote trust and innovation. Regulators have often been able to pull systems apart, find the involved parties, and hold actors accountable.

Kevin argues that designers of blockchain systems have to manage trade-offs among three factors: trust, freedom of action, and convenience. Any designer of a system will have to make hard choices among these factors.

Citing Vili Lehdonvirta’s blockchain paradox, Kevin tells us several stories about ways that centralized governance processes managed serious problems and fraud in blockchain systems, problems that would have persisted if governance had been purely decentralized. Kevin also describes technical mechanisms for governance: voting systems, special kinds of contracts, arbitration schemes, and dispute resolution processes.

Overall, Kevin tells us that blockchain governance comes back to trust, which shapes how we act with confidence in circumstances of uncertainty and vulnerability.

What’s new with BlockSci, Princeton’s blockchain analysis tool

Six months ago we released the initial version of BlockSci, a fast and expressive tool to analyze public blockchains. In the accompanying paper we explained how we used it to answer scientific questions about security, privacy, miner behavior, and economics using blockchain data. BlockSci has a number of other applications as well, including forensics and use as an educational tool.

Since then we’ve heard from a number of researchers and developers who’ve found it useful, and there’s already a published paper on ransomware that has made use of it. We’re grateful for the pull requests and bug reports on GitHub from the community. We’ve also used it to deep-dive into some of the strange corners of blockchain data. We’ve made enhancements including a 5x speed improvement over the initial version (which was already several hundred times faster than previous tools).

Today we’re happy to announce BlockSci 0.4.5, which has a large number of feature enhancements and bug fixes. As just one example, Bitcoin’s SegWit update introduces the concept of addresses that have different representations but are equivalent; tools such as blockchain.info are confused by this and return incorrect (or at least unexpected) values for the balance held by such addresses. BlockSci handles these nuances correctly. We think BlockSci is now ready for serious use, although it is still beta software. Here are a number of ideas on how you can use it in your projects or contribute to its development.
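
To give a flavor of the Python interface, here is a rough sketch of a small analysis. The data path is a placeholder for your own parser output, and the attribute names shown here should be checked against the documentation for this release rather than taken as definitive.

```python
import blocksci  # assumes BlockSci is installed and a chain has been parsed

# Placeholder path: point this at your parser output directory.
chain = blocksci.Blockchain("/path/to/parser/output")

# Blocks are indexed by height; as a toy query, sum the transaction
# fees in a single block.
block = chain[200000]
total_fee = sum(tx.fee for tx in block.txes)
print(block.height, total_fee)
```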

We plan to release talks and tutorials on BlockSci, and improve its documentation. I’ll give a brief talk about it at the MIT Bitcoin Expo this Saturday; then Harry Kalodner and Malte Möser will join me for a BlockSci tutorial/workshop at MIT on Monday, March 19, organized by the Digital Currency Initiative and Fidelity Labs. Videos of both events will be available.

We now have two priorities for the development of BlockSci. The first is to make it possible to implement almost all analyses in Python with the speed of C++. To enable this we are building a function composition interface to automatically translate Python to C++. The second is to better support graph queries and improved clustering of the transaction graph. We’ve teamed up with our colleagues in the theoretical computer science group to adapt sophisticated graph clustering algorithms to blockchain data. If this effort succeeds, it will be a foundational part of how we understand blockchains, just as PageRank is a fundamental part of how we understand the structure of the web. Stay tuned!

Blockchain: What is it good for?

Blockchain and cryptocurrencies are surrounded by world-historic levels of hype and snake oil. For people like me who take the old-fashioned view that technical claims should be backed by sound arguments and evidence, it’s easy to fall into the trap of concluding that there is no there there, and that blockchain and cryptocurrencies are fundamentally useless. This post is my attempt to argue that if we strip away the fluff, some valuable computer science ideas remain.

Let’s start by setting aside the currency part, for now, and focusing on blockchains. The core idea goes back to at least the 1990s: replicate a system’s state across a set of machines; use some kind of distributed consensus algorithm to agree on an append-only log of events that change the state; and use cryptographic hash-chaining to make the log tamper-evident. Much of the legitimate excitement about “blockchain” is driven by the use of this approach to enhance transparency and accountability, by making certain types of actions in a system visible. If an action is recorded in your blockchain, everyone can see it. If it is not in your blockchain, it is ignored as invalid.
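
To make the hash-chaining idea concrete, here is a minimal sketch of a tamper-evident, append-only log; the event format and class names are arbitrary choices for illustration.

```python
# Minimal sketch of an append-only log made tamper-evident by
# cryptographic hash-chaining: each entry's hash covers the previous
# entry's hash, so editing any earlier event breaks every later hash.
import hashlib
import json

def entry_hash(prev_hash: str, event: dict) -> str:
    """Hash an event together with the previous entry's hash."""
    payload = json.dumps({"prev": prev_hash, "event": event}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

class HashChainLog:
    GENESIS = "0" * 64

    def __init__(self):
        self.entries: list[tuple[dict, str]] = []

    def append(self, event: dict) -> str:
        prev = self.entries[-1][1] if self.entries else self.GENESIS
        h = entry_hash(prev, event)
        self.entries.append((event, h))
        return h

    def verify(self) -> bool:
        """Recompute the chain; any edit to an earlier entry breaks it."""
        prev = self.GENESIS
        for event, h in self.entries:
            if entry_hash(prev, event) != h:
                return False
            prev = h
        return True
```

Replicas that agree on the latest hash implicitly agree on the entire history, which is what makes the replicated log auditable.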

An example of this basic approach is certificate transparency, in which certificate authorities (“CAs,” which vouch for digital certificates connecting a cryptographic key to the owner of a DNS name) must publish the certificates they issue on a public list, and systems refuse to accept certificates that are not on the list. This ensures that if a CA issues a certificate without permission from a name’s legitimate owner, the bogus certificate cannot be used without publishing it and thereby enabling the legitimate owner to raise an alarm, potentially leading to public consequences for the misbehaving CA.

In today’s world, with so much talk about the policy advantages of technological transparency, the use of blockchains for transparency can be an important tool.

What about cryptocurrencies? There is a lot of debate about whether systems like Bitcoin are genuinely useful as a money transfer technology. Bitcoin has many limitations: transactions take a long time to confirm, and the mining-based consensus mechanism burns a lot of energy. Whether and how these limitations can be overcome is a subject of current research.

Cryptocurrencies are most useful when coupled with “smart contracts,” which allow parties to define the behavior of a virtual actor in code, and have the cryptocurrency’s consensus system enforce that the virtual actor behaves according to its code. The name “smart contract” is misleading, because these mechanisms differ significantly from legal contracts.  (A legal contract is an explicit agreement among an enumerated set of parties that constrains the behavior of those parties and is enforced by ex post remedies. A “smart contract” doesn’t require explicit agreement from parties, doesn’t enumerate participating parties, doesn’t constrain behavior of existing parties but instead creates a new virtual party whose behavior is constrained, and is enforced by ex ante prevention of deviations.) It is precisely these differences that make “smart contracts” useful.

From a computer science standpoint, what is exciting about “smart contracts” is that they let us make conditional payments an integral part of the toolbox for designing distributed protocols. A party can be required to escrow a deposit as a condition of participating in some process, and the return of that deposit, in part or in whole, can be conditioned on the party performing arbitrary required steps, as long as compliance can be checked by a computation.
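
As a toy illustration of this escrow pattern, here is a sketch in ordinary Python rather than an actual smart-contract language; the class and its names are hypothetical, and in a real system this logic would run on-chain so that the consensus system, not any single party, enforces it.

```python
# Toy sketch of the escrow pattern: a deposit is required to
# participate, and it is refunded only if a publicly checkable
# compliance test passes.
from typing import Callable

class EscrowContract:
    def __init__(self, required_deposit: int,
                 compliance_check: Callable[[bytes], bool]):
        self.required_deposit = required_deposit
        # The check must be a computation every node can re-run.
        self.compliance_check = compliance_check
        self.deposits: dict[str, int] = {}

    def join(self, party: str, amount: int) -> None:
        if amount < self.required_deposit:
            raise ValueError("deposit too small to participate")
        self.deposits[party] = amount

    def settle(self, party: str, evidence: bytes) -> int:
        """Refund the deposit iff the compliance evidence checks out;
        otherwise the deposit is forfeited."""
        deposit = self.deposits.pop(party)
        return deposit if self.compliance_check(evidence) else 0
```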

Another way of viewing the value of “smart contracts” is by observing that we often define correctness for a new distributed protocol by postulating a hypothetical trusted third party who “referees” the protocol, and then proving some kind of equivalence between the new referee-free protocol we have designed and the notional refereed protocol. It sure would be nice if we could just turn the notional referee into a smart contract and let the consensus system enforce correctness.

But all of this requires a “smart contract” system that is efficient and scalable–otherwise the cost of using “smart contracts” will be excessive. Existing systems like Ethereum scale poorly. This too is a problem that will need to be overcome by new research. (Spoiler alert: We’ll be writing here about a research solution in the coming months.)

These are not the only things that blockchain and cryptocurrencies are good for. But I hope they are convincing examples. It’s sad that the hype and snake oil have gotten so extreme that it can be hard to see the benefits. The benefits do exist.