September 24, 2018

How can we scale private, smart contracts? Ed Felten on Arbitrum

Smart contracts are powerful virtual referees for holding money and carrying out agreed-on procedures in case of dispute, but they can’t guarantee privacy and they have strict scalability limitations. How can we improve on these constraints?

Here at the Center for Information Technology Policy (CITP), it’s the first event of our weekly Tuesday lunch series. Speaking today is Professor Ed Felten, director of CITP. Ed served at the White House as the deputy U.S. chief technology officer from June 2015 to January 2017, and was also the first chief technologist for the Federal Trade Commission, from January 2011 until September 2012.

What is cryptocurrency? Ed describes a situation where Alice wants to send money to Bob. She digitally signs a data structure indicating that coin C should be paid to Bob’s address, and she sends it to the Bitcoin network. The systems in the network then gossip to each other that Alice wants to pay Bob.

This brings us to the blockchain. The blockchain is a data structure that includes information about transactions and a link to the previous block. Each block includes a cryptographic hash of the previous block, so anyone who accepts a block implicitly accepts the rest of the chain behind it. When Alice creates a transaction, it will be added to a block by a Bitcoin miner. The miner then tries to get its block added to the blockchain; if it succeeds, Alice’s transaction is accepted and is deemed to have happened. That’s how Bitcoin works: it keeps track of all previous transactions, and that is how it keeps track of the currency.
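
To make the hash-linking concrete, here is a minimal Python sketch (my own illustration, not Bitcoin’s actual block format) of how each block commits to the previous one, so that accepting a block means accepting the whole chain behind it:

    import hashlib, json

    def block_hash(block):
        # Hash a canonical serialization of the block's contents.
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    genesis = {"prev_hash": None, "transactions": ["coinbase reward to a miner"]}
    block1 = {"prev_hash": block_hash(genesis),
              "transactions": ["Alice pays coin C to Bob's address"]}

    # Anyone accepting block1 is also committing to the exact contents of genesis,
    # because any change to genesis would change block1's prev_hash.
    assert block1["prev_hash"] == block_hash(genesis)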

Smart contracts are another blockchain idea, though the name is a misnomer. Here’s how it works. If Alice and Bob want to make an agreement and have a protocol for carrying it out, they write down computer code that defines the behavior of a third party. One way to carry out the protocol is to have a trusted third party do it. A smart contract instead creates a virtual third party: the parties write code describing what it should do and instantiate it on the blockchain system. Then, if all goes well, the contract will behave according to its code, acting as the third party or referee in the agreement between Alice and Bob. Ed illustrates this with a pile of money, because one thing a contract can do is receive coins, own them, and do with those coins whatever its code has defined. The contract, in this sense, is a trusted third party expressed in code.

What can smart contracts do? One option is escrow. Maybe Alice wants to buy books and doesn’t want to pay until she receives them, but the shop will only ship after payment. This is typically where an escrow agent comes in. In the optimistic case, the escrow agent receives the money and transfers it to the shop once the books have been received. Smart contracts can play the role of the escrow agent. Why set this up in code? In theory, an escrow agent defined in code will be less likely to commit fraud.
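
As a rough sketch of the escrow logic (written in Python for readability rather than a real contract language, with made-up class and method names), the virtual referee might look like this:

    class EscrowContract:
        # Toy model of the escrow referee; a real contract would run on-chain.
        def __init__(self, buyer, seller, price):
            self.buyer, self.seller, self.price = buyer, seller, price
            self.funded = False

        def deposit(self, sender, amount):
            # The buyer funds the escrow up front, so the shop can see the money is locked.
            if sender == self.buyer and amount == self.price:
                self.funded = True

        def confirm_delivery(self, sender):
            # Once the buyer confirms receipt, the contract releases payment to the shop.
            if sender == self.buyer and self.funded:
                self.funded = False
                return (self.seller, self.price)  # pay the seller

A real escrow contract would also need a timeout or dispute path for the case where the books never arrive; the point here is only that the referee’s behavior is fixed by code rather than by a human agent’s discretion.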

Smart contracts can also support sealed-bid auctions. Ed asks us to imagine that someone is selling naming rights to a cafeteria. Everyone submits bids secretly, the “envelopes” are opened at the end of the bidding, and whoever bid the most wins. Smart contracts can give people assurance that the other participants will carry out the key steps of the process by requiring everyone to post a deposit, with known rules for what will happen to that money.
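
One common way to run a sealed-bid auction in code is a commit-then-reveal scheme backed by deposits. The sketch below is illustrative (the deposit rule and amounts are my own assumptions, not details from the talk): bidders first submit a hash of their bid, then reveal it, and anyone who fails to reveal forfeits the deposit.

    import hashlib, secrets

    DEPOSIT = 10
    commitments, deposits = {}, {}

    def sealed(bid, nonce):
        # The sealed "envelope": a hash hides the bid until the reveal phase.
        return hashlib.sha256(f"{bid}:{nonce}".encode()).hexdigest()

    def submit(bidder, commitment):
        commitments[bidder] = commitment
        deposits[bidder] = DEPOSIT            # locked until the bidder reveals

    def reveal(bidder, bid, nonce):
        if sealed(bid, nonce) == commitments.get(bidder):
            return bid, deposits.pop(bidder)  # honest revealers get the deposit back
        return None                           # no valid reveal: the deposit stays forfeited

    nonce = secrets.token_hex(16)
    submit("alice", sealed(300, nonce))
    print(reveal("alice", 300, nonce))        # (300, 10): bid opened, deposit refunded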

The most popular smart contract system is Ethereum, in which all contract code and data are public. Every miner emulates every execution step of every contract. That is slow, expensive, and doesn’t scale, so Ethereum requires people to pay what it calls “gas” in exchange for the computation and storage a contract uses. The high cost to the miners of emulating these steps translates into a high price of gas.
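
As a back-of-the-envelope illustration of how gas translates into cost (the gas price below is an assumed figure, not one from the talk; only the 21,000-gas cost of a simple transfer is a fixed Ethereum constant):

    gas_used = 21_000                  # gas consumed by a plain ETH transfer
    gas_price_gwei = 20                # assumed gas price; it fluctuates with demand
    fee_eth = gas_used * gas_price_gwei * 1e-9   # 1 gwei = 1e-9 ETH
    print(fee_eth)                     # about 0.00042 ETH under this assumed price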

Contract complexity on Ethereum is capped by a “global gas limit” defining the maximum amount of contract work the miners are able to do. Roughly speaking, Ed says, the total computational capacity of Ethereum is less than a tenth of a laptop’s. These scalability limitations make many protocols impossible, and blockchain space is very limited.

Ethereum also has privacy limitations: Bitcoin scripts and Ethereum contract code are all public. Not everyone wants the full details of every contract to be visible to everyone. In some cases, you might want something more like a traditional business contract, where the terms are normally known only to the parties.

Can we scale smart contracts? That’s what the Arbitrum team set out to do. To make clear where the work fits, Ed describes three layers at which someone could intervene. Rather than focusing on the consensus layer, the Arbitrum team focused on scaling the smart contracts themselves.

  • Kalodner, H., Goldfeder, S., Chen, X., Weinberg, S. M., & Felten, E. W. (2018, August). Arbitrum: scalable, private smart contracts. In Proceedings of the 27th USENIX Conference on Security Symposium (pp. 1353-1370). USENIX Association.

How can you scale smart contracts? Ed’s team worked on an off-chain protocol: the computation and storage are performed out-of-band by the transacting parties, and the results are then linked back to the chain.

Ed quickly summarizes approaches that have been taken, including SNARKs, Incentivized Verification (TrueBit), and State Channels. He goes into more depth on TrueBit. In this system of incentivized verification, a group of “verifiers” volunteer to check computations, and they are rewarded more if they find errors. Anyone can be a verifier, and the reward is split among them. If a computation checked by a verifier is incorrect, the verifier can give an efficient proof of incorrectness.

But there’s a participation dilemma for incentivized verifiers. Imagine a game-theoretic situation with N players, each of whom can pay 1 to participate. Now imagine that a participating verifier pretends to be more than one verifier (a sybil attack): with enough people wearing different sybil masks, others are disincentivized from being verifiers. The creators of TrueBit have shown that their system is “one-shot sybil proof”: if someone claims to be two people, they get two shares of the reward, but the shares are smaller, so it would have been more profitable to claim (honestly) to be a single party.

Verification, however, is a repeated game; in a repeated game, a verifier might sacrifice something in one round in order to gain over the long run. In their paper, Ed and his collaborators give a game-theoretic proof that every one-shot sybil-proof participation game allows a situation where one verifier can bully all the other players into not participating by flooding the system with fake verifiers.
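
A toy illustration of the bullying dynamic (the even reward split and the specific numbers are assumptions for illustration, not the exact mechanism analyzed in TrueBit or in the Arbitrum paper):

    COST = 1.0      # what each claimed identity pays to participate
    REWARD = 10.0   # total reward, split evenly among all claimed identities

    def payoff(my_identities, other_identities):
        total = my_identities + other_identities
        return my_identities * (REWARD / total) - my_identities * COST

    # One honest verifier facing a bully who runs 20 sybil identities:
    print(payoff(1, 20))   # about -0.52: participating loses money, so the honest verifier quits
    # The bully also loses money this round (about -10.5) ...
    print(payoff(20, 1))
    # ... but in the repeated game it recoups that once it has the field to itself:
    print(payoff(1, 0))    # 9.0 per round with no competition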

The limits of other approaches show why Ed and his collaborators created Arbitrum, which uses a combination of protocol design, incentives, and a virtual machine infrastructure to carry out scalable, trustworthy smart contracts. Arbitrum starts by assuming an underlying consensus layer, which they call a “verifier.”

The Arbitrum system is built around Managers, who manage a virtual machine (VM) that carries out computation and stores data. Arbitrum provides an “any-trust” guarantee: as long as at least one manager of a VM is honest, the VM will execute correctly according to its code.

Imagine that Alice and Bob are going to play a chess match with a gold medal at stake. They create code that holds the gold medal, receives alternating moves, verifies the validity of the game, and pays the winner. Alice and Bob put the code onto a VM. Who are the managers in this situation? Alice and Bob can be the managers themselves, and so long as they can hold each other accountable, the contract will work.

How can managers in Arbitrum cooperate to advance the state of a VM? Managers have an incentive to agree unanimously about what a VM will do. If they all agree and digitally sign an assertion, the system accepts it, since the system assumes that at least one manager is acting honestly. What if a manager disputes an assertion? A manager can make an assertion and put down a deposit; another manager can challenge the assertion and also put down a deposit. If there’s a challenge, the system referees the dispute and takes the deposit of whichever manager was lying. When a challenge happens, the asserter divides the asserted computation in half, and the challenger must identify which half is incorrect. Repeating this, the dispute is narrowed from a large computation down to a single instruction. The system can then check the one-instruction claim to find out who’s lying.
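
Here is a minimal sketch of the bisection idea in Python (the function and argument names are made up for illustration; the real protocol exchanges hashed VM states on-chain):

    def resolve_dispute(asserter_state, challenger_agrees, lo, hi):
        # Precondition: both parties agree on the VM state after step lo
        # and disagree about the state after step hi.
        while hi - lo > 1:
            mid = (lo + hi) // 2
            if challenger_agrees(asserter_state(mid), mid):
                lo = mid   # first half is agreed; the error lies later
            else:
                hi = mid   # disagreement already appears by the midpoint
        # The parties now agree at step lo and dispute only the single
        # instruction that takes the VM from step lo to step hi.
        return lo

    # Toy usage: the asserter's claims go wrong from step 700,000 onward.
    honest = lambda i: f"state-{i}"
    asserter = lambda i: honest(i) if i < 700_000 else f"bogus-{i}"
    agrees = lambda state, i: state == honest(i)
    print(resolve_dispute(asserter, agrees, 0, 1_000_000))   # 699999

For a million-step computation this takes about 20 rounds of halving, after which the on-chain referee only has to re-execute one instruction.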

By dividing the dispute down to a single instruction, Ed says, it’s possible to decide the dispute efficiently and in a way that minimizes privacy leaks. He then describes the data structure Arbitrum uses to store the state of a program: a tree of cryptographically hashed values. Conventional virtual machines store code and data in ways that make verifying even a single instruction take logarithmic time. Arbitrum instead stores data in fixed-size “tuples” that can be arranged in a tree structure, and application code, rather than the VM emulator, manages the tree. In a typical VM, a single instruction takes O(log n) time to verify; in Arbitrum, an operation may take O(log n) instructions, but each instruction can be checked in constant time. And because Arbitrum narrows verification down to a single instruction, resolving a dispute can take constant time.
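
A rough Python sketch of the tuple idea: values and sub-tuples hash together Merkle-style, so a claim about any part of the VM’s state can be checked against a single root hash. (This representation is simplified; Arbitrum’s actual tuple format and hashing rules differ in detail.)

    import hashlib

    def tuple_hash(node):
        # A node is either an integer leaf or a small fixed-size tuple of child nodes.
        if isinstance(node, int):
            return hashlib.sha256(node.to_bytes(32, "big")).digest()
        return hashlib.sha256(b"".join(tuple_hash(child) for child in node)).digest()

    # A small "memory" built from nested tuples:
    memory = ((1, 2, 3), (4, 5, 6))
    root = tuple_hash(memory)

    # Changing any leaf changes the root, so the root commits to the whole tree.
    assert tuple_hash(((1, 2, 3), (4, 5, 7))) != root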

The state of a VM is revealed only to the VM’s managers– for example, Alice and Bob would be the only people who need to know what moves were made in the chess game. The only things that appear on the chain are: saltable hashes of the VM state, the number and timing of the steps, and the messages/money sent and received by the VM.

The Arbitrum team has implemented this system in 6,800 lines of Go code, including a VM emulator, an assembler, and a loader. They have an honest-manager module that makes and defends assertions. Their proof of concept uses a centralized verifier for simplicity, but this is a pluggable module that could easily be replaced with one supporting multiple verifiers. They also have an Arbitrum standard library.

How well does this scale? Ed describes an example contract, showing that at the high end Arbitrum can run at roughly a million times the performance of Ethereum. Ed thinks it’s the only system that provides scalability, privacy, and a programming model for writing smart contracts.

Questions

After the talk, I asked Ed whether collaborations like this are common, ones that bring together game-theoretic mechanism design, cryptography, and algorithm/data-structure design. Ed responded that most cryptocurrency work does combine these things. What makes Arbitrum unusual, Ed explained, is the way the research team re-designed the VM itself to make the protocol more scalable. It’s hard for people to keep all of those things in mind, and Ed says it’s easy to get things wrong, which is why peer review is so important in cryptocurrency research.

Thoughts on California’s Proposed Connected Device Privacy Bill (SB-327)

This post was authored by Noah Apthorpe.

On September 6, 2018, the California Legislature presented draft legislation to Governor Brown regarding security and authentication of Internet-connected devices. This legislation would extend California’s existing reasonable data security requirement—which already applies to online services—to Internet-connected devices.  

The intention of this legislation, to prevent default passwords and other egregious authentication flaws, is admirable, especially given the extent of documented security vulnerabilities in Internet of Things (IoT) devices. Many such vulnerabilities, including default passwords and cleartext data transmissions (e.g., in IoT toys and medical devices discovered by researchers at Princeton), stem from lazy developer practices that result in devices with minimal (or nonexistent) security or privacy protections. Such flaws could be addressed with well-known best practices that would not place an excessive burden on device manufacturers. With this context in mind, we applaud California’s effort to mandate minimal security and privacy protections for connected devices.

Unfortunately, as critics have pointed out, the wording of the proposed legislation is imprecise, especially regarding “reasonable security features.” Rather than reiterate this criticism, we point out some additional technical limitations of the proposed legislation, focusing on cases where the current language does not properly address security flaws as intended. We hope that these examples will help inform future improvements to this or other IoT security and privacy legislation.

1798.91.04.b.1: “The preprogrammed password is unique to each device manufactured.”
Mandating unique passwords for each device still leaves room for passwords that are easily guessable. For example, a manufacturer could assign consecutive integers as passwords to all devices and be in compliance, but such passwords could be easily enumerated by an attacker. Related problems have already occurred. In 2015, TP-LINK routers were shipped with unique WiFi passwords derived from the hardware (MAC) addresses of each device. Because a device’s MAC address is visible in its wireless traffic, an attacker observing one of these routers could trivially guess the WiFi password. Much research has gone into the topic of generating secure passwords, the takeaways of which should be incorporated into any default-password prevention law. Ultimately, additional criteria on preprogrammed unique passwords are needed for this bill to provide the intended security benefits.
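
For contrast, here is one way a manufacturer could generate per-device passwords that are both unique and hard to enumerate; this is an illustrative sketch of the general idea, not language from the bill or a specific research recommendation:

    import secrets
    import string

    ALPHABET = string.ascii_letters + string.digits

    def per_device_password(length=16):
        # Cryptographically random and independent of the MAC address or serial number,
        # so learning one device's password reveals nothing about any other device's.
        return "".join(secrets.choice(ALPHABET) for _ in range(length))

    # Generated at the factory and printed on each unit's label,
    # never derived from public identifiers.
    print(per_device_password())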

1798.91.04.b.2: “The device contains a security feature that requires a user to generate a new means of authentication before access is granted to the device for the first time.”
This alternative requirement leaves open the possibility that devices or device features that users never access will not receive unique, secure passwords or other new means of authentication. For example, many users never access the administrative interface of their home router (which typically has a different authentication method than the WiFi network itself). If the first login attempt comes from an attacker, the attacker, rather than the owner, would be the one generating the new credentials.

Additionally, this requirement does not address the potential for devices to employ otherwise insecure authentication systems that still generate new credentials upon first access. As an analogy, just because an Internet user creates a new password for each online account does not mean that each of these passwords is necessarily secure. Again, additional criteria on the authentication system are needed.

1798.91.05.b: “‘Connected device’ means any device, or other physical object that is capable of connecting to the Internet, directly or indirectly, and that is assigned an Internet Protocol address or Bluetooth address.”
This extends the purview of the proposed legislation to PCs, laptops, smartphones, servers, and any other computing device that can access the Internet, even though the law is being promoted specifically as a protection for IoT devices. While we are in favor of additional security and privacy protections on all networked devices, this broad scope may prevent future versions of the legislation from being made more specific to IoT, mobile, or other subcategories of devices.

1798.91.06.a: “This title shall not be construed to impose any duty upon the manufacturer of a connected device related to unaffiliated third-party software or applications that a user chooses to add to a connected device.”
This leaves unaddressed the complex ecosystem of third-party software that is incorporated into IoT and other Internet-connected devices before they reach the user. As an example, how does this law apply to Android smartphones for which the hardware is made by one manufacturer, the operating system is made by another (Google), and still other companies create mobile applications that come pre-loaded on the phone? The hardware manufacturer likely implements no authentication visible to the user. Instead, the operating system provides authentication for user access to the phone (usually via a PIN or a fingerprint), while individual apps may have their own authentication measures. Future versions of the bill should clearly specify which criteria apply to the wide range of operating systems, applications, and software libraries from third parties that provide core security, privacy, and authentication features for many devices.

Final Thoughts
Legal requirements that connected devices avoid well-known and easily fixed security flaws, such as default passwords, are long overdue. Yet, such legislation must recognize the complexity of cybersecurity issues. The above examples demonstrate the type of technical “gotchas” that can hide in well-meaning regulatory language. As California SB-327 or related legislation proceeds, we hope that legislators will consult with academic or industry researchers who have spent considerable effort developing, testing, and refining security and privacy solutions in a wide variety of technology contexts.

Serious design flaw in ESS ExpressVote touchscreen: “permission to cheat”

Kansas, Delaware, and New Jersey are in the process of purchasing voting machines with a serious design flaw, and they should reconsider while there is still time!

Over the past 15 years, almost all the states have moved away from paperless touchscreen voting systems (DREs) to optical-scan paper ballots.  They’ve done so because if a paperless touchscreen is hacked to give fraudulent results, there’s no way to know and no way to correct; but if an optical scanner were hacked to give fraudulent results, the fraud could be detected by a random audit of the paper ballots that the voters actually marked, and corrected by a recount of those paper ballots.

Optical-scan ballots marked by the voters are the most straightforward way to make sure that the computers are not manipulating the vote.  Second-best, in my opinion, is the use of a ballot-marking device (BMD), where the voter uses a touchscreen to choose candidates, then the touchscreen prints out an optical-scan ballot that the voter can then deposit in a ballot box or into an optical scanner.  Why is this second-best?  Because (1) most voters are not very good at inspecting their computer-marked ballot carefully, so hacked BMDs could change some choices and the voter might not notice, or might notice and think it’s the voter’s own error; and (2) the dispute-resolution mechanism is unclear; pollworkers can’t tell if it’s the machine’s fault or your fault; at best you raise your hand and get a new ballot, try again, and this time the machine “knows” not to cheat.

Third best is “DRE with paper trail”, where the paper ballot prints out behind glass; the voter can inspect it, but it can be difficult and discouraging to read a long ballot behind glass, and there’s pressure just to press the “accept” button and get on with it.  With hand-marked optical-scan ballots there’s much less pressure to hurry:  you’re not holding up the line at the voting machine, you’re sitting at one of the many cheap cardboard privacy screens with a pen and a piece of paper, and you don’t approach the optical scanner until you’re satisfied with your ballot.  That’s why states (such as North Carolina) that had previously permitted  “DRE with paper trail” moved last year to all optical-scan.

Now there’s an even worse option than “DRE with paper trail”; I call it the “press this button if it’s OK for the machine to cheat” option. The country’s biggest vendor of voting machines, ES&S, has a line of voting machines called ExpressVote. Some of these are optical scanners (which are fine), and others are “combination” machines, basically a ballot-marking device and an optical scanner all rolled into one.

This video shows a demonstration of ExpressVote all-in-one touchscreens purchased by Johnson County, Kansas.  The voter brings a blank ballot to the machine, inserts it into a slot, and chooses candidates.  Then the machine prints those choices onto the blank ballot and spits it out for the voter to inspect.  If the voter is satisfied, she inserts it back into the slot, where it is counted (and dropped into a sealed ballot box for possible recount or audit).

So far this seems OK, except that the process is a bit cumbersome and not completely intuitive (watch the video for yourself).  It still suffers from the problems I describe above: voters may not carefully review all the choices, especially in down-ballot races; and counties need to buy a lot more voting machines, because each voter occupies the machine for a long time (in contrast to op-scan ballots, where voters occupy a cheap cardboard privacy screen).

But here’s the amazingly bad feature:  “The version that we have has an option for both ways,” [Johnson County Election Commissioner Ronnie] Metsker said. “We instruct the voters to print their ballots so that they can review their paper ballots, but they’re not required to do so. If they want to press the button ‘cast ballot,’ it will cast the ballot, but if they do so they are doing so with full knowledge that they will not see their ballot card, it will instead be cast, scanned, tabulated and dropped in the secure ballot container at the backside of the machine.”  [TYT Investigates, article by Jennifer Cohn, September 6, 2018]

Now it’s easy for a hacked machine to cheat undetectably!  All the fraudulent vote-counting program has to do is wait until the voter chooses between “cast ballot without inspecting” and “inspect ballot before casting.”  If the latter, it doesn’t cheat on this ballot.  If the former, it changes votes however it likes and prints those fraudulent votes on the paper ballot, knowing that the voter has already given up the right to look at it.

Johnson County should not have bought these machines; if they’re going to use them, they must insist that ES&S disable this “permission to cheat” feature.

Union County New Jersey and the entire state of Delaware are (to the best of my knowledge) in the process of purchasing ExpressVote XL machines, which are like the touchscreens shown in the video but with a much larger screen that can show the whole ballot at once.  New Jersey and Delaware should not buy these machines.  If they insist on buying them, they must disable the “permission to cheat” feature.

Of course, if the permission-to-cheat feature is disabled, that reverts to the cumbersome process shown in the video: (1) receive your bar-code card and blank ballot from the election worker; (2) insert the blank ballot card into the machine; (3) insert the bar-code card into the machine; (4) make choices on the screen; (5) press the “done” button; (6) wait for the paper ballot to be ejected; (7) compare the choices listed on the ballot with the ones you made on the screen; (8) put the ballot back into the machine.

Wouldn’t it be better to use conventional optical-scan balloting, as most states do?  (1) receive your optical-scan ballot from the election worker;  (2) fill in the ovals with a pen, behind a privacy screen; (3) bring your ballot to the optical scanner; (4) feed your ballot into the optical scanner.

I thank Professor Philip Stark (interviewed in the TYT article cited above) for bringing this to my attention.