December 14, 2024

Bitcoin and game theory: we’re still scratching the surface

In an earlier post I argued why Bitcoin’s stability is fundamentally a game-theoretic proposition, and ended with some questions:

Can we effectively model the system with all its interacting components in the language of strategies and payoff-maximization? Is the resulting model tractable — can we analyze it mathematically or using simulations? And most importantly, do its predictions match what we observe in practice?

Let’s look at those questions in the context of a “block withholding attack” between mining pools.

Recall that mining pools are groups of individual miners who pool their computing power as well as their rewards. Suppose two mining pools — let’s call them blue and red — are both seeking to maximize their mining rewards. Let’s say the manager of the red pool decides to infiltrate the blue pool and decrease its efficiency using some of the mining power that red (directly or indirectly) controls. This can be done by submitting shares (partial proofs of work) to earn a portion of blue’s rewards, while withholding any valid blocks that are found, thereby contributing no productive work to the blue pool. At first sight this seems like cutting off your nose to spite your face — sure, blue’s efficiency will be hurt, but red is wasting hash power as well.

To get a handle on this situation, let’s write down three rules that govern rewards in pooled mining:

  1. A pool’s revenues in any period are proportional to the number of Bitcoin blocks that its members mine, measured as a fraction of the total blocks mined in that period.
  2. A miner’s rewards are proportional to the number of “shares” submitted, as a fraction of the total shares submitted by all members of that pool.
  3. Miners can easily create numerous pseudo-identities (“sybils”), each contributing a very small amount of mining power. Therefore pools can’t easily detect if a miner is withholding valid blocks (and can’t punish a miner for doing so).

These rules are something of an approximation, but they are simple enough to analyze and widely accepted as a starting point. Within this framework, we’d like to determine whether a block withholding attack can be profitable. This is obviously an important question, and it’s also well-defined mathematically. We’ve taken all elements of human behavior out of the equation, so we can do some arithmetic to check the answer.

Let’s say that initially, blue and red both manage 50% of mining power. For this example, let’s ignore 51% attacks.

[Figure: Illustration of a profitable block-withholding attack. The red pool has 50% of the mining power but earns five-ninths of the rewards.]

Now red devotes half its power (25% of the total) to infiltrating blue’s pool, sending only shares, never blocks, to blue. Since the infiltrating 25% withholds its blocks, only 75% of the total hash power is producing blocks that reach the Bitcoin network: blue’s 50% plus red’s remaining 25%. So 2/3 of the blocks reaching the network come from blue and 1/3 from red, and mining rewards will therefore be distributed between the two pools in the same ratio, 2/3 and 1/3.

But of blue’s revenue, blue will pay a third out to red and keep only two-thirds for itself. That’s because red’s infiltrators contribute 1/3 of blue’s shares (25% out of the 75% of hash power mining within blue’s pool), and pools pay out on the basis of shares, not blocks. Recall that blue can’t tell which miners the misbehavior is coming from. In other words, blue keeps four-ninths of global mining rewards (2/3 × 2/3) and pays two-ninths (2/3 × 1/3) out to red. Combined with the one-third that red earns directly, red’s share comes to five-ninths.
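
To make the arithmetic easy to check, here is a minimal sketch in Python that encodes rules 1 and 2 above (rule 3 enters only as the assumption that blue pays the infiltrators like any other member); the variable names are mine, chosen just for this example.

    # Toy check of the block withholding arithmetic, using exact fractions.
    # Assumes the simplified reward rules above: pool revenue is proportional
    # to published blocks, and payouts within a pool are proportional to shares.
    from fractions import Fraction

    blue_power       = Fraction(1, 2)   # blue's own hash power
    red_honest       = Fraction(1, 4)   # red's power mining for its own pool
    red_infiltrating = Fraction(1, 4)   # red's power submitting shares to blue, withholding blocks

    # Rule 1: revenue is split by published blocks; the infiltrating power publishes none.
    published    = blue_power + red_honest
    blue_revenue = blue_power / published          # 2/3
    red_direct   = red_honest / published          # 1/3

    # Rule 2: within blue, payouts are split by shares, and the infiltrators
    # submit shares in proportion to their power (rule 3: blue can't tell).
    infiltrator_share_of_blue = red_infiltrating / (blue_power + red_infiltrating)  # 1/3

    red_from_blue = blue_revenue * infiltrator_share_of_blue   # 2/9
    blue_keeps    = blue_revenue - red_from_blue               # 4/9
    red_total     = red_direct + red_from_blue                 # 5/9

    print(blue_keeps, red_total)   # prints: 4/9 5/9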

This means that block withholding attacks can in theory be profitable, which is an extremely interesting fact on its own.

What is mind-boggling, though, is that while people had long asked whether block withholding could be profitable, somehow no one had done the arithmetic presented in the last few paragraphs to discover the answer. The profitability of this attack was first pointed out in a paper by Courtois and Bahack last year that didn’t get much attention. Recently Ittay Eyal analyzed it rigorously, doing some neat game-theoretic analysis of a situation with multiple attacking pools, and also brought it to wider attention.

This is not the only example of an obvious-in-retrospect break of widely held assumptions about the stability of Bitcoin mining. There’s at least Eyal and Sirer’s selfish mining and Andrew Miller’s feather fork. In each case, miners can potentially gain by deviating from the default protocol. Even though the models of mining used in these analyses are very simple, it took years to spot these bugs. And we can be sure we haven’t found the last of them.

I’m using the word bugs for a reason. If you think about the tremendous progress that’s been made in software testing and model checking for finding software bugs and writing correct programs, it’s hard to believe we haven’t found a way to represent Bitcoin’s strategy space in a formal language and automatically probe for deviant strategies. Is it simply a matter of the game theory and Bitcoin research communities having no overlap? Or are the tools developed in game theory for automated analysis of equilibria not capable of handling the domain of Bitcoin mining for some reason? [1]
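
As a crude illustration of what “probing the strategy space” could look like (a toy, not a stand-in for real model-checking or equilibrium-analysis tools), here is a sketch that sweeps over a single strategic dimension, namely how much of red’s power infiltrates blue, under the same simplified reward rules as the example above, and checks whether any deviation beats honest mining. The payoff formula simply generalizes the arithmetic from the example; the grid resolution is arbitrary.

    # Toy "strategy probe": sweep over how much of red's hash power infiltrates
    # blue and withholds blocks, and check whether any such deviation beats
    # honest mining under the simplified reward rules used in the post.

    def red_reward(blue_power, red_power, infiltrating):
        """Red's share of total rewards when `infiltrating` hash power is
        diverted into blue's pool and any blocks it finds are withheld."""
        published = blue_power + (red_power - infiltrating)   # withheld blocks never appear
        red_direct = (red_power - infiltrating) / published
        blue_revenue = blue_power / published
        red_from_blue = blue_revenue * infiltrating / (blue_power + infiltrating)
        return red_direct + red_from_blue

    blue, red = 0.5, 0.5
    honest = red_reward(blue, red, 0.0)                        # 0.5, by symmetry
    grid = [red * i / 1000 for i in range(1000)]               # candidate infiltration levels
    best = max(grid, key=lambda x: red_reward(blue, red, x))

    print(f"honest payoff: {honest:.4f}")
    print(f"best deviation found: infiltrate {best:.3f} of total power, "
          f"payoff {red_reward(blue, red, best):.4f}")

With the 50/50 split, the sweep lands on exactly the deviation used in the example above: diverting 25% of the total power, for a payoff of 5/9 ≈ 0.556.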

Bitcoin offers an excellent testbed to explore and improve our knowledge of game theory. Because of the large sums of money at stake, theoretical knowledge about strategies is very valuable. And yet, unlike, say, the stock market, the system is “closed” and relatively amenable to modeling and analysis. [2] We’re only slowly starting to exploit this opportunity, and further work in this area can enrich both Bitcoin and game theory.

Finally, there’s been little or no evidence of miners employing deviant strategies in practice so far. While this doesn’t in any way diminish the importance of the type of analysis we’ve talked about, it’s important to ask what’s causing the gap between models and observed behavior. I’ll take up this question in the next post.

[1] There’s been some work in the Bitcoin community on building mining simulators. This is a different approach, but also an interesting direction.

[2] For the system to be closed we have to ignore factors like the impact of mining strategies on the Bitcoin exchange rate. This will be the focus of the next post.

Comments

  1. Joe Schmoe says

    Assuming you are correct, doesn’t this just mean people can’t pool their mining efforts and must rely on fully trusted sources for power? If that’s the case, it’s probably a good thing. The whole point of the mining dynamic is to distribute the processing power over large numbers of competing miners.

  2. Anonymous says

    Similar paper about the same attack, which shows more strategies for the miner
    http://eprint.iacr.org/2015/155

  3. I’m not sure you can call Bitcoin mining a closed system when much of the incentive is driven by what’s going on in the traditional financial system, namely its problems.

    I also fail to see the utility of a 2-actor model as you’ve diagrammed. Currently the 2 biggest pools are Discus and AntPool, at 17% and 15%, respectively. Certainly they can’t model an attack against each other as you’ve gamed it, as they will lose to everyone else.

    What is clearly going on is that the pool distribution is spreading out more evenly as hardware performance levels off and prices drop, effectively commoditizing the hardware and allowing smaller players to acquire equipment again at prices similar to those the bigger players pay. This is good for decentralization in a relatively unregulated market with a new form of money that is fair. That’s plenty of incentive to encourage players to compete heartily without feeling disadvantaged.

    • The model is not limited to two pools. My example uses two pools for simplicity. A similar reasoning applies to any number of pools. Indeed, Lemma 2 in Eyal’s paper shows that with any number of pools and any distribution of mining power among those pools, at least one pool will choose to perform block withholding.
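
      As a quick numerical illustration of this (a toy check with made-up pool sizes, not a reproduction of the lemma): add a third, honest pool to the arithmetic above, and red still comes out ahead by diverting a small slice of its power into blue.

          # Same simplified reward rules as in the post, now with a third pool
          # that mines honestly. Pool sizes are hypothetical, for illustration only.
          def red_reward(blue, red, others, infiltrating):
              published = blue + others + (red - infiltrating)   # withheld blocks never appear
              red_direct = (red - infiltrating) / published
              blue_revenue = blue / published
              red_from_blue = blue_revenue * infiltrating / (blue + infiltrating)
              return red_direct + red_from_blue

          blue, red, others = 0.2, 0.2, 0.6
          print(red_reward(blue, red, others, 0.00))   # 0.2   (honest baseline)
          print(red_reward(blue, red, others, 0.02))   # ~0.202: a small attack still pays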

  4. Described not last year, but in 2011 in https://bitcoil.co.il/pool_analysis.pdf

    • That’s a very good paper, but while it talks about block withholding, it doesn’t have the insight about profitability that we’re talking about here. For example, he says: “The simpler attack is sabotage, where the attacker never submits any blocks. This has no direct benefit for the attacker, only causing harm to the pool operator or participants.” This is just what I had in mind when I said “At first sight this seems like cutting off your nose to spite your face.”

  5. Your math presumes that submitting shares to a pool is completely unhelpful… is that really the case? Helping even 1/18 (5.6%) of the time would invalidate the strategy – and eliminating known-bad solutions counts as help.

    • Yes, the presumption is correct — a miner can submit every share found, except if it is a valid block. So the miner makes zero contribution to the pool’s revenues. “Eliminating known-bad solutions” doesn’t help since the space is essentially infinite.

      • Harry Johnston says

        Wouldn’t this be easily detectable by statistical analysis? That is, why can’t you just not pay out if a miner has submitted too few valid blocks for the number of shares generated?

        I can see that this would mean delaying payment for small miners until the number of shares is high enough to be statistically valid, but I don’t see why this would be a major problem.

        • That’s because for sufficiently small miners, it takes basically forever to find even a single block (which is, after all, the problem that pooling is trying to solve). The attacker can always break up his power into a sufficiently large number of sybils.
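
          To put a rough number on it (the share-to-block difficulty ratio below is made up purely for illustration): each share is also a full block only with tiny probability, so a sybil that has submitted a modest number of shares and zero blocks looks just like an honest small miner. The statistics only start to bite after roughly a block’s worth of shares, at which point the attacker can simply retire that identity.

              # Probability that an HONEST miner who submitted `shares` shares
              # has found zero blocks, if each share is a block with probability p.
              # p is a made-up share-to-block difficulty ratio for illustration.
              p = 1e-6
              for shares in (1_000, 100_000, 1_000_000):
                  print(shares, round((1 - p) ** shares, 3))
              # 1000     0.999  -> zero blocks is completely unremarkable
              # 100000   0.905
              # 1000000  0.368  -> only around a block's worth of shares does
              #                    the absence of blocks start to look suspicious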

        • James Landis says

          The mining coalition manager can do one better than statistical analysis:

          For any “untrusted” miner in the pool, randomly give it shares of work with known positive solutions and see if the miner actually gives back the positive solution. The frequency of such “loyalty” checks could decrease over time as the pool manager builds more trust in the “untrusted” miner.

          I could see a minor arms race develop here, but my intuition says that the cost of implementing a reliable “double-agent” is prohibitive and many basic real-world controls to enforce the trustworthiness of miners would easily overcome the attacker’s ability to infiltrate if this becomes a known common attack.

          • James Landis says

            Actually, now that I think a little more about this, the rogue miner can be detected pretty reliably on every defection (check each new solution on the blockchain to see if that work was assigned to any miner in your pool who answered negative). Boot out the offenders.

            This shifts the problem toward simple registration controls (e.g. 1 BC deposit required to join the group, non-refundable if you’re found to misbehave, refundable after you’ve mined 1BC share of value). You will be booted for misbehavior at any time.

            Interesting analysis, but easy to solve for. I have the same intuition you have that there are lots of undiscovered attacks waiting in the ecosystem, but the opposite intuition about whether any of them are “class breaks”. The older the system gets, the less likely it is that some major game-breaking issue exists (cf. the economists’ “$20 bill on the sidewalk”).

          • Nicholas Weaver says

            James: That detection strategy doesn’t work.

            Each mining pool is actually working on a slightly different version of the problem: a different Merkle tree of transactions, a different payout address, etc.

            The work that red’s infiltrators do for blue is different from the work red does for itself. If red finds a block while working for the blue pool, it just drops it, and if it finds a block for the red pool it publishes it, but there is no way for the red pool to say “this block is a winner on blue, let’s publish it as red”.
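
            A toy sketch of that point, with a made-up template string and a trivial difficulty target standing in for real block headers and Merkle trees: because the proof of work commits to the coinbase payout, a winning nonce for blue’s template is (almost certainly) worthless for red’s.

                # Toy proof-of-work: the "header" commits to the payout address,
                # so a solution for blue's template does not solve red's.
                # (Real Bitcoin headers and Merkle trees are far more involved.)
                import hashlib

                def toy_header(payout_address, prev_hash, nonce):
                    template = f"{prev_hash}|coinbase->{payout_address}|{nonce}"
                    return hashlib.sha256(hashlib.sha256(template.encode()).digest()).hexdigest()

                def meets_target(header_hex, zeros=4):
                    return header_hex.startswith("0" * zeros)   # toy difficulty target

                prev = "00" * 32
                nonce = 0
                while not meets_target(toy_header("blue-pool-address", prev, nonce)):
                    nonce += 1          # grind on blue's template, as an infiltrator would

                print("winning nonce for blue's template:", nonce)
                print("same nonce valid for red's template?",
                      meets_target(toy_header("red-pool-address", prev, nonce)))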