February 22, 2017

Sign up now for the first workshop on Data and Algorithmic Transparency

I’m excited to announce that registration for the first workshop on Data and Algorithmic Transparency is now open. The workshop will take place at NYU on Nov 19. It convenes an emerging interdisciplinary community that seeks transparency and oversight of data-driven algorithmic systems through empirical research.

Despite the short notice of the workshop’s announcement (about six weeks before the submission deadline), we were pleasantly surprised by the number and quality of the submissions that we received. We ended up accepting 15 papers, more than we’d originally planned to, and still had to turn away good papers. The program includes both previously published work and original papers submitted to the workshop, and has just the kind of multidisciplinary mix we were looking for.

We settled on a format that’s different from the norm but probably familiar to many of you. We have five panels, one on each of the five main themes that emerged from the papers. The panels will begin with brief presentations, with the majority of the time devoted to in-depth discussions led by one or two commenters who will have read the papers beforehand and will engage with the authors. We welcome the audience to participate; to enable productive discussion, we encourage you to read or skim the papers beforehand. The previously published papers are available to read; the original papers will be made available in a few days.

I’m very grateful to everyone on our program committee for their hard work in reviewing and selecting papers. We received very positive feedback from authors on the quality of reviews of the original papers, and I was impressed by the work that the committee put in.

Finally, note that the workshop will take place at NYU rather than Columbia as originally announced. We learnt some lessons on the difficulty of finding optimal venues in New York City on a limited budget. Thanks to Solon Barocas and Augustin Chaintreau for their efforts in helping us find a suitable venue!

See you in three weeks, and don’t forget the related and co-located DTL and FAT-ML events.

Bitcoin is unstable without the block reward

With Miles Carlsten, Harry Kalodner, and Matt Weinberg, I have a new paper titled On the instability of Bitcoin without the block reward, which Harry will present at ACM CCS next week. The paper predicts that miner incentives will start to go haywire as Bitcoin’s mining revenue shifts from block rewards to transaction fees, based on theoretical results that closely match up with findings from our new Bitcoin mining simulator.

Bitcoin provides two incentives for miners: block rewards and transaction fees. Currently the vast majority of miner revenues come from block rewards, but in the long run they will come primarily from transaction fees as block rewards dwindle. This design decision has been discussed a lot, but mostly in terms of monetary policy and hardly ever in terms of security. There has been an implicit belief that the transition to transaction fees will not affect the security and stability of the block chain, and in particular that it is immaterial whether miners receive (say) 25 bitcoins as a fixed reward or 25 bitcoins in expectation via transaction fees.
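To make “dwindle” concrete, here’s a simplified Python sketch of Bitcoin’s halving schedule (the real consensus rule operates on integer satoshis, but the geometry is the same):

```python
def block_subsidy(height, initial=50.0, halving_interval=210_000):
    """Bitcoin's block subsidy starts at 50 BTC and halves every
    210,000 blocks, i.e., roughly every four years."""
    return initial / (2 ** (height // halving_interval))

# The subsidy shrinks geometrically, so transaction fees must
# eventually dominate miner revenue:
for height in (0, 210_000, 420_000, 630_000):
    print(height, block_subsidy(height))  # 50.0, 25.0, 12.5, 6.25
```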

We reexamine this assumption in our paper, and our findings make disturbing news for the future security of Bitcoin and many other cryptocurrencies. Our key insight is that with only transaction fees, the variance of the miner reward is very high due to the randomness of the block arrival time, and it becomes attractive to fork a “wealthy” block to “steal” the rewards therein. [1]

[Figure: a scenario where forking might be more profitable than extending the longest chain. See the paper for a full explanation.]
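To see why the variance matters, here’s a minimal back-of-the-envelope simulation (a sketch, not the paper’s model: it assumes fees accrue at a uniform rate while block gaps are exponentially distributed, and all constants are illustrative):

```python
import random

MEAN_GAP = 600.0            # average seconds between blocks
FEE_RATE = 25.0 / MEAN_GAP  # fees accrue uniformly: 25 BTC expected per block
N = 100_000

# In a fee-only regime a block's reward is proportional to the (random)
# time since the previous block, so it inherits the arrival-time variance.
rewards = [FEE_RATE * random.expovariate(1.0 / MEAN_GAP) for _ in range(N)]

mean = sum(rewards) / N
std = (sum((r - mean) ** 2 for r in rewards) / N) ** 0.5
print(f"fee-only regime:     mean {mean:.1f} BTC, std {std:.1f} BTC")  # std ≈ mean
print("fixed-reward regime: mean 25.0 BTC, std  0.0 BTC")
```

With exponential gaps the standard deviation of the reward equals its mean, which is exactly what makes a freshly mined “wealthy” block such a tempting target.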

Here’s how things could go wrong. Due to the possibility of profitable forking, the default strategy is no longer best; we lay out a menagerie of interesting and bizarre strategies in the paper. The most worrisome is “undercutting,” where miners capture as little of the available transaction fees as they can get away with, leaving the rest in the pool as an incentive for the next miner to extend their block rather than a competing block.
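As a toy illustration of the undercutting trade-off (not the paper’s formal analysis; every quantity below is hypothetical):

```python
def choose_strategy(tip_fees, pending_fees, claim_fraction, p_win_fork):
    """Toy payoff comparison for a myopic miner.

    extend:   build on the current tip and collect the pending fees.
    undercut: fork the tip, re-claim only `claim_fraction` of its fees,
              leaving the rest as bait so the next miner extends *your*
              block; `p_win_fork` is the assumed chance the fork wins.
    """
    extend_payoff = pending_fees
    undercut_payoff = p_win_fork * claim_fraction * (tip_fees + pending_fees)
    return "undercut" if undercut_payoff > extend_payoff else "extend"

# A wealthy tip followed by an empty mempool makes undercutting tempting:
print(choose_strategy(tip_fees=4.5, pending_fees=0.1,
                      claim_fraction=0.5, p_win_fork=0.5))  # -> undercut
```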

We also show rigorously that selfish mining gets worse when block rewards are replaced by transaction fees, motivated by the following intuition: if you happen to mine a new block just seconds after the last one was found, you gain nothing by publishing, so you might as well keep it for selfish mining in case you get lucky. The variance in transaction fees enables strategies like this that simply don’t make sense when the block reward is fixed.
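That intuition reduces to a one-line decision rule; a hypothetical sketch (the paper derives the actual strategies game-theoretically):

```python
def publish_or_withhold(fees_in_my_block, expected_withholding_gain):
    """A block found seconds after its parent carries almost no fees, so
    publishing earns almost nothing and withholding it for a selfish-mining
    attempt is nearly free. Illustrative rule only."""
    return "publish" if fees_in_my_block >= expected_withholding_gain else "withhold"
```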

If miners switch to these deviant strategies, the blockchain will be much less secure because of the mining power wasted due to constant forking, undercutting, and withholding of found blocks.

We derive most of our results in two separate ways: analytically, i.e., using game theory, and with a new mining simulator that we created. This gives us added confidence in our findings. For example, in one setting, the theory predicts a rather grotesque equilibrium involving the Lambert W function, with the proof running to several pages. Sure enough, in our simulations of the same setting, the Lambert miner does best. We hope that our analytical techniques as well as our simulator will be useful to other researchers. We have made the simulator code open-source.
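(For readers unfamiliar with it, the Lambert W function is the inverse of f(w) = w·e^w. As an aside, here’s how one might evaluate it numerically with SciPy; this is just the function itself, not our equilibrium derivation:)

```python
import numpy as np
from scipy.special import lambertw

# W(x) is defined by W(x) * exp(W(x)) = x
x = 2.0
w = lambertw(x).real  # principal branch
assert np.isclose(w * np.exp(w), x)
print(f"W({x:.1f}) = {w:.4f}")  # ≈ 0.8526
```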

What is the impact of our findings? The Bitcoin community will probably need to respond to this problem in the long run, potentially via a fork, to discourage deviant strategies. We aren’t predicting that deviant strategies will arise in the short term, and there is a long runway for mitigation steps to be rolled out. The fact that blocks have filled up due to their 1MB limit decreases the variance of transaction fees between different blocks, and this mitigates the problem somewhat, although it is far from a complete and satisfactory solution. For example, at the time of writing our paper, the previous 1000 blocks included per-block transaction fees ranging from 0.03 BTC to 4.51 BTC, with a mean of 0.49 BTC and a standard deviation of 0.25 BTC (over half the mean!). So simply maintaining the block-size limit probably won’t resolve the underlying issues.

At a deeper level, our results suggest a fundamental rethinking of the role of block rewards in cryptocurrency design. The prevailing view is that the block reward is a necessary but temporary evil to achieve an initial allocation of coins in the absence of a central authority. The transaction-fee regime is seen as the ideal steady state of the system. But our work shows that incentivizing compliant miner behavior in the transaction fee regime is a significantly more daunting task than in the block reward regime. So perhaps designers of new cryptocurrencies should make the block reward permanent and accept monetary inflation as inevitable. Transaction fees would still exist, but merely as an incentive for miners to include transactions in their blocks.

One final point: there is a science of designing economic incentives so that rational players will behave in a desired way, and it’s called mechanism design. Creators of cryptocurrencies (as well as creators of applications such as the DAO) are essentially doing mechanism design. But mechanism design is hard, and our paper is the latest among many to point out that the mechanisms embedded in cryptocurrencies have flaws. Yet, sadly, the cryptocurrency community is currently disjoint from the mechanism design community. That is why I’m thrilled that mechanism design expert Matt Weinberg, who’s behind all the sophisticated theory in our paper, is joining Princeton’s faculty next semester. Expect more research from us on the mechanism design of cryptocurrencies!

[1] The problems we uncover arise not because transaction fees may arrive erratically, but because blocks inevitably arrive unpredictably. We model transaction fees as arriving at a uniform rate. The rate is non-uniform in practice, which is an additional complication. This is a theme throughout our paper: we show that undesirable behaviors will arise even in simplified, “clean” models. This is bad news both because we think things will probably be worse in practice and because we want cryptocurrency mining games to be analytically tractable. Our work shows that in a transaction-fee regime, predicting behavior will be fiendishly complex.

Update: see also Bryan Ford’s response to this post (and paper).

Language necessarily contains human biases, and so will machines trained on language corpora

I have a new draft paper with Aylin Caliskan-Islam and Joanna Bryson titled Semantics derived automatically from language corpora necessarily contain human biases. We show empirically that natural language necessarily contains human biases, and that the paradigm of training machine learning models on language corpora means that AI will inevitably imbibe these biases as well.

Specifically, we look at “word embeddings”, a state-of-the-art language representation used in machine learning. Each word is mapped to a point in a 300-dimensional vector space so that semantically similar words map to nearby points.
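Concretely, “semantic closeness” in this setting is typically measured as cosine similarity between word vectors. A minimal sketch (the embedding file and example words are placeholders; pretrained 300-dimensional GloVe vectors are one publicly available option):

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity: near 1.0 for semantically similar words,
    near 0 for unrelated ones."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical usage with vectors loaded into a dict word -> (300,) array:
# emb = load_embeddings("glove.840B.300d.txt")  # loader not shown
# cosine(emb["violin"], emb["cello"])   # high: semantically similar
# cosine(emb["violin"], emb["carrot"])  # low:  semantically unrelated
```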

We show that a wide variety of results from psychology on human bias can be replicated using nothing but these word embeddings. We primarily look at the Implicit Association Test (IAT), a widely used and accepted test of implicit bias. The IAT asks subjects to pair concepts together (e.g., white/black-sounding names with pleasant or unpleasant words) and measures reaction times as an indicator of bias. In place of reaction times, we use the semantic closeness between pairs of words.
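Here’s a sketch of the kind of test statistic we mean (the details below are illustrative choices, not exactly what’s in the paper): for each target word, take its mean closeness to one attribute set minus its mean closeness to the other, then compare the two target sets.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, A, B, emb):
    """Differential association of word w with attribute sets A vs. B,
    e.g., A = pleasant words, B = unpleasant words."""
    return (np.mean([cosine(emb[w], emb[a]) for a in A]) -
            np.mean([cosine(emb[w], emb[b]) for b in B]))

def effect_size(X, Y, A, B, emb):
    """Effect size comparing target sets X and Y (e.g., X = flowers,
    Y = insects): difference of mean associations, normalized by the
    pooled standard deviation."""
    sx = [association(x, A, B, emb) for x in X]
    sy = [association(y, A, B, emb) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy)
```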

In short, we were able to replicate every single result that we tested, with high effect sizes and low p-values.

These include innocuous, universal associations (flowers are associated with pleasantness and insects with unpleasantness), racial prejudice (European-American names are associated with pleasantness and African-American names with unpleasantness), and a variety of gender stereotypes (for example, career words are associated with male names and family words with female names).

But we go further. We show that information about the real world is recoverable from word embeddings to a striking degree. For 50 occupation words (doctor, engineer, …), we can accurately predict the percentage of U.S. workers in that occupation who are women using nothing but the semantic closeness of the occupation word to feminine words!
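In the same spirit, a tiny sketch of that correlation analysis (all numbers below are invented placeholders, not our data; the real analysis uses 50 occupations and U.S. labor statistics):

```python
from scipy.stats import pearsonr

# Placeholder inputs: each occupation word's differential association with
# female vs. male attribute words (computed as in the sketch above), paired
# with the percentage of women in that occupation from labor statistics.
gender_scores = [0.02, -0.05, 0.11, 0.08]  # doctor, engineer, nurse, librarian
pct_women     = [40.0, 15.0, 90.0, 80.0]   # invented for illustration

r, p = pearsonr(gender_scores, pct_women)
print(f"embedding score vs. % women: r = {r:.2f} (p = {p:.3f})")
```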

These results simultaneously show that the biases in question are embedded in human language, and that word embeddings are picking up the biases.

Our finding of pervasive, human-like bias in AI may be surprising, but we consider it inevitable. We mean “bias” in a morally neutral sense. Some biases are prejudices, which society deems unacceptable. Others are facts about the real world (such as gender gaps in occupations), even if they reflect historical injustices that we wish to mitigate. Yet others are perfectly innocuous.

Algorithms don’t have a good way of telling these apart. If AI learns language sufficiently well, it will also learn cultural associations that are offensive, objectionable, or harmful. At a high level, bias is meaning. “Debiasing” these machine models, while intriguing and technically interesting, necessarily harms meaning.

Instead, we suggest that mitigating prejudice should be a separate component of an AI system. Rather than altering AI’s representation of language, we should alter how or whether it acts on that knowledge, just as humans are able to learn not to act on our implicit biases. This requires a long-term research program that includes ethicists and domain experts, rather than formulating ethics as just another technical constraint in a learning system.

Finally, our results have implications for human prejudice. Given how deeply bias is embedded in language, to what extent does the influence of language explain prejudiced behavior? And could transmission of language explain transmission of prejudices? These explanations are simplistic, but that is precisely our point: in the future, we should treat these as “null hypotheses” to be eliminated before we turn to more complex accounts of bias in humans.