October 5, 2015


The Defend Trade Secrets Act Has Returned

Freedom to Tinker readers may recall that I’ve previously warned about legislation to create a federal private cause of action for trade secret misappropriation in the name of fighting cyber-espionage against United States businesses. Titled the Defend Trade Secrets Act (DTSA), it failed to move last year. Well, the concerning legislation has returned, and, although it has some changes, it is little better than its predecessor. In fact, it may be worse.

Therefore, Sharon Sandeen and I have authored a new letter to Congress. In it, we point out that our previously-stated concerns remain, expressed both in a previous letter and in a law review article entitled Here Come The Trade Secret Trolls. In sum, we argue that combined “with an ex parte seizure remedy, embedded assumption of harm, and ambiguous language about the inevitable disclosure doctrine, the new DTSA appears to not only remain legislation with significant downsides, but those downsides may actually be even more pronounced.” Moreover, we assert that “the DTSA still does not do much, if anything, to address the problem of cyber-espionage that cannot already be done under existing state and federal law.”

In the letter, we call on Congress to abandon the DTSA. In addition, we ask that “there be public hearings on (a) the benefits and drawbacks of the DTSA, and (b) the specific question of whether the DTSA addresses the threat of cyber-espionage.” Finally, we encourage Congress to consider alternatives in dealing with cyber-espionage, including much-needed amendment of the Computer Fraud and Abuse Act.


Does cloud mining make sense?

[Paul Ellenbogen is a second-year Ph.D. student at Princeton who’s been looking into the economics and game theory of Bitcoin, among other topics. He’s a coauthor of our recent paper on Namecoin and namespaces. — Arvind Narayanan]

Currently, if I wanted to mine Bitcoin I would need to buy specialized hardware, called application-specific integrated circuits (ASICs). I would need to find room for that hardware, which could take up considerable space, and I might need to install a new cooling system to dissipate the substantial heat it generates.

Or I could buy a cloud mining contract. Cloud mining companies bill themselves as services that take care of all the gritty details and let the consumer buy hash power directly with dollars. Most cloud mining companies offer contracts of varying term lengths, ranging from a few weeks to perpetuity. For example, I could pay $300 and receive one terahash per second for the next year. As soon as the cloud hashing provider receives my money, they start up a miner, or allocate me existing cycles, and I should start earning bitcoins in short order. Sounds easy, right?
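
Whether that $300 is a good deal reduces to simple expected-value arithmetic: a miner’s expected share of blocks equals its share of total network hash power. Here is a minimal sketch in Python; the network hash rate, block reward, and exchange rate below are illustrative assumptions, not market data:

  # Back-of-the-envelope value of a hypothetical 1 TH/s, $300/year contract.
  # Every figure below is an illustrative assumption, not market data.

  def expected_btc_per_day(my_ths, network_ths, block_reward=25.0, blocks_per_day=144):
      # Expected share of blocks equals share of total hash power.
      return (my_ths / network_ths) * blocks_per_day * block_reward

  network_ths = 400_000.0   # assumed total network hash rate (TH/s)
  btc_price = 240.0         # assumed USD/BTC exchange rate
  contract_cost = 300.0     # the hypothetical contract above

  daily = expected_btc_per_day(1.0, network_ths)
  yearly_usd = daily * 365 * btc_price
  print(f"{daily:.6f} BTC/day, ~${yearly_usd:.0f}/year against a ${contract_cost:.0f} contract")

On these made-up numbers the contract looks like free money, and that is precisely the red flag: the network hash rate grows continually, so a fixed terahash earns fewer bitcoins each retarget period, and a provider who expected the contract to return more than its price would simply mine with the hardware itself.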

Cloud mining has a bad track record. Many cloud mining services have closed up shop and run off with customer money. Examples include PBmining, lunaminer, and cloudminr.io. Gavin Andresen, a Bitcoin Core developer, once speculated that cloud mining doesn’t make any sense and that most of these services will end up as scams.

Cloud mining has been a popular front for Ponzi schemes: investment frauds in which old customers or investors are paid with the money of new customers. In the case of cloud mining Ponzi schemes, the bitcoins owed on old contracts are furnished from the payments of new customers. Ponzi schemes tend to collapse when the flow of new customers dries up, or when a large number of customers try to cash out. Cloud mining is a particularly appealing front for a Ponzi scheme because the second failure mode, cashing out, is not an option for those holding mining contracts: the contracts stipulate a return of bitcoins determined by hash rate. This means Ponzi scheme operators only need to keep recruiting new users for as long as possible. Bitcointalk user Puppet points out a set of 7 useful criteria for spotting cloud mining scams. Out of the 42 operations Puppet examines, they identify 30 as scams, 14 of which have already ceased operation.
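
To see why recruitment alone sets the lifetime of such a scheme, consider a toy cash-flow model, sketched below in Python; the contract price, payout rate, and recruitment curve are all invented for illustration:

  # Toy cash-flow model of a cloud mining Ponzi scheme: payouts on old
  # contracts come entirely from new customers' payments (no real mining).

  def months_until_collapse(horizon=36, price=300.0, monthly_payout=30.0,
                            initial_signups=100, growth=0.8):
      # growth < 1 models recruitment drying up over time.
      reserve, contracts = 0.0, 0
      for month in range(1, horizon + 1):
          signups = int(initial_signups * growth ** month)
          contracts += signups
          reserve += signups * price         # revenue from new contracts
          owed = contracts * monthly_payout  # this month's payout obligations
          if reserve < owed:
              return month                   # the scheme collapses
          reserve -= owed
      return None

  month = months_until_collapse()
  print(f"collapse in month {month}" if month else "survived the horizon")

Because the contracts pay out gradually and offer no lump-sum redemption, the reserve drains only through scheduled payouts; the collapse date is set entirely by how fast recruitment decays, which is why operators need only keep recruiting for as long as possible.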

Yet cloud mining persists. That so many cloud mining operations end up being scams accords with basic business intuition. Compare a cloud miner to a traditional bitcoin miner. A traditional bitcoin miner mines bitcoins and sells them on an exchange at the current market rate. It seems that the only way for a cloud miner to do better than a traditional miner selling bitcoins at market price is at the expense of the cloud mining customer. It appears there is no way for both the cloud miner and their customer to walk away better off.

Yet cloud mining, and at least some genuine interest in it, persists. I would like to offer some possible scenarios in which cloud mining may actually deliver the hashes that customers order.

Hired guns? Papers that propose attacks against Bitcoin often posit that “an attacker with X% of the hash power could do Y.” For example, in selfish mining, as first described by Eyal et al., an attacker with 33% of the mining power could force the rest of the network to mine on top of their blocks. Cloud miners could be used for block withholding attacks too. An important feature of many of these attacks is that the mining power need not be used all the time. These attacks would require flexibility in the mining software the attackers are using, as most off-the-shelf mining software (thankfully) does not have these attacks built in. Most cloud mining setups I have looked at don’t allow enough flexibility to launch attacks, nor are the contract periods on most services short enough. Cloud mining customers typically get a simple web interface; in the best case they are able to choose which pools they join, but they do not have any sort of scriptable direct interface to the mining hardware. At the moment, cloud miners are probably not supporting themselves by executing attacks for others.

Regulatory loophole? Individuals may try to use cloud mining to circumvent Bitcoin regulations, such as know-your-customer rules. If I want to turn my dollars into bitcoins, I can buy bitcoins at an exchange, but that exchange would have to know my true identity in order to comply with regulations. Unscrupulous individuals may not want their identity linked to their cash flow and reported to the government. Cloud mining operators and unscrupulous customers may try to skirt these regulations by claiming that cloud mining operations are not exchanges or banks, but merely rent out computer hardware like any cloud computing provider, and so need not comply with banking regulation. It is unlikely this would be viable long term, or even short term, as regulators would become wise to these sorts of loopholes and close them. This paragraph is the most speculative on my part, as I am neither a regulator nor a lawyer, so I don’t have expertise to draw on from either of those fields.

Financial instrument? Currently most bitcoin miners take on two roles: managing the mining hardware and managing the financial risk involved in mining. A more compelling justification for cloud miners’ existence is that cloud mining contracts allow the provider to avoid volatility in the exchange rate of bitcoin and variability in the hash rate. Cloud mining, in other words, is a means of hedging risk. If a cloud miner can enter contracts to provide a certain hash rate to a customer for a length of time, the cloud miner need not concern themselves with the exchange rate or the hash rate once the contract begins. It then becomes the job of the customer contracting the cloud miner to manage the risk presented by exchange-rate volatility. This would allow the cloud miner to specialize in buying, configuring, and maintaining mining hardware, and other individuals to specialize in managing risk related to bitcoin. As the financial instruments surrounding cryptocurrencies become more sophisticated, a terahash could become just another cryptocurrency security that is traded.
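
As a stylized illustration of that risk transfer, here is a small sketch, continuing in Python; the contract price, the assumed yield, and the two price scenarios are all invented:

  # Stylized risk transfer in a cloud mining contract. The provider locks
  # in a fixed dollar price for a year of hash power; the customer keeps
  # whatever the mined coins turn out to be worth. Figures are invented.

  contract_price_usd = 300.0   # provider's fixed revenue for a 1 TH/s-year
  btc_mined = 1.5              # assumed coins that terahash yields in a year

  for scenario, btc_price in [("bust", 80.0), ("boom", 600.0)]:
      provider_revenue = contract_price_usd        # fixed either way
      customer_revenue = btc_mined * btc_price     # bears the volatility
      print(f"{scenario}: provider ${provider_revenue:.0f}, "
            f"customer ${customer_revenue:.0f}")

The provider’s revenue is identical in both scenarios, while the customer absorbs both the downside and the upside. That is the hedge, and it is why such a contract can rationally trade even when both parties expect the same number of coins.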


Acknowledgment: I would like to thank Joseph Bonneau for contributing the concept of cloud mining as a means of managing risk.


The Chilling Effects of Confidentiality Creep

Today, North Carolina’s Governor Pat McCrory has a bill on his desk that would make it impossible for the public to find out what entities are supplying the chemical cocktail – the drugs – to be used for lethal injections in North Carolina. Known as the Restoring Proper Justice Act (the “Act”), it defines “confidential information” as including the “name, address, qualifications, and other identifying information of any person or entity that manufactures, compounds, prepares, prescribes, dispenses, supplies, or administers the drugs or supplies” used in lethal injections, thereby shutting down mandatory public access to basic information involved in extinguishing life in the name of justice. Secret suppliers are but one effect of this legislation; the Act also allows executions to resume in North Carolina and permits a wide range of medical professionals (not just a doctor) to assist in executions.

Call this an example of “confidentiality creep” – quiet, under-scrutinized expansion of the kinds of information deemed inappropriate for public consumption. The Act does not call this information a “trade secret” – information that is valuable because it is not known by a competitor – even though some of it could conceivably be that. Nor is it defined as the “property” of a private person or entity, even though “qualifications” might be deemed such. No; this information is designated “confidential” simply because the Legislature says that it is. It’s a new category in the law.

Before you stop reading, consider that confidentiality creep is not an abstraction, of interest merely to commercial law, freedom of information, and privacy nerds. Regardless of your personal views about the death penalty, whenever the government designates any information secret, we should all take a close look.

The expansion of confidentiality could have repercussions beyond mere access to information. To the extent that we untether legal confidentiality from any clear theoretical grounding – privacy, property, commercial ethics, contract – it runs the risk of being a powerful catch-all, subject to abuse. Slowly, if unchecked, we might expect that the work of cybersecurity researchers, journalists, civil society groups, and anyone else who accesses information deemed “confidential” would be threatened. Those “chilling effects” so prevalent in copyright could become even more powerful where information is used in a way unflattering to those who wave the confidentiality wand, backed by a rudderless law.

As to this particular case of confidentiality creep, there are very real and pernicious impacts. For example, if the Act becomes law, cruel and unusual punishment challenges in North Carolina would be subject to drug manufacturers’ preference not to be bothered. Investigators of gruesome botched executions, which have occurred in other states, would have to clear an unknown hurdle to gather information about the drugs used. After all, the public might want to know how a drug was manufactured, where, under what conditions, and the like. But under the Act – if enforced – all identifying information would have to be gathered by leak, a whistleblower, or by exploiting a mistaken release of information, which might then render the information excluded from evidence.

But wait: the Act’s title suggests that it is supposed to be “restoring proper justice.” Surely there must be a good reason for this confidentiality? According to press reports, the best (and perhaps only) apparent reason is a concern that, if identified, these manufacturers will choose not to provide the drugs out of fear of litigation by death penalty opponents, or be forced not to by a court. North Carolina Policy Watch reported that Sen. Buck Newton explained that confidentiality is needed “so that they aren’t litigated to death in order to prevent them from selling these drugs to the state.” Additionally, Rep. Leo Daughtry, a sponsor of the Act, noted that “if you tell [opponents] where the drug comes from, there will be 300 people outside the building.”

Put aside, for the moment, the assertion that a cost of doing business with the government – especially when it involves something as serious and irreversible as the administration of the death penalty – might be public accountability and scrutiny. As argued by the North Carolina American Civil Liberties Union’s Sarah Preston, “Courts, lawyers and the public have a right to know basic details about how the government executes inmates in their name.” Instead, let’s take the above argument seriously and see where it leads.

Generally, the argument is that if [name your information] is made public, then [name your private manufacturer] won’t provide the [name your good or service] to the government. Well, we’ve heard it before, and it doesn’t pass muster. From voting machines to hydraulic fracturing, weakening commercial confidentiality has generally not resulted in private entities withdrawing from providing the good or service; rather, they’ve adjusted. Moreover, private entities can raise prices in order to hedge against the risk of litigation. Indeed, the threat of exposure might force such manufacturers to provide better goods that are less susceptible to challenge. In other words, the impact of publicity is not all bad – even for the private entities, like drug companies, potentially subject to public scrutiny.

So where does this leave us? As alluded to above, we often forget that doing the public’s business is not the same as private commerce, precisely because the customer is the public itself. We often wind up with confidentiality creep, expanding the definition of what we call “confidential” for underexplored or unexplored reasons, with little or no public discussion. If you’re not convinced, throw in the efforts to gain public access to the basic negotiating text of the nearly complete multilateral Trans Pacific Partnership Agreement, because that’s confidentiality creep too. It’s not publicly available, for unclear reasons.

We, the public, need to pay a lot more attention to these developments. If you’re concerned about the above, contact Governor McCrory’s office now and urge him to veto the Restoring Proper Justice Act. Aside from any other reasons that you might have, tell him that preventing the risk of a fleeing commercial provider, if it is a problem at all, comes at enormous public cost. Do it now, as he could sign the Act at any moment.

More broadly, think critically when you hear arguments about the need for confidentiality. Admittedly, the press does not usually cover these issues, dismissing them as arcane or too complex. Nonetheless, I’ll do my best to document these issues as they arise, because they are too important to ignore. Let’s try to stop the quiet, baseless, under-explained creep.



Analyzing the 2013 Bitcoin fork: centralized decision-making saved the day

On March 11, 2013, Bitcoin experienced a technical crisis. Versions 0.7 and 0.8 of the software diverged from each other in behavior due to a bug, causing the block chain to “fork” into two. Considering how catastrophic a hard fork can be, it is remarkable that the crisis was resolved quickly with so little damage, owing to the exemplary competence of the developers in charge. The event gives us a never-before-never-again look into Bitcoin’s inner workings. In this post, I’ll do a play-by-play analysis of those dramatic minutes and draw many surprising lessons. For a summary of the event, see here.

First of all, the incident shows the necessity of an effective consensus process for the human actors in the Bitcoin ecosystem. The chronology went like this: it took about an hour after the fork started for enough evidence to accumulate that a fork was afoot. Once that was understood, things unfolded with remarkable speed and efficiency. The optimal course of action (and, in retrospect, the only one that avoided serious risks to the system) was first proposed and justified 16 minutes later, and the developers reached consensus on it a mere 20–25 minutes after that. Shortly thereafter — barely an hour after the discovery of the fork — the crisis response had effectively concluded. It took a few hours more for the fork to heal based on the course of action the developers initiated, but the outcome wasn’t in doubt.

More surprisingly, it also shows the effectiveness of strong central leadership. That’s because the commonsense solution to the fork — as well as the one programmed into the software itself — was to encourage miners running old versions to upgrade. As it turns out, the correct response was exactly the opposite. Even a delay of a few hours in adopting the downgrade solution would have been very risky, as I’ll argue, with potentially devastating consequences. Without the central co-ordination of the Bitcoin Core developers and the strong trust that the community places in them, it is inconceivable that adopting this counterintuitive solution could have been successfully accomplished.

Further, two more aspects of centralization proved very useful, although perhaps not as essential. The first is the ability of a few developers who possess a cryptographic key to broadcast alert messages to every client, which in this case was used to urge miners to downgrade. The second is the fact that the operator of BTC Guild, a large mining pool at the time, was able to singlehandedly shift the balance of mining power to the old branch by downgrading. Without this, there would have been a messy “coordination problem” among miners, with each one hesitating, waiting for someone else to take the leap.

Because most of the discussion and decision-making happened on the #bitcoin-dev IRC channel, it is publicly archived and offers a remarkable window into the Core developers’ leadership and consensus process. Consensus operated at remarkable speed, in this instance faster than consensus happens in the Bitcoin network itself. The two levels of consensus are intricately connected.

Let’s now dive into the play-by-play analysis of the fork and the reaction to it. I’ve annotated the transcript of selected, key events from the IRC log of that fateful night. I’ve made minor edits to make the log easier to read — mainly replacing the nicknames of prominent community members with their real names, since their identities are salient to the discussion.

Signs of trouble

The first signs that something is wrong come from a miner with nickname thermoman, as well as Jouke Hofman, a Dutch exchange operator, who report strange behavior from their Bitcoin clients. Bitcoin core developer Pieter Wuille helps them debug these problems, but at this point everyone assumes these are problems local to the two users, rather than something on the network. But around 23:00, a variety of other services showing strange behavior are noticed (blockchain.info, blockexplorer.com, and the IRC bot that reports the status of the network), making it obvious that something’s wrong on the network. Luke Dashjr, a prominent developer, spells out the unthinkable:

  23:06  Luke Dashjr		so??? yay accidental hardfork? :x
  23:06  Jouke Hofman		Holy crap

Over the next few minutes people convince themselves that there’s a fork and that nodes running the 0.8 and the 0.7 versions are on different sides of it. Things progress rapidly from here. A mere five minutes later the first measure to mitigate the damage is taken by Mark Karpeles, founder of Mt. Gox:

  23:11  Mark Karpeles		I've disabled the import of bitcoin blocks for now
				until this is sorted out
  23:13  Luke Dashjr		I'm trying to contact poolops [mining pool operators]

It’s pretty obvious at this point that the best short-term fix is to get everyone on one side of the fork. But which one?

Up or down? The critical decision

At 23:18, Pieter Wuille sends a message to the bitcoin-dev mailing list, informing them of the problem. But he hasn’t fully grasped the nature of the fork yet, stating “We risk having (several) forked chains with smaller blocks” and suggests upgrading as the solution. This is unfortunate, but it’s the correct thing to do given his understanding of the fork. This email will stay uncorrected for 45 minutes, and is arguably the only slight misstep in the developer response.

  23:21  Luke Dashjr		at least 38% [of hashpower] is on 0.8 right now
				otoh, that 38% is actively reachable

Dashjr seems to suggest that the 0.8 → 0.7 downgrade is better because the operators of newly upgraded nodes are more likely to be reachable by the developers, who could convince them to downgrade. This is a tempting argument. Indeed, when I describe the fork in class and ask my students why the developers picked the downgrade rather than the upgrade, this is the explanation they always come up with. When I push them to think harder, a few figure out the right answer, which Dashjr points out right afterward:

  23:22  Gavin Andresen		the 0.8 fork is longer, yes? So majority hashpower is 0.8....
  23:22  Luke Dashjr		Gavin Andresen: but 0.8 fork is not compatible
				earlier will be accepted by all versions

Indeed! The behavior of the two versions is not symmetric. Upgrading will mean that the fork will persist essentially indefinitely, while downgrading will end it relatively quickly.
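
The asymmetry is worth making concrete. Below is a minimal sketch in Python with made-up block records: a 0.7 node rejects the oversized block at the base of the 0.8 branch, so no amount of growth can win it over, while a 0.8 node accepts both branches and simply follows the longer one. (Real nodes compare accumulated proof-of-work rather than block count; length is a stand-in here.)

  def valid_for_v07(block):
      return not block["oversized"]   # 0.7 chokes on the oversized block

  def valid_for_v08(block):
      return True                     # 0.8 accepts old and new blocks alike

  def chain_followed(branches, is_valid):
      # A node follows the longest branch it considers entirely valid.
      ok = [b for b in branches if all(is_valid(blk) for blk in b)]
      return max(ok, key=len)

  def branch_08(n):
      # The oversized block at the base, then n ordinary blocks on top.
      return [{"oversized": True}] + [{"oversized": False}] * n

  def branch_07(n):
      return [{"oversized": False}] * n

  def pick(b07, b08, rule):
      return "0.7" if chain_followed([b07, b08], rule) is b07 else "0.8"

  for n07, n08 in [(6, 7), (9, 7)]:   # 0.8 branch ahead; then 0.7 catches up
      b07, b08 = branch_07(n07), branch_08(n08)
      print(f"lengths {n07} vs {n08 + 1}: "
            f"0.7 nodes on {pick(b07, b08, valid_for_v07)}, "
            f"0.8 nodes on {pick(b07, b08, valid_for_v08)}")

While the 0.8 branch leads, the two versions disagree about the chain tip, and nothing short of a near-universal upgrade can heal that. The moment the 0.7 branch pulls ahead, 0.8 nodes reorganize onto it on their own, which is exactly why pushing hash power back to 0.7 ends the fork.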

(Lead developer) Gavin Andresen still protests, but Wuille also accepts Dashjr’s explanation:

  23:23  Gavin Andresen		first rule of bitcoin: majority hashpower wins
  23:23  Luke Dashjr		if we go with 0.8, we are hardforking
  23:23  Pieter Wuille		the forking action is a too large block
				if we ask miners to switch temporarily to smaller blocks again,
				we should get to a single chain soon
				with a majority of miners on small blocks, there is no risk
  23:24  Luke Dashjr		so it's either 1) lose 6 blocks, or 2) hardfork for no benefit
  23:25  BTC Guild		We'll lose more than 6

BTC Guild was a large pool at the time, and its operator happened to be online. They are correct — the 0.8 branch had 6 blocks at the time, but was growing much faster than the 0.7 branch and would continue to grow until the latter gradually caught up. Eventually 24 blocks would be lost. BTC Guild will turn out to be a key player, as we will soon see.

More explanation for why downgrade is the right approach:

  23:25  Pieter Wuille		all old miners will stick to their own chain
				regardless of the mining power behind the other
  23:25  Luke Dashjr		and the sooner we decide on #1, the fewer it loses
  23:26  Pieter Wuille		even with 90% of mining power on 0.8
				all merchants on an old client will be vulnerable
  23:26  Luke Dashjr		if we hardfork, all Bitcoin service providers have an emergency situation
  23:30  Pieter Wuille		and we _cannot_ get every bitcoin user in the world
				to now instantly switch to 0.8
				so no, we need to rollback to the 0.7 chain


Many Bitcoin users are unaware that a few developers have the centralized ability to send alert messages to every client. Not only does it exist, it becomes crucial here, as Core developer Jeff Garzik and another user point out:

  23:31  Jeff Garzik		alert is definitely needed
  23:31  jrmithdobbs		also, if this isn't an alert message scenario 
				I don't know what is, where's that at? :)

A bit of a comic interlude from the operator of ozcoin, another mining pool:

  23:31  Graet			ozcoin wont even get looked at for an hour or so
  23:31  Graet			no-one is avalable and i need to take kids to school

The situation is urgent (the more clients upgrade, the harder it will be to convince everyone to downgrade):

  23:32  phantomcircuit		0.7 clients are already displaying a big scary warning
				Warning: Displayed transactions may not be correct!
				You may need to upgrade, or other nodes may need to upgrade.
				unfortunately i suspect that warning will get people to upgrade

Other complications due to custom behavior of some miners’ software:

  23:35  lianj			oh, damn. anyhow please keep us guys updated which code change is made 
				to solve the problem. our custom node does .8 behavior

Gavin Andresen and Jeff Garzik still aren’t convinced (they seem to be thinking about getting 0.8 nodes to switch to the other branch, rather than the more blunt solution of asking miners to downgrade the client):

  23:34  Jeff Garzik		and how to tell bitcoind "mine on $this old fork"
  23:35  Gavin Andresen		exactly. Even if we want to roll back to the 0.7-compatible chain, 
				I don't see an easy way to do that.

This shows the usefulness of the developers having direct channels to the pool operators, another benefit of central co-ordination:

  23:38  Luke Dashjr		FWIW, Josh (EclipseMC) has to be on a plane in 20 minutes,
				so he needs this decided before then :/

As time goes on the solution only gets harder, as illustrated by a new user wading into the channel.

  23:39  senseless		So whats the issue?
				I got the warning you need to upgrade. So I upgraded.

Gavin Andresen, notable for his cautious approach, brings up a potential problem:

  23:40  Gavin Andresen		If we go back to 0.7, then we risk some other block 
				triggering the same condition.

Happily, as others pointed out, there’s nothing to worry about — once majority hashpower is on 0.7, other blocks that have the same condition will be harmless one-block forks instead of a hard fork.

The BTC Guild operator offers to essentially end the fork:

  23:43  BTC Guild		I can single handedly put 0.7 back to the majority hash power
				I just need confirmation that thats what should be done
  23:44  Pieter Wuille		BTC Guild: imho, that is was you should do,
				but we should have consensus first

So much for decentralization! The fact that BTC Guild can tip the scales here is crucial. (The hash power distribution at the time appears to have been roughly 2/3 vs. 1/3 in favor of the 0.8 branch, and BTC Guild controlled somewhere between 20% and 30% of total hash power.) By switching, BTC Guild loses the work they’ve done on 0.8 since the fork started. On the other hand, they are more or less assured that the 0.7 branch will win and the fork will end, so at least their post-downgrade mining power won’t be wasted.

If mining power were instead distributed among thousands of small independent miners, it’s far from clear that coordinating them would be possible at all. More likely, each miner on the 0.8 branch would wait for the 0.7 branch to gain the majority hash power, or at least for things to start heading clearly in that direction, before deciding to downgrade. Meanwhile, some miners in the 0.7 branch, seeing the warning in their clients and unaware of the developer recommendation, would in fact upgrade. The 0.8 branch would pull ahead faster and faster, and pretty soon the window of opportunity would be lost. In fact, if the developers had delayed their decision by even a few hours, it’s possible that enough miners would have upgraded from 0.7 to 0.8 that no single miner or pool operator would be able to reverse it singlehandedly, and then it’s anybody’s guess as to whether the downgrade solution would have worked at all.

Back to our story: we’re nearing the critical moment.

  23:44  Jeff Garzik		ACK on preferring 0.7 chain, for the moment
  23:45  Gavin Andresen		BTC Guild: if you can cleanly get us back on the 0.7 chain,
				ACK from here, too

Consensus is reached!

Time for action

Right away, developers start giving out advice to downgrade:

  23:49  Luke Dashjr		surge_: downgrade to 0.7 if you mine, or just wait
  23:50  Pieter Wuille		doublec: do you operate a pool?
  23:50  doublec		yes
  23:50  Pieter Wuille		doublec: then please downgrade now

BTC Guild gets going immediately…

  23:51  BTC Guild		BTC Guild is going back to full default block settings and 0.7 soon.
  00:01  BTC Guild		Almost got one stratum node moved

… even at significant monetary cost.

  23:57  BTC Guild		I've lost way too much money in the last 24 hours
				from 0.8

One way to look at this is that BTC Guild sacrificed revenue for the good of the network. But these actions can also be justified from a revenue-maximizing perspective. If the BTC Guild operator believed that the 0.7 branch would win anyway (perhaps because the developers would be able to convince another large pool operator), then moving first is the best option, since delaying would only take BTC Guild further down the doomed branch. Either way, the key factor enabling BTC Guild to confidently downgrade is that by doing so, they can ensure that the 0.7 branch will win.

Now that the decision has been taken, it’s time to broadcast an alert to all nodes:

  00:07  Gavin Andresen		alert params set to relay for 15 minutes, expire after 4 hours

The alert in question is a model of brevity: “URGENT: chain fork, stop mining on version 0.8”

At this point people start flooding the channel and chaos reigns. However, the work is done, and only one final step remains.

At 00:29, Pieter Wuille posts to bitcointalk. This essentially concludes the crisis response. The post said, in its entirety:

Hello everyone,

there is an emergency right now: the block chain has split between 0.7+earlier and 0.8 nodes. I’ll explain the reasons in a minute, but this is what you need to know now:

  • After a discussion on #bitcoin-dev, it seems trying to get everyone on the old chain again is the least risky solution.
  • If you’re a miner, please do not mine on 0.8 code. Stop, or switch back to 0.7. BTCGuild is switching to 0.7, so the old chain will get a majority hash rate soon.
  • If you’re a merchant: please stop processing transactions until the chains converge.
  • If you’re on 0.7 or older, the client will likely tell you that you need to upgrade. Do not follow this advise – the warning should go away as soon as the old chain catches up
  • If you are not a merchant or a miner, don’t worry.

Crucially, note that he was able to declare that the 0.7 branch was going to win because BTC Guild was switching to it. This made downgrading the only rational decision for everyone else, and from here it was only a matter of time.

What would have happened if the developers had done nothing?

Throughout the text I’ve emphasized that the downgrade option was the correct one and that speed of developer response was of the essence. Let’s examine this claim further by thinking about what would have happened if the developers had simply let things take their course. Vitalik Buterin thinks everything would have been just fine: “if the developers had done nothing, then Bitcoin would have carried on nonetheless, only causing inconvenience to those bitcoind and BitcoinQt users who were on 0.7 and would have had to upgrade.”

Obviously, I disagree. We can’t know for sure what would have happened, but we can make informed guesses. First of all, the fork would have gone on for far longer — essentially until every last miner running version 0.7 or lower either shut down or upgraded their software. Given that many miners leave their setups unattended and others have custom setups that aren’t easy to upgrade quickly, the fork would have lasted days. This would have had several effects. Most notably, the psychological impact of an ongoing fork would have been serious. In contrast, as events actually turned out, the incident happened overnight in the US and had been resolved by the next morning, and media coverage praised the developers for their effective action. The price of Bitcoin dropped by 25% during the incident but immediately recovered to almost its previous value.

Another adverse impact is that exchanges or payment services that took too long to upgrade their clients (or disable transactions) might have found themselves victims of large double-spend attacks. As it happened, OKPay suffered a $10,000 double spend. This was done by a user trying to prove a point, who revealed the details publicly; they got lucky in that their payment to OKPay was confirmed by the 0.8 branch but not the 0.7 branch. A longer-running fork would likely have exacerbated the problem and allowed malicious attackers to figure out a systematic way to create double-spend transactions. [1]

Worse, it is possible, even if not likely, that the 0.7 branch might have continued indefinitely. If this did happen, it would have been devastating for Bitcoin, resulting in a fork of the currency itself. One reason the fork might have kept going is a “Goldfinger attacker” interested in destabilizing Bitcoin: such an attacker might not have the resources to execute a 51% attack, but the fork might give them just the opportunity they need. They could simply invest resources into keeping the 0.7 branch alive instead of launching an attack from scratch.

There’s another reason why the fork might never have ended. Miners who postponed their decision to switch from 0.7 to 0.8 by, say, a week would face the distasteful prospect of forgoing a week’s worth of mining revenue. They might instead gamble and continue to operate on the 0.7 branch as a big fish in a small pond. If the 0.7 branch had, say, 10% of the mining power of the 0.8 branch, a miner’s share of its blocks would be roughly ten times what the same hardware would earn on the 0.8 branch. Of course, the currency they’d earn would be “Bitcoin v0.7”, which would fork into a different currency from “Bitcoin v0.8” and would be worth much less, the latter being considered the legitimate Bitcoin. We analyze this type of situation in Chapter 7, “Community, Politics, and Regulation” of our Bitcoin textbook-in-progress, and in the corresponding sections of the video lecture.
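
The arithmetic behind that temptation, in a quick Python sketch; the hash power figures are the hypothetical ones from the paragraph above, and difficulty retargeting (which governs how fast each branch actually produces blocks) is ignored:

  # The "big fish in a small pond" arithmetic, using the hypothetical
  # figures from the paragraph above. Ignores difficulty retargeting.

  my_power = 1.0      # a single miner's hash power, arbitrary units
  power_08 = 100.0    # hash power already on the 0.8 branch
  power_07 = 10.0     # the 0.7 branch: 10% of the 0.8 branch's power

  share_08 = my_power / (power_08 + my_power)   # join the big branch
  share_07 = my_power / (power_07 + my_power)   # stay on the small one
  print(f"share of 0.8-branch blocks: {share_08:.3f}")   # ~0.010
  print(f"share of 0.7-branch blocks: {share_07:.3f}")   # ~0.091
  print(f"multiplier: {share_07 / share_08:.1f}x")       # ~9.2x

The multiplier comes out near 9x rather than exactly 10x because the miner’s own power swells the small branch. Against it stands the far lower value of the coins earned, which is the gamble the paragraph describes.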

While the exact course of events that would have resulted from inaction is debatable, it is clear that the downgrade solution was by far the less risky one, and the speed and clearheadedness of the developers’ response is commendable.

All this is in stark contrast to the dysfunctional state of the consensus process on the block size issue. Why is consensus on that issue failing? The main reason is that unlike the fork, there is no correct solution to the block size issue; instead there are various parties with differing goals that aren’t mutually aligned. Further, in the case of the fork, the developers had a well-honed process for coming to consensus on technical questions including bugs. For example, it was obvious to everyone that the discussion of the fork should take place on the #bitcoin-dev IRC channel; this didn’t even need to be said. On the other hand, there is no clear process for debating the block size issue, and the discussion is highly fragmented between different channels. Finally, once the developers had reached consensus about the fork, the community went with that decision because they trusted the developers’ technical competence. On the other hand, there is no single entity that the Bitcoin community trusts to make decisions that have economic implications.


In summary, we have a lot to learn from looking back at the fork. Bitcoin had a really close call, and another bug might well lead to a different outcome. Contrary to the view of the consensus protocol as fixed in stone by Satoshi, it is under active human stewardship, and the quality of that stewardship is essential to its security. [2] Centralized decision-making saved the day here, and for the most part it’s not in conflict with the decentralized nature of the network itself. The human element becomes crucial when the code fails or needs to adapt over time (e.g., the block size debate). We should accept and embrace the need for a strong leadership and governance structure instead of treating decentralization as a magic bullet.

[1] This gets into a subtle technical point: it’s not obvious how to get a transaction into one branch but not the other. By default any transaction that’s broadcast will simply be included in both branches, but there are several ways to try to subvert this. And given access to even one transaction that’s been successfully double-spent, an attacker can amplify it to gradually cause an arbitrary amount of divergence between the two branches.

[2] To underscore how far the protocol is from being fixed for all time by a specification, the source code of the reference implementation is the only correct documentation of the protocol. Even creating and maintaining a compatible implementation has proved to be near-infeasible.

Thanks to Andrew Miller for comments on a draft.


Too many SSNs floating around

In terms of impact, the OPM data breach involving security clearance information is almost certainly the most severe data breach in American history. The media has focused too much on social security numbers in its reporting, but is slowly starting to understand the bigger issues for anyone who has a clearance, or is a relative or neighbor or friend of someone with a clearance.

But the news got me thinking about the issue of SSNs and how widespread they are. The risks of using SSNs as both authenticator and identifier are well known, and over the past decade many organizations have tried to reduce their use of and reliance on SSNs, to minimize the damage done if (or maybe I should say “when”) a breach occurs.

In this blog post, I’m going to describe three recent cases involving SSNs that happened to me, and draw some lessons.

Like many suburbanites, I belong to Costco (a warehouse shopping club ideal for buying industrial quantities of toilet paper and guacamole, for those not familiar with the chain). A few months ago I lost my Costco membership card, so I went to get a new one, as a card is required for shopping in the store. The clerk looked up my driver’s license number (DL#) and couldn’t find me in the system; searching by address found me – but with my SSN as my DL#. When Costco first opened in my area, SSNs were still in use as DL#s, and so even though my DL# changed 20 years ago, Costco had no reason to know that, and still had my SSN. Hence, if there were a Costco breach, it’s quite possible that in addition to my name & address, an attacker would also get my SSN, along with some unknown number of other SSNs from long-term members. Does Costco even know that they have SSNs in their systems? Perhaps not, unless their IT staff includes old-timers!

A recent doctor’s visit had a similar result. The forms I was asked to fill out asked for my insurance ID (but not my SSN); however, the receipt helpfully provided at the end of my visit included my SSN, which I had provided the first time I saw that doctor 25 years ago. Does the doctor know that his systems still have SSNs for countless patients?

Last fall I did a TV interview; because of my schedule, the interview was taped in my home, and the cameraman’s equipment accidentally did some minor damage to my house (*). In order to collect payment for the damage, the TV station insisted on having my SSN for a tax form 1099 (**), which they helpfully suggested I email in. I had to make a decision – should I email it, send it via US mail, or forgo the $200 payment? (Ultimately I sent it via US mail; whether they then copied it down and emailed it, I have no idea.) I got the check – but I suspect my SSN is permanently in the TV station’s records, and most likely accessible to far too many people.

These cases got me thinking about where else my SSN is floating around, perhaps in organizations that don’t even realize they have SSNs that need to be protected. The grocery store probably got my DL# decades ago, when it was still my SSN, so I could get a check cashing card, and that number is probably still on file somewhere even though I haven’t written a check in a grocery store for 10 or 20 years. The car dealer that sold me my car five years ago has my SSN as part of the paperwork to file for a title with the Department of Motor Vehicles, even if they don’t have it from my DL#. Did they destroy their copy once they sent the paperwork to the DMV? I’m not betting on it. I cosigned an apartment lease for my daughter, before she had her own credit history, close to 10 years ago; that required my SSN, which is probably still in the landlord’s files. I met a salesperson 20 years ago who had his SSN on his business card, to make it easier for his customers in the classified world to look him up and verify his clearance. (I probably have his business card somewhere, but luckily for him I’m not very organized, so I can’t find it.) Many potential employers require an SSN as part of a job application; who knows how many of those records are floating around. Luckily, many of these files are paper records in a file cabinet, so mass breaches are unlikely, but it’s hard to know. Did any of them scan all of their old files and post them on a file server before destroying the paper copies?

As many people have suggested, it’s time to permanently retire SSNs as an authenticator and make them just an identifier. Unfortunately, that’s much easier said than done. Todd Davis, CEO of Lifelock, famously put his SSN in his company’s advertising, and was then the victim of identity theft. We all know that the “last four” of your SSN has become a less intrusive (and even less secure!) substitute authenticator.

So what should we do? If you’re a CIO or in a corporate IT department, think about all the places where SSNs may be hiding. They’re not always somewhere obvious, like personnel records; they may be in legacy systems that have never been cleaned up, as is probably the case for Costco and my doctor. And once you’ve finished with your electronic records, think about where SSNs are hiding in paper records. Those are certainly lower risk for a bulk theft, but they’re at some risk of insider theft. Can the old (paper) records simply get shredded? Does it really matter if you have records of who applied for a job or a check cashing card 15 years ago?

I’m not optimistic, but I’ll keep my eyes open for other places where SSNs are still hiding and shouldn’t be.

(*) Since you insist: one of the high intensity lights blew up, and the glass went flying, narrowly missing the producer. Two pieces melted into the carpet, ruining small sections. The staff were very apologetic, and there was no argument about their obligation to reimburse me for the damage. The bigger damage was that I spent an hour being interviewed on camera, and they used about 10 seconds in the TV piece.

(**) Yes, I know they shouldn’t need an SSN for reimbursement, but I unsuccessfully tilted at that windmill.


Congress’ Fast Track to Bad Law

Congress appears poised to pass Trade Promotion Authority, otherwise known as “fast track,” for the Trans Pacific Partnership Agreement (TPP). If this happens, it will likely close the door on any possibility of meaningful public input about TPP’s scope and contours. That’s a major problem, as this “21st century trade agreement,” encompassing around 800 million people in the United States and eleven other countries, will impact areas ranging from access to medicine (who gets it) to digital privacy rights (who has them). But unless you are a United States Trade Representative (USTR) “cleared advisor” (which almost always means that you represent an industry, like entertainment or pharmaceuticals) or, under certain limited circumstances, an elected official, your chief source of TPP information is WikiLeaks. In other words, if Julian Assange gets his hands on a draft TPP text, you might see it, once he decides that it should be made public. Of course, you’ll have to hope that the copy you see is current and accurate.

There have been no formal releases of the TPP’s text – not one. Thus, this 21st century agreement has been negotiated with 19th century standards of information access and flow. Indeed, TPP has been drafted with a degree of secrecy unprecedented for issues like intellectual property law and access to information. Some degree of secrecy and discretion is necessary in any negotiation, but the amount of secrecy here has left all but a few groups in the informational dark.

This process, if you want to call it that, defies logic. Margot Kaminski has labeled the entire process “transparency theater.” Perhaps most problematically, “transparency theater” has caused widespread opposition to TPP, like mine, that might otherwise not have materialized. Standing alone, the TPP’s negotiation process is sufficient to cause opposition. Additionally, the process has seemingly led to bad substance, which is a separate reason to oppose TPP. Imagine if bills in Congress were treated this way.

Meanwhile, fast track will mean that Congress simply votes yes or no on the entire deal. Fast track will therefore exacerbate that informational vacuum, and the public will not be able to do much more than accept whatever happens. In essence, an international agreement negotiated with no meaningful public input – and to some unknown degree written by a few industries – is about to be rushed through the domestic legislative process. [Note: I submitted testimony in the case referenced in the previous hyperlink by Yale Law School’s Media Freedom and Information Access Clinic.]

At this point, if you are at all concerned about the TPP’s process, the best thing that you can do is contact your Representatives and urge them to vote “no” on fast track. You could also join the call to formally release the TPP’s text before fast track is voted upon (i.e., right now). Finally, you could help assure that two other important international agreements currently in negotiation but in earlier stages – the Transatlantic Trade and Investment Partnership and Trade in Services Agreement – are negotiated more openly. How? By paying attention, and calling your elected officials and the USTR when things remain murky. I’ll have much more to say about these processes in the coming months.


An empirical study of Namecoin and lessons for decentralized namespace design

[Let’s welcome to Freedom to Tinker first-year grad student Miles Carlsten, who, with fellow first-years Harry Kalodner and Paul Ellenbogen, worked on a neat study of Namecoin. — Arvind Narayanan]

Namecoin is a Bitcoin-like cryptocurrency that aims to create a secure decentralized namespace — that is, an online system that maps names to values, but without the need for a central authority to manage the mappings [1]. In particular, Namecoin focuses on establishing a censorship-resistant alternative to the current centralized Domain Name System (DNS).

In a new paper to be presented at WEIS 2015, we report the results of an empirical study of Namecoin. Our primary finding is that so far Namecoin hasn’t succeeded at this goal — out of about 200,000 registered names, only 28 represent non-squatted domains with non-trivial content. We argue that there’s a crucial game-theoretic component to namespaces that must be designed properly for such systems to be successful.

[Read more…]


The story behind the picture of Nick Szabo with other Bitcoin researchers and developers

Reddit seems to have discovered this picture of a group of 20 Bitcoin people having dinner, and the community seems intrigued by Nick Szabo’s public presence. It’s actually an old picture, from March 2014. I was the chief instigator of that event, so let me tell the story of how that amazing group of people happened to be assembled at Princeton’s Prospect House.

Photo credit: Matt Green

[Read more…]


Bitcoin faces a crossroads, needs an effective decision-making process

Joint post with Andrew Miller.

Virtually unknown outside the Bitcoin community, a debate is raging about whether or not to increase the maximum size of Bitcoin blocks. Blocks are created in Bitcoin roughly once every ten minutes and are currently limited to a size of 1 megabyte, putting a limit on the rate at which the network can handle transactions. At first sight this might seem like a technical decision for the developers to make and indeed it’s largely being treated that way. In reality, it has far-reaching consequences for the Bitcoin ecosystem as it is the first truly contentious decision the Bitcoin community has faced. In fact, the manner in which the community reaches — or fails to reach — consensus on this issue may set a crucial precedent for Bitcoin’s long-term ability to survive, adapt, grow, and govern itself. [1]

[Read more…]


The Error of Fast Tracking the Trans-Pacific Partnership Agreement

National media reported yesterday that a Congressional agreement has been reached on so-called “fast track” authority for the Trans-Pacific Partnership Agreement (TPP). This international agreement, having been negotiated under extreme secrecy by 12 countries including the United States, Australia, Canada, Japan, Malaysia and Singapore, is supposed to be an “ambitious, next-generation, Asia-Pacific trade agreement that reflects U.S. economic priorities and values.” Indeed, if it comes into effect, it will be the largest such agreement in history, covering some 800 million people. Unfortunately, its chances of meeting that laudable goal have been severely diminished by the aforementioned secrecy.

In theory, “fast track” authority should allow the President to more thoroughly and forcefully negotiate trade agreements with other governments by streamlining the domestic political process. By eliminating much of Congress’s review and amendment process that could force the TPP negotiators back to the table, “trade promotion authority” allows for complex international trade agreements to receive a swift and decisive Congressional sign-off. However, because the TPP has been negotiated largely in secret, with only a precious few outside the government (almost exclusively representing the entertainment and pharmaceutical industries) privy to its text, fast track will have the effect of eliminating the last possibility for anyone outside the above select few to change the contours of the agreement. That’s a significant concern, as the TPP (based upon leaks) covers issues ranging from access to medicine to liability for linking to allegedly copyright-infringing content on the Internet. Democracy deserves better.

To be sure, even without fast track, the chances of realistically being able to change the TPP once it hits Congress would be slim. Requiring negotiators to go back to the table after the TPP text is agreed upon in international negotiations is a significant undertaking, and one that would be strongly discouraged. But with fast track in place, the chances of offering any meaningful amendments to the final text are near zero. As a result, the moment that TPP’s negotiators announce that they have a final text will also be the effective end of the opportunity for small businesses, labor, civil society groups, and even the general public to impact the provisions of the agreement. Their only play will be to oppose TPP outright (which, in fairness, some may do regardless of how TPP was negotiated).

The very secrecy around TPP could be its undoing, as it was with the failed Anti-Counterfeiting Trade Agreement. Therefore, it is well past the time that the negotiators should make the text public. If it isn’t released, and soon, “fast track” could become a fast track to failure of this multi-year negotiating process – which, depending on the terms of the agreement, could be the right result.