August 29, 2015


Bitcoin course available on Coursera; textbook is now official

Earlier this year we made our online course on Bitcoin publicly available: 11 video lectures and draft chapters of our textbook-in-progress, including exercises. The response has been very positive: numerous students have sent us thanks, comments, feedback, and a few error corrections. We’ve heard that our materials are being used in courses at a few universities. Some students have even translated the chapters into other languages.

Coursera. I’m very happy to announce that the course is now available as a Princeton University online course on Coursera. The first iteration starts next Friday, September 4. The Coursera version offers embedded quizzes to test your understanding; you’ll also be part of a community of students to discuss the lectures with (about 15,000 have already signed up). We’ve also fixed all the errors we found, thanks to the video editing skillz of the Princeton Broadcast Center folks. Sign up now, it’s free!

We’re closely watching ongoing developments in the cryptocurrency world such as Ethereum. Whenever a body of scientific knowledge develops around a new area, we will record additional lectures. The Coursera class already includes one additional lecture: it’s on the history of cryptocurrencies by Jeremy Clark. Jeremy is the ideal person to give this lecture for many reasons, including the fact that he worked with David Chaum for many years.

Jeremy Clark lecturing on the history of cryptocurrencies

Textbook. We’re finishing the draft of the textbook; Chapter 8 was released today and the rest will be coming out in the next few weeks. The textbook closely follows the structure of the lectures, but the textual format has allowed us to refine and polish the explanations, making them much clearer in many places, in my opinion.

I’m excited to announce that we’ll be publishing the textbook with Princeton University Press. The draft chapters will continue to be available free of charge, but you should buy the book: it will be peer reviewed, professionally edited and typeset, and the graphics will be redone professionally.

Finally, if you’re an educator interested in teaching Bitcoin, write to us and we’ll be happy to share with you some educational materials that aren’t yet public.


Robots don’t threaten, but may be useful threats

Hi, I’m Joanna Bryson, and I’m just starting as a fellow at CITP, on sabbatical from the University of Bath.  I’ve been blogging about natural and artificial intelligence since 2007, increasingly with attention to public policy.  I’ve been writing about AI ethics since 1998.  This is my first blog post for Freedom to Tinker.

Will robots take our jobs?  Will they kill us in war?  The answer to these questions depends not (just) on technological advances – for example in the area of my own expertise, AI – but on how we as a society determine to view what it means to be a moral agent.  This may sound esoteric, and indeed the term moral agent comes from philosophy.  An agent is something that changes its environment (so chemical agents cause reactions).  A moral agent is something society holds responsible for the changes it effects.

Should society hold robots responsible for taking jobs or killing people?  My argument is “no”.  The fact that humans have full authorship over robots’ capacities, including their goals and motivations, means that transferring responsibility to them would require abandoning, ignoring or just obscuring the obligations of humans and human institutions that create the robots.  Using language like “killer robots” can confuse the tax-paying public, already easily led by science fiction and runaway agency detection, into believing that robots are sentient competitors.  This belief ironically serves to protect the people and organisations that are actually the moral actors.

So robots don’t kill or replace people; people use robots to kill or replace each other.  Does that mean there’s no problem with robots?  Of course not. Asking whether robots (or any other tools) should be subject to policy and regulation is a very sensible question.

In my first paper about robot ethics (you probably want to read the 2011 update for IJCAI, Just an Artifact: Why Machines are Perceived as Moral Agents), Phil Kime and I argued that as we gain greater experience of robots, we will stop reasoning about them so naïvely, and stop ascribing moral agency (and patiency [PDF, draft]) to them.  Whether or not we were right is an empirical question I think would be worth exploring – I’m increasingly doubting whether we were.  Emotional engagement with something that seems humanoid may be inevitable.  This is why one of the five Principles of Robotics (a UK policy document I coauthored, sponsored by the British engineering and humanities research councils) says “Robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent.” Or in ordinary language, “Robots are artifacts; they should not be designed to exploit vulnerable users by evoking an emotional response or dependency. It should always be possible to tell a robot from a human.”

Nevertheless, I hope that by continuing to educate the public, we can at least help people make sensible conscious decisions about allocating their resources (such as time or attention) between real humans versus machines.  This is why I object to language like “killer robots.”  And this is part of the reason why my research group works on increasing the transparency of artificial intelligence.

However, maybe the emotional response we have to the apparently human-like threat of robots will also serve some useful purposes.  I did sign the “killer robot” letter, because although I dislike the headlines associated with it, the actual letter (titled “Autonomous Weapons: an Open Letter from AI & Robotics Researchers“) makes clear the nature of the threat of taking humans out of the loop on real-time kill decisions.   Similarly, I am currently interested in understanding the extent to which information technology, including AI, is responsible for the levelling off of wages since 1978.  I am still reading and learning about this; I think it’s quite possible that the problem is not information technology per se, but rather culture, politics and policy more generally.  However, 1978 was a long time ago.  If more pictures of the Terminator get more people attending to questions of income inequality and the future of labour, maybe that’s not a bad thing.


How not to measure security

A recent paper published by Smartmatic, a vendor of voting systems, caught my attention.

The first thing is that it’s published by Springer, which typically publishes peer-reviewed articles – which this is not. This is a marketing piece. It’s disturbing that a respected imprint like Springer would get into the business of publishing vendor white papers. There’s no disclaimer that it’s not a peer-reviewed piece, or any other indication that it doesn’t follow Springer’s historical standards.

The second, and more important issue, is that the article could not possibly have passed peer review, given some of its claims. I won’t go into the controversies around voting systems (a nice summary of some of those issues can be found on the OSET blog), but rather focus on some of the security metrics claims.

The article states, “Well-designed, special-purpose [voting] systems reduce the possibility of results tampering and eliminate fraud. Security is increased by 10-1,000 times, depending on the level of automation.”

That would be nice. However, we have no agreed-upon way of measuring the security of systems (other than cryptographic algorithms, within limits). So the only way this is meaningful is if it’s qualified and explained – which it isn’t. Other studies, such as one I participated in (Applying a Reusable Election Threat Model at the County Level), have tried to quantify the risk to voting systems – our study measured risk in terms of the number of people required to carry out the attack. So is Smartmatic’s study claiming that they can make an attack require 10 to 1,000 times more people, 10 to 1,000 times more money, 10 to 1,000 times more expertise (however that would be measured!), or something entirely different?

But the most outrageous statement in the article is this:

The important thing is that, when all of these methods [for providing voting system security] are combined, it becomes possible to calculate with mathematical precision the probability of the system being hacked in the available time, because an election usually happens in a few hours or at the most over a few days. (For example, for one of our average customers, the probability was 1×10⁻¹⁹. That is a point followed by 19 [sic] zeros and then 1). The probability is lower than that of a meteor hitting the earth and wiping us all out in the next few years—approximately 1×10⁻⁷ (Chemical Industry Education Centre, Risk-Ed n.d.)—hence it seems reasonable to use the term ‘unhackable’, to the chagrin of the purists and to my pleasure.

As noted previously, we don’t know how to measure much of anything in security, and we’re even less capable of measuring the results of combining technologies together (which sometimes makes things more secure, and other times less secure). The claim that putting multiple security measures together gives risk probabilities with “mathematical precision” is ludicrous. And calling any system “unhackable” is just ridiculous, as Oracle discovered some years ago when the marketing department claimed their products were “unhackable”. (For the record, my colleagues in engineering at Oracle said they were aghast at the slogan.)
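To see where a number like 1×10⁻¹⁹ plausibly comes from, here is a minimal sketch of the kind of arithmetic that produces such figures. The per-control probabilities below are placeholders I made up for illustration, not anything from the Smartmatic paper; the point is that the impressively tiny headline number falls straight out of an independence assumption that is never justified.

  # Hypothetical per-layer probabilities that an attacker defeats each control.
  # These values are invented for illustration only.
  controls = {
      "encrypted ballots":  1e-4,
      "digital signatures": 1e-4,
      "access controls":    1e-3,
      "audit logging":      1e-3,
      "physical security":  1e-2,
      "procedural checks":  1e-3,
  }

  p_hacked = 1.0
  for name, p_bypass in controls.items():
      p_hacked *= p_bypass   # valid only if the layers fail independently

  print(f"'Probability of the system being hacked': {p_hacked:.0e}")   # ~1e-19

The arithmetic is trivial; the premise is the problem. Real attacks (an insider, a compromised build machine, a procedural lapse) tend to defeat several layers at once, so the independence assumption, and with it the headline number, collapses.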

As Ron Rivest said at a CITP symposium, if voting vendors have “solved the Internet security and cybersecurity problem, what are they doing implementing voting systems? They should be working with the Department of Defense or financial industry. These are not solved problems there.” If Smartmatic has a method for obtaining and measuring security with “mathematical precision” at the level of 10⁻¹⁹, they should be selling trillions of dollars in technology or expertise to every company on the planet, and putting everyone else out of business.

I debated posting this blog entry, because it may bring more attention to a marketing piece that should be buried. But I hope that writing this will dissuade anyone who might be persuaded by Smartmatic’s unsupported claims that masquerade as science. And I hope that it may embarrass Springer into rethinking their policy of posting articles like this as if they were scientific.


The Defend Trade Secrets Act Has Returned

Freedom to Tinker readers may recall that I’ve previously warned about legislation to create a federal private cause of action for trade secret misappropriation in the name of fighting cyber-espionage against United States businesses. Titled the Defend Trade Secrets Act (DTSA), it failed to move last year. Well, the concerning legislation has returned, and, although it has some changes, it is little better than its predecessor. In fact, it may be worse.

Therefore, Sharon Sandeen and I have authored a new letter to Congress. In it, we point out that our previously-stated concerns remain, both in a previous letter and in a law review article entitled Here Come The Trade Secret Trolls. In sum, we argue that combined “with an ex parte seizure remedy, embedded assumption of harm, and ambiguous language about the inevitable disclosure doctrine, the new DTSA appears to not only remain legislation with significant downsides, but those downsides may actually be even more pronounced.” Moreover, we assert that “the DTSA still does not do much, if anything, to address the problem of cyber-espionage that cannot already be done under existing state and federal law.”

In the letter, we call on Congress to abandon the DTSA. In addition, we ask that “there be public hearings on (a) the benefits and drawbacks of the DTSA, and (b) the specific question of whether the DTSA addresses the threat of cyber-espionage.” Finally, we encourage Congress to consider alternatives in dealing with cyber-espionage, including much-needed amendment of the Computer Fraud and Abuse Act.


Does cloud mining make sense?

[Paul Ellenbogen is a second year Ph.D. student at Princeton who’s been looking into the economics and game theory of Bitcoin, among other topics. He’s a coauthor of our recent paper on Namecoin and namespaces. — Arvind Narayanan]

Currently, if I wanted to mine Bitcoin I would need to buy specialized hardware, called application-specific integrated circuits (ASICs). I would need to find room for the hardware, which could take up considerable space. I might need to install a new cooling system in the facility to dissipate the considerable heat generated by the hardware.

Or I could buy a cloud mining contract. Cloud mining companies bill themselves as companies that take care of all of the gritty details and allow the consumer to directly buy hash power with dollars. Most cloud mining companies offer contracts of varying term lengths, anywhere from a few weeks to perpetuity. For example, I could pay $300 and receive one terahash per second for the next year. As soon as the cloud hashing provider receives my money, they start up a miner, or allocate me existing cycles, and I should start earning bitcoins in short order. Sounds easy, right?
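As a sanity check on that pitch, here is a rough back-of-the-envelope calculation of what such a contract could return. Every number in it (network hash rate, block reward, exchange rate, the $300 for 1 TH/s terms) is an assumption I picked for illustration, roughly in line with 2015 conditions, not a quote from any actual provider.

  contract_price_usd = 300.0     # hypothetical contract: 1 TH/s for one year
  my_hashrate        = 1e12      # 1 TH/s, in hashes per second
  network_hashrate   = 400e15    # assumed average network rate: 400 PH/s
  block_reward       = 25.0      # BTC per block, before the 2016 halving
  blocks_per_day     = 144       # one block roughly every 10 minutes
  days               = 365
  btc_price_usd      = 250.0     # assumed average exchange rate

  # Expected rewards are proportional to my share of total hash power.
  expected_btc = (my_hashrate / network_hashrate) * block_reward * blocks_per_day * days
  expected_usd = expected_btc * btc_price_usd

  print(f"expected mining income: {expected_btc:.2f} BTC (~${expected_usd:.0f})")
  print(f"contract price:         ${contract_price_usd:.0f}")

This static estimate ignores difficulty growth, which shrinks the 1 TH/s share of the network month after month, as well as maintenance fees, so the realized return is lower than the printout suggests. Whether the contract beats simply buying $300 worth of bitcoins depends entirely on these assumptions, and on the provider actually running the hardware.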

Cloud mining has a bad track record. Many cloud mining services have closed up shop and run off with customer money. Examples include PBmining, lunaminer, and cloudminr.io. Gavin Andresen, a Bitcoin Core developer, once speculated that cloud mining doesn’t make any sense and that most of these services will end up as scams.

Cloud mining has been a popular front for Ponzi schemes, investment frauds where old customers or investors are paid with the money of new customers. In the case of cloud mining Ponzi schemes, the bitcoins that pay old contracts are furnished from the payments of new customers. Ponzi schemes tend to collapse when the flow of new customers dries up, or when a large number of customers try to cash out. Cloud mining is a particularly appealing target for Ponzi schemes because the second failure case, cashing out, is not an option for those holding mining contracts. The contracts stipulate a return of bitcoins determined by hash rate. This means Ponzi scheme operators only need to keep recruiting new users for as long as possible. Bitcointalk user Puppet points out a set of 7 useful criteria for spotting cloud mining scams. Of the 42 operations Puppet examines, they identify 30 as scams, 14 of which have already ceased operation.

Yet cloud mining persists. That so many cloud mining operations end up being scams may appeal to our basic business intuition. Compare a cloud miner to a traditional bitcoin miner. A traditional bitcoin miner mines bitcoins and sells them on the exchange at their current market rate. It seems that the only way for a cloud miner to do better than a traditional bitcoin miner selling bitcoins at market price is at the expense of the cloud mining customer. It appears there is no way for both cloud miner and their customer to walk away better off.

Still, cloud mining, and at least some interest in it, persists. I would like to offer some possible scenarios in which cloud mining may actually deliver the hashes that customers order.

Hired guns? Papers that propose attacks against Bitcoin often pose “An attacker with X% of the hash power could do Y.” For example, in selfish mining, as first described by Eyal et al., an attacker with 33% of the mining power could force the rest of the network to mine on top of their blocks. Cloud miners could be used for block withholding attacks too. An important feature of many of these attacks is that the mining power need not be used all the time. These attacks would require flexibility in the mining software the attackers are using, as most off-the-shelf mining software (thankfully) does not have these attacks built in. Most cloud mining setups I have looked at don’t allow enough flexibility to launch attacks, nor are the contract periods on most services short enough. Cloud mining customers typically have a simple web interface, and in the best case are able to choose which pools they join, but they do not have any sort of scriptable direct interface to the mining hardware. At the moment, cloud miners are probably not supporting themselves by executing attacks for others.

Regulatory loophole? Individuals may try to use cloud mining to circumvent Bitcoin regulations, such as know-your-customer rules. If I want to turn my dollars into bitcoins, I can buy bitcoins at an exchange, but that exchange would have to know my true identity in order to comply with regulations. Unscrupulous individuals may not want their identity linked to cash flows reported to the government. Cloud mining operators and unscrupulous customers may try to skirt these regulations by claiming that cloud mining operations are not exchanges or banks, but merely rent out computer hardware like any cloud computing provider, and so do not need to comply with banking regulation. It is unlikely this would be viable long term, or even short term, as regulators would become wise to these sorts of loopholes and close them. This paragraph is the most speculative on my part, as I am neither a regulator nor a lawyer, so I don’t have expertise to draw on from either of those fields.

Financial instrument? Currently most bitcoin miners take on two roles: managing the mining hardware and managing the financial risk involved in mining. A more compelling justification for cloud miners’ existence is that cloud mining contracts allow the provider to avoid volatility in the exchange rate of bitcoin and variability in the hash rate. Cloud mining is a means of hedging risk. If cloud miners can enter contracts to provide a certain hash rate to a customer for a length of time, the cloud miner does not need to concern themselves with the exchange rate or the hash rate once the contract begins. It then becomes the job of the customer contracting the cloud miner to manage the risk presented by volatility in the exchange rate. This would allow the cloud miner to specialize in buying, configuring, and maintaining mining hardware, and other individuals to specialize in managing risk related to bitcoin. As the financial instruments surrounding cryptocurrencies become more sophisticated, a terahash could become just another cryptocurrency security that is traded.
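Here is a small sketch of that risk transfer, again with purely illustrative numbers: the provider's dollar revenue is fixed the moment the contract is sold, while the customer's payoff swings with whatever the exchange rate and network hash rate turn out to be.

  def customer_btc(contract_ths, network_phs, block_reward=25.0, days=365):
      """BTC delivered by a fixed hash-rate contract, given the average
      network hash rate over the contract period."""
      share = (contract_ths * 1e12) / (network_phs * 1e15)
      return share * block_reward * 144 * days

  contract_price_usd = 300.0       # the provider's revenue, locked in up front
  scenarios = [                    # (avg network PH/s, avg BTC price in USD)
      (350, 500),                  # slow hash-rate growth, price rally
      (500, 250),                  # moderate growth, flat price
      (800, 150),                  # fast growth, price slump
  ]

  for network_phs, btc_price in scenarios:
      btc = customer_btc(1.0, network_phs)
      print(f"network {network_phs} PH/s, price ${btc_price}: customer gets "
            f"{btc:.2f} BTC (~${btc * btc_price:.0f}); provider still gets "
            f"${contract_price_usd:.0f}")

Under these made-up scenarios the customer's outcome varies severalfold while the provider's does not move at all, which is exactly the specialization described above.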

 

Acknowledgment: I would like to thank Joseph Bonneau for contributing the “cloud mining as a means of managing risk” concept.


The Chilling Effects of Confidentiality Creep

Today, North Carolina’s Governor Pat McCrory has a bill on his desk that would make it impossible for the public to find out what entities are supplying the chemical cocktail – the drugs – to be used for lethal injections in North Carolina. Known as the Restoring Proper Justice Act (the “Act”), it defines  “confidential information” as including the “name, address, qualifications, and other identifying information of any person or entity that manufactures, compounds, prepares, prescribes, dispenses, supplies, or administers the drugs or supplies” used in lethal injections, thereby shutting down mandatory public access to basic information involved in extinguishing life in the name of justice. Secret suppliers are but one effect of this legislation; the Act also allows executions to resume in North Carolina and permits a wide range of medical professionals (not just a doctor) to assist in executions.

Call this an example of “confidentiality creep” – quiet, under-scrutinized expansion of the kinds of information deemed inappropriate for public consumption. The Act does not call this information a “trade secret” – information that is valuable because it is not known by a competitor – even though some of it could conceivably be that. Nor is it defined as the “property” of a private person or entity, even though “qualifications” might be deemed such. No; this information is designated “confidential” simply because the Legislature says that it is. It’s a new category in the law.

Before you stop reading, consider that confidentiality creep is not an abstraction, of interest merely to commercial law, freedom of information, and privacy nerds. Regardless of your personal views about the death penalty, whenever the government designates any information secret, we should all take a close look.

The expansion of confidentiality could have repercussions beyond mere access to information. To the extent that we untether legal confidentiality from any clear theoretical grounding – privacy, property, commercial ethics, contract – it runs the risk of being a powerful catch-all, subject to abuse. Slowly, if unchecked, we might expect that the work of cybersecurity researchers, journalists, civil society groups, and anyone else who accesses information deemed “confidential” would be threatened. Those “chilling effects” so prevalent in copyright could become even more powerful where information is used in a way unflattering to those who wave the confidentiality wand, backed by a rudderless law.

As to this particular case of confidentiality creep, there are very real and pernicious impacts. For example, if the Act becomes law, cruel and unusual punishment challenges in North Carolina would be subject to drug manufacturers’ preference not to be bothered. Investigators of gruesome botched executions, which have occurred in other states, would now have to jump an unknown hurdle to gather information about the drugs used. After all, the public might want to know how a drug was manufactured, where, under what conditions, and the like. But under the Act – if enforced – all identifying information would have to be gathered by leak, by a whistleblower, or by exploiting a mistaken release of information, which might then render the information excluded from evidence.

But wait: the Act’s title suggests that it is supposed to be “restoring proper justice.” Surely there must be a good reason for this confidentiality? According to press reports, the best (and perhaps only) apparent reason is a concern that, if identified, these manufacturers will choose not to provide the drugs out of fear of litigation by death penalty opponents, or be forced not to by a court. North Carolina Policy Watch reported that Sen. Buck Newton explained that confidentiality is needed “so that they aren’t litigated to death in order to prevent them from selling these drugs to the state.” Additionally, Rep. Leo Daughtry, a sponsor of the Act, noted that “if you tell [opponents] where the drug comes from, there will be 300 people outside the building.”

Put aside, for the moment, the assertion that a cost of doing business with the government – especially when it involves something as serious and irreversible as the administration of the death penalty – might be public accountability and scrutiny. As argued by the North Carolina American Civil Liberties Union’s Sarah Preston, “Courts, lawyers and the public have a right to know basic details about how the government executes inmates in their name.” Instead, let’s take the above argument seriously and see where it leads.

Generally, the argument is that if [name your information] is made public, then [name your private manufacturer] won’t provide the [name your good or service] to the government. Well, we’ve heard it before, and it doesn’t pass muster. From voting machines to hydraulic fracturing, weakening commercial confidentiality has generally not resulted in private entities withdrawing from providing the good or service; rather, they’ve adjusted. Moreover, private entities can raise prices in order to hedge against the risk of litigation. Indeed, the threat of exposure might force such manufacturers to provide better goods that are less susceptible to challenge. In other words, the impact of publicity is not all bad – even for the private entities, like drug companies, potentially subject to public scrutiny.

So where does this leave us? As alluded to above, we often forget that doing the public’s business is not the same as private commerce, precisely because the customer is the public itself. We often wind up with confidentiality creep, expanding the definition of what we call “confidential” for underexplored or non-explored reasons, with little or no public discussion. If you’re not convinced, throw the efforts to gain public access to the basic negotiating text for the nearing-completion multilateral Trans Pacific Partnership Agreement into the mix, because that’s confidentiality creep also. It’s not publicly available, for unclear reasons.

We, the public, need to pay a lot more attention to these developments. If you’re concerned about the above, contact Governor McCrory’s office now and urge him to veto the Restoring Proper Justice Act. Aside from any other reasons that you might have, tell him that preventing the risk of a fleeing commercial provider, if it is a problem at all, comes at enormous public cost. Do it now, as he could sign the Act at any moment.

More broadly, think critically when you hear arguments about the need for confidentiality. Admittedly, the press does not usually cover these issues, dismissing them as arcane or too complex. Nonetheless, I’ll do my best to document these issues as they arise, because they are too important to ignore. Let’s try to stop the quiet, baseless, under-explained creep.

 


Analyzing the 2013 Bitcoin fork: centralized decision-making saved the day

On March 11, 2013, Bitcoin experienced a technical crisis. Versions 0.7 and 0.8 of the software diverged from each other in behavior due to a bug, causing the block chain to “fork” into two. Considering how catastrophic a hard fork can be, the crisis was resolved quickly with remarkably little damage owing to the exemplary competence of the developers in charge. The event gives us a never-before-never-again look into Bitcoin’s inner workings. In this post, I’ll do a play-by-play analysis of the dramatic minutes, and draw many surprising lessons. For a summary of the event, see here.

First of all, the incident shows the necessity of an effective consensus process for the human actors in the Bitcoin ecosystem. The chronology of events was like this: it took about an hour after the fork started for enough evidence to accumulate that a fork was afoot. Once that was understood, things unfolded with remarkable speed and efficiency. The optimal course of action (and, in retrospect, the only one that avoided serious risks to the system) was first proposed and justified 16 minutes later, and the developers reached consensus on it a mere 20–25 minutes after that. Shortly thereafter — barely an hour after the discovery of the fork — the crisis response had effectively and successfully concluded. It took a few hours more for the fork to heal based on the course of action the developers initiated, but the outcome wasn’t in doubt.

More surprisingly, it also shows the effectiveness of strong central leadership. That’s because the commonsense solution to the fork — as well as the one programmed into the software itself — was to encourage miners running old versions to upgrade. As it turns out, the correct response was exactly the opposite. Even a delay of a few hours in adopting the downgrade solution would have been very risky, as I’ll argue, with potentially devastating consequences. Without the central co-ordination of the Bitcoin Core developers and the strong trust that the community places in them, it is inconceivable that adopting this counterintuitive solution could have been successfully accomplished.

Further, two more aspects of centralization proved very useful, although perhaps not as essential. The first is the ability of a few developers who possess a cryptographic key to broadcast alert messages to every client, which in this case was used to urge miners to downgrade. The second is the fact that the operator of BTC Guild, a large mining pool at the time, was able to singlehandedly shift the balance of mining power to the old branch by downgrading. Without this, the downgrade would have become a messy “coordination problem” among miners, and we can imagine each one hesitating, waiting for someone else to take the leap.

Since most of the discussion and decision-making happened on the #bitcoin-dev IRC channel, it remains publicly archived and offers a remarkable window into the Core developers’ leadership and consensus process. Consensus operated at remarkable speed, in this instance faster than consensus happens in the Bitcoin network itself. The two levels of consensus are intricately connected.

Let’s now dive into the play-by-play analysis of the fork and the reaction to it. I’ve annotated the transcript of selected, key events from the IRC log of that fateful night. I’ve made minor edits to make the log easier to read — mainly replacing the nicknames of prominent community members with their real names, since their identities are salient to the discussion.

Signs of trouble

The first signs that something is wrong come from a miner with the nickname thermoman, as well as Jouke Hofman, a Dutch exchange operator, who report strange behavior from their Bitcoin clients. Bitcoin Core developer Pieter Wuille helps them debug these problems, but at this point everyone assumes these are problems local to the two users, rather than something on the network. But around 23:00, people notice strange behavior from a variety of other services (blockchain.info, blockexplorer.com, and the IRC bot that reports the status of the network), making it obvious that something’s wrong on the network. Luke Dashjr, a prominent developer, spells out the unthinkable:

  23:06  Luke Dashjr		so??? yay accidental hardfork? :x
  23:06  Jouke Hofman		Holy crap

Over the next few minutes people convince themselves that there’s a fork and that nodes running the 0.8 and the 0.7 versions are on different sides of it. Things progress rapidly from here. A mere five minutes later the first measure to mitigate the damage is taken by Mark Karpeles, founder of Mt. Gox:

  23:11  Mark Karpeles		I've disabled the import of bitcoin blocks for now
				until this is sorted out
  23:13  Luke Dashjr		I'm trying to contact poolops [mining pool operators]

It’s pretty obvious at this point that the best short-term fix is to get everyone on one side of the fork. But which one?

Up or down? The critical decision

At 23:18, Pieter Wuille sends a message to the bitcoin-dev mailing list, informing them of the problem. But he hasn’t fully grasped the nature of the fork yet, stating “We risk having (several) forked chains with smaller blocks” and suggests upgrading as the solution. This is unfortunate, but it’s the correct thing to do given his understanding of the fork. This email will stay uncorrected for 45 minutes, and is arguably the only slight misstep in the developer response.

  23:21  Luke Dashjr		at least 38% [of hashpower] is on 0.8 right now
				otoh, that 38% is actively reachable

Dashjr seems to suggest that the 0.8-to-0.7 downgrade is better because the operators of newly upgraded nodes are more likely to be reachable by the developers to convince them to downgrade. This is a tempting argument. Indeed, when I describe the fork in class and ask my students why the developers picked the downgrade rather than the upgrade, this is the explanation they always come up with. When I push them to think harder, a few figure out the right answer, which Dashjr points out right afterward:

  23:22  Gavin Andresen		the 0.8 fork is longer, yes? So majority hashpower is 0.8....
  23:22  Luke Dashjr		Gavin Andresen: but 0.8 fork is not compatible
				earlier will be accepted by all versions

Indeed! The behavior of the two versions is not symmetric. Upgrading will mean that the fork will persist essentially indefinitely, while downgrading will end it relatively quickly.
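To make the asymmetry concrete, here is a toy model (my own illustration, not real consensus code) of the two validity rules. Under the 0.8 rules both branches are valid, so 0.8 nodes will reorganize onto the 0.7 branch as soon as it pulls ahead; under the 0.7 rules the branch containing the troublesome block is invalid no matter how long it grows, so nodes that haven't upgraded can never converge onto it.

  def valid_under_0_7(block):
      # 0.7 could not process the troublesome block (a database lock limit),
      # so under its rules that block, and every block built on it, is rejected.
      return not block.get("troublesome", False)

  def valid_under_0_8(block):
      return True        # 0.8 accepts ordinary blocks and the troublesome one

  # Illustrative branches; the heights are made up.
  branch_07 = [{"height": h} for h in range(100, 106)]
  branch_08 = [{"height": 100, "troublesome": True}] + \
              [{"height": h} for h in range(101, 124)]

  def chain_ok(chain, rule):
      return all(rule(b) for b in chain)

  print(chain_ok(branch_07, valid_under_0_7))   # True:  0.7 nodes accept the old branch
  print(chain_ok(branch_07, valid_under_0_8))   # True:  0.8 nodes accept it too
  print(chain_ok(branch_08, valid_under_0_7))   # False: 0.7 nodes can never accept the new branch

So if the 0.7 branch regains the majority of hash power and pulls ahead, 0.8 nodes reorganize onto it and everyone converges; if instead the 0.8 branch is left to win, every node that hasn't upgraded stays forked indefinitely.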

(Lead developer) Gavin Andresen still protests, but Wuille also accepts Dashjr’s explanation:

  23:23  Gavin Andresen		first rule of bitcoin: majority hashpower wins
  23:23  Luke Dashjr		if we go with 0.8, we are hardforking
  23:23  Pieter Wuille		the forking action is a too large block
				if we ask miners to switch temporarily to smaller blocks again,
				we should get to a single chain soon
				with a majority of miners on small blocks, there is no risk
  23:24  Luke Dashjr		so it's either 1) lose 6 blocks, or 2) hardfork for no benefit
  23:25  BTC Guild		We'll lose more than 6

BTC Guild was a large pool at the time, and its operator happened to be online. They are correct — the 0.8 branch had 6 blocks at the time, but was growing much faster than the 0.7 branch and would continue to grow until the latter gradually caught up. Eventually 24 blocks would be lost. BTC Guild will turn out to be a key player, as we will soon see.

More explanation for why downgrade is the right approach:

  23:25  Pieter Wuille		all old miners will stick to their own chain
				regardless of the mining power behind the other
  23:25  Luke Dashjr		and the sooner we decide on #1, the fewer it loses
  23:26  Pieter Wuille		even with 90% of mining power on 0.8
				all merchants on an old client will be vulnerable
  23:26  Luke Dashjr		if we hardfork, all Bitcoin service providers have an emergency situation
  23:30  Pieter Wuille		and we _cannot_ get every bitcoin user in the world
				to now instantly switch to 0.8
				so no, we need to rollback to the 0.7 chain

Achtung!

Many Bitcoin users are not aware that Bitcoin has a centralized alert mechanism: a few developers can broadcast alert messages to every client. Not only does it exist, it becomes crucial here, as Core developer Jeff Garzik and another user point out:

  23:31  Jeff Garzik		alert is definitely needed
  23:31  jrmithdobbs		also, if this isn't an alert message scenario 
				I don't know what is, where's that at? :)

A bit of a comic interlude from the operator of ozcoin, another mining pool:

  23:31  Graet			ozcoin wont even get looked at for an hour or so
  23:31  Graet			no-one is avalable and i need to take kids to school

The situation is urgent (the more clients upgrade, the harder it will be to convince everyone to downgrade):

  23:32  phantomcircuit		0.7 clients are already displaying a big scary warning
				Warning: Displayed transactions may not be correct!
				You may need to upgrade, or other nodes may need to upgrade.
				unfortunately i suspect that warning will get people to upgrade

Other complications due to custom behavior of some miners’ software:

  23:35  lianj			oh, damn. anyhow please keep us guys updated which code change is made 
				to solve the problem. our custom node does .8 behavior

Gavin Andresen and Jeff Garzik still aren’t convinced (they seem to be thinking about getting 0.8 nodes to switch to the other branch, rather than the more blunt solution of asking miners to downgrade the client):

  23:34  Jeff Garzik		and how to tell bitcoind "mine on $this old fork"
  23:35  Gavin Andresen		exactly. Even if we want to roll back to the 0.7-compatible chain, 
				I don't see an easy way to do that.

This shows the usefulness of the developers having direct channels to the pool operators, another benefit of central co-ordination:

  23:38  Luke Dashjr		FWIW, Josh (EclipseMC) has to be on a plane in 20 minutes,
				so he needs this decided before then :/

As time goes on the solution only gets harder, as illustrated by a new user wading into the channel.

  23:39  senseless		So whats the issue?
				I got the warning you need to upgrade. So I upgraded.

Gavin Andresen, notable for his cautious approach, brings up a potential problem:

  23:40  Gavin Andresen		If we go back to 0.7, then we risk some other block 
				triggering the same condition.

Happily, as others pointed out, there’s nothing to worry about — once majority hashpower is on 0.7, other blocks that have the same condition will be harmless one-block forks instead of a hard fork.

The BTC Guild operator offers to essentially end the fork:

  23:43  BTC Guild		I can single handedly put 0.7 back to the majority hash power
				I just need confirmation that thats what should be done
  23:44  Pieter Wuille		BTC Guild: imho, that is was you should do,
				but we should have consensus first

So much for decentralization! The fact that BTC Guild can tip the scales here is crucial. (The hash power distribution at that time appears to be roughly 2/3 vs 1/3 in favor of the 0.8 branch, and BTC Guild controlled somewhere between 20% and 30% of total hash power.) By switching, BTC Guild loses the work they’ve done on 0.8 since the fork started. On the other hand, they are more or less assured that the 0.7 branch will win and the fork will end, so at least their post-downgrade mining power won’t be wasted.
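Using the rough figures above (a 2/3 vs. 1/3 split, and 25% for BTC Guild as the midpoint of the 20-30% estimate), the arithmetic of the switch looks like this:

  share_08  = 2 / 3     # hash power on the 0.8 branch before the switch
  share_07  = 1 / 3     # hash power on the 0.7 branch before the switch
  btc_guild = 0.25      # BTC Guild's share: midpoint of the 20-30% estimate

  # BTC Guild downgrades: its share moves from the 0.8 column to the 0.7 column.
  share_08_after = share_08 - btc_guild
  share_07_after = share_07 + btc_guild

  print(f"before: 0.8 branch {share_08:.0%}, 0.7 branch {share_07:.0%}")
  print(f"after:  0.8 branch {share_08_after:.0%}, 0.7 branch {share_07_after:.0%}")

One pool’s decision turns the 0.7 branch from a clear minority (roughly 33%) into the majority (roughly 58%), which is what made it possible to announce with confidence that the old chain would win.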

If mining power were instead distributed among thousands of small independent miners, it’s far from clear that coordinating them would be possible at all. More likely, each miner on the 0.8 branch would wait for the 0.7 branch to gain the majority hash power, or at least for things to start heading clearly in that direction, before deciding to downgrade. Meanwhile, some miners in the 0.7 branch, seeing the warning in their clients and unaware of the developer recommendation, would in fact upgrade. The 0.8 branch would pull ahead faster and faster, and pretty soon the window of opportunity would be lost. In fact, if the developers had delayed their decision by even a few hours, it’s possible that enough miners would have upgraded from 0.7 to 0.8 that no single miner or pool operator would be able to reverse it singlehandedly, and then it’s anybody’s guess as to whether the downgrade solution would have worked at all.

Back to our story: we’re nearing the critical moment.

  23:44  Jeff Garzik		ACK on preferring 0.7 chain, for the moment
  23:45  Gavin Andresen		BTC Guild: if you can cleanly get us back on the 0.7 chain,
				ACK from here, too

Consensus is reached!

Time for action

Right away, developers start giving out advice to downgrade:

  23:49  Luke Dashjr		surge_: downgrade to 0.7 if you mine, or just wait
  23:50  Pieter Wuille		doublec: do you operate a pool?
  23:50  doublec		yes
  23:50  Pieter Wuille		doublec: then please downgrade now

BTC Guild gets going immediately…

  23:51  BTC Guild		BTC Guild is going back to full default block settings and 0.7 soon.
  00:01  BTC Guild		Almost got one stratum node moved

… even at significant monetary cost.

  23:57  BTC Guild		I've lost way too much money in the last 24 hours
				from 0.8

One way to look at this is that BTC Guild sacrificed revenues for the good of the network. But these actions can also be justified from a revenue-maximizing perspective. If the BTC Guild operator believed that the 0.7 branch would win anyway (perhaps the developers would be able to convince another large pool operator), then moving first is relatively best, since delaying would only take BTC Guild further down the doomed branch. Either way, the key factor enabling BTC Guild to confidently downgrade is that by doing so, they can ensure that the 0.7 branch will win.

Now that the decision has been taken, it’s time to broadcast an alert to all nodes:

  00:07  Gavin Andresen		alert params set to relay for 15 minutes, expire after 4 hours

The alert in question is a model of brevity: “URGENT: chain fork, stop mining on version 0.8”

At this point people start flooding the channel and chaos reigns. However, the work is done, and only one final step remains.

At 00:29, Pieter Wuille posts to bitcointalk. This essentially concludes the crisis response. The post said, in its entirety:

Hello everyone,

there is an emergency right now: the block chain has split between 0.7+earlier and 0.8 nodes. I’ll explain the reasons in a minute, but this is what you need to know now:

  • After a discussion on #bitcoin-dev, it seems trying to get everyone on the old chain again is the least risky solution.
  • If you’re a miner, please do not mine on 0.8 code. Stop, or switch back to 0.7. BTCGuild is switching to 0.7, so the old chain will get a majority hash rate soon.
  • If you’re a merchant: please stop processing transactions until the chains converge.
  • If you’re on 0.7 or older, the client will likely tell you that you need to upgrade. Do not follow this advise – the warning should go away as soon as the old chain catches up
  • If you are not a merchant or a miner, don’t worry.

Crucially, note that he was able to declare that the 0.7 branch was going to win due to BTC Guild switching to it. This made the downgrade decision the only rational one for everyone else, and from here things were only a matter of time.

What would have happened if the developers had done nothing?

Throughout the text I’ve emphasized that the downgrade option was the correct one and that speed of developer response was of the essence. Let’s examine this claim further by thinking about what would have happened if the developers had simply let things take their course. Vitalik Buterin thinks everything would have been just fine: “if the developers had done nothing, then Bitcoin would have carried on nonetheless, only causing inconvenience to those bitcoind and BitcoinQt users who were on 0.7 and would have had to upgrade.”

Obviously, I disagree. We can’t know for sure what would have happened, but we can make informed guesses. First of all, the fork would have gone on for far longer — essentially until every last miner running version 0.7 or lower either shut down or upgraded their software. Given that many miners leave their setups unattended and others have custom setups that aren’t easy to upgrade quickly, the fork would have lasted days. This would have had several effects. Most obviously, the psychological impact of an ongoing fork would have been serious. In contrast, as events actually turned out, the fork happened overnight in the US and had been resolved by the next morning, and media coverage praised the developers for their effective action. The price of Bitcoin dropped by 25% during the incident but recovered immediately to almost its previous value.

Another adverse impact is that exchanges or payment services that took too long to upgrade their clients (or disable transactions) might have found themselves victims of large double-spend attacks. As it happened, OKPay suffered a $10,000 double spend. This was done by a user who was trying to prove a point and who revealed the details publicly; they got lucky in that their payment to OKPay was confirmed by the 0.8 branch but not the 0.7 branch. A longer-running fork would likely have exacerbated the problem and allowed malicious attackers to figure out a systematic way to create double-spend transactions. [1]

Worse, it is possible, even if not likely, that the 0.7 branch might have continued indefinitely. Obviously, if this had happened, it would have been devastating for Bitcoin, resulting in a fork of the currency itself. One reason the fork might have kept going is a “Goldfinger attacker” interested in de-stabilizing Bitcoin: they might not have the resources to execute a 51% attack, but the fork might give them just the opportunity they need. They could simply invest resources into keeping the 0.7 branch alive instead of launching an attack from scratch.

There’s another reason why the fork might have never ended. Miners who postponed their decision to switch from 0.7 to 0.8 by, say, a week would face the distasteful prospect of forgoing a week’s worth of mining revenue. They might instead gamble and continue to operate on the 0.7 branch as a big fish in a small pond. If the 0.7 branch had, say, 10% of the mining power of the 0.8 branch, the miner’s revenue would be multiplied tenfold by mining on the 0.7 branch. Of course, the currency they’d earn would be “Bitcoin v0.7”, which would fork into a different currency from “Bitcoin v0.8”, and would be worth much less, the latter being considered the legitimate Bitcoin. We analyze this type of situation in Chapter 7, “Community, Politics, and Regulation” of our Bitcoin textbook-in-progress or the corresponding sections of the video lecture.
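The tenfold figure is just the ratio of hash-power shares. Here is the arithmetic with illustrative numbers:

  miner_hashpower = 1.0      # the fence-sitting miner, in arbitrary units
  branch_08_power = 100.0    # total hash power on the 0.8 branch
  branch_07_power = 10.0     # total on the 0.7 branch: 10% of the 0.8 branch

  share_on_08 = miner_hashpower / (branch_08_power + miner_hashpower)
  share_on_07 = miner_hashpower / (branch_07_power + miner_hashpower)

  print(f"share of block rewards on the 0.8 branch: {share_on_08:.2%}")   # ~0.99%
  print(f"share of block rewards on the 0.7 branch: {share_on_07:.2%}")   # ~9.09%

Roughly a tenfold difference in reward share, which is exactly the gamble described above: a much bigger slice of a branch whose coins may end up worth far less.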

While the exact course of events that would have resulted from inaction is debatable, it is clear that the downgrade solution is by far the less risky one, and the speed and clearheadedness of the developers’ response is commendable.

All this is in stark contrast to the dysfunctional state of the consensus process on the block size issue. Why is consensus on that issue failing? The main reason is that unlike the fork, there is no correct solution to the block size issue; instead there are various parties with differing goals that aren’t mutually aligned. Further, in the case of the fork, the developers had a well-honed process for coming to consensus on technical questions including bugs. For example, it was obvious to everyone that the discussion of the fork should take place on the #bitcoin-dev IRC channel; this didn’t even need to be said. On the other hand, there is no clear process for debating the block size issue, and the discussion is highly fragmented between different channels. Finally, once the developers had reached consensus about the fork, the community went with that decision because they trusted the developers’ technical competence. On the other hand, there is no single entity that the Bitcoin community trusts to make decisions that have economic implications.

Conclusion

In summary, we have a lot to learn from looking back at the fork. Bitcoin had a really close call, and another bug might well lead to a different outcome. Contrary to the view of the consensus protocol as fixed in stone by Satoshi, it is under active human stewardship, and the quality of that stewardship is essential to its security. [2] Centralized decision-making saved the day here, and for the most part it’s not in conflict with the decentralized nature of the network itself. The human element becomes crucial when the code fails or needs to adapt over time (e.g., the block size debate). We should accept and embrace the need for a strong leadership and governance structure instead of treating decentralization as a magic bullet.

[1] This gets into a subtle technical point: it’s not obvious how to get a transaction to get into one branch but not the other. By default any transaction that’s broadcast will just get included in both branches, but there are several ways to try to subvert this. But given access to even one transaction that’s been successfully double-spent, an attacker can amplify it to gradually cause an arbitrary amount of divergence between the two branches.

[2] To underscore how far the protocol is from being fixed for all time by a specification, the source code of the reference implementation is the only correct documentation of the protocol. Even creating and maintaining a compatible implementation has proved to be near-infeasible.

Thanks to Andrew Miller for comments on a draft.


Too many SSNs floating around

In terms of impact, the OPM data breach involving security clearance information is almost certainly the most severe data breach in American history. The media has focused too much on social security numbers in its reporting, but is slowly starting to understand the bigger issues for anyone who has a clearance, or is a relative or neighbor or friend of someone with a clearance.

But the news got me thinking about the issue of SSNs, and how widespread they are. The risks of SSNs as both authentication and identifier are well known, and over the past decade, many organizations have tried to reduce their use of and reliance on SSNs, to minimize the damage done if (or maybe I should say “when”) a breach occurs.

In this blog post, I’m going to describe three recent cases involving SSNs that happened to me, and draw some lessons.

Like many suburbanites, I belong to Costco (a warehouse shopping club ideal for buying industrial quantities of toilet paper and guacamole, for those not familiar with the chain). A few months ago I lost my Costco membership card, so I went to get a new one, as a card is required for shopping in the store. The clerk looked up my driver’s license number (DL#) and couldn’t find me in the system; searching by address found me – but with my SSN as my DL#. When Costco first opened in my area, SSNs were still in use as DL#s, and so even though my DL# changed 20 years ago, Costco had no reason to know that, and still had my SSN. Hence, if there were a Costco breach, it’s quite possible that in addition to my name & address, an attacker would also get my SSN, along with some unknown number of other SSNs from long-term members. Does Costco even know that they have SSNs in their systems? Perhaps not, unless their IT staff includes old-timers!

A recent doctor’s visit had a similar result. The forms I was asked to fill out asked for my insurance ID (but not my SSN); however, the receipt helpfully provided at the end of my visit included my SSN, which I had provided the first time I saw that doctor 25 years ago. Does the doctor know that his systems still have SSNs for countless patients?

Last fall I did a TV interview; because of my schedule, the interview was taped in my home, and the cameraman’s equipment accidentally did some minor damage to my house (*). In order to collect payment for the damage, the TV station insisted on having my SSN for a tax form 1099 (**), which they helpfully suggested I email in. I had to make a decision – should I email it, send it via US mail, or forgo the $200 payment? (Ultimately I sent it via US mail; whether they then copied it down and emailed it, I have no idea.) I got the check – but I suspect my SSN is permanently in the TV station’s records, and most likely accessible to far too many people.

These cases got me thinking where else my SSN is floating around, perhaps in organizations that don’t even realize they have SSNs that need to be protected. The grocery store probably got my DL# decades ago when it was still my SSN so I could get a check cashing card, and that number is probably still on file somewhere even though I haven’t written a check in a grocery store for 10 or 20 years. The car dealer that sold me my car five years ago has my SSN as part of the paperwork to file for a title with the Department of Motor Vehicles, even if they don’t have it from my DL#. Did they destroy their copy once they sent the paperwork to DMV? I’m not betting on it. I cosigned an apartment lease for my daughter before she had her own credit history close to 10 years ago, and that required my SSN, which is probably still in their files. I met a sales person 20 years ago who had his SSN on his business card, to make it easier for his customers in the classified world to look him up and verify his clearance. (I probably have his business card somewhere, but luckily for him I’m not very organized so I can’t find it.) Many potential employers require an SSN as part of a job application; who knows how many of those records are floating around. Luckily, many of these files are paper records in a file cabinet, and so mass breaches are unlikely, but it’s hard to know.  Did any of them scan all of their old files and post them on a file server, before destroying the paper copies?

As many people have suggested, it’s time to permanently retire SSNs as an authenticator, and make them just an identifier. Unfortunately, that’s much easier said than done. Todd Davis, CEO of Lifelock, famously put his SSN in his company’s advertising, and was then the victim of identity theft. We all know that the “last four” of your SSN has become a less intrusive (and even less secure!) substitute authenticator.

So what should we do? If you’re a CIO or in a corporate IT department, think about all the places where SSNs may be hiding. They’re not always obvious, like personnel records, but may be in legacy systems that have never been cleaned up, as is probably the case for Costco and my doctor. And once you get finished with your electronic records, think about where they’re hiding in paper records. Those are certainly lower risk for a bulk theft, but they’re at some risk of insider theft. Can the old (paper) records simply get shredded? Does it really matter if you have records of who applied for a job or a check cashing card 15 years ago?

I’m not optimistic, but I’ll keep my eyes open for other places where SSNs are still hiding, but shouldn’t be.

(*) Since you insist: one of the high intensity lights blew up, and the glass went flying, narrowly missing the producer. Two pieces melted into the carpet, ruining small sections. The staff were very apologetic, and there was no argument about their obligation to reimburse me for the damage. The bigger damage was that I spent an hour being interviewed on camera, and they used about 10 seconds in the TV piece.

(**) Yes, I know they shouldn’t need an SSN for reimbursement, but I unsuccessfully tilted at that windmill.


Congress’ Fast Track to Bad Law

Congress appears poised to pass Trade Promotion Authority, otherwise known as “fast track,” for the Trans Pacific Partnership Agreement (TPP). If this happens, it will likely close the door to any possibility of meaningful public input about TPP’s scope and contours. That’s a major problem, as this “21st century trade agreement,” encompassing around 800 million people in the United States and eleven other countries, will impact areas ranging from access to medicine (who gets it) to digital privacy rights (who has them). But, unless you are a United States Trade Representative (USTR) “cleared advisor” (which almost always means that you represent an industry, like entertainment or pharmaceuticals), or, under certain limited circumstances, an elected official, your chief source of TPP information is WikiLeaks. In other words, if Julian Assange gets his hands on a draft TPP text, you might see it, once he decides that it should be made public. Of course, you’ll have to hope that the copy that you see is current and accurate.

There have been no – not one – formal releases of the TPP’s text. Thus, this 21st century agreement has been negotiated with 19th century standards of information access and flow. Indeed, TPP has been drafted with a degree of secrecy unprecedented for issues like intellectual property law and access to information. Some degree of secrecy and discretion is necessary in any negotiation, but the amount of secrecy here has left all but a few groups in the informational dark.

This process, if you want to call it that, defies logic. Margot Kaminski has labeled the entire process “transparency theater.” Perhaps most problematically, “transparency theater” has caused widespread opposition to TPP, like mine, that might otherwise not have materialized. Standing alone, the TPP’s negotiation process is sufficient to cause opposition. Additionally, the process has seemingly led to bad substance, which is a separate reason to oppose TPP. Imagine if bills in Congress were treated this way?

Meanwhile, fast track will mean that Congress will simply vote yes or no on the entire deal. Therefore, fast track will exacerbate that informational vacuum, and the public will not be able to do much more than accept whatever happens. In essence, an international agreement negotiated with no meaningful public input – and to some unknown degree written by a few industries —  is about to be rushed through the domestic legislative process. [Note: I submitted testimony in the case referenced in the previous hyperlink by Yale Law School’s Media Freedom and Information Access Clinic].

At this point, if you are at all concerned about the TPP’s process, the best thing that you can do is contact your Representatives and urge them to vote “no” on fast track. You could also join the call to formally release the TPP’s text before fast track is voted upon (i.e., right now). Finally, you could help assure that two other important international agreements currently in negotiation but in earlier stages – the Transatlantic Trade and Investment Partnership and Trade in Services Agreement – are negotiated more openly. How? By paying attention, and calling your elected officials and the USTR when things remain murky. I’ll have much more to say about these processes in the coming months.