
Archives for August 2015

Bitcoin course available on Coursera; textbook is now official

Earlier this year we made our online course on Bitcoin publicly available: 11 video lectures and draft chapters of our textbook-in-progress, including exercises. The response has been very positive: numerous students have sent us thanks, comments, feedback, and a few error corrections. We’ve heard that our materials are being used in courses at a few universities. Some students have even translated the chapters into other languages.

Coursera. I’m very happy to announce that the course is now available as a Princeton University online course on Coursera. The first iteration starts next Friday, September 4. The Coursera version offers embedded quizzes to test your understanding; you’ll also be part of a community of students to discuss the lectures with (about 15,000 have already signed up). We’ve also fixed all the errors we found, thanks to the video editing skillz of the Princeton Broadcast Center folks. Sign up now, it’s free!

We’re closely watching ongoing developments in the cryptocurrency world such as Ethereum. Whenever a body of scientific knowledge develops around a new area, we will record additional lectures. The Coursera class already includes one additional lecture: it’s on the history of cryptocurrencies by Jeremy Clark. Jeremy is the ideal person to give this lecture for many reasons, including the fact that he worked with David Chaum for many years.

Jeremy Clark lecturing on the history of cryptocurrencies

Textbook. We’re finishing the draft of the textbook; Chapter 8 was released today and the rest will be coming out in the next few weeks. The textbook closely follows the structure of the lectures, but the textual format has allowed us to refine and polish the explanations, making them much clearer in many places, in my opinion.

I’m excited to announce that we’ll be publishing the textbook with Princeton University Press. The draft chapters will continue to be available free of charge, but you should buy the book: it will be peer reviewed, professionally edited and typeset, and the graphics will be redone professionally.

Finally, if you’re an educator interested in teaching Bitcoin, write to us and we’ll be happy to share with you some educational materials that aren’t yet public.

Robots don't threaten, but may be useful threats

Hi, I’m Joanna Bryson, and I’m just starting as a fellow at CITP, on sabbatical from the University of Bath.  I’ve been blogging about natural and artificial intelligence since 2007, increasingly with attention to public policy.  I’ve been writing about AI ethics since 1998.  This is my first blog post for Freedom to Tinker.

Will robots take our jobs?  Will they kill us in war?  The answer to these questions depends not (just) on technological advances – for example in the area of my own expertise, AI – but on how we as a society determine to view what it means to be a moral agent.  This may sound esoteric, and indeed the term moral agent comes from philosophy.  An agent is something that changes its environment (so chemical agents cause reactions).  A moral agent is something society holds responsible for the changes it effects.

Should society hold robots responsible for taking jobs or killing people?  My argument is “no”.  The fact that humans have full authorship over robots’ capacities, including their goals and motivations, means that transferring responsibility to them would require abandoning, ignoring or just obscuring the obligations of the humans and human institutions that create the robots.  Using language like “killer robots” can confuse the tax-paying public, already easily led by science fiction and runaway agency detection, into believing that robots are sentient competitors.  This belief ironically serves to protect the people and organisations that are actually the moral actors.

So robots don’t kill or replace people; people use robots to kill or replace each other.  Does that mean there’s no problem with robots?  Of course not. Asking whether robots (or any other tools) should be subject to policy and regulation is a very sensible question.

In my first paper about robot ethics (you probably want to read the 2011 update for IJCAI, Just an Artifact: Why Machines are Perceived as Moral Agents), Phil Kime and I argued that as we gain greater experience of robots, we will stop reasoning about them so naïvely, and stop ascribing moral agency (and patiency [PDF, draft]) to them.  Whether or not we were right is an empirical question I think would be worth exploring – I’m increasingly doubting whether we were.  Emotional engagement with something that seems humanoid may be inevitable.  This is why one of the five Principles of Robotics (a UK policy document I coauthored, sponsored by the British engineering and humanities research councils) says “Robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent.” Or in ordinary language, “Robots are artifacts; they should not be designed to exploit vulnerable users by evoking an emotional response or dependency. It should always be possible to tell a robot from a human.”

Nevertheless, I hope that by continuing to educate the public, we can at least help people make sensible conscious decisions about allocating their resources (such as time or attention) between real humans versus machines.  This is why I object to language like “killer robots.”  And this is part of the reason why my research group works on increasing the transparency of artificial intelligence.

However, maybe the emotional response we have to the apparently human-like threat of robots will also serve some useful purposes.  I did sign the “killer robot” letter, because although I dislike the headlines associated with it, the actual letter (titled “Autonomous Weapons: an Open Letter from AI & Robotics Researchers”) makes clear the nature of the threat of taking humans out of the loop on real-time kill decisions.  Similarly, I am currently interested in understanding the extent to which information technology, including AI, is responsible for the levelling off of wages since 1978.  I am still reading and learning about this; I think it’s quite possible that the problem is not information technology per se, but rather culture, politics and policy more generally.  However, 1978 was a long time ago.  If more pictures of the Terminator get more people attending to questions of income inequality and the future of labour, maybe that’s not a bad thing.

How not to measure security

A recent paper published by Smartmatic, a vendor of voting systems, caught my attention.

The first thing is that it’s published by Springer, which typically publishes peer-reviewed articles – which this is not. This is a marketing piece. It’s disturbing that a respected imprint like Springer would get into the business of publishing vendor white papers. There’s no disclaimer that it’s not a peer-reviewed piece, or any other indication that it doesn’t follow Springer’s historical standards.

The second, and more important, issue is that the article could not possibly have passed peer review, given some of its claims. I won’t go into the controversies around voting systems (a nice summary of some of those issues can be found on the OSET blog), but rather focus on some of the security metrics claims.

The article states, “Well-designed, special-purpose [voting] systems reduce the possibility of results tampering and eliminate fraud. Security is increased by 10-1,000 times, depending on the level of automation.”

That would be nice. However, we have no agreed-upon way of measuring the security of systems (other than cryptographic algorithms, within limits). So the only way this claim is meaningful is if it’s qualified and explained – which it isn’t. Other studies, such as one I participated in (Applying a Reusable Election Threat Model at the County Level), have tried to quantify the risk to voting systems – our study measured risk in terms of the number of people required to carry out an attack. So is Smartmatic’s study claiming that they can make an attack require 10 to 1,000 times more people, 10 to 1,000 times more money, 10 to 1,000 times more expertise (however that would be measured!), or something entirely different?

But the most outrageous statement in the article is this:

The important thing is that, when all of these methods [for providing voting system security] are combined, it becomes possible to calculate with mathematical precision the probability of the system being hacked in the available time, because an election usually happens in a few hours or at the most over a few days. (For example, for one of our average customers, the probability was 1×10⁻¹⁹. That is a point followed by 19 [sic] zeros and then 1). The probability is lower than that of a meteor hitting the earth and wiping us all out in the next few years—approximately 1×10⁻⁷ (Chemical Industry Education Centre, Risk-Ed n.d.)—hence it seems reasonable to use the term ‘unhackable’, to the chagrin of the purists and to my pleasure.

As noted previously, we don’t know how to measure much of anything in security, and we’re even less capable of measuring the results of combining technologies together (which sometimes makes things more secure, and other times less secure). The claim that putting multiple security measures together gives risk probabilities with “mathematical precision” is ludicrous. And calling any system “unhackable” is just ridiculous, as Oracle discovered some years ago when the marketing department claimed their products were “unhackable”. (For the record, my colleagues in engineering at Oracle said they were aghast at the slogan.)
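To see why, here is a purely illustrative back-of-the-envelope calculation (my own arithmetic, not anything taken from the Smartmatic article). Suppose a system layers five defenses, and each one is assumed to be defeated during the election window with probability 1×10⁻⁴. Only if those failures are statistically independent does the combined probability factor as

P(breach) = p₁ × p₂ × p₃ × p₄ × p₅ = (1×10⁻⁴)⁵ = 1×10⁻²⁰

which is the only kind of arithmetic that produces headline numbers like 1×10⁻¹⁹. A single shared weakness – an insider, a compromised build process, a common software bug – defeats several layers at once, violates the independence assumption, and takes the “mathematical precision” with it.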

As Ron Rivest said at a CITP symposium, if voting vendors have “solved the Internet security and cybersecurity problem, what are they doing implementing voting systems? They should be working with the Department of Defense or financial industry. These are not solved problems there.” If Smartmatic has a method for obtaining and measuring security with “mathematical precision” at the level of 10⁻¹⁹, they should be selling trillions of dollars in technology or expertise to every company on the planet, and putting everyone else out of business.

I debated posting this blog entry, because it may bring more attention to a marketing piece that should be buried. But I hope that writing this will dissuade anyone who might be persuaded by Smartmatic’s unsupported claims that masquerade as science. And I hope that it may embarrass Springer into rethinking their policy of posting articles like this as if they were scientific.

The Defend Trade Secrets Act Has Returned

Freedom to Tinker readers may recall that I’ve previously warned about legislation to create a federal private cause of action for trade secret misappropriation in the name of fighting cyber-espionage against United States businesses. Titled the Defend Trade Secrets Act (DTSA), it failed to move last year. Well, the concerning legislation has returned, and, although it has some changes, it is little better than its predecessor. In fact, it may be worse.

Therefore, Sharon Sandeen and I have authored a new letter to Congress. In it, we point out that our previously stated concerns remain, as expressed both in an earlier letter and in a law review article entitled Here Come The Trade Secret Trolls. In sum, we argue that, combined “with an ex parte seizure remedy, embedded assumption of harm, and ambiguous language about the inevitable disclosure doctrine, the new DTSA appears to not only remain legislation with significant downsides, but those downsides may actually be even more pronounced.” Moreover, we assert that “the DTSA still does not do much, if anything, to address the problem of cyber-espionage that cannot already be done under existing state and federal law.”

In the letter, we call on Congress to abandon the DTSA. In addition, we ask that “there be public hearings on (a) the benefits and drawbacks of the DTSA, and (b) the specific question of whether the DTSA addresses the threat of cyber-espionage.” Finally, we encourage Congress to consider alternatives in dealing with cyber-espionage, including much-needed amendment of the Computer Fraud and Abuse Act.