Emin Gün Sirer has a fascinating post about how the use of NoSQL caused technical failures that led to the demise of the Bitcoin exchange Flexcoin and a major theft at Poloniex. But these are only the latest in a long line of hacks of exchanges, other services, and individuals; a wide variety of bugs have been implicated. This suggests that there’s some underlying reason why Bitcoiners keep building systems that get exploited. In this post I’ll examine why.
[This is the first in a series of posts giving some examples of security-related research in the Princeton computer science department. We're actively recruiting top-notch students to enter our Ph.D. program, as well as postdocs and visiting scholars. We don't have enough bandwidth here on the blog to feature everything we do, so we'll be highlighting a few examples over the next couple of weeks.]
Everything we do on the web is tracked, profiled, and analyzed. But what do companies do with that information? To what extent do they use it in ways that benefit us, versus discriminatory ways? While many concerns have been raised, not much is known quantitatively. That’s why at Princeton we’re building an infrastructure to detect, measure and reverse engineer differential treatment of web users.
Joint post with Andrew Miller, University of Maryland.
Bitcoin is broken, claims a new paper by Cornell researchers Ittay Eyal and Emin Gün Sirer. No it isn’t, respond Bitcoiners. Yes it is, say the authors. Our own Ed Felten weighed in with a detailed analysis, refuting the paper’s claim that a coalition of “selfish miners” will grow in size until it controls the whole currency. But this has been disputed as well.
In other words, the jury is still out. But something has been lost in all the noise over the grandiose claims — on the way to their strong claim, the authors make a weaker and much more defensible argument, namely that selfish miners can earn more than their fair share of mining revenue.
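That weaker claim is easy to check numerically. Below is a minimal Monte Carlo sketch of the selfish-mining strategy (my own illustrative code, not the paper's), where `alpha` is the pool's fraction of total hash power and `gamma` is the fraction of honest miners that build on the pool's block during a tie:

```python
import random

# Monte Carlo sketch of the selfish-mining strategy described in the paper.
# Function and parameter names are mine, not the authors'.
def simulate(alpha, gamma=0.0, blocks=500_000, seed=0):
    rng = random.Random(seed)
    pool, honest = 0, 0   # blocks each side earns on the eventual main chain
    lead = 0              # length of the pool's private lead
    race = False          # True while two equal-length branches are competing
    for _ in range(blocks):
        if rng.random() < alpha:          # pool finds the next block
            if race:
                pool += 2                 # pool extends its branch and wins the race
                race, lead = False, 0
            else:
                lead += 1                 # keep the new block private
        else:                             # honest network finds the next block
            if race:
                if rng.random() < gamma:  # honest block lands on the pool's branch
                    pool += 1
                    honest += 1
                else:
                    honest += 2
                race, lead = False, 0
            elif lead == 0:
                honest += 1
            elif lead == 1:
                race = True               # pool publishes its block, creating a tie
                lead = 0
            elif lead == 2:
                pool += 2                 # pool publishes both blocks and wins outright
                lead = 0
            else:
                pool += 1                 # pool reveals one block, keeping its lead
                lead -= 1
    return pool / (pool + honest)
```

In this sketch, a pool with 40% of the hash power and `gamma = 0` earns roughly 48% of the revenue — more than its fair share — while a pool below the paper's one-third threshold earns less than it would by mining honestly.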
Two weeks ago I asked for real-life examples of ethical dilemmas in software engineering. Many of you sent responses by email, twitter, and comments. Thank you for taking the time! Here is a quick summary (in no particular order).
Aaron Massey has written a very thoughtful post in response. I encourage you to give it a read. Let me highlight one point he makes in particular that I found very insightful:
Worse, ethics and failure tend to be lumped together, at least in software engineering. When I’ve asked questions similar to Arvind’s in the past, I’ve found that important or noticeable failures are common, but these are not always the most useful for learning ethics. Consider the Therac-25 failure, in which several deaths occurred because of a software engineering failure. While this is a serious failure, I’m not sure it’s fair to say that this is a great example of an ethical dilemma. The developers of the software weren’t tempted to introduce the bug; it was simply an accident of construction. Had they known about this beforehand, it’s likely they would have fixed it. Similar arguments can be made for things like the failed launch of the Ariane-5 or the Mars Climate Orbiter, which are also commonly mentioned. I suppose these are reasonable examples of the need to at least not be totally ambivalent about engineering projects, but they aren’t great examples of ethical dilemmas.
Next, a person who wishes to be anonymous writes by email:
Here’s one that happened to me [...] It was the website for a major clothing brand targeted at one gender. They were running a competition for entrants to win one of five of a very cool prize (think iPhone or Xbox). At the end of the competition management asked us to randomly extract five winners from the database. So we wrote a little script to pull five random entries — it returned 3 from one gender and 2 from the other.
We sent the names up the chain but then head office came back and asked us to redraw as they didn’t want any winners from the non-target gender. We refused based on the Anti-Discrimination Act here in my home state.
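For what it's worth, the fair draw described above really is only a few lines. Here is a hypothetical sketch (names and data invented; `entries` stands in for the rows pulled from the competition database):

```python
import random

# Hypothetical sketch of a fair prize draw: pick five entrants
# uniformly at random, with no repeats.
def draw_winners(entries, k=5, seed=None):
    rng = random.Random(seed)
    return rng.sample(entries, k)
```

Re-drawing until no winners come from the non-target gender, as head office requested, would make the draw non-uniform: precisely the line the team refused to cross.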
On a historical note, Steve Bellovin points to the history of the Computer Professionals for Social Responsibility. I’d never heard of the organization; it appears that it started in 1983, had been relatively inactive in recent years, and was dissolved a few months ago.
Augie Fackler writes
designing a security algorithm without any peer review — this comes to mind because of a recent Black Hat presentation about Z-Wave hardware showing that while the algorithm might be sound, there are defects in door lock implementations that can render the security worthless. The ethical dilemma comes in because it’s clearly an under-tested security system that vendors are pushing for physical security.
Henry Corrigan-Gibbs points to his paper with Bryan Ford via Twitter:
We draw an ethical analogy between Internet freedom efforts and humanitarian aid work. This parallel motivates a number of ethical questions relating to anonymity and censorship-circumvention research.
James Grimmelmann points to several papers: Danielle Citron’s Technological Due Process, which I think is a very important paper, Bias in Computer Systems by Friedman and Nissenbaum, and his own The Google Dilemma. I haven’t read the latter two yet. He also links to a Gamasutra essay on free-to-play games, which is coincidentally something I’ve been investigating in the context of my recent series on price discrimination.
Several other interesting Twitter responses: spam/mass mailing, weapons tech, Internet filtering, Facebook Beacon.
And finally, many great responses in the comments; one frequent theme was vulnerabilities/crypto/hacking/malware.
Apologies if I missed anything. Feel free to send me more! If this list keeps growing, it might be productive to set up a Wiki.
Software developers create the architectures that govern our online and often our offline lives — from software-controlled cars and medical systems to digital content consumption and behavioral advertising. In fact, software shapes our societal values. Are the creators of code aware of the power that they wield, and the responsibilities that go with it? As students, are they trained in the ethics of their discipline?
The good folks at the Markkula Center for Applied Ethics at Santa Clara University have released a self-contained software engineering ethics module to fill the gap between the critical role of software and the lack of adequate ethical training in computer science and software engineering programs. (I had a small part to play in helping write the introduction.) If you’re an educator or a student, I encourage you to give it a look!
The module has several hypothetical examples as thought exercises for students. This is nice because it isolates certain ethical principles for study. That said, we felt that it would also be useful to present real-world examples of ethical dilemmas, perhaps in a follow-on module for slightly more advanced students. There are a huge number of these, so we’d like your help in compiling them.
At this point I’m not looking for fully articulated case studies, but merely examples of software deployed in the real world in a way that raises ethical concerns. A few examples with different levels of severity to start things off:

1. Stuxnet
2. Circumvention of Safari cookie blocking by Google and other companies
3. The “Keep calm and rape” T-shirt
If you have an example to suggest, please leave a comment or tweet at me. You will have the reward of knowing that you’ve helped improve the abysmal state of ethics education for the next generation of software engineers. Thank you!
A common argument advanced by Bitcoin proponents is that unlike banks and credit cards, Bitcoin has low (or even zero) transaction fees. The claim is a complete red herring, and in this post I’ll explain why.
Let’s grant, for the purposes of argument, that Bitcoin transaction fees are negligible. There are small mining-related transaction fees, but it seems plausible that these fees will always be far smaller than those associated with traditional banking.
Why do banks and credit cards charge those annoying fees? A major reason is fraud. Banks eat the cost of fraudulent transactions, but pass on the cost to the customer by taking a cut of each legitimate transaction. Fraud is not an artifact of a particular system that we can design away — it is inherent to every form of money handled by humans. To compare Bitcoin meaningfully with traditional banking, then, we must ask how big fraud-related losses are for Bitcoin users.
Framed this way, the comparison is not a happy one for Bitcoin. From thefts of wallets to hacks of Bitcoin exchanges, fraud in the Bitcoin ecosystem is rampant. It only gets worse when we add sources of risk other than fraud. A recent study found that 45% of Bitcoin exchanges shut down. Several of the rest have suffered attacks and losses.
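To make the argument concrete, here is a back-of-the-envelope sketch of the "all-in" cost of moving money through each system. Every number below is hypothetical, chosen only to illustrate the point that a fee-free system with a real risk of losing your funds can cost more in expectation than a system with explicit fees:

```python
# Explicit fees plus the expected fraction of funds lost to theft or failure.
# All figures are hypothetical illustrations, not measured rates.
def effective_cost_rate(explicit_fee_rate, annual_loss_probability, years_held=1.0):
    expected_loss = 1 - (1 - annual_loss_probability) ** years_held
    return explicit_fee_rate + expected_loss

card_cost = effective_cost_rate(0.03, 0.0)   # ~3% card fee; fraud cost already priced in
btc_cost = effective_cost_rate(0.0, 0.05)    # zero explicit fees, assumed 5%/yr loss risk
```

Under these assumed numbers the "free" system is the more expensive one, which is the red herring in a nutshell: the fee is visible, the risk is not.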
In many organizations that are leaders in their field, new inductees often report being awed when they start to comprehend how sophisticated the system is compared to what they’d assumed. Engineers joining Google, for example, seem to express that feeling about the company’s internal technical architecture. Princeton’s system for teaching large undergraduate CS classes has had precisely that effect on me.
I’m “teaching” COS 226 (Data Structures and Algorithms) together with Josh Hug this semester. I put that word in quotes because lecturing turns out to be a rather small, albeit highly visible, part of the elaborate instructional system for these classes that’s been put in place and refined over many years. It involves nine different educational modes that students interact with and six different types of instructional staff(!), each with a different set of roles. Let me break it down in terms of instructional staff responsibilities, which correspond roughly to learning modes.
The big news in the Bitcoin world is that there are several Bitcoin-mining ASICs (custom chips) already shipping or about to be launched. Avalon in particular has been getting some attention recently. Bitcoin mining moved long ago from CPUs to GPUs, but this takes it one step further. The expectation is that very soon most mining will be done by such specialized hardware; general-purpose machines will be too inefficient to be profitable. This development raises a whole host of interesting questions.
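The "recover the hardware cost in days" claims are easy to sanity-check. A minimal sketch, with every number assumed purely for illustration (the 25 BTC block reward and ~144 blocks per day are the protocol's figures; the device specs are made up):

```python
# Rough ASIC payback calculation. The model ignores difficulty growth,
# which in practice lengthens the payback period considerably.
def payback_days(device_hashrate, network_hashrate, device_cost_btc,
                 block_reward=25.0, blocks_per_day=144):
    share = device_hashrate / network_hashrate            # expected fraction of blocks won
    btc_per_day = share * block_reward * blocks_per_day   # expected daily revenue in BTC
    return device_cost_btc / btc_per_day

# e.g. a hypothetical 60 GH/s device on a 25 TH/s network, costing 100 BTC:
days = payback_days(60e9, 25e12, device_cost_btc=100)
```

The catch, of course, is that the short payback only holds while few others have ASICs; as specialized hardware floods in, the network hashrate (the denominator) explodes and the same device earns far less.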
First, if ASIC mining is indeed so much faster than any current method, and can recover the hardware cost in just a few days as claimed, why are the creators selling them, or even talking about them?! Why not keep the existence of these devices a secret and mine as much BTC as possible before the world catches on? After all, the costs of packaging and distribution will only decrease the margins. There are several possible explanations:
[Editor's note: The Center for Information Technology Policy (CITP) is delighted to welcome Arvind Narayanan as an Assistant Professor in Computer Science, and an affiliated faculty member in CITP. Narayanan is a leading researcher in digital privacy, data anonymization, and technology policy. His work has been widely published, and includes a paper with CITP co-authors Ed Felten and Joseph Calandrino. In addition to his core technical research, Professor Narayanan will be engaged in active public policy topics through projects such as DoNotTrack.us, and is sought as an expert in the increasingly complex domain of privacy and technology. He was recently profiled on Wired.com as the "World's Most Wired Computer Scientist."]
I’ve had a wonderful first month at Princeton as an assistant professor in Computer Science and CITP. Let me take a quick moment to introduce myself.
I’m a computer scientist by training; I study information privacy and security, and in the last few years have developed a strong side-interest in tech policy. I did my Ph.D. at UT Austin and more recently I was a post-doctoral researcher at Stanford and a Junior Affiliate Fellow at the Stanford Law School Center for Internet and Society.