Two weeks ago I asked for real-life examples of ethical dilemmas in software engineering. Many of you sent responses by email, on Twitter, and in the comments. Thank you for taking the time! Here is a quick summary (in no particular order).
Aaron Massey has written a very thoughtful post in response. I encourage you to give it a read. Let me highlight one point he makes in particular that I found very insightful:
Worse, ethics and failure tend to be lumped together, at least in software engineering. When I’ve asked questions similar to Arvind’s in the past, I’ve found that important or noticeable failures are common, but these are not always the most useful for learning ethics. Consider the Therac-25 failure, in which several deaths occurred because of a software engineering failure. While this is a serious failure, I’m not sure it’s fair to say that this is a great example of an ethical dilemma. The developers of the software weren’t tempted to introduce the bug; it was simply an accident of construction. Had they known about this beforehand, it’s likely they would have fixed it. Similar arguments can be made for things like the failed launch of the Ariane-5 or the Mars Climate Orbiter, which are also commonly mentioned. I suppose these are reasonable examples of the need to at least not be totally ambivalent about engineering projects, but they aren’t great examples of ethical dilemmas.
Next, a person who wishes to be anonymous writes by email:
Here’s one that happened to me [...] It was the website for a major clothing brand targeted at one gender. They were running a competition for entrants to win one of five of a very cool prize (think iphone or xbox). At the end of the competition management asked us to randomly extract five winners from the database. So we wrote a little script to pull five random entries — it returned 3 of one gender and 2 from the other.
We sent the names up the chain but then head office came back and asked us to redraw as they didn’t want any winners from the non-target gender. We refused based on the Anti-Discrimination Act here in my home state.
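The draw described above is simple to do fairly. A minimal sketch of what such a script might look like, assuming entries are rows with a name and gender field (the data and field names here are hypothetical, not from the reader's actual system):

```python
import random

# Hypothetical competition entries; the real script pulled these
# from the brand's database.
entries = [
    {"name": "Alex", "gender": "F"},
    {"name": "Blake", "gender": "M"},
    {"name": "Casey", "gender": "F"},
    {"name": "Drew", "gender": "M"},
    {"name": "Eli", "gender": "F"},
    {"name": "Fran", "gender": "M"},
    {"name": "Gale", "gender": "F"},
]

def draw_winners(entries, n=5, seed=None):
    """Draw n distinct winners uniformly at random, ignoring gender."""
    rng = random.Random(seed)
    return rng.sample(entries, n)

winners = draw_winners(entries, n=5, seed=42)
```

The point of the anecdote is exactly that a uniform draw like this does not condition on gender, so a roughly proportional gender split among winners is the expected outcome, not a bug to be "redrawn" away.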
On a historical note, Steve Bellovin points to the history of the Computer Professionals for Social Responsibility. I’d never heard of the organization; it appears that it started in 1983, has been relatively inactive for the last few years and was dissolved a few months ago.
Augie Fackler writes:
designing a security algorithm without any peer review — this comes to mind because of a recent Black Hat presentation about Z-Wave hardware showing that the algorithm (might) be sound, but there are defects in door lock implementations that can cause the security to be worthless. The ethical dilemma comes in because it’s clearly an under-tested security system that vendors are pushing for physical security.
Henry Corrigan-Gibbs points to his paper with Bryan Ford via Twitter:
We draw an ethical analogy between Internet freedom efforts and humanitarian aid work. This parallel motivates a number of ethical questions relating to anonymity and censorship-circumvention research.
James Grimmelmann points to several papers: Danielle Citron’s Technological Due Process, which I think is a very important paper, Bias in Computer systems by Friedman and Nissenbaum, and his own The Google Dilemma. I haven’t read the latter two yet. He also links to a Gamasutra essay on free-to-play games, which is coincidentally something I’ve been investigating in the context of my recent series on price discrimination.
Several other interesting Twitter responses: spam/mass mailing, weapons tech, Internet filtering, Facebook Beacon.
And finally, many great responses in the comments; one frequent theme was vulnerabilities/crypto/hacking/malware.
Apologies if I missed anything. Feel free to send me more! If this list keeps growing, it might be productive to set up a wiki.