May 21, 2018

Ethical dilemmas faced by software engineers: A roundup of responses

Two weeks ago I asked for real-life examples of ethical dilemmas in software engineering. Many of you sent responses by email, on Twitter, and in the comments. Thank you for taking the time! Here is a quick summary (in no particular order).

Aaron Massey has written a very thoughtful post in response. I encourage you to give it a read. Let me highlight one point he makes that I found particularly insightful:

Worse, ethics and failure tend to be lumped together, at least in software engineering. When I’ve asked questions similar to Arvind’s in the past, I’ve found that important or noticeable failures are common, but these are not always the most useful for learning ethics. Consider the Therac-25 failure, in which several deaths occurred because of a software engineering failure. While this is a serious failure, I’m not sure it’s fair to say that this is a great example of an ethical dilemma. The developers of the software weren’t tempted to introduce the bug; it was simply an accident of construction. Had they known about this beforehand, it’s likely they would have fixed it. Similar arguments can be made for things like the failed launch of the Ariane-5 or the Mars Climate Orbiter, which are also commonly mentioned. I suppose these are reasonable examples of the need to at least not be totally ambivalent about engineering projects, but they aren’t great examples of ethical dilemmas.

Next, a person who wishes to be anonymous writes by email:

Here’s one that happened to me […] It was the website for a major clothing brand targeted at one gender. They were running a competition for entrants to win one of five of a very cool prize (think iphone or xbox). At the end of the competition management asked us to randomly extract five winners from the database. So we wrote a little script to pull five random entries — it returned 3 of one gender and 2 from the other.

We sent the names up the chain but then head office came back and asked us to redraw as they didn’t want any winners from the non-target gender. We refused based on the Anti-Discrimination Act here in my home state.
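For illustration only, here is a minimal sketch of a fair draw like the one described. The SQLite backend, table name, and column names are my assumptions; the actual script isn't shown in the email.

    import random
    import sqlite3

    # Hypothetical schema: an "entries" table holding each entrant's name and gender.
    conn = sqlite3.connect("competition.db")
    entrants = conn.execute("SELECT name, gender FROM entries").fetchall()

    # Draw five winners uniformly at random; gender plays no role in the selection.
    winners = random.sample(entrants, 5)
    for name, gender in winners:
        print(name, gender)

The dilemma, of course, has nothing to do with the code: any uniform draw will occasionally produce a gender split that management dislikes, and the question is whether to rerun it.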

Alex Stamos points to the slides and audio of his Defcon talk on the ethics of the white hat industry, and notes that all of the examples at the end are real.

On a historical note, Steve Bellovin points to the history of the Computer Professionals for Social Responsibility. I’d never heard of the organization; it appears that it started in 1983, had been relatively inactive for the last few years, and was dissolved a few months ago.

Augie Fackler writes:

designing a security algorithm without any peer review — this comes to mind because of a recent blackhat presentation about z-wave hardware showing that the algorithm (might) be sound, but there are defects in door lock implementations that can cause the security to be worthless. The ethical dilemma comes in because it’s clearly an under-tested security system that vendors are pushing for physical security.

Henry Corrigan-Gibbs points to his paper with Bryan Ford via Twitter:

We draw an ethical analogy between Internet freedom efforts and humanitarian aid work. This parallel motivates a number of ethical questions relating to anonymity and censorship-circumvention research.

James Grimmelmann points to several papers: Danielle Citron’s Technological Due Process, which I think is a very important paper, Bias in Computer Systems by Friedman and Nissenbaum, and his own The Google Dilemma. I haven’t read the latter two yet. He also links to a Gamasutra essay on free-to-play games, which is coincidentally something I’ve been investigating in the context of my recent series on price discrimination.

Several other interesting Twitter responses: spam/mass mailing, weapons tech, Internet filtering, Facebook Beacon.

And finally, many great responses in the comments; one frequent theme was vulnerabilities/crypto/hacking/malware.

Apologies if I missed anything. Feel free to send me more! If this list keeps growing, it might be productive to set up a Wiki.

Ethical dilemmas faced by software engineers: A request for real-world examples

Software developers create the architectures that govern our online and often our offline lives — from software-controlled cars and medical systems to digital content consumption and behavioral advertising. In fact, software shapes our societal values. Are the creators of code aware of the power that they wield, and the responsibilities that go with it? As students, are they trained in the ethics of their discipline?

The good folks at the Markkula Center for Applied Ethics at Santa Clara University have released a self-contained software engineering ethics module to help close the gap between the critical role of software and the lack of adequate ethics training in computer science and software engineering programs. (I had a small part to play in helping write the introduction.) If you’re an educator or a student, I encourage you to give it a look!

The module has several hypothetical examples as thought exercises for students. This is nice because it isolates certain ethical principles for study. That said, we felt that it would also be useful to present real-world examples of ethical dilemmas, perhaps in a follow-on module for slightly more advanced students. There are a huge number of these, so we’d like your help in compiling them.

At this point I’m not looking for fully articulated case studies, but merely examples of software deployed in the real world in a way that raises ethical concerns. A few examples with different levels of severity to start things off:

1. Stuxnet
2. Circumvention of Safari cookie blocking by Google and other companies
3. The “Keep Calm and Rape” T-shirt

If you have an example to suggest, please leave a comment, email me, or tweet at me. You won’t receive lavish gifts, but you will have the reward of knowing that you’ve helped improve the abysmal state of ethics education for the next generation of software engineers. Thank you!