November 17, 2024

Ethical dilemmas faced by software engineers: A roundup of responses

Two weeks ago I asked for real-life examples of ethical dilemmas in software engineering. Many of you sent responses by email, on Twitter, and in the comments. Thank you for taking the time! Here is a quick summary (in no particular order).

Aaron Massey has written a very thoughtful post in response. I encourage you to give it a read. Let me highlight one point he makes in particular that I found very insightful:

Worse, ethics and failure tend to be lumped together, at least in software engineering. When I’ve asked questions similar to Arvind’s in the past, I’ve found that important or noticeable failures are common, but these are not always the most useful for learning ethics. Consider the Therac-25 failure, in which several deaths occurred because of a software engineering failure. While this is a serious failure, I’m not sure it’s fair to say that this is a great example of an ethical dilemma. The developers of the software weren’t tempted to introduce the bug; it was simply an accident of construction. Had they known about this beforehand, it’s likely they would have fixed it. Similar arguments can be made for things like the failed launch of the Ariane-5 or the Mars Climate Orbiter, which are also commonly mentioned. I suppose these are reasonable examples of the need to at least not be totally ambivalent about engineering projects, but they aren’t great examples of ethical dilemmas.

Next, a person who wishes to be anonymous writes by email:

Here’s one that happened to me […] It was the website for a major clothing brand targeted at one gender. They were running a competition for entrants to win one of five very cool prizes (think iPhone or Xbox). At the end of the competition, management asked us to randomly extract five winners from the database. So we wrote a little script to pull five random entries — it returned three from one gender and two from the other.

We sent the names up the chain but then head office came back and asked us to redraw as they didn’t want any winners from the non-target gender. We refused based on the Anti-Discrimination Act here in my home state.
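(As an aside for non-programmers: a draw like this really is just a few lines of code. Below is a hypothetical sketch, in Python against a SQLite database, of what such a script might look like; the table and column names are my own invention, not details from the reader’s story. The simplicity is part of the point: the ethical question lives in what was done with the result, not in the code.)

    # Hypothetical sketch only: the table/column names ("entries", "name",
    # "email") are assumptions for illustration, not from the reader's story.
    import sqlite3

    def draw_winners(db_path, n=5):
        """Return n entries chosen uniformly at random from the entries table."""
        conn = sqlite3.connect(db_path)
        try:
            cur = conn.execute(
                "SELECT name, email FROM entries ORDER BY RANDOM() LIMIT ?", (n,)
            )
            return cur.fetchall()
        finally:
            conn.close()

    if __name__ == "__main__":
        for name, email in draw_winners("competition.db"):
            print(name, email)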

Alex Stamos points to the slides and audio of his Defcon talk on the ethics of the white hat industry, and notes that all of the examples in the end are real.

On a historical note, Steve Bellovin points to the history of the Computer Professionals for Social Responsibility. I’d never heard of the organization; it appears that it started in 1983, had been relatively inactive for the last few years, and was dissolved a few months ago.

Augie Fackler writes:

designing a security algorithm without any peer review — this comes to mind because of a recent Black Hat presentation about Z-Wave hardware showing that the algorithm might be sound, but there are defects in door lock implementations that can render the security worthless. The ethical dilemma comes in because it’s clearly an under-tested security system that vendors are pushing for physical security.

Henry Corrigan-Gibbs points to his paper with Bryan Ford via Twitter:

We draw an ethical analogy between Internet freedom efforts and humanitarian aid work. This parallel motivates a number of ethical questions relating to anonymity and censorship-circumvention research.

James Grimmelmann points to several papers: Danielle Citron’s Technological Due Process, which I think is a very important paper, Bias in Computer Systems by Friedman and Nissenbaum, and his own The Google Dilemma. I haven’t read the latter two yet. He also links to a Gamasutra essay on free-to-play games, which is coincidentally something I’ve been investigating in the context of my recent series on price discrimination.

Several other interesting Twitter responses: spam/mass mailing, weapons tech, Internet filtering, Facebook Beacon.

And finally, many great responses in the comments; one frequent theme was vulnerabilities/crypto/hacking/malware.

Apologies if I missed anything. Feel free to send me more! If this list keeps growing, it might be productive to set up a Wiki.

Comments

  1. Hi all,

    The topic is very dear to me as a long-time developer of enterprise software. I am so glad to find someone asking this very important question. It seems most of the posts focus on liability and development practices, and very few talk about the ethical dilemmas a developer faces in dealing with his or her customer.

    I remember very well a situation at a company I worked for in Brisbane, Australia, in which I believe developers were forced into an unethical situation, directed and supported by management, towards their customer in order to get the deal through. As a senior member of the development team, I saw the practice of using programming trickery and changing the wording to remove the customer’s objection as a very unethical practice, because it did not really solve the customer’s problem. It is unethical for the development company to take advantage of the customer not being able to communicate succinctly. Being a senior member, I refrained from going along with this, but many who might not have seen it as an ethical issue went along, mistakenly thinking it clever to get it over the line quickly.

    The end result was forcing the customer to provide an unnecessarily excessive amount of resources just to run the program, something the developers should have addressed in the right way. In the end, the reason was obvious: the company did not really want to invest resources in doing the right thing, so it left the customer, in the disadvantaged position of not knowing all the facts, to foot the bill.

    It is hard for developers caught in this kind of ethical dilemma to deal with it. If one sticks to one’s ethical stand, one most likely stands to lose one’s job, plus a very bad reference. If one goes along with the unethical practice, one’s ethical stand could gradually weaken to the point that one loses the ability to judge, and, may I say, one is most likely rewarded handsomely.

    In my case, I voiced my view and did not actively participate in such an unethical act to hoodwink the customer. I am interested to know how others have handled this kind of predicament or dilemma.

    Noel

    • I have been terrible; I have told my clients what they’re buying. I’ve seen far too many clients’ projects run as cash cows, milked for all they’re worth. Or run with too few resources, because someone promised more than could be delivered. All too often, my job has been to fix broken things, or to bury dead projects. Nobody likes to hire Cassandra.

      I prefer to understand, and to help other people understand, the choices we’re making and the options at hand. I suspect this is why I’m almost in academia.

  2. I disagree with Aaron and you about the value of cases like Therac-25 as ethical lessons. Certainly the designers of Therac-25 didn’t wrestle with an ethical dilemma of whether their design should deliver deadly doses of radiation to people.

    But they did make ethical decisions about how much care they should take to avoid deadly errors, and subsequent investigations suggest that they got those ethical decisions badly wrong. As a result they pursued an engineering approach that had an unacceptably high risk of killing people. As a result of that, plus bad luck, people died.

    Cases like this are useful, if taught well, in helping students understand the ethics of safety engineering and how to learn from accidents and failures.

    • Aaron Massey says

      I’m not sure we actually disagree. I do think there’s value in teaching cases like Therac-25. The paragraph in my post prior to the one Arvind cited in this post highlights Henry Petroski’s To Engineer is Human as a good example of teaching the ethics of safety engineering through cases that are the civil engineering equivalent of the Therac-25 (e.g. Tacoma Narrows, the KC Hyatt Regency walkway, etc…). My concern is that obvious failure cases tend to be the only cases presented. Many (most?) of the examples Arvind presented in this post raise fundamentally different ethical concerns that are often overlooked or ignored.

      Consider privacy, which has subtle implications for software engineers. Ann Bartow commented that privacy suffers from the problem of “not enough dead bodies.” My concern is rather similar. Many important ethical arguments don’t make it to the classroom because they aren’t “attractive” enough. If you’re looking for motivating pedagogical examples to illustrate the importance of ethics in software engineering for an audience of 18-22 year olds, irradiated bodies and exploding rockets are more compelling than ordering differences in Google search results or the vagaries of deep packet inspection for network management or advertising. The ethics of these latter examples simply aren’t as attractive or obvious at first blush. It’s easier to view these problems as liabilities for management to avoid rather than as ethical decisions in which engineers should involve themselves (to paraphrase maelorin’s comment above).

      • It is *because* some ethical choices are difficult, or difficult to understand as ethical choices, that they need to be taught to students.

        I teach first year technology students. I use examples from my own experience to give them context and to draw them into discussions that eventually become discussions of ethical choices. I start with the choices I made, and get them to consider alternatives. We don’t always get things right, and students need to appreciate that. They also need to understand that not choosing is a choice, and letting others choose for us is a choice. Choosing for ourselves is hard, but it is the better approach for anyone who wants to be a professional.

        We *want* computer experts to be recognised as professionals; to achieve this, they have to behave as professionals. We have to *be* professional. To be professional, we need to understand what that means – and be able to explain it.

        Students find it difficult to relate to a lot of what they need to know, and be able to do, to be software engineers, or architects, or lawyers. It is incumbent upon us, whether their teachers or their peers, to help them learn. To provide the examples, to *be* the examples. There are times when we don’t know the answers – that needs to be OK. It’s then that we need to be able to ask our peers for advice; and to know that, as professionals, we can trust them to help us. But we are responsible for the choices we make.

        If *we* don’t understand the choices we make, who else can explain them?

  3. Ethics is a widely misunderstood topic. Whether this is because it is not taught well, or is not taken seriously by students, or for some other reason(s), it is one that deserves more attention from everyone across engineering and technology. Ethics is a core component of professional practice: central to what makes a profession a profession.

    Often discussions of ethics are actually discussions of liability, rather than of values and responsibility. Liability for failures, rather than responsibility for what is or is not done, and why. This same confusion occurs elsewhere (for example, it is common in legal education for students to get the impression that legal ethics is about avoiding liability rather than a concern about how choices are made and why). This distinction is important, but often overlooked.

    The real dilemmas I have seen in my practice experience (legal and/or technological) have often revolved around the appropriate way to do something, though I have seen some ethical inconsistencies in decisions about what is to be done. How to select candidates from a database is a common issue, but so is how that database was populated in the first place – and with what information, and from what sources.

    It has been very attractive to businesses for some time now to collect as much information about individual people as they can. Very often the justification for this is “we might find a way to use it in the future”. The popularity of ‘Big Data’ may be a recent trend, but the underlying claim, that data should be collected beyond the scope of immediate needs, is far from new. Ethically, this raises many questions. Unfortunately, in practice, software engineers are often expected to design and develop products on demand in environments that discourage raising such matters.

    So long as software engineers continue to view ethics as liability-avoidance, rather than as a framework for explaining and discussing what and why questions, managers will continue to expect them to do as they’re told. I am not suggesting or advocating some kind of revolt; rather, I think we ought to be working towards earlier involvement in decision-making: we are experts and professionals, and we can contribute to decision-making. This would also facilitate the movement of senior software engineers into senior management – more experience with senior management personnel and processes, better relationships with senior management, and better understanding of strategic decision-making and the contribution/s that software engineering can make to the organisation. Earlier involvement and better articulation of ethical concerns may also facilitate innovation, and avert problems that can arise when the assumptions of management and of implementation teams are not aligned.

    Discussion of ethics and ethical issues in software engineering is important. I will follow this thread with some interest.

    • Tim C. Mazur says

      Wow. I don’t know who maelorin is but, as someone who has worked in business ethics daily for 26 years, s/he wrote an excellent reply.

    • Wow, nice reply, maelorin. I couldn’t put my finger on something that has been bothering me for a long time, and in many aspects of life: a disconnect, rather obvious to my eyes, between what people claim is “ethical” and what I see as clearly “unethical.” With your post I now see where the disconnect is; it seems to be in how the discussion of ethics is framed.

      When I read the initial request for real-world examples, nothing came to mind. I am not in a very high position, so for any coding I do there is very little ethical decision-making in the first place. But reading the comments didn’t spark many thoughts either. Why? Because what I understand by the word “ethics” seems to be quite contrary to what was/is being discussed. You hit the nail on the head for me; it seems liability is what is really being discussed, and not actually “ethics” or [my definition] “the moral basis on which one makes decisions.”

      In the interim I now have a real-world example: yesterday’s headlines on the news site I generally visit most mentioned Google in court over its decision to read (via automated processes) users’ e-mail, and even any e-mail coming to their users, for the purpose of targeted advertising. The comment boards were pretty much contrary to everything I consider “ethics.” Most people said things like “well, you should have read the EULA” or “don’t use it if you don’t like it” and other such stuff. Most believed it was perfectly fine for Google to read (automatedly) the contents of e-mails because it is known that they push targeted marketing and advertising, and that is why they give away the free e-mail service.

      Of course their EULA and privacy policy are not actually CLEAR that this is what they are doing; they do say they are collecting data for marketing, but they don’t say it comes from what people actually write in their e-mails (even people who are not their customers and never accepted their EULA at all), though this has been known in tech circles for as long as Gmail has been around. However, it is not clear to most of their users.

      To me this is an absolute case of a true “ethical dilemma.” On the one hand you have a company built up for the sole purpose of marketing (much like Acxiom in Ed’s post of Sept 5), and they [Google] give away a service for free with the notice that users will be targeted for advertising. This creates the incentive to collect ‘Big Data’ so that they can better serve their customers [e.g. the advertisers]. And thus, within tech circles, they may have indicated they plan to collect their data by “scanning e-mail contents.” But the vast majority of their users do not have this understanding.

      Clearly it should be “unethical” to read (even with automated processes) things that people write simply because you control the servers that process what they write and have notified some of the parties involved that you plan to do it. But the court case and the vast majority of users’ comments seem to revolve more around liability: how much did Google reveal about their intentions, and to whom?

      Whereas the ethical dilemma is really about whether or not one should even have automated reading of personal writing, from which the majority of writers expect a general level of privacy, when there are many other ways to collect data for marketing. The question in the decision-making process is then “to read or not to read” rather than “to notify or not to notify”: the first being ethics, the second being liability. Yet only the second seems to matter when the discussion comes up.

      And now I see why I have a problem with most lawyers; it’s because they focus on legal liability rather than actual ethical behavior [John Swallow, Utah State Attorney General, as a prime example for me].