October 22, 2017

Getting serious about research ethics: AI and machine learning

[This blog post is a continuation of our series about research ethics in computer science.]

The widespread deployment of artificial intelligence, and specifically of machine learning algorithms, raises concerns about fundamental societal values such as employment, privacy, and non-discrimination. While these algorithms promise to optimize social and economic processes, research in this area has exposed major deficiencies in how they operate, with serious social consequences. Some consequences may be invisible or intangible, such as erecting computational barriers to social mobility through a variety of unintended biases, while others may be directly life threatening. At the CITP’s recent conference on computer science ethics, Joanna Bryson, Barbara Engelhardt, and Matt Salganik discussed how their research led them to work on machine learning ethics.

Joanna Bryson has made a career of researching artificial intelligence and machine learning and understanding their consequences for society. She has found that people tend to identify with the perceived consciousness of artificially intelligent artifacts, such as robots, which complicates meaningful conversations about the ethics of their development and use. By equating artificially intelligent systems with humans or animals, people infer their moral status and can ignore their engineered nature.

While the cognitive power of AI systems can be impressive, Bryson argues they are not equivalent to humans and should not be regulated as such. On the one hand, she demonstrates the power of an AI system to replicate societal biases in a recent paper (co-authored with CITP’s Aylin Caliskan and Arvind Narayanan): systems trained on a corpus of text from the World Wide Web learn the implicit biases around the gender of certain professions. On the other hand, she argues that machines cannot ‘suffer’ in the same way humans do, and the capacity to suffer is one of the main levers current legal systems use to deter humans. Bryson proposes that we understand both AI and ethics as human-made artifacts. It is therefore appropriate to rely on ethics, rather than science, to determine the moral status of artificially intelligent systems.
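
To make this concrete, here is a minimal sketch of how such associations can be measured, in the spirit of the word-embedding association tests in that paper. The three-dimensional vectors below are made-up stand-ins; a real analysis would load embeddings trained on a web-scale corpus, such as GloVe or word2vec.

```python
# Toy sketch of measuring gender associations in word embeddings.
# The 3-dimensional vectors are invented stand-ins; a real test would
# use embeddings trained on a large web corpus.
import numpy as np

embeddings = {
    "he":       np.array([0.9, 0.1, 0.0]),
    "she":      np.array([0.1, 0.9, 0.0]),
    "engineer": np.array([0.8, 0.2, 0.3]),
    "nurse":    np.array([0.2, 0.8, 0.3]),
}

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def gender_association(word):
    # Positive scores lean "male", negative scores lean "female".
    return (cosine(embeddings[word], embeddings["he"])
            - cosine(embeddings[word], embeddings["she"]))

for profession in ("engineer", "nurse"):
    print(profession, round(gender_association(profession), 3))
```

On these toy vectors, “engineer” scores positive and “nurse” negative: the bias is simply read off the geometry of the learned vectors, which is why embeddings trained on web text inherit the biases of that text.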

Barbara Engelhardt’s work focuses on machine learning in computational biology, specifically genomics and medicine. Her main concern is the reliance on recommendation systems, such as those we encounter on Amazon and Netflix, to make decisions in other domains such as healthcare, financial planning, and careers. These machine learning systems rely on data as well as social networks to make inferences, as the sketch below illustrates.
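
As a rough illustration of the kind of inference such systems make, here is a minimal user-based collaborative-filtering sketch: predictions for one person are borrowed from the most similar other people. The ratings matrix is a toy stand-in; real systems in healthcare or finance are far more elaborate.

```python
# Minimal user-based collaborative filtering: predict a missing rating
# from the ratings of similar users. The matrix is a toy stand-in.
import numpy as np

# Rows: users, columns: items; 0 means "not yet rated".
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
])

def predict(user, item):
    # Weight other users' ratings by cosine similarity to `user`.
    target = ratings[user]
    scores, weights = 0.0, 0.0
    for other in range(len(ratings)):
        if other == user or ratings[other, item] == 0:
            continue
        sim = np.dot(target, ratings[other]) / (
            np.linalg.norm(target) * np.linalg.norm(ratings[other]))
        scores += sim * ratings[other, item]
        weights += sim
    return scores / weights if weights else None

print(predict(user=0, item=2))  # inferred from users 1 and 2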

Engelhardt describes examples where using patient records to inform medical decisions can lead to erroneous recommendation systems for diagnosis, as well as to harmful medical interventions. For example, the symptoms of heart disease differ substantially between men and women, and so do the appropriate treatments. Most data collected about this condition came from men, leaving a blind spot for the diagnosis of heart disease in women. Bias of this kind is informative and should be preserved for correct medical intervention. In another example, however, data were collected from a variety of hospitals in somewhat segregated poor and wealthy areas. The data appear to show that cancers in Hispanic and Caucasian children develop differently. However, inferences based on these data fail to take into account the biasing effect of economic status, which determines at what stage of symptoms different families decide to seek medical help, and in turn the stage of development at which the oncological data are collected. A recommendation system with this type of bias conflates race with economic barriers to medical help, leading to harmful diagnoses and treatments.
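
A small synthetic example can illustrate the confounding Engelhardt describes. All the numbers below are invented for illustration: group membership is correlated with economic status, economic status drives how late families seek care, and a naive comparison of groups misreads that as a biological difference.

```python
# Hypothetical illustration of confounding: economic status drives stage
# at diagnosis, but a naive model sees only group membership.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Assumed for this toy: group A is mostly low income, group B mostly not,
# because the hospitals draw from segregated neighbourhoods.
low_income_a = rng.random(n) < 0.7
stage_a = 1 + 2 * low_income_a + rng.normal(0, 0.5, n)

low_income_b = rng.random(n) < 0.3
stage_b = 1 + 2 * low_income_b + rng.normal(0, 0.5, n)

# Naive comparison: stages look "biologically" different between groups.
print("mean stage, group A:", round(stage_a.mean(), 2))  # ~2.4
print("mean stage, group B:", round(stage_b.mean(), 2))  # ~1.6

# Conditioning on the confounder removes the apparent difference (~3.0 both).
print("low-income only, A:", round(stage_a[low_income_a].mean(), 2))
print("low-income only, B:", round(stage_b[low_income_b].mean(), 2))
```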

Matt Salganik proposes that the machine learning community draw some lessons from ethics procedures in social science. Machine learning is a powerful tool that can be used responsibly or inappropriately. He proposes that ethics can guide researchers, engineers, and developers to think carefully about the consequences of their artificially intelligent inventions. To this end, Salganik advocates a hope-based and principle-based approach to research ethics in machine learning, as opposed to the fear-based and rule-based approach found in social science, or the more ad hoc ethics culture we encounter in data and computer science. For example, machine learning ethics should include pre-research review through forms that are reviewed by third parties, to avoid groupthink and encourage researchers’ reflexivity. Given the fast pace of development, though, the field should avoid the compliance mentality typically found at university institutional review boards. Any fixed rules are unlikely to stand the test of time in the fast-moving world of machine learning, and would result in burdensome and uninformed ethics scrutiny. Salganik develops these themes in his new book Bit By Bit: Social Research in the Digital Age, which has an entire chapter about ethics.

See a video of the panel here.

Getting serious about research ethics: Security and Internet Measurement

[This blog post is a continuation of our series about research ethics in computer science that we started last week.]

Research projects in the information security and Internet measurement sub-disciplines typically interact with third-party systems or devices to collect large amounts of data. Scholars in these fields are interested in collecting data about technical phenomena. Because the Internet is so widely used, however, their experiments can interfere with people’s use of their devices and reveal all sorts of private information, such as browsing behaviour. As awareness of this unintended impact on Internet users grew, these communities have spent considerable time debating their ethical standards at conferences, at dedicated workshops, and in journal publications. Their efforts have culminated in guidelines for topics such as vulnerability disclosure and privacy, whose aim is to protect unsuspecting Internet users and the humans implicated in technical research.

Prof. Nick Feamster, Prof. Prateek Mittal, moderator Prof. Elana Zeide, and I discussed some important considerations for research ethics in a panel dedicated to these sub-disciplines at the recent CITP conference on research ethics in computer science communities. We started by explaining that gathering empirical data is crucial for inferring the state of values such as privacy and trust in communication systems. However, as methodological choices in computer science often have ethical impacts, researchers need to be empowered to reflect meaningfully on their experimental setups.

Prof. Feamster discussed several cases where he had sought advice from ethical oversight bodies but was left with unsatisfying guidance. For example, when his team conducted Internet censorship measurements (pdf), they were aware that they were initiating requests and creating data flows from devices owned by unsuspecting Internet users. These new information flows were created in realms where adversaries, for example government censors, were also operating. This may pose a risk to the owners of the devices implicated in the experimentation and data collection. The ethics board, however, concluded that such measurements did not meet the strict definition of “human subjects research,” and therefore did not require formal review. Prof. Feamster suggests computer scientists reassess how they think about their technologies and about newly initiated data flows that can be misused by adversaries, and take these risks into account in ethical review procedures.
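
For illustration, here is a minimal sketch of the kind of measurement at issue: request a potentially blocked page and look for signs of interference. The test URL and block-page fingerprint below are hypothetical placeholders; the ethical point is that running such code from someone else’s device creates exactly the kind of observable data flow the panel flagged as risky.

```python
# Minimal censorship-measurement sketch: fetch a URL and check for a
# block page. TEST_URL and BLOCKPAGE_MARKER are hypothetical placeholders.
import requests

TEST_URL = "http://example.com/possibly-blocked"   # hypothetical
BLOCKPAGE_MARKER = "This site has been blocked"    # hypothetical fingerprint

def measure(url):
    try:
        resp = requests.get(url, timeout=10)
    except requests.RequestException as exc:
        # Timeouts and resets can themselves be signs of interference.
        return {"url": url, "outcome": "network-error", "detail": str(exc)}
    blocked = BLOCKPAGE_MARKER in resp.text
    return {"url": url,
            "outcome": "blockpage" if blocked else "reachable",
            "status": resp.status_code}

print(measure(TEST_URL))
```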

Ethical tensions and dilemmas in technical Internet research can themselves be interesting research problems, argued Prof. Mittal. For example, to reason about privacy and trust in the anonymous Tor network, researchers need to understand to what extent adversaries can exploit vulnerabilities and thus observe the Internet traffic of individual users. The obvious, relatively easy, and ethically dubious measurement would be to attack existing Tor nodes and attempt to collect the real-time traffic of identifiable users. Instead, Prof. Mittal offered insight into his own critical engagement with alternative design choices, which led his team to create a new node within Princeton’s university network and attack that node instead. This more lab-based approach eliminated risks for unsuspecting Internet users while allowing the same inferences to be drawn.
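
As a sketch of what verifying such a lab-based setup might look like, the snippet below uses the stem library to inspect a local Tor client’s circuits and check whether they pass through a relay the researchers operate themselves. The relay fingerprint is a hypothetical placeholder, and the snippet assumes a Tor control port listening on 9051.

```python
# Sketch of the lab-based design choice: confirm that your own client's
# circuits use a relay you operate, so only your own traffic is studied.
# Assumes a local Tor client with its control port open on 9051.
from stem.control import Controller

MY_TEST_RELAY = "0000000000000000000000000000000000000000"  # hypothetical fingerprint

with Controller.from_port(port=9051) as controller:
    controller.authenticate()  # assumes cookie authentication
    for circuit in controller.get_circuits():
        fingerprints = [fp for fp, nickname in circuit.path]
        uses_own_relay = MY_TEST_RELAY in fingerprints
        print(circuit.id, circuit.status,
              "own-relay" if uses_own_relay else "public-relays-only")
```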

I concluded the panel by suggesting that ethics review boards at universities, academic conferences, and scholarly journals engage actively with computer scientists to collect valuable data whilst respecting human values. Currently, a panel of non-experts in either computer science or research ethics is given a single moment to judge the full methodology of a research proposal or the resulting paper. When a thumbs-down is issued, researchers have little or no opportunity to remedy their ethical shortcomings. I argued that a better approach would be an iterative process with in-person meetings and more in-depth consideration of design alternatives, as demonstrated in a recent paper about Advertising as a Platform for Internet measurements (pdf). This is the approach advocated in the Networked Systems Ethics Guidelines. Cross-disciplinary conversation, rather than one-time decisions, allows for a mutual understanding between the gatekeepers of ethical standards and the designers of useful computer science research.

See the video of the panel here.

Getting serious about research ethics in computer science

Digital technology mediates our public and private lives. That makes computer science a powerful discipline, but it also means that ethical considerations are essential to the development of these technologies. Not all new developments are welcomed by users; consider, for example, a patent application by Facebook that would enable the company to identify its users’ emotions through the cameras on their devices. A critical approach to developing digital technologies, guided by philosophical and ethical principles, will allow interventions that improve society in meaningful ways.

The Center for Information Technology Policy recently organized a conference to discuss research ethics in different computer science communities, such as machine learning, security, and Internet measurement. This blog post is the first in a series that summarizes and builds on the panel discussions at the conference.

Prof. Arvind Narayanan points out that computer science sub-communities have traditionally developed their own community standards about what is considered ethical. See, for example, responsible vulnerability disclosure standards in information security, or the Menlo Report for the Internet measurement discipline. This allows norms and standards to be tailored to the needs of each sub-discipline. However, the increasing responsibilities of researchers and sub-communities, arising from the growing power and reach of computer science, are sometimes met with confusion. There is a tendency to see ethical considerations as a “policy issue” to be dealt with by others.

Prof. Melissa Lane of the University Center for Human Values points out that while ethics is rooted in understanding community standards and norms, these do not exhaust it, as some researchers in computer science and other fields can sometimes be tempted to think.  Rather, the academic study of ethics provides the tools to critically reflect on these norms and challenge existing and new practices. A meaningful computer science research ethics therefore does not just translate existing norms into functional requirements, but explores how values are enabled, operationalized, or stifled through technology. A careful analysis of a particular context may even uncover new values that were previously taken for granted or not even considered to be a norm. Think, for example, of “disattendability” — the idea of going about your business without anyone tracking you or paying attention to you. We usually take this for granted in the physical world, but on the Internet, ad trackers, among others, actively violate this norm on an ongoing basis. By understanding the effects of design choices and methodologies, ethics guides technology designers to choose the most appropriate approach among the available alternatives.

Ethics is known for its somewhat conflicting theories, such as consequentialism (“the ends justify the means”) and deontology (“act in such a way that you treat humanity […] never merely as a means to an end, but always at the same time as an end”). Prof. Susan Brison cautions against an approach that simply takes an ethical theory and applies it to a technology. She raised the question of whether computer science research and data science may require new types of ethics, or evolved theories. Digital data are changing the underlying properties of information, challenging our traditional ways of thinking in important ways. For example, micro-targeting bespoke political messages to individuals circumvents the ability of ‘good speech’ to drown out ‘bad speech’, a foundational idea behind the concept of freedom of speech.

In my research, I’ve found that ethical guidelines can be incomplete, inaccessible, or conflicting, and existing legal statutes from previous technological eras may not be directly applicable to current technology. This has resulted in computer science communities being somewhat confused about their ethical and legal responsibilities. The upcoming posts in this series will explore some of the ethical standards in machine learning, security, algorithmic transparency, and Internet measurement. We welcome any feedback to move this discussion forward at a crucial time for the ethics of computer science.

See the introduction to the conference here.