December 9, 2022

Conference on Social Protection by Artificial Intelligence: Decoding Human Rights in a Digital Age

Christiaan van Veen[1] and Ben Zevenbergen[2]

Governments around the world are increasingly using Artificial Intelligence and other digital technologies to streamline and transform their social protection and welfare systems. This move is usually presented as a means to provide an improved and enhanced system, better able to assist individuals in a more targeted and efficient manner. But because social protection budgets represent such a significant part of State expenditure in most countries, and because austerity and tax cuts continue to drive policy, the driving force is usually the prospect of major budgetary savings and a greatly slimmed-down system of benefits. It is becoming increasingly apparent, however, that the impact of these new technologies on the nature of social protection systems themselves, and on the lives of the many individuals who rely upon them, can be far-reaching and very often problematic. There are many examples of systems being challenged, ranging from the disastrous ‘robo-debt’ saga in Australia to the litigation and protest against the massive biometric identification system – Aadhaar – in India. Yet the push for digital innovation in this area of government is certain to continue.

These developments have significant implications for the human rights of roughly half of the world’s population who are covered by social protection measures, as well as those who are not yet covered. Social protection itself is a human right[3] with a long and rich history, dating back to the creation of the International Labour Organization by the 1919 Treaty of Versailles. The introduction of digital technologies in social protection systems risks creating barriers to access to this right, although one can also imagine ways in which technology can facilitate access to social protection. A range of other human rights are implicated with the introduction of these new technologies in social protection systems, ranging from the right to a remedy to the right to privacy.

Despite the significant risks and opportunities involved with the introduction of digital technologies, there has been only limited research and analysis undertaken to better understand the implications for the protection of human rights, especially in the area of social protection/welfare. The poorest and most vulnerable individuals, both in the Global North and Global South, are inevitably the ones who will be most affected by these developments.

To highlight these issues, the Center for Information Technology Policy and the United Nations Special Rapporteur on extreme poverty and human rights organized a conference at Princeton University on April 12, 2019. The conference brought together leading experts from academia, NGOs, international organizations and the private sector to further explore the implications of digital technologies in social protection systems. The conference was also part of a consultation for a report that the UN Special Rapporteur is preparing and will present to the United Nations General Assembly in October of this year.

Below, a few of the experts who spoke at the conference present some of their key issues and concerns when it comes to the human rights implications of digital technologies in welfare systems.

Cary Coglianese, Edward B. Shils Professor of Law at the University of Pennsylvania Law School

Government has an important responsibility to help provide social services and financial support to those in need. Let us imagine a future where, seeking to fulfill this responsibility, government develops a sophisticated system to help it identify those applicants who qualify for support. But imagine further that, in the end, this identification system turns out to award benefits arbitrarily and to prefer white applicants over applicants of color. Such a system would be properly condemned as unfair. And this is exactly what worries critics who oppose the use of artificial intelligence in administering social programs.

Yet the future imagined above actually appears to have arrived long ago. By many accounts, the scenario I have painted describes the system already in place in the United States and presumably other countries. It is just that the “technology” underlying the current identification system is not artificial intelligence but human decision-making. The U.S. Social Security Administration’s (SSA) disability system, for example, relies on more than a thousand human adjudicators. Although most of these officials are no doubt well-trained and dedicated, they also work under heavy caseloads. And for decades, studies have suggested that racial disparities exist in SSA disability awards, with certain African-American applicants tending to receive less favorable outcomes compared with white applicants.

Any system that relies on thousands of human decision-makers working at high capacity will surely yield variable outcomes. A 2011 report issued by independent researchers offers a stark illustration of the potential for variability across humans: among the fifteen most active administrative judges in a Dallas SSA office, “the judge grant rates in this single location ranged … from less than 10 percent being granted to over 90 percent.” The researchers reported that three judges in this office awarded benefits to no more than 30 percent of their applicants, while three judges awarded to more than 70 percent.

In light of reasonable concerns about arbitrariness and bias in human decisions, the relevant question to ask about artificial intelligence is not whether it will be free of any bias or unexplainable variation. Rather, the question should be whether artificial intelligence can perform better than the current human-based system. Anyone concerned about fairness in government decision-making should entertain the possibility that digital algorithms might sometimes prove to be fairer and more consistent than humans. At the very least, it might turn out to be easier to remedy biased algorithms than to remove deeply ingrained implicit biases from human decision-making.

Jonathan McCully and Nani Jansen Reventlow, Digital Freedom Fund

International law obliges states to provide an effective remedy to victims of human rights violations, but how can this obligation be met in the age of AI? At the conference, a number of points were raised in relation to this question.

For systems of redress or reparation to work, there needs to be a traceable line of responsibility. This is muddied in the AI context as public and private entities claim that certain decisions are reached by machine learning algorithms that lack human intervention. Human rights are devoid of content if victims cannot hold a natural or legal person to account for decisions violating their rights. Therefore, liability regimes should not allow individuals, private entities or public authorities to hide behind their AI. 

For individuals to effectively pursue remedies for AI-related human rights violations, there needs to be an equality of arms. This is also made difficult by AI, where the “allure of objectivity” presented by algorithms can mean that victims are held to a higher standard of evidence compared to those deploying an algorithm. This needs to be corrected.

Finally, like surveillance, AI-related human rights violations can often be hidden from victims. Those who have been subject to an AI-based decision do not necessarily know about it and, even before a decision has been reached against an individual, the models generating these decisions are often trained on datasets that have been processed without the knowledge or consent of those to whom the data relates. Transparency is, therefore, vital to an individual’s ability to pursue remedies in the AI context.  

Jennifer Raso, Assistant Professor, University of Alberta Faculty of Law

Current discussions about algorithmic systems and social protection tend to overlook two key issues. First, the “new” technologies in today’s welfare programs are evolutionary rather than revolutionary. For decades, social assistance offices have been the first sites in which governments introduce new tools to streamline bureaucratic decisions in a context of perpetuated (and seemingly perpetual) resource scarcity. These tools (new and old) are laborious for all who interact with them. They regularly malfunction and require intrusive data about benefits recipients. Such tools perform a dual deterrence: they discourage people from seeking state-funded assistance; and they prevent front-line workers from providing vulnerable individuals access to last-resort assistance.

Second, by centring our debates on privacy and transparency, we fail to address all that is at stake. Focusing on data protection ignores that data intensity is a long-standing feature of social assistance programs. What does privacy mean to someone who must report intimate personal details to remain eligible for welfare benefits? Likewise, transparency conversations overlook the importance of substantive outcomes. How would a transparent decision-making process address the fact that, in many places, welfare rates fall far short of covering one’s basic needs? Instead, we should inquire into the needs and interests of people who require assistance.

Going forward, we must centre the experiences of those most deeply affected by algorithmic systems. To fully comprehend the impact of these tools in social protection programs, and their potential human rights implications, it is crucial that we attend to the people and communities most targeted by algorithmic systems, and to the front-line workers responsible for maintaining and working with these tools.

Please find here the video of the opening and first panel of the conference, and here the video of the second panel.


[1] Director of the Digital Welfare State and Human Rights Project at the Center for Human Rights and Global Justice at New York University School of Law and Special Advisor on new technologies and human rights to the UN Special Rapporteur on extreme poverty and human rights: https://chrgj.org/people/christiaan-van-veen/
[2] Professional Specialist at CITP, Princeton University.
[3] See, e.g., article 9 of the International Covenant on Economic, Social and Cultural Rights, ratified by 169 States.

Princeton Dialogues of AI and Ethics: Launching case studies

Summary: We are releasing four case studies on AI and ethics, as part of the Princeton Dialogues on AI and Ethics.

The impacts of rapid developments in artificial intelligence (“AI”) on society—both real and not yet realized—raise deep and pressing questions about our philosophical ideals and institutional arrangements. AI is currently applied in a wide range of fields—such as medical diagnosis, criminal sentencing, online content moderation, and public resource management—but it is only just beginning to realize its potential to influence practically all areas of human life, including geopolitical power balances. As these technologies advance and increasingly come to mediate our everyday lives, it becomes necessary to consider how they may reflect prevailing philosophical perspectives and preferences. We must also assess how the architectural design of AI technologies today might influence human values in the future. This step is essential in order to identify the positive opportunities presented by AI and unleash these technologies’ capabilities in the most socially advantageous way possible while being mindful of potential harms. Critics question the extent to which individual engineers and proprietors of AI should take responsibility for the direction of these developments, or whether centralized policies are needed to steer growth and incentives in the right direction. What even is the right direction? How can it be best achieved?

Princeton’s University Center for Human Values (UCHV) and the Center for Information Technology Policy (CITP) are excited to announce a joint research project, “The Princeton Dialogues on AI and Ethics,” in the emerging field of artificial intelligence (broadly defined) and its interaction with ethics and political theory. The aim of this project is to develop a set of intellectual reasoning tools to guide practitioners and policy makers, both current and future, in developing the ethical frameworks that will ultimately underpin their technical and legislative decisions. More than ever before, individual-level engineering choices are poised to impact the course of our societies and human values. And yet there have been limited opportunities for AI technology actors, academics, and policy makers to come together to discuss these outcomes and their broader social implications in a systematic fashion. This project aims to provide such opportunities for interdisciplinary discussion, as well as in-depth reflection.

We convened two invitation-only workshops in October 2017 and March 2018, in which philosophers, political theorists, and machine learning experts met to assess several real-world case studies that elucidate common ethical dilemmas in the field of AI. The aim of these workshops was to facilitate a collaborative learning experience which enabled participants to dive deeply into the ethical considerations that ought to guide decision-making at the engineering level and highlight the social shifts they may be affecting. The first outcomes of these deliberations have now been published in the form of case studies. To access these educational materials, please see our dedicated website https://aiethics.princeton.edu. These cases are intended for use across university departments and in corporate training in order to equip the next generation of engineers, managers, lawyers, and policy makers with a common set of reasoning tools for working on AI governance and development.

In March 2018, we also hosted a public conference, titled “AI & Ethics,” where interested academics, policy makers, civil society advocates, and private sector representatives from diverse fields came to Princeton to discuss topics related to the development and governance of AI in two sessions: “International Dimensions of AI” and “AI and Its Democratic Frontiers”. This conference sought to use the ethics and engineering knowledge foundations developed through the initial case studies to inspire discussion on AI technology’s wider social effects.

This project is part of a wider effort at Princeton University to investigate the intersection between AI technology, politics, and philosophy. There is a particular emphasis on the ways in which the interconnected forces of technology and its governance simultaneously influence and are influenced by the broader social structures in which they are situated. The Princeton Dialogues on AI and Ethics makes use of the university’s exceptional strengths in computer science, public policy, and philosophy. The project also seeks opportunities for cooperation with existing projects in and outside of academia.

Getting serious about research ethics: AI and machine learning

[This blog post is a continuation of our series about research ethics in computer science.]

The widespread deployment of artificial intelligence and specifically machine learning algorithms causes concern for some fundamental values in society, such as employment, privacy, and discrimination. While these algorithms promise to optimize social and economic processes, research in this area has exposed some major deficiencies in the social consequences of their operation. Some consequences may be invisible or intangible, such as erecting computational barriers to social mobility through a variety of unintended biases, while others may be directly life threatening. At the CITP’s recent conference on computer science ethics, Joanna Bryson, Barbara Engelhardt, and Matt Salganik discussed how their research led them to work on machine learning ethics.

Joanna Bryson has made a career researching artificial intelligence, machine learning, and their consequences for society. She has found that people tend to identify with the perceived consciousness of artificially intelligent artifacts, such as robots, which then complicates meaningful conversations about the ethics of their development and use. By equating artificially intelligent systems to humans or animals, people deduce their moral status and can ignore their engineered nature.

While the cognitive power of AI systems can be impressive, Bryson argues they do not equate to humans and should not be regulated as such. On the one hand, she demonstrates the power of an AI system to replicate societal biases in a recent paper (co-authored with CITP’s Aylin Caliskan and Arvind Narayanan) by letting systems trained on a corpus of text from the World Wide Web learn the implicit biases around the gender of certain professions. On the other hand, she argues that machines cannot ‘suffer’ in the same way humans do, and the capacity to suffer is one of the main deterrents relied upon by current legal systems. Bryson proposes we understand both AI and ethics as human-made artifacts. It is therefore appropriate to rely on ethics – rather than science – to determine the moral status of artificially intelligent systems.
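The mechanics of measuring such implicit bias in embeddings can be illustrated with a minimal sketch. The vectors below are invented toy data for illustration only; studies like the one described above use embeddings learned from large web corpora. The sketch only shows the underlying operation: comparing a profession word’s cosine similarity to gendered words.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Toy 3-dimensional "embeddings", constructed for illustration only.
vectors = {
    "he":       np.array([1.0, 0.1, 0.0]),
    "she":      np.array([0.1, 1.0, 0.0]),
    "engineer": np.array([0.9, 0.2, 0.3]),
    "nurse":    np.array([0.2, 0.9, 0.3]),
}

def gender_association(word):
    """Positive -> closer to 'he'; negative -> closer to 'she'."""
    return cosine(vectors[word], vectors["he"]) - cosine(vectors[word], vectors["she"])

print(gender_association("engineer"))  # positive in this toy data
print(gender_association("nurse"))     # negative in this toy data
```

In real embeddings trained on web text, such association scores reproduce the gendered occupation stereotypes present in the training corpus, which is exactly the effect the paper documents.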

Barbara Engelhardt’s work focuses on machine learning in computational biology, specifically genomics and medicine. Her main area of concern is the reliance on recommendation systems, such as we encounter on Amazon and Netflix, to make decisions in other domains such as healthcare, financial planning, and career decisions. These machine learning systems rely on data as well as social networks to make inferences.

Engelhardt describes examples where using patient records to inform medical decisions can lead to erroneous recommendation systems for diagnosis as well as harmful medical interventions. For example, the symptoms of heart disease differ substantially between men and women, and so do their appropriate treatments. Most data collected about this condition was from men, leaving a blind spot for the diagnosis of heart disease in women. Bias, in this case, is useful and should be maintained for correct medical interventions. In another example, however, data was collected from a variety of hospitals in somewhat segregated poor and wealthy areas. The data appear to show that cancers develop differently in Hispanic and Caucasian children. However, inferences based on this data fail to take into account the biasing effect of economic status in determining at which stage of symptoms different families decide to seek medical help. This, in turn, determines the stage of development at which the oncological data is collected. A recommendation system with this type of bias confuses race with economic barriers to medical help, which will lead to harmful diagnoses and treatments.
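The confounding problem described above can be made concrete with a small, entirely hypothetical simulation (the numbers are invented, not clinical data). Here, stage at diagnosis depends only on income-driven delays in seeking care, yet a naive comparison between two groups with different income distributions shows an apparent group effect, which disappears once income is held fixed.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Two patient groups whose disease is biologically identical,
# but whose (assumed) income distributions differ.
group = rng.integers(0, 2, n)                  # group label: 0 or 1
p_low_income = np.where(group == 0, 0.7, 0.2)  # hypothetical rates
low_income = rng.random(n) < p_low_income

# Stage at diagnosis depends only on income (delayed care), never on group.
stage = 1.0 + 1.5 * low_income + rng.normal(0, 0.3, n)

# Naive comparison: the groups *appear* to have different disease courses.
naive_gap = stage[group == 0].mean() - stage[group == 1].mean()

# Stratified comparison: within one income level, the gap vanishes.
adj_gap = (stage[(group == 0) & ~low_income].mean()
           - stage[(group == 1) & ~low_income].mean())

print(round(naive_gap, 2))  # ≈ 0.75: spurious "group" effect
print(round(adj_gap, 2))    # ≈ 0.0: confounder removed
```

A recommendation system trained on the pooled data would learn the spurious group effect; the stratified comparison shows that the signal it would attribute to race is actually an economic barrier to care.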

Matt Salganik proposes that the machine learning community draw some lessons from ethics procedures in social science. Machine learning is a powerful tool that can be used responsibly or inappropriately. He proposes that it can be the task of ethics to guide researchers, engineers, and developers to think carefully about the consequences of their artificially intelligent inventions. To this end, Salganik proposes a hope-based and principle-based approach to research ethics in machine learning. This is opposed to the fear-based and rule-based approach in social science, or the more ad hoc ethics culture that we encounter in data and computer science. For example, machine learning ethics should include pre-research review through forms that are reviewed by third parties to avoid groupthink and encourage researchers’ reflexivity. Given the fast pace of development, though, the field should avoid the compliance mentality typically found at institutional review boards of universities. Any rules to be complied with are unlikely to stand the test of time in the fast-moving world of machine learning, which would result in burdensome and uninformed ethics scrutiny. Salganik develops these themes in his new book Bit By Bit: Social Research in the Digital Age, which has an entire chapter about ethics.

See a video of the panel here.

Getting serious about research ethics: Security and Internet Measurement

[This blog post is a continuation of our series about research ethics in computer science that we started last week]

Research projects in the information security and Internet measurement sub-disciplines typically interact with third-party systems or devices to collect large amounts of data. Scholars in these fields are interested in collecting data about technical phenomena. As a result of the widespread use of the Internet, their experiments can interfere with human use of devices and reveal all sorts of private information, such as browsing behaviour. As awareness of this unintended impact on Internet users grew, these communities spent considerable time debating their ethical standards at conferences, dedicated workshops, and in journal publications. Their efforts have culminated in guidelines for topics such as vulnerability disclosure and privacy, whereby the aim is to protect unsuspecting Internet users and the humans implicated in technical research.

Prof. Nick Feamster, Prof. Prateek Mittal, moderator Prof. Elana Zeide, and I discussed some important considerations for research ethics in a panel dedicated to these sub-disciplines at the recent CITP conference on research ethics in computer science communities. We started by explaining that gathering empirical data is crucial to infer the state of values such as privacy and trust in communication systems. However, as methodological choices in computer science will often have ethical impacts, researchers need to be empowered to reflect on their experimental setup meaningfully.

Prof. Feamster discussed several cases where he had sought advice from ethical oversight bodies, but was left with unsatisfying guidance. For example, when his team conducted Internet censorship measurements (pdf), they were aware that they were initiating requests and creating data flows from devices owned by unsuspecting Internet users. These new information flows were created in realms where adversaries were also operating, for example in the form of government censors. This may pose a risk to the owners of devices implicated in the experimentation and data collection. The ethics board, however, concluded that such measurements did not meet the strict definition of “human subjects research”, which thereby excluded the need for formal review. Prof. Feamster suggests computer scientists reassess how they think about their technologies and newly initiated data flows that can be misused by adversaries, and take that into account in ethical review procedures.

Ethical tensions and dilemmas in technical Internet research can themselves be seen as interesting research problems, argued Prof. Mittal. For example, to reason about privacy and trust in the anonymous Tor network, researchers need to understand to what extent adversaries can exploit vulnerabilities and thus observe the Internet traffic of individual users. The obvious, relatively easy, and ethically dubious measurement would be to attack existing Tor nodes and attempt to collect real-time traffic of identifiable users. However, Prof. Mittal gave an insight into his own critical engagement with alternative design choices, which led his team to create a new node within Princeton’s university network that they subsequently attacked. This more lab-based approach eliminates risks for unsuspecting Internet users, while still allowing the same inferences to be drawn.

I concluded the panel by suggesting that ethics review boards at universities, academic conferences, and scholarly journals engage actively with computer scientists so that valuable data can be collected whilst respecting human values. Currently, a panel of non-experts in either computer science or research ethics is given a single moment to judge the full methodology of a research proposal or the resulting paper. When a thumbs-down is issued, researchers have no or limited opportunity to remedy their ethical shortcomings. I argued that a better approach would be an iterative process with in-person meetings and more in-depth consideration of design alternatives, as demonstrated in a recent paper about Advertising as a Platform for Internet measurements (pdf). This is the approach advocated in the Networked Systems Ethics Guidelines. Cross-disciplinary conversation, rather than one-time decisions, allows for a mutual understanding between the gatekeepers of ethical standards and the designers of useful computer science research.

See the video of the panel here.

Getting serious about research ethics in computer science

Digital technology mediates our public and private lives. That makes computer science a powerful discipline, but it also means that ethical considerations are essential in the development of these technologies. Not all new developments may be welcomed by users, such as a patent application by Facebook that enables the company to identify their users’ emotions through cameras on their devices. A critical approach to developing digital technologies, guided by philosophical and ethical principles, will allow interventions that improve society in meaningful ways.

The Center for Information Technology Policy recently organized a conference to discuss research ethics in different computer science communities, such as machine learning, security, and Internet measurement.  This blog post is the first in a series that summarizes and builds on the panel discussions at the conference.

Prof. Arvind Narayanan points out that computer science sub-communities have traditionally developed their own community standards about what is considered to be ethical. See for example responsible vulnerability disclosure standards in information security, or the Menlo Report for the Internet measurement discipline. This allows norms and standards to be tailored to the needs of sub-disciplines. However, the increasing responsibilities of researchers and sub-communities, arising from the increasing power and reach of computer science, are sometimes met with confusion. There is a tendency to see ethical considerations as a “policy issue” to be dealt with by others.

Prof. Melissa Lane of the University Center for Human Values points out that while ethics is rooted in understanding community standards and norms, these do not exhaust it, as some researchers in computer science and other fields can sometimes be tempted to think.  Rather, the academic study of ethics provides the tools to critically reflect on these norms and challenge existing and new practices. A meaningful computer science research ethics therefore does not just translate existing norms into functional requirements, but explores how values are enabled, operationalized, or stifled through technology. A careful analysis of a particular context may even uncover new values that were previously taken for granted or not even considered to be a norm. Think, for example, of “disattendability” — the idea of going about your business without anyone tracking you or paying attention to you. We usually take this for granted in the physical world, but on the Internet, ad trackers, among others, actively violate this norm on an ongoing basis. By understanding the effects of design choices and methodologies, ethics guides technology designers to choose the most appropriate approach among the available alternatives.

Ethics is known for its somewhat conflicting theories, such as consequentialism (“Ends justify the Means”) and deontology (“Act in such a way that you treat humanity […] never merely as a means to an end, but always at the same time as an end”). Prof. Susan Brison cautions against an approach that simply takes an ethical theory and applies it to a technology. She raised the question whether computer science research and data science may require new types of ethics, or evolved theories. Digital data is changing the underlying properties of information, whereby our traditional ways of thinking are being challenged in important ways. For example, micro-targeting of bespoke political messages to individuals circumvents the ability to let ‘good speech’ drown out ‘bad speech’, which is a foundational idea for the concept of freedom of speech.

In my research, I’ve found that ethical guidelines can be incomplete, inaccessible, or conflicting, and existing legal statutes from previous technological eras may not be directly applicable to current technology. This has resulted in computer science communities being somewhat confused about their ethical and legal responsibilities. The upcoming posts in this series will explore some of the ethical standards in machine learning, security, algorithmic transparency, and Internet measurement. We welcome any feedback to move this discussion forward at a crucial time for the ethics of computer science.

See the introduction to the conference here.