June 20, 2019

Conference on Social Protection by Artificial Intelligence: Decoding Human Rights in a Digital Age

Christiaan van Veen[1] and Ben Zevenbergen[2]

Governments around the world are increasingly using Artificial Intelligence and other digital technologies to streamline and transform their social protection and welfare systems. This move is usually presented as a way to improve the system and to assist individuals in a more targeted and efficient manner. But because social protection budgets represent such a significant share of State expenditure in most countries, and because austerity and tax cuts continue to drive policy, the driving force is usually the prospect of major budgetary savings and a greatly slimmed-down system of benefits. It is becoming increasingly apparent that the impact of these new technologies on social protection systems themselves, and on the lives of the many individuals who rely upon them, can be far-reaching and very often problematic. There are many examples of systems being challenged, ranging from the disastrous ‘robo-debt’ saga in Australia to the litigation and protests against Aadhaar, the massive biometric identification system in India. Yet the push for digital innovation in this area of government is certain to continue.

These developments have significant implications for the human rights of the roughly half of the world’s population who are covered by social protection measures, as well as those who are not yet covered. Social protection is itself a human right[3] with a long and rich history, dating back to the creation of the International Labour Organization by the 1919 Treaty of Versailles. The introduction of digital technologies in social protection systems risks creating barriers to access to this right, although one can also imagine ways in which technology could facilitate access to social protection. A range of other human rights are also implicated, from the right to a remedy to the right to privacy.

Despite the significant risks and opportunities involved with the introduction of digital technologies, there has been only limited research and analysis undertaken to better understand the implications for the protection of human rights, especially in the area of social protection/welfare. The poorest and most vulnerable individuals, both in the Global North and Global South, are inevitably the ones who will be most affected by these developments.

To highlight these issues, the Center for Information Technology Policy and the United Nations Special Rapporteur on extreme poverty and human rights organized a conference at Princeton University on April 12, 2019. The conference brought together leading experts from academia, NGOs, international organizations and the private sector to further explore the implications of digital technologies in social protection systems. The conference was also part of a consultation for a report that the UN Special Rapporteur is preparing and will present to the United Nations General Assembly in October of this year.

Below, a few of the experts who spoke at the conference present some of their key issues and concerns when it comes to the human rights implications of digital technologies in welfare systems.

Cary Coglianese, Edward B. Shils Professor of Law at the University of Pennsylvania Law School

Government has an important responsibility to help provide social services and financial support to those in need. Let us imagine a future where, seeking to fulfill this responsibility, government develops a sophisticated system to help it identify those applicants who qualify for support. But imagine further that, in the end, this identification system turns out to award benefits arbitrarily and to prefer white applicants over applicants of color. Such a system would be properly condemned as unfair. And this is exactly what worries critics who oppose the use of artificial intelligence in administering social programs.

Yet the future imagined above actually appears to have arrived long ago. By many accounts, the scenario I have painted describes the system already in place in the United States and presumably other countries. It is just that the “technology” underlying the current identification system is not artificial intelligence but human decision-making. The U.S. Social Security Administration’s (SSA) disability system, for example, relies on more than a thousand human adjudicators. Although most of these officials are no doubt well-trained and dedicated, they also work under heavy caseloads. And for decades, studies have suggested that racial disparities exist in SSA disability awards, with certain African-American applicants tending to receive less favorable outcomes compared with white applicants.

Any system that relies on thousands of human decision-makers working at high capacity will surely yield variable outcomes. A 2011 report issued by independent researchers offers a stark illustration of the potential for variability across humans: among the fifteen most active administrative judges in a Dallas SSA office, “the judge grant rates in this single location ranged … from less than 10 percent being granted to over 90 percent.” The researchers reported that three judges in this office awarded benefits to no more than 30 percent of their applicants, while three judges awarded to more than 70 percent.

In light of reasonable concerns about arbitrariness and bias in human decisions, the relevant question to ask about artificial intelligence is not whether it will be free of any bias or unexplainable variation. Rather, the question should be whether artificial intelligence can perform better than the current human-based system. Anyone concerned about fairness in government decision-making should entertain the possibility that digital algorithms might sometimes prove to be fairer and more consistent than humans. At the very least, it might turn out to be easier to remedy biased algorithms than to remove deeply ingrained implicit biases from human decision-making.

Jonathan McCully and Nani Jansen Reventlow, Digital Freedom Fund

International law obliges states to provide an effective remedy to victims of human rights violations, but how can this obligation be met in the age of AI? At the conference, a number of points were raised in relation to this question.

For systems of redress or reparation to work, there needs to be a traceable line of responsibility. This is muddied in the AI context as public and private entities claim that certain decisions are reached by machine learning algorithms that lack human intervention. Human rights are devoid of content if victims cannot hold a natural or legal person to account for decisions violating their rights. Therefore, liability regimes should not allow individuals, private entities or public authorities to hide behind their AI. 

For individuals to effectively pursue remedies for AI-related human rights violations, there needs to be an equality of arms. This is also made difficult by AI, where the “allure of objectivity” presented by algorithms can mean that victims are held to a higher standard of evidence compared to those deploying an algorithm. This needs to be corrected.

Finally, like surveillance, AI-related human rights violations can often be hidden from victims. Those who have been subject to an AI-based decision do not necessarily know about it and, even before a decision has been reached against an individual, the models generating these decisions are often trained on datasets that have been processed without the knowledge or consent of those to whom the data relates. Transparency is, therefore, vital to an individual’s ability to pursue remedies in the AI context.  

Jennifer Raso, Assistant Professor, University of Alberta Faculty of Law

Current discussions about algorithmic systems and social protection tend to overlook two key issues. First, the “new” technologies in today’s welfare programs are evolutionary rather than revolutionary. For decades, social assistance offices have been the first sites at which governments introduce new tools to streamline bureaucratic decisions in a context of perpetuated (and seemingly perpetual) resource scarcity. These tools (new and old) are laborious for all who interact with them. They regularly malfunction and demand intrusive data about benefit recipients. Such tools perform a dual deterrence: they discourage people from seeking state-funded assistance, and they prevent front-line workers from providing vulnerable individuals access to last-resort assistance.

Second, by centring our debates on privacy and transparency, we fail to address all that is at stake. Focusing on data protection ignores that data intensity is a long-standing feature of social assistance programs. What does privacy mean to someone who must report intimate personal details to remain eligible for welfare benefits? Likewise, transparency conversations overlook the importance of substantive outcomes. How would a transparent decision-making process address the fact that, in many places, welfare rates fall far short of covering one’s basic needs? Instead, we should attend to the needs and interests of people who require assistance.

Going forward, we must centre the experiences of those most deeply affected by algorithmic systems. To fully comprehend the impact of these tools in social protection programs, and their potential human rights implications, it is crucial that we attend to the people and communities most targeted by algorithmic systems, and to the front-line workers responsible for maintaining and working with these tools.

Please find here the video of the opening and first panel of the conference, and here the video of the second panel.

[1] Director of the Digital Welfare State and Human Rights Project at the Center for Human Rights and Global Justice at New York University School of Law and Special Advisor on new technologies and human rights to the UN Special Rapporteur on extreme poverty and human rights: https://chrgj.org/people/christiaan-van-veen/
[2] Professional Specialist at CITP, Princeton University.
[3] See, e.g., article 9 of the International Covenant on Economic, Social and Cultural Rights, ratified by 169 States.

How to do a Risk-Limiting Audit

In the U.S. we use voting machines to count the votes. Most of the time they’re very accurate indeed, but they can make big mistakes if there’s a bug in the software, or if a hacker installs fraudulent vote-counting software, or if there’s a misconfigured ballot-definition file, or if the scanner is miscalibrated. Therefore we need a Risk-Limiting Audit (RLA) of every election to assure, independently of the voting machines, that they got the correct outcome. If your election official picks a risk limit of 5%, that means that if the voting system got the wrong outcome, there’s at least a 95% chance that the RLA will correct it (and there’s a 0% chance the RLA will mess up an already-correct outcome).
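To make the statistical idea concrete, here is a minimal sketch of a BRAVO-style ballot-polling audit for a two-candidate contest, one of the simplest kinds of RLA: draw paper ballots at random and update a running test statistic until either the evidence that the reported winner really won is strong enough, or the audit escalates to a full hand count. This is only an illustration under simplifying assumptions, not any state's official procedure, and the function and parameter names are invented for the example.

```python
import random

def bravo_ballot_polling_audit(ballots, reported_winner_share, risk_limit=0.05, seed=2019):
    """Illustrative BRAVO-style ballot-polling audit for a two-candidate contest.

    ballots: the paper ballots, each marked 'W' (reported winner) or 'L' (reported loser)
    reported_winner_share: the winner's share of the reported two-candidate vote (> 0.5)
    Returns True if the sample confirms the reported outcome at the risk limit,
    False if the sample runs out first (meaning: do a full hand count).
    """
    assert reported_winner_share > 0.5
    rng = random.Random(seed)   # real audits derive the seed from a public dice-rolling ceremony
    T = 1.0                     # running sequential likelihood ratio (Wald's SPRT)
    for i in rng.sample(range(len(ballots)), len(ballots)):
        if ballots[i] == 'W':
            T *= 2 * reported_winner_share         # ballot supports the reported winner
        else:
            T *= 2 * (1 - reported_winner_share)   # ballot supports the reported loser
        if T >= 1 / risk_limit:                    # strong evidence: stop, outcome confirmed
            return True
    return False                                   # never confirmed: do a full hand count

# Example: an election reported as 60% to 40%, where the paper ballots match the report.
ballots = ['W'] * 600 + ['L'] * 400
print(bravo_ballot_polling_audit(ballots, reported_winner_share=0.6))  # usually True after a modest sample
```

For a wide reported margin like this one, the rule typically stops after examining a modest number of ballots; as the margin shrinks, the expected sample size grows quickly, which is part of why the statistics and the administrative procedures take real care to get right.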

But how does one conduct an RLA? The statistics are not trivial; the administrative procedures are not obvious (how do you handle all those batches of paper ballots?). And every state has different election procedures, so there’s no one-size-fits-all RLA method.

Two good ways to learn something are to read a book or find an experienced teacher. But until recently, most (but not all) papers about RLAs were difficult for the election-administrator audience to understand, and practically no one had experience running RLAs because they’re so new.

That’s changing for the better. More states are conducting RLA pilots, which means more people have experience designing and implementing RLAs, and some of those people do us the public service of writing it down in a handbook for election administrators.

Jennifer Morrell has just published the first two parts of a guide to the practical aspects of RLAs: what are they, why do them, how to do them.


Knowing It’s Right, Part One: A Practical Guide to Risk-Limiting Audits. A high-level overview for state and local stakeholders who want to know more about RLAs before moving on to the implementation phase.

Knowing It’s Right, Part Two: Risk-Limiting Audit Implementation Workbook. Soup-to-nuts information on how election officials can conduct a ballot-comparison audit.
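As a rough illustration of why ballot-comparison audits are so efficient, here is a back-of-the-envelope sketch of a commonly used starting-sample-size estimate (in the style of Stark's "super-simple" formula), assuming the audit finds no discrepancies. The function name is invented for the example, and a real audit should use the procedures and tools described in the workbook.

```python
import math

def initial_comparison_sample_size(diluted_margin, risk_limit=0.05, gamma=1.03905):
    """Rough starting sample size for a ballot-comparison audit, assuming no
    discrepancies are found between paper ballots and their electronic records.

    diluted_margin: (winner's votes - runner-up's votes) / total ballots cast
    gamma: an error-inflation constant conventionally set a little above 1
    """
    return math.ceil(-2 * gamma * math.log(risk_limit) / diluted_margin)

# At a 5% risk limit, a 10% margin needs only a few dozen ballots,
# while a 1% margin needs several hundred:
print(initial_comparison_sample_size(0.10))  # 63
print(initial_comparison_sample_size(0.01))  # 623
```

These numbers are strikingly small compared to a ballot-polling audit of the same contest, which is why comparison audits are usually preferred wherever the voting system can export a matching electronic record for each paper ballot.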

I really like these manuals. And if you’re looking for experts with real experience in RLAs, in addition to Ms. Morrell there are the authors of these experience reports on RLA pilots:

Orange County, CA Pilot Risk-Limiting Audit, by Stephanie Singer and Neal McBurnett, Verified Voting Foundation, December 2018.

City of Fairfax, VA Pilot Risk-Limiting Audit, by Mark Lindeman, Verified Voting Foundation, December 2018.

And stay tuned at risklimitingaudits.org for reports from Indiana, Rhode Island, Michigan, and perhaps even New Jersey.

Choosing Between Content Moderation Interventions

How can we design remedies for content “violations” online?

Speaking today at CITP is Eric Goldman (@ericgoldman), a professor of law and co-director of the High Tech Law Institute at Santa Clara University School of Law. Before he became a full-time academic in 2002, Eric practiced Internet law for eight years in Silicon Valley. His research and teaching focus on Internet, IP and advertising law, and he blogs on these topics at the Technology & Marketing Law Blog.

Eric reminds us that content moderation questions are front page stories every week. Lawmakers and tech companies are wondering how to create a world where everyone can have their say, people have a chance to hear from them, and people are protected from harms.

Decisions about content moderation depend on a set of questions, says Eric:

“What rules govern online content?” “Who creates those rules?” “Who adjudicates rule violations?” Eric is most interested in a final question: “What consequences are imposed for rule violations?”

So what should we do once a content violation has been observed? The traditional view offers a binary choice: delete the content or account, or keep it. For example, under the Digital Millennium Copyright Act, platforms are required to “remove or disable access to” material claimed to be infringing; it allows no option less than removing the material from visibility. The DMCA also specifies two other remedies: terminating “repeat infringers” and issuing subpoenas to identify/unmask alleged infringers. Overall, however, the primary intervention is to remove things, and there is no lesser action.

Next, Eric tells us about civil society principles that adopt a similar idea of removal as the primary remedy. For example, the Manila Principles on Intermediary Liability assume that removal is the one available intervention, but say it should be necessary and proportional and use “the least restrictive technical means.” Similarly, the Santa Clara Principles assume that removal is the one available option.

Eric reminds us that there are many remedies between removal and keeping content. Why should we pay attention to them? With a wider range of options, we can (a) avoid collateral damage from overbroad remedies and (b) develop a broader remedy toolkit to match the needs of different communities. With a wider palette of options, we would also need principles for choosing among those remedies. Eric wants to be able to suggest options that regulators or platforms have at their disposal when making policy decisions.

To illustrate the value of being able to differentiate between remedies, Eric talks about communities that have rich sets of rules with a range of consequences other than full approval or removal, such as churches, fraternities, and sports leagues.

Eric then offers us a taxonomy of remedies, drawn from examples in use online: (a) content restrictions, (b) account restrictions, (c) visibility reductions, (d) financial levers, and (e) other.
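As a purely illustrative aside, and not Eric's framework or any platform's actual policy engine, one way to see how much wider this palette is than a binary delete/keep choice is to sketch the taxonomy as a simple data structure; the example remedies in the comments are common ones, and all names here are invented for the sketch.

```python
from enum import Enum, auto

class RemedyCategory(Enum):
    CONTENT_RESTRICTION = auto()    # e.g. remove, add a warning label, age-gate
    ACCOUNT_RESTRICTION = auto()    # e.g. suspend, throttle posting, require re-verification
    VISIBILITY_REDUCTION = auto()   # e.g. downrank, exclude from search or recommendations
    FINANCIAL_LEVER = auto()        # e.g. demonetize, withhold a payout
    OTHER = auto()                  # e.g. warnings, educational interstitials, strike systems

# A delete-or-keep policy only ever reaches for the most severe end of
# CONTENT_RESTRICTION; a graduated policy can pick a proportionate remedy
# within or across these categories.
```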

Eric asks: once we have listed remedies, how could we possibly choose among them? He surveys different theories for choosing, but he doesn’t think those models are useful for this conversation. Furthermore, conversations about government-imposed remedies are different from conversations about remedies for internet content violations.

Unlike internet content policies, says Eric, government remedies:

  • are determined by elected officials
  • are funded by taxes
  • are backed by police power against non-compliance
  • include options available only to government (like jail or the death penalty)
  • are subject to constitutional limits

Finally, Eric shares some early thoughts about how to choose among possible remedies:

  • Remedy selection manifests a service’s normative priorities, which differ from service to service
  • Possible questions to ask when choosing among remedies:
    • How bad is the rule violation?
    • How confident is the service that the rule was actually violated?
    • How open is the community?
    • How will the remedy affect other community members?
    • How to balance behavior conformance against user engagement?
  • Site design can prevent violations
    • Educate and socialize contributors (for example)
  • Services with only binary remedies aren’t well-positioned to solve problems, and maybe other actors are in a better position
  • Typically, private remedies are better than judicially imposed remedies, but at the cost of due process
  • Remedies should be necessary & proportionate
  • Remedies should empower users to choose for themselves what to do