June 25, 2019

Misunderstandings of the Past

People ask why I research computer history. With so many Internet laws and policies to work on, why delve back into the old technologies of the 1940s? The reason is that past is prologue: early programming pioneers and their innovations may provide insight into our dilemmas today. Our STEM problem – the low percentage of women and minorities entering computer science – may lie partly in a misunderstanding of the past.

In twenty years of researching the ENIAC Programmers, I have learned two things. First, that women (and men) engaged in incredible acts of computing innovation during and just after WWII, and that this work established the foundation of modern computing and programming. Second, that some historians oppose the telling of a more complete computing history and seem determined to maintain an “all white, all male” view of it. But that is not what the past shows us.

Innovation drives need, and need drives invention. The ENIAC is a great example – the world’s first modern computer (all-electronic, programmable, general-purpose), commissioned in 1942 during the dark days of WWII. Its story shows us a fascinating and diverse group of inventors.

At the start of the US entrance into WWII, the Army’s Ballistics Research Labs (BRL) realized it needed large numbers of ballistic trajectory calculations. Gunners needed to know at what angle to fire their artillery to hit a target 8 to 10 miles away. A special differential calculus equation could provide the answer – the angle – but computing it required a person who knew differential calculus (a rare skill in those days). No electro-mechanical machine could do it alone.

In 1942, BRL relocated to Philadelphia and the halls of the Moore School of Electrical Engineering (University of Pennsylvania), and it responded to the need in two ways. First, it located women math graduates from nearby schools, including Drexel University, Temple University and Chestnut Hill College. Ultimately, the Computing Project expanded to almost 100 women; to fill its ranks, the Army reached up to New York and out to Missouri. These brilliant women “Computers” worked day and night, six days a week, calculating thousands of ballistics trajectories, which were compiled into artillery firing tables and sent to soldiers on the battlefields. It was a tremendous effort.

Second, the Army and BRL agreed to commission a highly-experimental machine, the first modern computer, to speed up trajectory calculations. Called the Electronic Numerical Integrator and Computer and nicknamed “ENIAC,” the computer would calculate ballistics trajectories in seconds, instead of days, but only if co-inventors Dr. John Mauchly and J. Presper Eckert could get the new machine to work, including its 18,000 vacuum tubes. Key technologists of the time, of course, told the Army that the ENIAC would never work.  But in the dark days of the war with new artillery being manufactured and a growing need for firing tables, ENIAC was a risk the Army was willing to take.  

Mauchly and Eckert brought a group of young engineers – American, Chinese, even albino – to build ENIAC’s 40 units. As ENIAC neared completion of construction, BRL’s Lieutenant Herman Goldstine selected six women from the Computing Project to program the ENIAC. They were Kathleen McNulty Mauchly Antonelli, Jean Jennings Bartik, Betty Snyder Holberton, Marlyn Wescoff Meltzer, Ruth Lichterman Teitelbaum and Frances Bilas Spence.

To say the women’s programming job was difficult is an understatement. ENIAC had no technical or operating manuals (they would be written the following summer) and no programming codes (those would be written a few years later by ENIAC Programmer Betty Holberton for UNIVAC, the first commercial computer). The women studied ENIAC’s wiring and logical diagrams and taught themselves how to program it. Then they sat down and figured out how to break the differential calculus ballistics trajectory program down into the small, discrete steps a computer can handle – just as programmers do today.

Then they figured out how to program their steps onto the computer – via a “direct programming” interface of hundreds of cables and 3,000 switches. It is a bit like modern programming with cartwheels and backflips added. The women created flowcharts to capture every logical step of the trajectory equation and every physical one too: every switch, every cable, every setting. With the “old Army spirit,” they did a task no one had done before. Tom Petzinger, Jr., celebrated their work in his Wall Street Journal article, The History of Software Begins with Brainy Women’s Work (Nov. 15, 1996).

On February 15, 1946, the ENIAC went from top secret status to front page news. Heralded by The New York Times, Philadelphia Evening Bulletin and Boston Globe, the world learned technology had taken a giant step forward. The same day, the Moore School ran a demonstration for Army officers and leading technologists that featured the women’s ballistics trajectory program. The program ran flawlessly and indeed calculated the ballistics trajectory in only a few seconds.

After the war, the Army asked all six ENIAC Programmers to continue their work – no soldier returning home from the battlefield could program ENIAC. BRL needed the ENIAC Programmers to teach the next generation of ENIAC programmers, and some did. Others made other pivotal contributions: Jean Bartik led the team that converted ENIAC into one of the world’s first stored-program computers, and her best friend Betty Holberton joined the Eckert-Mauchly Computer Corporation and wrote critical new programming tools for UNIVAC I, the first commercial computer, including the C-10 instruction code (a predecessor to programming languages).

Alas, over half a century after their work, a small group of historians sees fit to disparage the ENIAC Programmers. In his 2010 book, The Computer Boys Take Over, Nathan Ensmenger devoted an entire section to “Glorified Clerical Workers” and heaped personal insults on these hard-working WWII civilian employees. Despite the honors the women had received from the IEEE Computer Society, the Computer History Museum and Women in Technology International by the time of publication, Ensmenger wrote:

  • “The low priority given to the programming [of ENIAC] was reflected in who was assigned to the task.” [p. 35],
  • “coders were obviously low on the intellectual and professional status hierarchy.” [pp. 35-36], and
  • “the use of the word software in this context is, of course, anachronistic – the distinctions and the gender connotations it embodies – between “hard” technical mastery, and the “software,” more social (and implicitly, of secondary importance) aspects of computer work – are applicable even in the earliest of electronic computing development projects.” [p. 14]

As a friend of the ENIAC Programmers and recorder of their oral histories, I can hear Jean Jennings Bartik’s response – a hearty belly laugh and the reminder that “the engineers treated us with a great deal of respect.” (The Computers: The Remarkable Story of the ENIAC Programmers, documentary at www.eniacprogrammers.org).

The historians’ misunderstanding appears to originate in the women’s Army classification of “subprofessional” (despite their college degrees). Yet we know from the stories of Bletchley Park and Code Girls that women cryptographers in top secret wartime roles hid in plain sight – often behind titles of “secretary” and “clerk.” Why not evaluate the women by the depth of their education, the quality of their work, and the extent of their innovation?

The negative language of the critique of the ENIAC Programmers is telling, as is the book’s cover art: a picture of a lone white man standing before a huge mainframe computer. Overall, the book sends a clear message to girls: do not look to computer science for education or jobs.

We can do better. I talk to groups of young technologists around the world and share the story of the ENIAC Team – women and men who worked together and changed the world. The audiences light up. Knowing that the pioneers of computing and programming came from different races and backgrounds is exciting and inspiring. Our computing history is rich and inclusive – so why not share it? I hope that in the future we will, and I thank Princeton for the times we shared my documentary, The Computers. The discussions afterwards were priceless!

Conference on Social Protection by Artificial Intelligence: Decoding Human Rights in a Digital Age

Christiaan van Veen[1] and Ben Zevenbergen[2]

Governments around the world are increasingly using Artificial Intelligence and other digital technologies to streamline and transform their social protection and welfare systems. This move is usually presented as a way to provide an improved and enhanced system and to assist individuals in a more targeted and efficient manner. But because social protection budgets represent such a significant part of State expenditure in most countries, and because austerity and tax cuts continue to drive policy, the driving force is usually the prospect of major budgetary savings and a greatly slimmed-down system of benefits. It is becoming increasingly apparent, however, that the impact of these new technologies on the nature of social protection systems themselves, and on the lives of the many individuals who rely upon them, can be far-reaching and very often problematic. There are many examples of systems being challenged, ranging from the disastrous ‘robo-debt’ saga in Australia to the litigation and protest against the massive biometric identification system – Aadhaar – in India. Yet the push for digital innovation in this area of government is certain to continue.

These developments have significant implications for the human rights of the roughly half of the world’s population who are covered by social protection measures, as well as those who are not yet covered. Social protection itself is a human right[3] with a long and rich history, dating back to the creation of the International Labour Organization by the 1919 Treaty of Versailles. The introduction of digital technologies in social protection systems risks creating barriers to access to this right, although one can also imagine ways in which technology can facilitate access to social protection. A range of other human rights are also implicated by the introduction of these new technologies in social protection systems, ranging from the right to a remedy to the right to privacy.

Despite the significant risks and opportunities involved with the introduction of digital technologies, there has been only limited research and analysis undertaken to better understand the implications for the protection of human rights, especially in the area of social protection/welfare. The poorest and most vulnerable individuals, both in the Global North and Global South, are inevitably the ones who will be most affected by these developments.

To highlight these issues, the Center for Information Technology Policy and the United Nations Special Rapporteur on extreme poverty and human rights organized a conference at Princeton University on April 12, 2019. The conference brought together leading experts from academia, NGOs, international organizations and the private sector to further explore the implications of digital technologies in social protection systems. The conference was also part of a consultation for a report that the UN Special Rapporteur is preparing and will present to the United Nations General Assembly in October of this year.

Below, a few of the experts who spoke at the conference present some of their key issues and concerns when it comes to the human rights implications of digital technologies in welfare systems.

Cary Coglianese, Edward B. Shils Professor of Law at the University of Pennsylvania Law School

Government has an important responsibility to help provide social services and financial support to those in need. Let us imagine a future where, seeking to fulfill this responsibility, government develops a sophisticated system to help it identify those applicants who qualify for support. But imagine further that, in the end, this identification system turns out to award benefits arbitrarily and to prefer white applicants over applicants of color. Such a system would be properly condemned as unfair. And this is exactly what worries critics who oppose the use of artificial intelligence in administering social programs.

Yet the future imagined above actually appears to have arrived long ago. By many accounts, the scenario I have painted describes the system already in place in the United States and presumably other countries. It is just that the “technology” underlying the current identification system is not artificial intelligence but human decision-making. The U.S. Social Security Administration’s (SSA) disability system, for example, relies on more than a thousand human adjudicators. Although most of these officials are no doubt well-trained and dedicated, they also work under heavy caseloads. And for decades, studies have suggested that racial disparities exist in SSA disability awards, with certain African-American applicants tending to receive less favorable outcomes compared with white applicants.

Any system that relies on thousands of human decision-makers working at high capacity will surely yield variable outcomes. A 2011 report issued by independent researchers offers a stark illustration of the potential for variability across humans: among the fifteen most active administrative judges in a Dallas SSA office, “the judge grant rates in this single location ranged … from less than 10 percent being granted to over 90 percent.” The researchers reported that three judges in this office awarded benefits to no more than 30 percent of their applicants, while three judges awarded to more than 70 percent.

In light of reasonable concerns about arbitrariness and bias in human decisions, the relevant question to ask about artificial intelligence is not whether it will be free of any bias or unexplainable variation. Rather, the question should be whether artificial intelligence can perform better than the current human-based system. Anyone concerned about fairness in government decision-making should entertain the possibility that digital algorithms might sometimes prove to be fairer and more consistent than humans. At the very least, it might turn out to be easier to remedy biased algorithms than to remove deeply ingrained implicit biases from human decision-making.

Jonathan McCully and Nani Jansen Reventlow, Digital Freedom Fund

International law obliges states to provide an effective remedy to victims of human rights violations, but how can this obligation be met in the age of AI? At the conference, a number of points were raised in relation to this question.

For systems of redress or reparation to work, there needs to be a traceable line of responsibility. This is muddied in the AI context as public and private entities claim that certain decisions are reached by machine learning algorithms that lack human intervention. Human rights are devoid of content if victims cannot hold a natural or legal person to account for decisions violating their rights. Therefore, liability regimes should not allow individuals, private entities or public authorities to hide behind their AI. 

For individuals to effectively pursue remedies for AI-related human rights violations, there needs to be an equality of arms. This is also made difficult by AI, where the “allure of objectivity” presented by algorithms can mean that victims are held to a higher standard of evidence compared to those deploying an algorithm. This needs to be corrected.

Finally, like surveillance, AI-related human rights violations can often be hidden from victims. Those who have been subject to an AI-based decision do not necessarily know about it and, even before a decision has been reached against an individual, the models generating these decisions are often trained on datasets that have been processed without the knowledge or consent of those to whom the data relates. Transparency is, therefore, vital to an individual’s ability to pursue remedies in the AI context.  

Jennifer Raso, Assistant Professor, University of Alberta Faculty of Law

Current discussions about algorithmic systems and social protection tend to overlook two key issues. First, the “new” technologies in today’s welfare programs are evolutionary rather than revolutionary. For decades, social assistance offices have been the first sites in which governments introduce new tools to streamline bureaucratic decisions in a context of perpetuated (and seemingly perpetual) resource scarcity. These tools (new and old) are laborious for all who interact with them. They regularly malfunction and require intrusive data about benefits recipients. Such tools perform a dual deterrence: they discourage people from seeking state-funded assistance; and they prevent front-line workers from providing vulnerable individuals access to last-resort assistance.

Second, by centring our debates on privacy and transparency, we fail to address all that is at stake. Focusing on data protection ignores that data intensity is a long-standing feature of social assistance programs. What does privacy mean to someone who must report intimate personal details to remain eligible for welfare benefits? Likewise, transparency conversations overlook the importance of substantive outcomes. How would a transparent decision-making process address the fact that, in many places, welfare rates fall far short of covering one’s basic needs? Instead, we should be focusing on the needs and interests of people who require assistance.

Going forward, we must centre the experiences of those most deeply affected by algorithmic systems. To fully comprehend the impact of these tools in social protection programs, and their potential human rights implications, it is crucial that we attend to the people and communities most targeted by algorithmic systems, and to the front-line workers responsible for maintaining and working with these tools.

Please find here the video of the opening and first panel of the conference, and here the video of the second panel.


[1] Director of the Digital Welfare State and Human Rights Project at the Center for Human Rights and Global Justice at New York University School of Law and Special Advisor on new technologies and human rights to the UN Special Rapporteur on extreme poverty and human rights: https://chrgj.org/people/christiaan-van-veen/
[2] Professional Specialist at CITP, Princeton University.
[3] See, e.g., article 9 of the International Covenant on Economic, Social and Cultural Rights, ratified by 169 States.

How to do a Risk-Limiting Audit

In the U.S. we use voting machines to count the votes. Most of the time they’re very accurate indeed, but they can make big mistakes if there’s a bug in the software, or if a hacker installs fraudulent vote-counting software, or if there’s a misconfigured ballot-definition file, or if the scanner is miscalibrated. Therefore we need a Risk-Limiting Audit (RLA) of every election to assure, independently of the voting machines, that they got the correct outcome. If your election official picks a risk limit of 5%, that means that if the voting system got the wrong outcome, there’s at least a 95% chance that the RLA will correct it (and there’s a 0% chance the RLA will mess up an already-correct outcome).

But how does one conduct an RLA? The statistics are not trivial; the administrative procedures are not obvious – how do you handle all those batches of paper ballots? And every state has different election procedures, so there’s no one-size-fits-all RLA method.
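To give a small taste of the statistics, here is a minimal sketch (in Python) of one of the simplest stopping rules, a BRAVO-style ballot-polling test for a two-candidate contest. It is an illustration only – a ballot-polling method rather than the ballot-comparison procedure discussed below – and the function name and parameters are my own, not taken from any particular guide.

```python
import random

def bravo_style_audit(reported_share, ballots, risk_limit=0.05):
    """Minimal sketch of a BRAVO-style ballot-polling stopping rule
    (two-candidate contest, sampling with replacement). Hypothetical
    illustration only; real audits follow published procedures.

    reported_share: reported winner's share of winner+loser ballots (> 0.5)
    ballots: list of actual ballot interpretations, 'w' (winner) or 'l' (loser)
    risk_limit: the risk limit alpha, e.g. 0.05 for a 5% risk limit
    """
    T = 1.0                       # Wald sequential test statistic vs. a tie
    threshold = 1.0 / risk_limit  # confirm the outcome once T reaches 1/alpha
    for _ in range(len(ballots)): # cap the sample; real audits escalate
        b = random.choice(ballots)          # draw a ballot uniformly at random
        if b == 'w':
            T *= reported_share / 0.5       # evidence for the reported winner
        elif b == 'l':
            T *= (1 - reported_share) / 0.5 # evidence against
        if T >= threshold:
            return True   # risk limit met: reported outcome confirmed
    return False          # not confirmed: escalate to a full hand count

# Example: reported 55% for the winner; the paper ballots agree.
paper = ['w'] * 5500 + ['l'] * 4500
print(bravo_style_audit(0.55, paper, risk_limit=0.05))
```

If the reported outcome were actually wrong, the multipliers would rarely push T over the threshold, so the audit would almost always escalate to a full hand count – which is exactly the risk-limiting guarantee described above.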

Two good ways to learn something are to read a book or find an experienced teacher. But until recently, most (though not all) papers about RLAs were difficult for the election-administrator audience to understand, and practically no one had experience running RLAs because they’re so new.

That’s changing for the better. More states are conducting RLA pilots, which means more people have experience designing and implementing RLAs, and some of those people do us the public service of writing it down in a handbook for election administrators.

Jennifer Morrell has just published the first two parts of a guide to the practical aspects of RLAs: what are they, why do them, how to do them.


Knowing It’s Right, Part One: A Practical Guide to Risk-Limiting Audits. A high level overview for state and local stakeholders who want to know more about RLAs before moving on to the implementation phase.

Knowing It’s Right, Part Two: Risk-Limiting Audit Implementation Workbook. Soup-to-nuts information on how election officials can conduct a ballot-comparison audit.

I really like these manuals. And if you’re looking for experts with real experience in RLAs, in addition to Ms. Morrell there are the authors of these experience reports on RLA pilots:

Orange County, CA Pilot Risk-Limiting Audit, by Stephanie Singer and Neal McBurnett, Verified Voting Foundation, December 2018.

City of Fairfax, VA Pilot Risk-Limiting Audit, by Mark Lindeman, Verified Voting Foundation, December 2018.

And stay tuned at risklimitingaudits.org for reports from Indiana, Rhode Island, Michigan, and perhaps even New Jersey.