March 29, 2024

Calling for Investing in Equitable AI Research in Nation’s Strategic Plan

By Solon Barocas, Sayash Kapoor, Mihir Kshirsagar, and Arvind Narayanan

In response to the Request for Information on the update of the National Artificial Intelligence Research and Development Strategic Plan (“Strategic Plan”), we submitted comments suggesting how the Strategic Plan should focus government funding priorities to address societal issues such as equity, especially in communities that have traditionally been underserved.

The Strategic Plan highlights the importance of investing in research on developing trust in AI systems, which includes requirements for robustness, fairness, explainability, and security. We argue that the Strategic Plan should go further by explicitly committing to investments in research that examines how AI systems affect the equitable distribution of resources. Without such a commitment, there is a risk that investments in AI research will marginalize already disadvantaged communities. And even where a community suffers no direct harm, research support may concentrate on classes of problems that benefit already advantaged communities rather than on problems facing disadvantaged ones.

We make five recommendations for the Strategic Plan:  

First, we recommend that the Strategic Plan outline a mechanism for a broader impact review when funding AI research. The challenge is that the existing mechanism for ethics review of research projects, the Institutional Review Board (IRB), does not adequately identify downstream harms stemming from AI applications. On privacy issues, for example, an IRB review focuses on the data collection and management process. This focus is also reflected in the Strategic Plan’s two notions of privacy: (i) ensuring the privacy of data collected for creating models via strict access controls, and (ii) ensuring the privacy of the data and information used to create models via differential privacy when the models are shared publicly.
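
To make the second notion concrete, here is a minimal sketch of the Laplace mechanism, a standard building block of differential privacy, applied to a counting query. The dataset, the predicate, and the epsilon values are hypothetical, and real deployments would rely on vetted libraries rather than hand-rolled noise.

```python
import numpy as np

def dp_count(records, predicate, epsilon):
    """Answer "how many records satisfy the predicate?" with
    epsilon-differential privacy via the Laplace mechanism."""
    true_count = sum(1 for r in records if predicate(r))
    # A counting query has sensitivity 1: adding or removing one
    # person's record changes the count by at most 1, so Laplace
    # noise with scale 1/epsilon suffices.
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical records: ages of study participants.
ages = [23, 35, 41, 29, 52, 61, 37, 44]
print(dp_count(ages, lambda age: age >= 40, epsilon=0.5))
```

Note what the guarantee covers: the noise protects the individuals whose records were queried, but it says nothing about how a model built on such data is later used.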

But both of these approaches focus on the privacy of the people whose data has been collected to facilitate the research process, not on the people to whom research findings might be applied.

Take, for example, the potential use of face recognition to detect ethnic minorities. Even if the researchers who developed such techniques had obtained IRB approval for their research plan, secured the informed consent of participants, applied strict access controls to the data, and ensured that the model was differentially private, the resulting model could still be used without restriction for surveillance of entire populations. Institutional mechanisms for ethics review such as IRBs simply do not consider these downstream harms in their appraisal of research projects.

We therefore recommend that the Strategic Plan include, as a research priority, support for developing alternative institutional mechanisms to detect and mitigate the potentially negative downstream effects of AI systems.

Second, we recommend that the Strategic Plan include provisions for funding research that would help us understand the impact of AI systems on communities and how AI systems are used in practice. Such research can also provide a framework for deciding which research questions and AI applications are too harmful to pursue or fund.

We recognize that it may be challenging to determine what kind of impact AI research might have, since its findings can feed a broad range of potential applications. Indeed, many AI research findings are dual use: some applications promise exciting benefits, while others seem likely to cause harm. While it is worthwhile to weigh these costs and benefits, decisions about where to invest resources should also rest on distributional considerations: who is likely to bear the costs, and who will enjoy the benefits?

While there have been recent efforts to incorporate ethics review into the publishing processes of the AI research community, adding similar considerations to the Strategic Plan would help to highlight these concerns much earlier in the research process. Evaluating research proposals according to these broader impacts would help to ensure that ethical and societal considerations are incorporated from the beginning of a research project, instead of remaining an afterthought.

Third, our comments highlight the reproducibility crisis in fields adopting machine learning methods. We call on the government to support the creation of computational reproducibility infrastructure, as well as a reproducibility clearinghouse that maintains benchmark datasets for measuring progress in scientific research that uses AI and ML. We also suggest that the Strategic Plan borrow from the NIH’s practice of making government funding conditional on disclosing the research materials, such as code and data, necessary to replicate a study.
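
To illustrate, a minimal reproducibility check might confirm that a disclosed dataset is byte-for-byte the file the authors analyzed, and that an independent re-run recovers the reported headline metric. The manifest format and field names below are our own hypothetical convention, not an existing NIH or clearinghouse standard.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Checksum the disclosed dataset so reviewers can confirm it is
    the exact file the authors analyzed."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def check_reproduction(manifest_path: Path, recomputed_metric: float,
                       tolerance: float = 0.01) -> bool:
    """Compare an independent re-run against a study's manifest,
    which records the dataset checksum and the reported metric."""
    manifest = json.loads(manifest_path.read_text())
    data_intact = sha256_of(Path(manifest["dataset"])) == manifest["sha256"]
    metric_close = abs(recomputed_metric - manifest["reported_metric"]) <= tolerance
    return data_intact and metric_close
```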

Fourth, we focus attention on the industry phenomenon of using a veneer of AI to lend credibility to pseudoscience, which we call “AI snake oil.” We see evaluating validity as a core component of ethical and responsible AI research and development. The Strategic Plan could support such efforts by prioritizing funding to set standards for validating claims about the effectiveness of AI applications, and to make validation tools available to independent researchers.
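
One concrete shape such validation tooling could take is a routine baseline comparison: if a tool marketed as AI performs no better than a trivial rule, that is a red flag. The sketch below uses scikit-learn and synthetic data, with a logistic regression standing in for a hypothetical vendor model.

```python
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a vendor's evaluation data (hypothetical).
rng = np.random.default_rng(seed=0)
X = rng.normal(size=(500, 10))
y = (X[:, 0] + 0.1 * rng.normal(size=500) > 0).astype(int)

# Cross-validated accuracy of the claimed model versus a baseline
# that ignores the features entirely.
claimed = cross_val_score(LogisticRegression(), X, y, cv=5).mean()
trivial = cross_val_score(DummyClassifier(strategy="most_frequent"),
                          X, y, cv=5).mean()

print(f"claimed model accuracy: {claimed:.3f}")
print(f"trivial baseline:       {trivial:.3f}")
# A negligible gap would suggest the "AI" adds little beyond chance.
```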

Fifth, we document the need to address the phenomenon of “runaway datasets”: the practice of broadly releasing datasets used for AI applications without any mechanism of oversight or accountability for how that information may be used. Such datasets raise serious privacy concerns, and they may be used to support research contrary to the intent of the people who contributed to them. The Strategic Plan can play a pivotal role in mitigating these harms by establishing and supporting appropriate data stewardship models, including centralized data clearinghouses that regulate access to datasets.
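
As a sketch of the gatekeeping such a clearinghouse could perform, the toy policy below grants time-limited, auditable access only for purposes that contributors consented to. Every name, field, and rule here is hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class DatasetPolicy:
    """Hypothetical stewardship record kept by a data clearinghouse."""
    name: str
    permitted_purposes: frozenset  # uses the contributors consented to
    audit_log: list = field(default_factory=list)

    def request_access(self, requester: str, purpose: str, days: int = 90):
        """Grant expiring access for consented purposes; log everything."""
        granted = purpose in self.permitted_purposes
        self.audit_log.append((datetime.now(), requester, purpose, granted))
        if not granted:
            raise PermissionError(
                f"'{purpose}' is outside the consented uses of {self.name}")
        return {"dataset": self.name,
                "expires": datetime.now() + timedelta(days=days)}

# Contributors consented to medical research, not surveillance.
faces = DatasetPolicy("face-images-v1",
                      frozenset({"medical-imaging-research"}))
faces.request_access("university-lab-a", "medical-imaging-research")  # granted
faces.request_access("vendor-b", "population-surveillance")  # raises
```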