The Senate Judiciary Committee's Subcommittee on Privacy, Technology, and the Law held a hearing titled "Oversight of AI: Insiders' Perspectives" on September 17, 2024, seeking to understand how and why the government can and should regulate the burgeoning AI industry. I attended the hearing and am writing to share my impressions here.
The witnesses' testimony was chock-full of analogies to help lay audiences conceptualize the issues at hand. They emphasized that the proverbial horse is not yet out of the barn: it is not too late to act and to have a significant impact on the direction of the industry and the technology.
The panel consisted of Helen Toner, a former member of OpenAI's nonprofit board; William Saunders, a former member of technical staff at OpenAI; David Evan Harris, senior policy advisor for the California Initiative for Technology and Democracy; and Margaret Mitchell, researcher and chief ethics scientist at Hugging Face and a former staff research scientist at Google AI.
Subcommittee chair Senator Blumenthal led the questioning; Senators Hawley, Durbin, Kennedy, Klobuchar, Padilla, and Blackburn also questioned the witnesses.
In social media's early days, Congress listened to the pleas of social media companies to let them grow unfettered by legislation; efforts to regulate came only after the harms became noticeable, and comprehensive federal action has yet to come to fruition. Referencing that experience, Blumenthal said that approach was too little, too late, and that large, profit-vested companies should not be left to self-regulate.
In their questioning, lawmakers raised concerns about a variety of issues:
- International democratic interference and election security
- Deepfakes and voice cloning schemes
- AGI's potential for harm, particularly around warfare and bioweapons
- Whistleblower protections
Harris observed that under the current regulatory regime, compliance with existing legislation feels voluntary, with tech companies failing to act on their public promises. Because requirements and testing are not yet standardized, it is difficult to address these issues within industry; rules and guidance should be clear and laid out in advance.
The witnesses expressed a need for effective oversight on myriad issues in the use of AI systems. Here I focus on three recommendations that came up repeatedly:
- Whistleblower protections. The witnesses explained that employees need clarity on whom they can talk to internally and externally (including the government and the media), when they should speak up, and the rights, protections, and awards they may be afforded depending on how they proceed. Mitchell suggested it would be helpful to give employees this information during orientation and onboarding.
My previous work has focused on whistleblower protections. These protections matter because, through confidentiality and anonymity guarantees, they allow employees to raise safety concerns and report potential violations externally without risking their jobs, safety, and livelihoods. Ideally, companies themselves would build a culture of openness in which employees can disclose potential problems, and would acknowledge its benefits.
When Congress passes whistleblower laws, it opens the door for rational, safe whistleblowing. Real oversight must address the concerns that the majority of employees have, including the need for avenues to report potential misconduct. When those avenues exist, regulators get good information: transparency via whistleblower protections gives them a window into questionable internal conduct at these companies that they would not otherwise have. That visibility is essential for regulators to develop effective strategies for addressing the industry.
I believe whistleblower protections are especially crucial when regulation of the industry lags, because concerning practices may not yet be illegal. In such a climate, it is unclear what whistleblower rights exist, and industry-specific protections that fill this gap can have a major impact in the interim. Focusing only on the bright line between legal and illegal conduct would defer many important issues.
- Research & Training. The witnesses specifically mention the need for proper funding for bodies like the US AI Safety Institute, recommending the implementation of a system to check in with agencies regarding how well AI integration is going, and whether they have proper funding, staff, and resources to address their needs. Toner also mentioned the importance of educating the government about AI and implementations.
Efforts to educate government employees are already underway. The AI Training Act, passed in 2022, requires the Office of Management and Budget to establish or otherwise provide an AI training program for the program management and logistics workforce of executive agencies, with some exceptions. The program is meant to ensure that this workforce knows the capabilities and risks associated with AI.
Pursuant to this Act, faculty at CITP are teaching the leadership and policy track in the 2024 AI Training Series for Government Employees, alongside Stanford HAI and GWU Law School. The series aims to educate the government workforce on the basics and current landscape of AI as it relates to acquisition (GWU), technology (Stanford), and leadership and policy (CITP); to provide all government employees access to academic AI leaders; and to encourage responsible use of AI across the government to improve operations.
I think further education and training will continue to be necessary as the integration of AI grows alongside the technology itself.
- Independent oversight and standardized reporting. Operationalize transparency requirements by mandating and standardizing the sharing of testing results before and after deployment. Mitchell suggested enacting this via documentation and disclosure; documentation would include artifacts corresponding to the four major phases of AI development:
- Data preparation
- Model training & additional system coding
- Evaluation & Analysis
- Deployment
I found Mitchell's cake analogy helpful for understanding the scope of the issue: we know the product, the cake, but we do not know what the ingredients are, how they are mixed together, who tastes them, or what judgment standards are applied before cakes are sold to the public. She explained that data analysis is not a norm in the AI industry and that methods must be invented to find problematic content. My key takeaway from the discussion is that regulation must prioritize requirements for third-party testing, both before and after deployment, and that the results of those tests must be shared. Further, the researchers who test the models must be properly vetted. Creating an independent oversight organization and mandating transparency requirements, as in Senator Blumenthal and Senator Hawley's proposed framework, would be important steps toward these goals.
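To make the documentation idea more concrete, here is a minimal sketch, in Python, of what a machine-readable documentation artifact spanning those four phases might look like. This is purely illustrative: the class and field names are my own assumptions, not a schema proposed by the witnesses or any standards body.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PhaseRecord:
    """Documentation artifact for one phase of AI development (hypothetical schema)."""
    phase: str                # e.g., "data preparation"
    summary: str              # what was done in this phase and how
    responsible_party: str    # who performed and signed off on the work
    test_results: List[str] = field(default_factory=list)  # links to shared reports

@dataclass
class SystemDisclosure:
    """A disclosure bundle a developer might be required to file with an overseer."""
    system_name: str
    records: List[PhaseRecord]

    def is_complete(self) -> bool:
        """Check that all four major phases are documented before release."""
        required = {
            "data preparation",
            "model training & additional system coding",
            "evaluation & analysis",
            "deployment",
        }
        return required.issubset({r.phase for r in self.records})
```

Under this sketch, an oversight body could reject any disclosure whose `is_complete()` check fails, turning the "document all four phases" recommendation into an enforceable filing requirement.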
Other suggestions from the panel for using oversight to ensure the responsible building of systems include:
- Creating a license to practice and an ethics code, then requiring AI engineers to obtain licenses and comply with industry standards
- Ensuring that safety teams at companies are getting the resources they are promised
- Setting standards for provenance labeling of synthetic content (see the sketch following this list)
- Requiring companies to have their products frequently self-disclose that they are AI. The disclosure must be accessible to people who interact with the content, and platforms are responsible for ensuring that content on their platforms is appropriately flagged to users.
- Establishing clear liability and responsibility on companies for the harm caused by their products. This will help ensure that AI developers and deployers are incentivized to take reasonable care when their products carry a risk of causing serious damage.
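As a purely hypothetical illustration of the provenance-labeling and self-disclosure suggestions above, the sketch below shows one way a platform might attach a machine-readable "this content is AI-generated" label. The schema and names are my own assumptions; real standards efforts such as C2PA define their own formats.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ProvenanceLabel:
    """Hypothetical provenance record for a piece of synthetic content."""
    content_id: str     # platform-assigned identifier for the content
    generator: str      # the AI system that produced the content
    ai_generated: bool  # the self-disclosure flag surfaced to users
    created_at: str     # ISO 8601 timestamp of generation

def label_content(content_id: str, generator: str) -> str:
    """Produce a JSON label a platform could store and show alongside the content."""
    label = ProvenanceLabel(
        content_id=content_id,
        generator=generator,
        ai_generated=True,
        created_at=datetime.now(timezone.utc).isoformat(),
    )
    return json.dumps(asdict(label))

# Example: a platform flags an AI-generated image before displaying it.
print(label_content("img-000123", "example-image-model"))
```

The design choice here is that the label travels with the content as structured metadata rather than as free text, so platforms can flag it to users automatically, in line with the witnesses' point that disclosure must be accessible to everyone who interacts with the content.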
Regulation Cannot Be Delayed
Toner's testimony challenged the argument that it is too early for any regulation because the science of how AI works and how to make it safe is too nascent. She argued that the AI industry needs rules of the road, which the witnesses compared to FDA oversight: clear standards under which products are recalled as soon as a potential harm or safety risk is detected. Harris drew an analogy to credit card regulation as an instance where regulation did not impede the flourishing of an industry.
The witnesses also argued that whatever these companies say about it being too early for regulation, the reality is that billions of dollars are being poured into building and deploying increasingly advanced AI systems, systems that affect hundreds of millions of people's lives even in the absence of scientific consensus about how they work or what will be built next. A wait-and-see approach to policy in this sphere, they believed, is therefore not an option.
Finally, the witnesses pushed back on the argument that the United States risks losing an innovation race to China if it starts to regulate AI systems. Toner explained that China is heavily regulating its own AI sector, is scrambling to keep up with the United States, and faces serious "macro headwinds" from economic problems and restricted access to semiconductors after U.S. export controls.