December 15, 2024

Using an Old Model for New Questions on Influence Operations

Alicia Wanless, Kristen DeCaires Gall, and Jacob N. Shapiro
Freedom to Tinker: https://freedom-to-tinker.com/

Expanding the knowledge base around influence operations has proven challenging, despite known threats to elections, COVID-related misinformation circulating worldwide, and recent tragic events at the U.S. Capitol fueled in part by political misinformation and conspiracy theories. Credible, replicable evidence from highly sensitive data can be difficult to obtain. The bridge between industry and academia remains riddled with red tape. Intentional and systemic obstructions continue to hinder research on a range of important questions about how influence operations spread, their effects, and the efficacy of countermeasures.

A key part of the challenge lies in the basic motivations of both the industry and academic sectors. Tech companies have little incentive to share sensitive data or allocate resources to an effort that does not end in a commercial product, and that may even jeopardize their existing ones. As a result, cross-platform advances in managing the spread of influence operations have been limited, with the notable exception of successful counter-terrorism data sharing. Researchers who seek to build relationships with specific companies encounter well-documented obstacles in accessing and sharing information, and subtler ones in the time-consuming process of learning to navigate internal politics. Companies also face difficulties recruiting in-house experts from academia, as many scholars worry about publication limitations and a lack of autonomy when moving to industry.

The combination of these factors leaves a gap in research on non-commercial issues, at least relative to the volume of consumer data tech companies ingest. And, unfortunately, studying influence operations in a purely academic setting presents all the challenges of normal research—inconsistent funding streams, limited access to quality data, and difficulty retaining motivated research staff—as well as the security and confidentiality issues that accompany any mass transfer of data.

We are left with a lack of high-quality, long-term research on influence operations. 

Fortunately, a way forward exists. The U.S. government long ago recognized that neither market nor academic incentives can motivate all the research large organizations need. Following World War II, it created a range of independent research institutions. Among them, the Federally Funded Research and Development Centers (FFRDCs) were created explicitly to “provide federal agencies with R&D capabilities that cannot be effectively met by the federal government or the private sector alone.” FFRDCs – IDA, MITRE, and RAND, for example – are non-profit organizations funded by Congress for longer periods of time (typically five years) to pursue specific, limited research agendas. They are prohibited from competing for other contracts, which enables for-profit firms to share sensitive data with them, even outside the protections of the national security classification system, and they can invest in staffing choices and projects that span short government budget cycles. These organizations bridge the divide between university research centers and for-profit contractors, allowing them to fill critical analytical gaps on important research questions.

The FFRDC model is far from perfect. Like many government contractors, some FFRDCs have historically had cost-inefficiency and security issues. But by solving a range of execution challenges, they enable important, though not always market-driven, research on topics ranging from space exploration to renewable energy to cancer treatment.

Adopting a similar model of a multi-stakeholder research and development center (MRDC) funded by industry and civil society could lay a foundation for collaboration on issues pertaining to misinformation and influence operations by accomplishing five essential tasks:

  • Facilitate funding for long-term projects.
  • Provide infrastructure for developing shared research agendas and a mechanism for executing studies.
  • Create conditions that help build trusted, long-term relationships between sectors.
  • Offer career opportunities for talented researchers wishing to do basic research with practical application.
  • Guard against inappropriate disclosures while enabling high-credibility studies with sensitive information that cannot be made public.

The MRDC model fills a very practical need for flexibility and speed on the front end of addressing immediate problems, such as understanding what role, if any, foreign nations played in the discussions that led up to January 6. Such an organization would provide a bridge for academics and practitioners to come together quickly and collaborate for a sustained period, months or years, on real-world operational issues. A research project at a university can take six months to a year to secure funding and become fully staffed. Furthermore, most universities, and even organizations like the Stanford Internet Observatory that are fully dedicated to these issues, cannot do “work for hire”: if there is no unique intellectual product or true research question at hand, their ability to work on a given problem is limited or non-existent. An established contract organization that clearly owns a topic, fully staffed with in-house experts, minimizes these hindrances.

Because an MRDC focused on influence operations does not fit neatly into existing organizational structures, its initial setup should be an iterative process. It should start with two or more tech companies joining with a cluster of academic organizations to work on a discrete set of deliverables, all with firm security agreements in place. Once the initial set of projects proves the model’s value, and plans for budgets and researcher time are solidified, the organization could be expanded. The negative impact of internet platforms on society did not grow overnight, and we certainly do not expect the solution to take shape overnight either. And, tempting as it is to think the U.S. government could simply fund such an institution, it likely needs to remain independent of government funding in order to avoid collusion concerns from the international community.

Steps toward bridging the gap between academia and the social media firms have already been taken. Facebook’s recent provision of academic access to CrowdTangle, meant in part to provide increased transparency on influence operations and disinformation, is a good step, as is its data-sharing partnership with several universities to study election-related content. Such efforts will enable some work currently stymied by data-sharing restrictions, but they do not address the deeper incentive-related issues.

Establishing a long-term MRDC for the study of influence operations and misinformation is more crucial than ever. It is a logical way forward to address these questions at the scale they deserve.