
Using an Old Model for New Questions on Influence Operations

Alicia Wanless, Kristen DeCaires Gall, and Jacob N. Shapiro
Freedom to Tinker: https://freedom-to-tinker.com/

Expanding the knowledge base around influence operations has proven challenging, despite known threats to elections, COVID-related misinformation circulating worldwide, and the recent tragic events at the U.S. Capitol fueled in part by political misinformation and conspiracy theories. Credible, replicable evidence from highly sensitive data can be difficult to obtain. The bridge between industry and academia remains riddled with red tape. Intentional and systemic obstructions continue to hinder research on a range of important questions about how influence operations spread, their effects, and the efficacy of countermeasures.

A key part of the challenge lies in the basic motivations of both the industry and academic sectors. Tech companies have little incentive to share sensitive data or allocate resources to an effort that does not end in a commercial product, and may even jeopardize their existing ones. As a result, cross-platform advances to manage the spread of influence operations have been limited, with the notable exception of successful counter-terrorism data sharing. Researchers who seek to build relationships with specific companies encounter well-documented obstacles in accessing and sharing information, and subtler ones in the time-consuming process of learning how to navigate internal politics. Companies also face difficulties recruiting in-house experts from academia, as many scholars worry about publication limitations and a lack of autonomy when moving to industry.

The combination of these factors leaves a gap in research on non-commercial issues, at least relative to the volume of consumer data tech companies ingest. And, unfortunately, studying influence in a purely academic setting presents all the challenges of normal research (inconsistent funding streams, limited access to quality data, and difficulty retaining motivated research staff), as well as the security and confidentiality issues that accompany any mass transfer of data.

We are left with a lack of high-quality, long-term research on influence operations. 

Fortunately, a way forward exists. The U.S. government long ago recognized that neither market nor academic incentives can motivate all the research large organizations need. Following World War II, it created a range of independent research institutions. Among them, the Federally Funded Research and Development Centers (FFRDCs) were created explicitly to “provide federal agencies with R&D capabilities that cannot be effectively met by the federal government or the private sector alone.” FFRDCs (IDA, MITRE, and RAND, for example) are non-profit organizations funded by Congress for longer periods of time (typically five years) to pursue specific, limited research agendas. They are prohibited from competing for other contracts, which enables for-profit firms to share sensitive data with them even outside the protections of the national security classification system, and they can invest in staffing choices and projects that span short government budget cycles. These organizations bridge the divide between university research centers and for-profit contractors, allowing them to fill critical analytical gaps for important research questions.

The FFRDC model is far from perfect. Like many government contractors, some FFRDCs have historically had cost-inefficiency and security issues. But by solving a range of execution challenges, they enable important, but not always market-driven, research on topics ranging from space exploration to renewable energy to cancer treatment.

A similar model, a multi-stakeholder research and development center (MRDC) funded by industry and civil society, could lay a foundation for collaboration on issues pertaining to misinformation and influence operations by accomplishing five essential tasks:

  • Facilitate funding for long-term projects.
  • Provide infrastructure for developing shared research agendas and a mechanism for executing studies.
  • Create conditions that help build trusted, long-term relationships between sectors.
  • Offer career opportunities for talented researchers wishing to do basic research with practical application.
  • Guard against inappropriate disclosures while enabling high-credibility studies with sensitive information that cannot be made public.

The MRDC model fills a very practical need for flexibility and speed on the front end of addressing immediate problems, such as understanding what role, if any, foreign nations played in the discussions that led up to January 6. Such an organization would provide a bridge for academics and practitioners to come together quickly and collaborate for a sustained period, months or years, on real-world operational issues. A university research project can take six months to a year to secure funding and become fully staffed. Furthermore, most universities, and even organizations like the Stanford Internet Observatory that are fully dedicated to these issues, cannot do “work for hire”: if there is no unique intellectual product or true research question at hand, their ability to work on a given problem is limited or non-existent. An established contract organization that clearly owns a topic, fully staffed with in-house experts, minimizes these hindrances.

Because an MRDC focused on influence operations does not fit neatly into existing organizational structures, its initial setup should be an iterative process. It should start with two or more tech companies joining with a cluster of academic organizations on a discrete set of deliverables, all with firm security agreements in place. Once the initial set of projects proves the model’s value, and plans for budgets and researcher time are solidified, the organization could be expanded. Internet platforms’ negative impact on society did not grow overnight, and we certainly do not expect the solution to either. And, tempting as it is to think the U.S. government could simply fund such an institution, it likely needs to remain independent of government funding in order to avoid collusion concerns from the international community.

Steps toward bridging the gap between academia and the social media firms have already been taken. Facebook’s recent provision of academic access to CrowdTangle, meant in part to provide increased transparency on influence operations and disinformation, is a good step, as is its data-sharing partnership with several universities to examine election-related content. Such efforts will enable some work currently stymied by limited data sharing, but they do not address the deeper incentive-related issues.

Establishing a long-term MRDC around the study of influence operations and misinformation is more crucial than ever. It is a logical way forward to address these questions at the scale they deserve.

CITP’s Summer Fellowship Program for Training Public Interest Technologists

In 2020, CITP launched the Public Interest Technology Summer Fellowship (PIT-SF) program, aimed at rising juniors and seniors interested in getting first-hand experience working on technology policy at the federal, state, and local level. The program is supported by the PIT-UN network and accepts students from member universities. We pay students a stipend and cover their reasonable travel costs. We are delighted to announce that applications are open for the second year of the program. This post shares the firsthand reflections of three students from the program’s inaugural cohort.

Who are we and where were our fellowships?

Manish Nagireddy: I’m a sophomore at Carnegie Mellon studying statistics and machine learning. I worked in the AI/ML division of the Data Science and Analytics group at the Consumer Financial Protection Bureau (CFPB).

Julia Meltzer: I’m a junior at Stanford, doing a major in Symbolic Systems (part linguistics, part computer science, part philosophy, and part psychology) and minoring in Ethics and Technology. I worked on the Policy Team for the NYC Mayor’s Office of the Chief Technology Officer (MoCTO).

Meena Balan: I’m a junior at Georgetown studying International Politics with a concentration in International Law, Ethics, and Institutions, and minors in Russian and Computer Science. Last summer I had the opportunity to work with the Office of Policy Planning (OPP) at the Federal Trade Commission (FTC). 

What made you apply for the PIT-SF fellowship? 

Meena: As a student of both the practical and the qualitative aspects of technology, I am strongly drawn to the PIT field because of the opportunity to combine my interests in law, policy, ethics, and technological governance and engage with the social and economic impacts of technology at a national scale. In addition to gaining unique real-world experience and working on cutting-edge issues in the field, I found the PIT-SF fellowship particularly compelling because of its emphasis on mentorship, both from peers and experts in the field, which I believed would help me to grapple more meaningfully with issues I had previously only encountered in a classroom environment. 

Julia: I have long been attracted to and inspired by the Public Interest Technology (PIT) sphere, which allows technologists, policymakers, activists, and experts in all fields to ensure that the technological era is just and that the incredible power tech offers is used for social good. As a student with interests in policy, programming, and social impact, I was thrilled to find the rare opportunity to make a difference, in an entry-level position, working on the problems I find most essential. The fellowship also offered the benefit of wisdom from the program’s leaders and guest speakers.

Manish: To me, PIT means, at face value, creating and using technology responsibly. Specifically, the term represents the mindset of always keeping social values and humanitarian ethics in mind when designing sophisticated technological systems. I applied to this fellowship because it offered a unique opportunity to combine my love of using technology for social good with insight into how government agencies deal with tech-related issues.

How did the PIT-SF fellowship influence you?

Julia: From CITP and the orientation for the fellowship, I learned about the wide range of policy issues central to regulating technology. The personal narratives that guest speakers and the program leaders shared provided assurance that there is no wrong way to join the PIT coalition and inspired me to follow the path that I feel drawn to instead of whatever may seem like the correct one.

At MoCTO, I experienced the full range of what it means to work on local (city-wide) PIT efforts. From watching the design team navigate website accessibility to tracking global COVID-19 technical solutions to advocating for new legislation, my summer as a fellow has compelled me to pursue a career in civil service at the very intersection MoCTO introduced me to. I’ve had the privilege to continue working for MoCTO, where I’ve begun to gain a deep and full understanding of the ways in which technology policy is written and passed into law. Thanks to the role models I found through MoCTO, I am now applying to law schools not only to become a lawyer, but to deepen my comprehension of PIT. I learned by watching my supervisor and the rest of our team that systematic and complete mastery of the technical logistics, the historical use, the social implications, and the legal context is essential for anyone working in the PIT sphere.

Meena: As a fellow working with the FTC, I worked on analyzing acquisitions by prominent technology companies. The process of acquisition analysis is one that combines both technical and qualitative skills, allowing me to uniquely leverage my multidisciplinary background to engage with the business structures, technological features, and post-acquisition implications of hundreds of companies. In addition to gaining a better understanding of investment and growth patterns in the tech sector, I developed a deeper understanding of the economic theories and laws underlying antitrust analysis through direct mentorship with experts in the field. At the culmination of my fellowship, my peers and I presented our findings to the OPP and received valuable feedback from senior leadership, which fueled my interest in the field of tech policy and guided me to follow cutting-edge trends in the applications of emerging technologies more closely. 

Through the course of the fellowship, CITP also offered incredible exposure to PIT niches outside of antitrust, empowering me to develop a greater understanding of both public and private sector perspectives and the broader issue landscape. During the bootcamp, fellows were invited to participate in meaningful discussions with industry leaders and senior experts across federal and local government, intelligence, law, and the technology sectors. This provided us with unique opportunities to understand the issues of privacy, equity and access, and algorithmic fairness not only through a regulatory lens, but also in terms of the technical, business, and ethical challenges that play a significant role in shaping PIT initiatives. Given the broad complexity of the PIT field and the evolving nature of professional exposure at the undergraduate level, the PIT-SF fellowship offered impressive and unparalleled real world experience that has contributed significantly to my pursuit of a career at the intersection of technology, law, and policy.

Manish: During my fellowship at the CFPB, I worked on fair-lending models, which introduced me to the field I wish to join full time: fairness in machine learning. Born of a need to create models that maintain equality with respect to various desirable features and metrics, fair-ml is an interdisciplinary topic that deals with both the algorithmic foundations and the real-world implications of fairness-aware machine learning systems.

My fellowship directly introduced me to this field, and by the end of my stint at the CFPB, I had compiled all of the knowledge I amassed through a literature deep-dive into a formal summary paper (linked here). Moreover, this fellowship gave me the necessary background for my current role leading a research team based in Carnegie Mellon’s Human-Computer Interaction Institute (HCII), where the focus is on how industry practitioners formulate and solve fairness-related tasks.

One of the best parts of this fellowship is that public interest technology is a broad enough field to allow for extremely diverse experiences with one common thread: relevance. Every fellowship dealt, in some capacity, with a timely and cutting-edge topic. Personally, because the field of fair-ml has only been rigorously studied within the past decade, I could easily find the most important papers to read and people to reach out to. The ability to find work that is both incredibly pertinent and genuinely interesting is an immediate consequence of my PIT-SF fellowship.

Conclusion: We plan to invite approximately 16 students to this year’s program, which will operate in a hybrid format. Like last year, we will begin with a virtual three-day policy bootcamp led by Mihir Kshirsagar and Tithi Chattopadhyay. The bootcamp will educate students about law and policy, and will feature leading experts in computer science and policy as guest speakers. After the bootcamp, fellows will travel to (or join virtually) the host government agencies in different cities that our program has matched them with, spending approximately eight weeks working with the agency. We will also hold weekly virtual clinic-style seminars to support the fellows during their internships. At the conclusion of the summer, we aim to bring the 2020 and 2021 PIT-SF fellows together for an in-person debriefing session in Princeton (subject to the latest health guidelines). CITP is committed to building a culturally diverse community, and we are interested in receiving applications from members of groups that have been historically underrepresented in this field. The deadline to apply is February 10, 2021, and the application is available here.

ES&S voting machine company sends threats

For over 15 years, election security experts and election integrity advocates have been communicating to their state and local election officials the dangers of touch-screen voting machines. The danger is simple: if fraudulent software is installed in the voting machine, it can steal votes in a way that a recount wouldn’t be able to detect or correct. That was true of the paperless touchscreens of the 2000s, and it’s still true of the ballot-marking devices (BMDs) and “all-in-one” machines such as the ES&S ExpressVote XL voting machine (see section 8 of this paper*). This analysis is based on the characteristics of the technology itself, and doesn’t require any conspiracy theories about who owns the voting-machine company.

In contrast, if an optical-scan voting machine is suspected of being hacked, a recount can assure that the election outcome reflects the will of the voters, because the recount examines the very sheets of paper that the voters marked with a pen. In late 2020, many states were glad they used optical-scan voting machines with paper ballots: the recounts could demonstrate conclusively that the election results were legitimate, regardless of what software might have been installed in the voting machines or who owned the voting-machine companies. In fact, the vast majority of states use optical-scan voting machines with hand-marked paper ballots, and in 2020 we saw clearly why that’s a good thing.

In November and December 2020, certain conspiracy theorists made unsupportable claims about the ownership of Dominion Voting Systems, which manufactured the voting machines used in Georgia. Dominion has sued for defamation.

Dominion is the manufacturer of voting machines used in many states. Its rival, Election Systems and Software (ES&S), has an even bigger share of the market.

Apparently, ES&S must think that amid all that confusion, the time is right to send threatening Cease & Desist letters to legitimate critics of their ExpressVote XL voting machine. Their lawyers sent this letter to the leaders of SMART Elections, a journalism-and-advocacy organization in New York State that has been communicating with the New York State Board of Elections, explaining to the Board why it’s a bad idea to use the ExpressVote XL in New York (or in any state).

ES&S’s lawyers claim that certain facts (which they call “accusations”) are “false, defamatory, and disparaging”, namely: that the “ExpressVote XL can add, delete, or change the votes on individual ballots”, that the ExpressVote XL will “deteriorate our security and our ability to have confidence in our elections,” and that it is a “bad voting machine.”

Well, let me explain it for you. The ExpressVote XL, if hacked, can add, delete, or change votes on individual ballots — and no voting machine is immune from hacking. That’s why optical-scan voting machines are the way to go, because they can’t change what’s printed on the ballot. And let me explain some more: The ExpressVote XL, if adopted, will deteriorate our security and our ability to have confidence in our elections, and indeed it is a bad voting machine. And expensive, too!

It’s been clearly explained in the peer-reviewed literature how touch-screen voting machines–even the ones like the XL that print out paper ballots–can (if hacked) alter votes; and how most voters won’t notice; and how even if some voters do notice, there’s no way to correct the election result. And it’s been explained why machines like the ExpressVote XL are particularly insecure–as I said, see section 8 of this paper*.

And it’s pretty clear that the folks at SMART Elections are aware of these scientific studies, and are basing their journalism and advocacy on good science.

I’ll summarize here what’s explained in the paper: how the ExpressVote XL, if hacked, can change votes. If the machine is hacked, the software can do whatever the hacker has programmed, but the hacker can’t change the hardware. The hardware includes a thermal printer that can make black marks (i.e., print text or barcodes or whatever) on the paper, but the hardware can’t erase marks. Therefore you might think the ExpressVote XL, even if hacked, couldn’t alter votes. But consider this: suppose there are 15 contests on the ballot; suppose the voter makes choices in 14 of them and chooses not to vote for State Senator. What the legitimate software does is print NO SELECTION MADE on the line for State Senator. But the hacked software could simply leave that line blank. Then, when the voter has reviewed the ballot (or not bothered to), the ballot card is pulled past the printhead into the ballot box, and the printhead (under control of the hacked software) can print in a vote for Candidate Smith. Few voters will be worried that the line is blank rather than filled in with NO SELECTION MADE.

You might think, “OK, the ExpressVote XL can fill in undervotes, that’s bad, but it can’t change votes.” But it can! Here is the mechanism: suppose the voter makes choices in all 15 contests, and chooses Jones for State Senator. The hacked software can print a ballot card with only 14 contests, leaving a blank space where the State Senator line would go. Then, after the voter reviews the ballot card behind the glass, the card moves past the printhead into the ballot box. At this point the hacked software can print the hacker’s choice (Smith) for State Senator. If most humans were really good at checking their printout line by line against what they marked on the touchscreen, this wouldn’t succeed, because the voter would notice the missing line; but voters are only human.
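To make the mechanism concrete, here is a minimal, hypothetical Python sketch of the attack logic described in the two paragraphs above. It is not real ExpressVote XL firmware; every function, candidate name, and data structure below is invented for illustration, and it only simulates the two print phases: what the voter sees during review, and what ends up on the card in the ballot box.

    # Hypothetical simulation of the attack described above; NOT real
    # voting-machine code. All names and structures are invented.

    def honest_print(selections, contests):
        """Legitimate firmware: print a line for every contest,
        marking skipped contests explicitly."""
        return [f"{c}: {selections.get(c, 'NO SELECTION MADE')}" for c in contests]

    def hacked_print(selections, contests, target, injected_choice):
        """Hacked firmware: omit the target contest while the card is
        behind glass, then print the attacker's choice after the card
        moves past the printhead into the ballot box."""
        # Phase 1: lines visible to the voter during review.
        review = [f"{c}: {selections[c]}" for c in contests
                  if c != target and c in selections]
        # Phase 2: the omitted line is filled in once the voter can no
        # longer see the card (the same trick fills in true undervotes).
        cast = review + [f"{target}: {injected_choice}"]
        return review, cast

    contests = ["Governor", "State Senator", "Mayor"]
    voter = {"Governor": "Lee", "State Senator": "Jones", "Mayor": "Garcia"}

    print("Honest card: ", honest_print(voter, contests))
    shown, cast = hacked_print(voter, contests, "State Senator", "Smith")
    print("Voter sees:  ", shown)   # the State Senator line is simply missing
    print("Card in box: ", cast)    # Jones has silently become Smith

The sketch is only meant to show why the defect is architectural: whatever software runs between the review step and the printhead’s final pass decides what the paper says, and a recount of that paper cannot reveal the substitution.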

More details and explanation are in the paper*.

* Ballot-Marking Devices Cannot Assure the Will of the Voters, by Andrew W. Appel, Richard A. DeMillo, and Philip B. Stark. Election Law Journal, vol. 19 no. 3, pp. 432-450, September 2020. Non-paywall version, differs in formatting and pagination.