January 25, 2021

CITP’s Summer Fellowship Program for Training Public Interest Technologists

In 2020, CITP launched the Public Interest Technology Summer Fellowship (PIT-SF) program, aimed at rising juniors and seniors interested in getting first-hand experience working on technology policy at the federal, state, and local levels. The program is supported by the PIT-UN network and accepts students from member universities. We pay students a stipend and cover their reasonable travel costs. We are delighted to announce that applications are open for the second year of the program. This post shares the firsthand reflections of three students from the program’s inaugural cohort.

Who are we and where were our fellowships?

Manish Nagireddy: I’m a sophomore at Carnegie Mellon studying statistics and machine learning. I worked in the AI/ML division of the Data Science and Analytics group at the Consumer Financial Protection Bureau (CFPB).

Julia Meltzer: I’m a junior at Stanford, doing a major in Symbolic Systems (part linguistics, part computer science, part philosophy, and part psychology) and minoring in Ethics and Technology. I worked on the Policy Team for the NYC Mayor’s Office of the Chief Technology Officer (MoCTO).

Meena Balan: I’m a junior at Georgetown studying International Politics with a concentration in International Law, Ethics, and Institutions, and minors in Russian and Computer Science. Last summer I had the opportunity to work with the Office of Policy Planning (OPP) at the Federal Trade Commission (FTC). 

What made you apply for the PIT-SF fellowship? 

Meena: As a student of both the practical and the qualitative aspects of technology, I am strongly drawn to the PIT field because of the opportunity to combine my interests in law, policy, ethics, and technological governance and to engage with the social and economic impacts of technology at a national scale. In addition to gaining unique real-world experience and working on cutting-edge issues in the field, I found the PIT-SF fellowship particularly compelling because of its emphasis on mentorship, from both peers and experts in the field, which I believed would help me grapple more meaningfully with issues I had previously encountered only in a classroom environment.

Julia: I have long been attracted to and inspired by the Public Interest Technology (PIT) sphere, which allows technologists, policymakers, activists, and experts in all fields to ensure that the technological era is just and that the incredible power tech offers is used for social good. As a student with interests in policy, programming, and social impact, I was thrilled to find the rare opportunity to make a difference, in an entry-level position, working on the problems I find most essential. The fellowship also offered the benefit of wisdom from the program’s leaders and guest speakers.

Manish: PIT, to me, means creating and using technology responsibly. Specifically, the term represents the mindset of always keeping social values and humanitarian ethics in mind when designing sophisticated technological systems. I applied to this fellowship because it offered a unique opportunity to combine my love of technology for social good with insight into how government agencies deal with tech-related issues.

How did the PIT-SF fellowship influence you?

Julia: From CITP and the orientation for the fellowship, I learned about the wide range of policy issues central to regulating technology. The personal narratives that guest speakers and the program leaders shared provided assurance that there is no wrong way to join the PIT coalition and inspired me to follow the path that I feel drawn to instead of whatever may seem like the correct one.

At MoCTO, I experienced the full range of what it means to work on local (city-wide) PIT efforts. From watching the design team navigate website accessibility to tracking global COVID-19 technical solutions to advocating for new legislation, my summer as a fellow compelled me to pursue a career in civil service at the same intersection that MoCTO first opened to me. I’ve had the privilege to continue working for MoCTO, where I’ve begun to gain a deep and full understanding of the ways in which technology policy is written and passed into law. Thanks to the role models I found through MoCTO, I am now applying to law schools not only to become a lawyer, but to deepen my comprehension of PIT. I learned by watching my supervisor and the rest of our team that systematic mastery of the technical logistics, the historical use, the social implications, and the legal context is essential for anyone working in the PIT sphere.

Meena: As a fellow working with the FTC, I analyzed acquisitions by prominent technology companies. Acquisition analysis combines technical and qualitative skills, allowing me to leverage my multidisciplinary background to engage with the business structures, technological features, and post-acquisition implications of hundreds of companies. In addition to gaining a better understanding of investment and growth patterns in the tech sector, I developed a deeper understanding of the economic theories and laws underlying antitrust analysis through direct mentorship from experts in the field. At the culmination of my fellowship, my peers and I presented our findings to the OPP and received valuable feedback from senior leadership, which fueled my interest in the field of tech policy and guided me to follow cutting-edge trends in the applications of emerging technologies more closely.

Through the course of the fellowship, CITP also offered incredible exposure to PIT niches outside of antitrust, empowering me to develop a greater understanding of both public and private sector perspectives and the broader issue landscape. During the bootcamp, fellows were invited to participate in meaningful discussions with industry leaders and senior experts across federal and local government, intelligence, law, and the technology sector. This provided us with unique opportunities to understand the issues of privacy, equity and access, and algorithmic fairness not only through a regulatory lens, but also in terms of the technical, business, and ethical challenges that play a significant role in shaping PIT initiatives. Given the broad complexity of the PIT field and the limited professional exposure available at the undergraduate level, the PIT-SF fellowship offered unparalleled real-world experience that has contributed significantly to my pursuit of a career at the intersection of technology, law, and policy.

Manish: During my fellowship at the CFPB, I worked on fair-lending models, which introduced me to the field I hope to join full time: fairness in machine learning. Born of a need to create models that maintain equality with respect to various desirable features and metrics, fair-ml is an interdisciplinary topic that deals with both the algorithmic foundations and the real-world implications of fairness-aware machine learning systems.

My fellowship introduced me directly to this field and, by the end of my stint at the CFPB, I had compiled all of the knowledge I amassed through a literature deep-dive into a formal summary paper (linked here). Moreover, this fellowship gave me the necessary background for my current role leading a research team based in Carnegie Mellon’s Human-Computer Interaction Institute (HCII), where the focus is on how industry practitioners formulate and solve fairness-related tasks.

One of the best parts of this fellowship is that public interest technology is a broad enough field to allow for extremely diverse experiences with one common thread: relevance. Every fellowship dealt, in some capacity, with a timely and cutting-edge topic. In my case, fair-ml has only been rigorously studied within the past decade, which made it easy to find the most important papers to read and the most important people to reach out to. The ability to find work that is both incredibly pertinent and genuinely interesting is a direct consequence of my PIT-SF fellowship.

Conclusion: We plan to invite approximately 16 students to this year’s program, which will operate in a hybrid format. Like last year, we will begin with a virtual three-day policy bootcamp led by Mihir Kshirsagar and Tithi Chattopadhyay. The bootcamp will educate students about law and policy, and will feature leading experts in computer science and policy as guest speakers. After the bootcamp, fellows will travel to (or join virtually) the host government agencies in different cities that our program has matched them with, spending approximately eight weeks working with the agency. We will also hold weekly virtual clinic-style seminars to support the fellows during their internships. At the conclusion of the summer, we aim to bring the 2021 and 2020 PIT-SF fellows together for an in-person debriefing session in Princeton (subject to the latest health guidelines). CITP is committed to building a culturally diverse community, and we are interested in receiving applications from members of groups that have been historically underrepresented in this field. The deadline to apply is February 10, 2021, and the application is available here.

ESS voting machine company sends threats

For over 15 years, election security experts and election integrity advocates have been communicating to their state and local election officials the dangers of touch-screen voting machines. The danger is simple: if fraudulent software is installed in the voting machine, it can steal votes in a way that a recount wouldn’t be able to detect or correct. That was true of the paperless touchscreens of the 2000s, and it’s still true of the ballot-marking devices (BMDs) and “all-in-one” machines such as the ES&S ExpressVote XL voting machine (see section 8 of this paper*). This analysis is based on the characteristics of the technology itself, and doesn’t require any conspiracy theories about who owns the voting-machine company.

In contrast, if an optical-scan voting machine is suspected of having been hacked, a recount can assure that the election outcome reflects the will of the voters, because the recount examines the very sheets of paper that the voters marked with a pen. In late 2020, many states were glad they used optical-scan voting machines with paper ballots: the recounts could demonstrate conclusively that the election results were legitimate, regardless of what software might have been installed in the voting machines or who owned the voting-machine companies. In fact, the vast majority of states use optical-scan voting machines with hand-marked paper ballots, and in 2020 we saw clearly why that’s a good thing.

In November and December 2020, certain conspiracy theorists made unsupportable claims about the ownership of Dominion Voting Systems, which manufactured the voting machines used in Georgia. Dominion has sued for defamation.

Dominion is the manufacturer of voting machines used in many states. Its rival, Election Systems and Software (ES&S), has an even bigger share of the market.

Apparently, ES&S thinks that amid all that confusion, the time is right to send threatening Cease & Desist letters to legitimate critics of its ExpressVote XL voting machine. Its lawyers sent this letter to the leaders of SMART Elections, a journalism-and-advocacy organization in New York State that has been communicating with the New York State Board of Elections, explaining why it’s a bad idea to use the ExpressVote XL in New York (or in any state).

ES&S’s lawyers claim that certain facts (which they call “accusations”) are “false, defamatory, and disparaging”, namely: that the “ExpressVote XL can add, delete, or change the votes on individual ballots”, that the ExpressVote XL will “deteriorate our security and our ability to have confidence in our elections,” and that it is a “bad voting machine.”

Well, let me explain it for you. The ExpressVote XL, if hacked, can add, delete, or change votes on individual ballots — and no voting machine is immune from hacking. That’s why optical-scan voting machines are the way to go: they can’t change what the voter marked on the ballot with a pen. And let me explain some more: The ExpressVote XL, if adopted, will deteriorate our security and our ability to have confidence in our elections, and indeed it is a bad voting machine. And expensive, too!

It’s been clearly explained in the peer-reviewed literature how touch-screen voting machines–even the ones like the XL that print out paper ballots–can (if hacked) alter votes; and how most voters won’t notice; and how even if some voters do notice, there’s no way to correct the election result. And it’s been explained why machines like the ExpressVote XL are particularly insecure–as I said, see section 8 of this paper*.

And it’s pretty clear that the folks at SMART Elections are aware of these scientific studies, and are basing their journalism and advocacy on good science.

I’ll summarize here what’s explained in the paper: how the ExpressVote XL, if hacked, can change votes. If the machine is hacked, the software can do whatever the hacker has programmed, but the hacker can’t change the hardware. The hardware includes a thermal printer that can make black marks (i.e., print text or barcodes or whatever) on the paper, but the hardware can’t erase marks. Therefore you might think the ExpressVote XL, even if hacked, couldn’t alter votes. But consider this: suppose there are 15 contests on the ballot, and the voter makes choices in 14 of them, choosing not to vote for State Senator. The legitimate software prints NO SELECTION MADE in the line for State Senator. But the hacked software could simply leave that line blank–then, when the voter has reviewed the ballot (or not bothered to), the ballot card is pulled past the printhead into the ballot box, and the printhead (under control of the hacked software) can print in a vote for Candidate Smith. Few voters will be worried that the line is blank rather than filled in with NO SELECTION MADE.

You might think, “OK, the ExpressVote XL can fill in undervotes, that’s bad, but it can’t change votes.” But it can! Here is the mechanism: Suppose the voter makes choices in all 15 contests, and chooses Jones for State Senator. The hacked software can print a ballot card showing only 14 of the contests, leaving the State Senator line blank. Then, after the voter reviews the ballot card behind glass, the card moves past the printhead into the ballot box. At this time the hacked software can print the hacker’s choice (Smith) for State Senator. If most humans were really good at checking their printout line-by-line against what they marked on the touchscreen, this wouldn’t succeed, because the voter would notice the missing line; but voters are only human.
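
To make the mechanism concrete, here is a minimal Python sketch of the attack logic. It is purely illustrative — the function and candidate names are hypothetical, not anything from ES&S’s actual software — and it models the one hardware fact that matters: the printhead can add marks to the card after the voter’s review, but can never erase them.

```python
# Purely illustrative sketch (hypothetical names, not ES&S code) of the
# attack described above. The one hardware constraint modeled here: the
# printhead can ADD marks after the voter's review, but can never erase.

CONTESTS = [f"Contest {i}" for i in range(1, 15)] + ["State Senator"]
TARGET = "State Senator"  # the contest the hacker wants to steal

def render_for_review(selections, hacked):
    """Produce the ballot-card lines the voter sees behind the glass."""
    card = {}
    for contest in CONTESTS:
        if hacked and contest == TARGET:
            card[contest] = ""                   # hacked: leave the line blank
        elif contest in selections:
            card[contest] = selections[contest]  # the voter's actual choice
        else:
            card[contest] = "NO SELECTION MADE"  # legitimate undervote marking
    return card

def deposit_in_ballot_box(card, hacked):
    """After review, the card passes the printhead again on its way into
    the box; hacked software can still print on any line it left blank."""
    if hacked and card[TARGET] == "":
        card[TARGET] = "Smith"                   # printed where no one can see
    return card

# Scenario 1: the voter skips State Senator (undervote filling).
undervote = {c: "voter's pick" for c in CONTESTS if c != TARGET}
# Scenario 2: the voter chooses Jones for State Senator (outright vote theft).
full_vote = {**undervote, TARGET: "Jones"}

for selections in (undervote, full_vote):
    card = render_for_review(selections, hacked=True)
    card = deposit_in_ballot_box(card, hacked=True)
    print(card[TARGET])                          # "Smith" both times
```

In both scenarios, the card that lands in the ballot box records a vote for Smith, and the only evidence visible during the voter’s review was a blank line.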

More details and explanation are in the paper*.

* Ballot-Marking Devices Cannot Assure the Will of the Voters, by Andrew W. Appel, Richard A. DeMillo, and Philip B. Stark. Election Law Journal, vol. 19 no. 3, pp. 432-450, September 2020. Non-paywall version, differs in formatting and pagination.

New Research on Privacy and Security Risks of Remote Learning Software

This post and the paper are jointly authored by Shaanan Cohney, Ross Teixeira, Anne Kohlbrenner, Arvind Narayanan, Mihir Kshirsagar, Yan Shvartzshnaider, and Madelyn Sanfilippo. It emerged from a case study at CITP’s tech policy clinic.

As universities rely on remote educational technology to facilitate the rapid shift to online learning, they expose themselves to new security risks and privacy violations. Our latest research paper, “Virtual Classrooms and Real Harms,” advances recommendations for universities and policymakers to protect the interests of students and educators.

The paper develops a threat model that describes the actors, incentives, and risks in online education. Our model is informed by our survey of 105 educators and 10 administrators who identified their expectations and concerns. We use the model to conduct a privacy and security analysis of 23 popular platforms using a combination of sociological analyses of privacy policies and 129 state laws (available here), alongside a technical assessment of platform software.

[Figure: Our threat model diagrams typical remote learning data flows. An “appropriate” flow is informed by established educational norms; the flow marked “end-to-end encryption” represents data that is not ordinarily accessible to the platform.]

In the physical classroom, there are educational norms and rules that prevent surreptitious recording of the classroom and automated extraction of data. But when classroom interactions shift to a digital platform, not only does data collection become much easier, but the social cues that discourage privacy harms are weaker, and participants are exposed to new security risks. Popular platforms, like Canvas, Piazza, and Slack, take advantage of this changed environment to act in ways that would be objectionable in the physical classroom—such as selling data about interactions to advertisers or other third parties. As a result, the established informational norms of the educational context are severely tested by remote learning software.

We analyze the privacy policies of 23 major platforms to find where those policies conflict with educational norms. For example, 41% of the policies permitted a platform to share data with advertisers, which conflicts with at least 21 state laws, while 23% allowed a platform to share location data. However, the privacy policies are not the only documents that shape platform practices. Universities use Data Protection Addenda (DPAs) for the institutional licenses that they negotiate with the platforms to supplement or even supplant the default privacy policy. We reviewed 50 DPAs from 45 universities, finding that the addenda led platforms to significantly shift their data practices, including adopting stricter limits on data retention and use.

We also discuss the limitations of current federal and state regulation to address the risks we identified. In particular, the current laws lack specific guidance for platforms and educational institutions to protect privacy and security and have limited penalties for noncompliance. More broadly, the existing legal framework is geared toward regulating specific information types and a small subset of actors, rather than specifying transmission principles for appropriate use that would be more durable as the technology evolves.

What can be done to better protect students and educators? We offer the following five recommendations:

  1. Educators should understand that there are significant differences between free (or individually licensed) versions of software and institutional versions. Universities need to work on informing educators about those differences and encouraging them to use institutionally supported software.
  2. Universities should use their ability to negotiate DPAs and institute policies to make platforms modify their default practices that are in tension with institutional values.
  3. Crucially, universities should not spend all their resources on a complex vetting process before licensing software. That path leads to significant usability problems for end users, without addressing the security and privacy concerns. Instead, universities should recognize that significant user issues tend to surface only after educators and students have used the platforms and create processes to collect those issues and have the software developers rapidly fix the problems.
  4. Universities should establish clear principles for how software should respect the norms of the educational context and require developers to offer products that let them customize the software for that setting.
  5. Federal and state regulations can be improved by making platforms more accountable for compliance with legal requirements, and giving institutions a mandate to require baseline security practices, much like financial institutions have to protect consumer information under the Federal Trade Commission’s Safeguards Rule.

The shift to virtual learning already requires many sacrifices from educators and students. As we integrate these new learning platforms into our educational systems, we should ensure they reflect established educational norms and do not require users to sacrifice usability, security, or privacy.

We thank the members of Remote Academia and the university administrators who participated in the study. Remote Academia is a global Slack-based community that gives faculty and other education professionals a space to share resources and techniques for remote learning. It was created by Anne, Ross, and Shaanan.