January 23, 2021

CITP’s Summer Fellowship Program for Training Public Interest Technologists

In 2020, CITP launched the Public Interest Technology Summer Fellowship (PIT-SF) program aimed at rising juniors and seniors interested in getting first-hand experience working on technology policy at the federal, state, and local levels. The program is supported by the PIT-UN network and accepts students from member universities. We pay students a stipend and cover their reasonable travel costs. We are delighted to announce that applications are open for the second year of the program. This post describes the firsthand reflections of three students from the program’s inaugural cohort.

Who are we and where were our fellowships?

Manish Nagireddy: I’m a sophomore at Carnegie Mellon studying statistics and machine learning. I worked in the AI/ML division of the Data Science and Analytics group at the Consumer Financial Protection Bureau (CFPB).

Julia Meltzer: I’m a junior at Stanford, doing a major in Symbolic Systems (part linguistics, part computer science, part philosophy, and part psychology) and minoring in Ethics and Technology. I worked on the Policy Team for the NYC Mayor’s Office of the Chief Technology Officer (MoCTO).

Meena Balan: I’m a junior at Georgetown studying International Politics with a concentration in International Law, Ethics, and Institutions, and minors in Russian and Computer Science. Last summer I had the opportunity to work with the Office of Policy Planning (OPP) at the Federal Trade Commission (FTC). 

What made you apply for the PIT-SF fellowship? 

Meena: As a student of both the practical and the qualitative aspects of technology, I am strongly drawn to the PIT field because of the opportunity to combine my interests in law, policy, ethics, and technological governance and engage with the social and economic impacts of technology at a national scale. In addition to gaining unique real-world experience and working on cutting-edge issues in the field, I found the PIT-SF fellowship particularly compelling because of its emphasis on mentorship, both from peers and experts in the field, which I believed would help me to grapple more meaningfully with issues I had previously only encountered in a classroom environment. 

Julia: I have long been attracted to and inspired by the Public Interest Technology (PIT) sphere, which allows technologists, policymakers, activists, and experts in all fields to ensure that the technological era is just and that the incredible power tech offers is used for social good. As a student with interests in policy, programming, and social impact, I was thrilled to find a rare opportunity to make a difference, in an entry-level position, working on the problems I find most essential. The fellowship also offered the benefit of wisdom from the program’s leaders and guest speakers.

Manish: PIT, to me, means creating and using technology responsibly. Specifically, the term represents the mindset of keeping social values and humanitarian ethics in mind when designing sophisticated technological systems. I applied to this fellowship because it offered a unique opportunity to pursue my love of technology for social good while gaining insight into how government agencies deal with tech-related issues.

How did the PIT-SF fellowship influence you?

Julia: From CITP and the orientation for the fellowship, I learned about the wide range of policy issues central to regulating technology. The personal narratives that guest speakers and the program leaders shared provided assurance that there is no wrong way to join the PIT coalition and inspired me to follow the path that I feel drawn to instead of whatever may seem like the correct one.

At MoCTO, I experienced the full range of what it means to work on local (city-wide) PIT efforts. From watching the design team navigate website accessibility, to tracking global COVID-19 technical solutions, to advocating for new legislation, my summer as a fellow compelled me to pursue a career in civil service at the same intersection that MoCTO introduced me to. I’ve had the privilege of continuing to work for MoCTO, where I’ve begun to gain a deep and full understanding of the ways in which technology policy is written and passed into law. Thanks to the role models I found through MoCTO, I am now applying to law schools not only to become a lawyer, but to deepen my understanding of PIT. Watching my supervisor and the rest of our team taught me that mastery of the technical logistics, the historical use, the social implications, and the legal context is essential for anyone working in the PIT sphere.

Meena: As a fellow working with the FTC, I analyzed acquisitions by prominent technology companies. Acquisition analysis combines technical and qualitative skills, allowing me to leverage my multidisciplinary background to engage with the business structures, technological features, and post-acquisition implications of hundreds of companies. In addition to gaining a better understanding of investment and growth patterns in the tech sector, I developed a deeper understanding of the economic theories and laws underlying antitrust analysis through direct mentorship from experts in the field. At the culmination of my fellowship, my peers and I presented our findings to the OPP and received valuable feedback from senior leadership, which fueled my interest in tech policy and prompted me to follow cutting-edge trends in the applications of emerging technologies more closely.

Over the course of the fellowship, CITP also offered incredible exposure to PIT niches outside of antitrust, empowering me to develop a greater understanding of both public and private sector perspectives and the broader issue landscape. During the bootcamp, fellows were invited to participate in meaningful discussions with industry leaders and senior experts across federal and local government, intelligence, law, and the technology sector. This provided us with unique opportunities to understand the issues of privacy, equity and access, and algorithmic fairness not only through a regulatory lens, but also in terms of the technical, business, and ethical challenges that play a significant role in shaping PIT initiatives. Given the breadth and complexity of the PIT field and the limited professional exposure most undergraduates have to it, the PIT-SF fellowship offered unparalleled real-world experience that has contributed significantly to my pursuit of a career at the intersection of technology, law, and policy.

Manish: During my fellowship at the CFPB, I worked on fair-lending models, which introduced me to the field I hope to join full time: fairness in machine learning. Born of the need to build models that maintain equality with respect to various desirable features and metrics, fair-ml is an interdisciplinary topic that deals with both the algorithmic foundations and the real-world implications of fairness-aware machine learning systems.
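
As a concrete, purely illustrative example of the kind of metric fair-ml studies, demographic parity compares a model’s approval rates across groups. The toy sketch below, which is not drawn from the CFPB’s actual models or data, computes that gap for a handful of made-up lending decisions:

```python
# Illustrative only: a toy demographic parity check for lending decisions.
# The group labels and approval outcomes are made up for this example.

def demographic_parity_gap(decisions):
    """Return the absolute gap in approval rates between groups.

    `decisions` is a list of (group, approved) pairs, where `approved`
    is True if the model approved the applicant.
    """
    rates = {}
    for group, approved in decisions:
        total, approvals = rates.get(group, (0, 0))
        rates[group] = (total + 1, approvals + int(approved))

    approval_rates = [approvals / total for total, approvals in rates.values()]
    return max(approval_rates) - min(approval_rates)


toy_decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]
print(demographic_parity_gap(toy_decisions))  # ~0.33 for this toy data
```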

My fellowship directly introduced me to this field and, by the end of my stint at the CFPB, I had compiled the knowledge I amassed through a literature deep-dive into a formal summary paper (linked here). Moreover, this fellowship gave me the necessary background for my current role leading a research team at Carnegie Mellon’s Human-Computer Interaction Institute (HCII), where the focus is on how industry practitioners formulate and solve fairness-related tasks.

One of the best parts of this fellowship is that public interest technology is a broad enough field to allow for extremely diverse experiences with one common thread: relevance. Every fellowship dealt, in some capacity, with a timely and cutting-edge topic. In my case, fair-ml has only been rigorously studied within the past decade, which made it easy to identify the most important papers to read and the most important people to reach out to. The ability to find work that is both highly pertinent and genuinely interesting is a direct consequence of my PIT-SF fellowship.

Conclusion: We plan to invite approximately 16 students to this year’s program, which will operate in a hybrid format. Like last year, we will begin with a virtual three-day policy bootcamp led by Mihir Kshirsagar and Tithi Chattopadhyay. The bootcamp will educate students about law and policy, and will feature leading experts in computer science and policy as guest speakers. After the bootcamp, fellows will travel to (or join virtually) the host government agencies in different cities that our program has matched them with, spending approximately eight weeks working with the agency. We will also hold weekly virtual clinic-style seminars to support the fellows during their internships. At the conclusion of the summer, we aim to bring the 2021 and 2020 PIT-SF fellows together for an in-person debriefing session in Princeton (subject to the latest health guidelines). CITP is committed to building a culturally diverse community, and we are interested in receiving applications from members of groups that have been historically underrepresented in this field. The deadline to apply is February 10, 2021, and the application is available here.

New Research on Privacy and Security Risks of Remote Learning Software

This post and the paper are jointly authored by Shaanan Cohney, Ross Teixeira, Anne Kohlbrenner, Arvind Narayanan, Mihir Kshirsagar, Yan Shvartzshnaider, and Madelyn Sanfilippo. It emerged from a case study at CITP’s tech policy clinic.

As universities rely on remote educational technology to facilitate the rapid shift to online learning, they expose themselves to new security risks and privacy violations. Our latest research paper, “Virtual Classrooms and Real Harms,” advances recommendations for universities and policymakers to protect the interests of students and educators.

The paper develops a threat model that describes the actors, incentives, and risks in online education. Our model is informed by our survey of 105 educators and 10 administrators who identified their expectations and concerns. We use the model to conduct a privacy and security analysis of 23 popular platforms using a combination of sociological analyses of privacy policies and 129 state laws (available here), alongside a technical assessment of platform software.

Our threat model diagrams typical remote learning data flows: an “appropriate” flow is one informed by established educational norms, while the flow marked “end-to-end encryption” represents data that is not ordinarily accessible to the platform.

In the physical classroom, educational norms and rules prevent surreptitious recording of the classroom and automated extraction of data. But when classroom interactions shift to a digital platform, not only does data collection become much easier, but the social cues that discourage privacy harms are weaker and participants are exposed to new security risks. Popular platforms, like Canvas, Piazza, and Slack, take advantage of this changed environment to act in ways that would be objectionable in the physical classroom, such as selling data about interactions to advertisers or other third parties. As a result, the established informational norms in the educational context are severely tested by remote learning software.

We analyze the privacy policies of 23 major platforms to find where those policies conflict with educational norms. For example, 41% of the policies permitted a platform to share data with advertisers, which conflicts with at least 21 state laws, while 23% allowed a platform to share location data. However, the privacy policies are not the only documents that shape platform practices. Universities use Data Protection Addenda (DPAs), negotiated as part of their institutional licenses, to supplement or even supplant the default privacy policy. We reviewed 50 DPAs from 45 universities and found that the addenda were able to significantly shift platforms’ data practices, including by imposing stricter limits on data retention and use.

We also discuss the limitations of current federal and state regulation to address the risks we identified. In particular, the current laws lack specific guidance for platforms and educational institutions to protect privacy and security and have limited penalties for noncompliance. More broadly, the existing legal framework is geared toward regulating specific information types and a small subset of actors, rather than specifying transmission principles for appropriate use that would be more durable as the technology evolves.

What can be done to better protect students and educators? We offer the following five recommendations:

  1. Educators should understand that there are significant differences between free (or individually licensed) versions of software and institutional versions. Universities need to inform educators about those differences and encourage them to use institutionally supported software.
  2. Universities should use their ability to negotiate DPAs and institute policies to make platforms modify their default practices that are in tension with institutional values.
  3. Crucially, universities should not spend all their resources on a complex vetting process before licensing software. That path leads to significant usability problems for end users without addressing the underlying security and privacy concerns. Instead, universities should recognize that significant user issues tend to surface only after educators and students have used the platforms, and they should create processes to collect those issues and have the software developers fix the problems quickly.
  4. Universities should establish clear principles for how software should respect the norms of the educational context and require developers to offer products that let them customize the software for that setting.
  5. Federal and state regulations can be improved by making platforms more accountable for compliance with legal requirements and by giving institutions a mandate to require baseline security practices, much as financial institutions must protect consumer information under the Federal Trade Commission’s Safeguards Rule.

The shift to virtual learning already demands many sacrifices from educators and students. As we integrate these new learning platforms into our educational systems, we should ensure they reflect established educational norms and do not require users to sacrifice usability, security, or privacy.

We thank the members of Remote Academia and the university administrators who participated in the study. Remote Academia is a global Slack-based community that gives faculty and other education professionals a space to share resources and techniques for remote learning. It was created by Anne, Ross, and Shaanan.

Improving Protections for Children’s Privacy Online

CITP’s Tech Policy Clinic submitted a Comment to the Federal Trade Commission in connection with its review of the COPPA Rule to protect children’s privacy online. Our Comment explains why it is important to update the COPPA Rule to keep it current with new privacy risks, especially as children spend increasing amounts of time online on a variety of connected devices.

What is the Children’s Online Privacy Protection Act (COPPA)?

As background, in 1998 Congress gave the FTC authority to issue rules governing how online commercial service providers collect, use, or disclose information about children under the age of 13. The FTC issued the first version of the Rule in 2000; it requires providers to place parents in control over what information is collected from their young children online. The Rule applies both to providers of services directed to children under 13 and to providers serving a general audience who have actual knowledge that they are collecting, using, or disclosing personal information from children under 13. The Rule was subsequently revised in 2013, after a period of public comment, to account for technological developments, including the pervasive use of mobile apps. In 2019, the FTC announced that it was revisiting the Rule in light of ongoing questions about its efficacy in a data-fueled online marketplace and solicited public comment on potential improvements.

Core Recommendations to Update the COPPA Rule

Our Comment makes three main points:

  • We encourage the FTC to develop rules that promote external scrutiny of provider practices by making each provider’s choices about how it complies with the Rule available in a transparent and machine-readable format.
  • We recommend that the FTC allow providers to rely on the exemption from notice and consent for information collected or tracked for “internal operations” only under extremely limited circumstances; otherwise, the exception risks swallowing the rule.
  • We offer some suggestions on how education technology providers should be responsive to parents and recommend that the FTC conduct further studies about how such technology is being used in practice. 

We elaborate on each point below.

Enabling Effective External Compliance Checks Through Transparency

One of the central challenges with the COPPA Rule today is that it is very difficult for external observers (parents, researchers, journalists, or advocacy groups) to understand how an online provider has decided to comply with the Rule. For example, it is not clear whether a site believes it is in compliance because it contends that none of its content is directed at children, or because it has implemented mechanisms that seek appropriate consent before gathering information about users. Making a provider’s compliance choices transparent will enable meaningful external scrutiny of its practices and hold providers to account.

Under the COPPA Rule, providers are responsible for determining whether a service is child directed by looking to a variety of factors. If the service is directed at children, then the provider must obtain verifiable parental consent before collecting information about users. If the audience is of mixed age, then the provider must ensure that it does not collect information about users under the age of 13 without parental consent.

The determination of whether a service is child directed, as the FTC explains, includes factors such as “its subject matter, visual content, use of animated characters or child-oriented activities and incentives, music or other audio content, age of models, presence of child celebrities or celebrities who appeal to children, language or other characteristics of the Web site or online service, as well as whether advertising promoting or appearing on the Web site or online service is directed to children . . . [and] competent and reliable empirical evidence regarding audience composition, and evidence regarding the intended audience.” If the service is child directed and children under the age of 13 are its primary audience, then it is “primarily child directed.” Services that are child directed but do not target children as the primary audience are “child directed, but mixed audience” services under the COPPA Rule.

If a mixed audience service seeks to collect information about users, it can implement an age gate to ensure that it does not collect data about underage users. An age gate is a mechanism that asks users to provide their age or date of birth in an age-neutral way.
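
As a rough sketch of how such a gate might work (a hypothetical illustration; neither the FTC nor our Comment prescribes a particular implementation), the form collects a date of birth without revealing the cutoff, and the service proceeds with data collection only if the computed age is at least 13:

```python
# Hypothetical sketch of an age-neutral age gate: the prompt does not reveal
# the cutoff age, and the service declines to collect data from users under 13.
from datetime import date
from typing import Optional

COPPA_CUTOFF_AGE = 13  # COPPA covers children under 13

def age_on(birth_date: date, today: date) -> int:
    """Return the user's age in whole years as of `today`."""
    had_birthday = (today.month, today.day) >= (birth_date.month, birth_date.day)
    return today.year - birth_date.year - (0 if had_birthday else 1)

def may_collect_personal_info(birth_date: date, today: Optional[date] = None) -> bool:
    """Return True only if the user is at least 13 years old."""
    today = today or date.today()
    return age_on(birth_date, today) >= COPPA_CUTOFF_AGE

# Example: a user born on 2010-06-01 would be under 13 on 2021-01-23.
print(may_collect_personal_info(date(2010, 6, 1), today=date(2021, 1, 23)))  # False
```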

Our principal recommendation is that the COPPA Rule should be revised to explicitly facilitate external scrutiny by requiring providers to make their design choices open to external review. Specifically, we suggest that the FTC require sites or services to disclose, in a machine-readable format, whether they consider themselves, in whole or in part, “directed to children” under COPPA. That would allow academic researchers (or parents) to examine what a provider is actually doing to protect children’s privacy.
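
To illustrate, the short sketch below shows one way a service might publish such a self-classification as JSON (for example, at a well-known URL); the field names and values are hypothetical illustrations, not a format proposed in the Comment or specified by the FTC:

```python
# Illustrative sketch only: a possible machine-readable COPPA self-classification.
# The field names are hypothetical; no such format has been specified.
import json

disclosure = {
    "service": "example.com",
    # Hypothetical values: "primarily_child_directed", "mixed_audience",
    # or "not_child_directed"
    "coppa_classification": "mixed_audience",
    "uses_age_gate": True,
    "age_gate_description_url": "https://example.com/coppa/age-gate",
    "verifiable_parental_consent_method": "credit_card_verification",
    "last_reviewed": "2021-01-23",
}

print(json.dumps(disclosure, indent=2))
```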

We also recommend that the FTC establish a requirement that, if a website or online service is using an age gate as part of its determination that it is not child directed, it must publicly post a description of the operation of the age gate and what steps it took to validate that children under 13 cannot circumvent the age gate. 

In addition, drawing on our work on online dark patterns, we suggest that the FTC examine the verifiable parental consent mechanisms used by providers to ensure that parents are being given the opportunity to make fully informed and free choices about their child’s privacy. 

Finally, we suggest some ways that platforms such as iOS or Android can be enlisted by the FTC to play a more effective role in screening users and verifying ages.

Restrict Providers from Relying on the “Internal Operations” Exception

Another significant issue with current practices is that providers rely on an exception to the requirement to provide parental notice and obtain consent before collecting personal information when they use persistent identifiers for “internal operations.” The 2013 revisions to the Rule introduced this exception, but limited it to the narrow set of circumstances necessary to deliver the service. It appears many providers now use the exception for a wide variety of purposes that go well beyond what is strictly necessary to deliver the service. In particular, users have no external way to verify whether certain persistent identifiers, such as cookies, are being used for impermissible purposes. Therefore, our Comment urges the FTC to require providers to be transparent about how they rely on the “internal operations” exception when using persistent identifiers, and to limit the circumstances in which providers may invoke it.

Give Parents Control Over Information Collected by Educational Technology Service Providers

Finally, our Comment addresses the FTC’s query about whether a specific exception to parental consent is warranted for the growing market of providers of educational technology services to children (and their parents) in the classroom and at home. We recommend that the FTC study the use of educational technology in the field before considering such an exception. In particular, we explain that any rule should cover the following issues:

  1. Parents should be told, in an accessible manner, what data educational technology providers collect about their children, how that data is used, who has access to the data, and how long it is retained. Parents should also have the right to request that data about their children be deleted.
  2. School administrators should be given guidance on how to make informed decisions about selecting educational technology providers, develop policies that preserve student privacy, and train educators to implement those policies.
  3. The rule should clarify how school administrators and educational technology providers are accountable to parents for how data about their children are collected, used, and maintained.
  4. The FTC should clearly define what is meant by “educational purposes” in the classroom when considering any exceptions to parental consent.

* The Comment was principally drafted by Jonathan Mayer and Mihir Kshirsagar, along with Marshini Chetty, Edward W. Felten, Arunesh Mathur, Arvind Narayanan, Victor Ongkowijaya, Matthew J. Salganik, Madelyn Sanfilippo, and Ari Ezra Waldman.