December 3, 2024

An Introduction to My Project: Algorithmic Amplification and Society

This article was originally published on the Knight Institute website at Columbia University.

The distribution of online speech today is almost wholly algorithm-mediated. To talk about speech, then, we have to talk about algorithms. In computer science, the algorithms driving social media are called recommendation systems, and they are the secret sauce behind Facebook and YouTube, with TikTok more recently showing the power of an almost purely algorithm-driven platform.
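To make that concrete, here is a minimal, purely illustrative sketch of the core idea behind engagement-based recommendation: score each candidate post by predicted engagement and rank the feed by that score. The signals and weights below are hypothetical, not any platform's actual formula.

```python
# Illustrative sketch of engagement-based ranking (hypothetical signals and weights;
# not any platform's actual formula).
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    p_like: float   # predicted probability the user likes the post
    p_share: float  # predicted probability the user shares it
    p_dwell: float  # predicted probability of a long dwell time

def engagement_score(post: Post) -> float:
    # A weighted blend of predicted engagement signals (the weights are made up).
    return 1.0 * post.p_like + 3.0 * post.p_share + 0.5 * post.p_dwell

def rank_feed(candidates: list[Post]) -> list[Post]:
    # The feed is simply the candidates sorted by predicted engagement.
    return sorted(candidates, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("measured-explainer", p_like=0.20, p_share=0.02, p_dwell=0.30),
    Post("outrage-bait", p_like=0.15, p_share=0.25, p_dwell=0.40),
])
print([p.post_id for p in feed])  # the post with higher predicted engagement ranks first
```

The design choice that matters in this sketch is the objective: the feed optimizes predicted engagement, not any judgment of quality or accuracy.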

Relatively few technologists participate in the debates about the societal and legal implications of these algorithms. As a computer scientist, I'm excited about the opportunity to help fill this gap by collaborating with the Knight First Amendment Institute at Columbia University as a visiting senior research scientist (I'm on sabbatical leave from Princeton this academic year). Over the course of the year, I'll lead a major project at the Knight Institute focusing on algorithmic amplification.

This is a new topic for me, but it sits at the intersection of many that I've previously worked on. My broad area is the societal impact of AI (I'm wrapping up a book on machine learning and fairness, and writing one about AI's limits). I've done technical work to understand the social implications of recommender systems. And finally, I've done extensive research on platform accountability, including privacy, misleading content, and ads.

Much of my writing will be about algorithmic amplification: roughly, the fact that algorithms increase the reach of some speech while suppressing that of other speech. The term amplification is caught up in a definitional thicket. It's tempting to define amplification with respect to some imagined neutral, but there is no neutral, because today's speech platforms couldn't exist in a recognizable form without algorithms at their core. Having previously worked on privacy and fairness, two terms that notoriously resist a consensus definition, I don't see this as a problem. There are many possible definitions of amplification, and the most suitable one will vary depending on the exact question being asked.
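As one illustration of how such a definition might be operationalized, researchers sometimes measure amplification as the ratio of the reach a piece of content gets under algorithmic ranking to its reach under some chosen baseline, such as a reverse-chronological feed. The sketch below assumes impression counts are observable; the key point is that the resulting number depends entirely on which baseline one chooses, so it is a definition, not a neutral measurement.

```python
# Illustrative sketch: amplification as the ratio of impressions under algorithmic
# ranking to impressions under a chosen baseline (e.g., a reverse-chronological feed).
# The baseline is itself a modeling choice, not a neutral ground truth.

def amplification_ratio(algorithmic_impressions: int, baseline_impressions: int) -> float:
    if baseline_impressions <= 0:
        raise ValueError("baseline impressions must be positive")
    return algorithmic_impressions / baseline_impressions

# Hypothetical numbers: 12,000 impressions with ranking vs. 4,000 under the baseline.
print(amplification_ratio(12_000, 4_000))  # 3.0, i.e., 3x amplification under this definition
```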

It’s important to talk about amplification and to explore its variations. Much of the debate over online speech, particularly the problem of mis/disinformation, reduces the question of platform harms to a false binary: what should be taken down and what should be left up. However, the logic of engagement optimization rewards the production of content that may not be well-aligned with societal benefit, even if it’s not harmful enough to be taken down. This manifests differently in different areas. Instagram and TikTok have changed the nature of travel, with hordes of tourists in some cases trampling on historic or religious sites to make videos in the hopes of going viral. In science, facts or figures in papers can be selectively quoted out of context in service of a particular narrative.

Speech platforms are complex systems whose behavior emerges from feedback loops involving platform design and user behavior. As such, our techniques for studying them are in their infancy, and are further hampered by the limited researcher access that platforms provide. I hope both to advocate for more access and transparency and to push back against oversimplified narratives that have sometimes emerged from the research literature.

My output, in collaboration with the Knight Institute and others at Columbia, will take three forms: an essay series, a set of videos and interactives to illustrate technical concepts, and a major symposium in spring 2023. An announcement about the symposium is coming shortly.