Twenty years ago, social media companies started telling us: “Hey, use this free digital media
product!”
We individually used it, or didn’t. And then we all used it, because we had to. Just like the car.
The existence of the technology restricts human freedom and agency. The die has been cast:
social media has reshaped everything and to ban it today would itself be intolerably rapid
change.
Was this a good thing? This normative framing might call to mind specific harms, or specific
benefits, likely those that have been highlighted by the regulative apparatus of
society: journalism, science, the law, and the government.
But there are two parts to this question. What was this thing? And how would we know if it was good?
On November 17, I organized a conference (co-hosted by the Princeton Center for Information Technology Policy and the Princeton Center for Human Values) on “Morality in Tech.” It brought together scholars from psychology, political science, sociology, art, computer science, political theory, media theory, journalism, law, math and anthropology, ensuring a variety of methodological approaches. The only frameworks that were off-limits were those that fit neatly within the confines of mainstream debates: we would not talk about transhumanism or the existential risk from AI, and neither would we accept the premise that any new technology constitutes “progress” except insofar as it causes too many identifiable “harms.”
A six-hour interdisciplinary workshop is not the venue for solving problems of this scale. Instead, our goal was to bring our disparate backgrounds to the table and “unbundle” the frameworks that we each use to approach technology. We separated each of these “bundles” into three components.
The first component, Values, required us to engage with fundamental questions which are generally beyond the scope of conversations about technology. What are we trying to achieve? How are we trying to live? What, exactly, is the meaning of life?
This was a conversation that required the suspension of judgment, a willingness to engage with contradiction.
Over the past two centuries, there have been periods of radical social change wrought by rapid and wide-ranging technological innovation; each of them gave rise to political/intellectual/artistic movements that sought to grapple with these changes. Historicizing the present “techlash” can help us see the diversity of Values, Tools and Actions which have been applied to the issue of industrial and post-industrial technology. Each of these movements, however, had serious and sometimes disastrous shortcomings which we should seek to avoid.
(The toleration of this potted history provides an example of the forbearance necessary for our interdisciplinary endeavor.)
Consider the rise of Romanticism in various European nations in response to the ugly, polluting factories of the Industrial Revolution. The Tools used by the Romantics told them that if this was Progress, they wanted no part of it: rather than cost-benefit analysis or defense of human rights, they trusted their emotional feeling and aesthetic judgment. The Values they prioritized were Beauty (famously equated with Truth), authenticity and nature. The Actions they took in pursuit of these Values, then, weren’t exactly pragmatic: they spent their time in the cultivation of aesthetic sensibilities through poetry, painting and music, as well as in the experience of authentic emotion, sometimes enhanced with narcotics and violence.
In addition to being ineffective, the Romantic movement left a legacy of elitism, exemplified by its “romanticization” of agrarian poverty as a reservoir of authenticity. More pernicious still, this embrace of local culture reinforced the ascendant logic of nationalism, which would in turn produce the next technological crisis: WWI.
This story is well-known. The horrors of industrial war and the economic crises that followed shocked a European public that had put its faith in technological progress as an unalloyed good. Disillusioned with the world their elders had created, revolutionary avant-garde movements doubled down on technology as a tool to enhance their own power. With Values like speed, youth and willpower, these movements fed directly into fascism.
Enough time has passed that these first two development-reactions seem settled. The historical reference is still useful for de-naturalizing the present, for reminding us that there are many different Values, Tools and Actions that humans have used, and that the current lines of conflict are not inevitable. The third development-reaction we discussed, however, was sufficiently fresh that my potted history prompted heated discussion.
The blunt facts bear repeating. There was a period of roughly fifty years during which global population growth hovered around 2% annually, dramatically higher than ever before and equivalent to a population doubling time of about 35 years. Fears of runaway population growth causing catastrophe parallel present concerns about AI: both result from extrapolating an exponential growth curve.
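(For anyone who wants to check that equivalence, the doubling-time arithmetic is a one-line calculation: a quantity growing at a constant rate $r$ doubles after
\[
T = \frac{\ln 2}{\ln(1 + r)} = \frac{\ln 2}{\ln 1.02} \approx \frac{0.693}{0.0198} \approx 35 \text{ years.}
\])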
The movement which arose in response to the “Population Bomb” is less historiographically settled than Romanticism, but I find it useful to fold it under the label of Ecology. This movement had (has?) Values like balance and control, and used Tools like cybernetics and systems thinking to evaluate how to achieve these Values. The Actions they took included computational modeling, elite coordination (like the famous Club of Rome report “The Limits to Growth”), and international NGO development.
In many ways, the central concerns of Ecologists and their intellectual progeny have been vindicated: public attention to climate change has never been higher. But the failures of the Ecology movement as it manifested in the 1970s caused serious harm, and they serve as a lesson for us today. Many of the movement’s intellectual lights were too willing to trade off human freedom for their Values: specifically, the freedom of the poor, both in the Global South and in rich countries, to have children. From a purely strategic standpoint, the alarming predictions the Ecologists derived from their computer models turned out to be badly incorrect, which diminished their credibility moving forward.
Using these historical examples, our conversation expanded to describe the current situation, in which companies selling “Artificial Intelligence” are promising a brave new world. Eventually, we will produce a syllabus that will help contextualize these promises, giving future citizens and regulators a broader set of Values about how we want to live, Tools to evaluate how technology might fit into that vision, and Actions they can take to make this a reality.
We don’t want to live in a world where the only question about technology is the one proposed by the technology companies: “We invented this, take it or leave it.”
We’ll be publishing the first draft of the syllabus early next year. If you’re interested in teaching a course with these materials or just checking out the output of our workshop, please sign up for the CITP Newsletter. Ultimately, the Actions we’re interested in taking beyond the scope of a university course involve regulation: putting our Values into practice. We’ll be hosting a public conference on the topic this April, so please get in touch if you’d like to be involved.