October 6, 2022

Choosing Between Content Moderation Interventions

How can we design remedies for content “violations” online?

Speaking today at CITP is Eric Goldman (@ericgoldman), a professor of law and co-director of the High Tech Law Institute at Santa Clara University School of Law. Before he became a full-time academic in 2002, Eric practiced Internet law for eight years in Silicon Valley. His research and teaching focus on Internet, IP and advertising law topics, and he blogs on these topics at the Technology & Marketing Law Blog.

Eric reminds us that content moderation questions are front page stories every week. Lawmakers and tech companies are wondering how to create a world where everyone can have their say, where people have a chance to hear them, and where people are protected from harm.

Decisions about content moderation depend on a set of questions, says Eric:

“What rules govern online content?” “Who creates those rules?” “Who adjudicates rule violations?” Eric is most interested in a final question: “What consequences are imposed for rule violations?”

So what should we do once a content violation has been observed? The traditional view is binary: delete the content or account, or keep them. For example, under the Digital Millennium Copyright Act, platforms are required to “remove or disable access to” allegedly infringing material; the statute allows no option less than removing the material from visibility. The DMCA also specifies two other remedies: terminating “repeat infringers” and issuing subpoenas to identify/unmask alleged infringers. Overall, however, the primary intervention is to remove things, and there is no lesser action.

Next, Eric tells us about civil society principles that adopt a similar idea of removal as the primary remedy. For example, the Manila Principles on Intermediary Liability assume that removal is the one available intervention, but that it should be necessary, proportional, and adopt “the least restrictive technical means.” Similarly, the Santa Clara Principles assume that removal is the one available option.

Eric reminds us that there are many remedies between removal and keeping content. Why should we pay attention to them? With a wider range of options, we can (a) avoid collateral damage from overbroad remedies and (b) develop a broader remedy toolkit to match the needs of different communities. With a wider palette of options, we would also need principles for choosing between those remedies. Eric wants to be able to suggest options that regulators or platforms have at their disposal when making policy decisions.

To illustrate the value of being able to differentiate between remedies, Eric talks about communities that have rich sets of rules with a range of consequences other than full approval or removal, such as churches, fraternities, and sports leagues.

Eric then offers us a taxonomy of remedies, drawn from examples in use online: (a) content restrictions, (b) account restrictions, (c) visibility reductions, (d) financial levers, and (e) other.
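Eric's taxonomy is conceptual, but it maps naturally onto a data structure. Here is a minimal sketch (not Eric's code; the category names are his, while the example remedies in the comments are illustrative guesses about what each category could include):

```python
from enum import Enum, auto

class RemedyCategory(Enum):
    """Eric Goldman's five categories of content moderation remedies."""
    CONTENT_RESTRICTION = auto()   # e.g., remove a post or add a warning label
    ACCOUNT_RESTRICTION = auto()   # e.g., suspend, rate-limit, or ban an account
    VISIBILITY_REDUCTION = auto()  # e.g., downrank, hide from search or feeds
    FINANCIAL_LEVER = auto()       # e.g., demonetize content or withhold payouts
    OTHER = auto()                 # e.g., warnings, educational interstitials

# A platform could then record which remedy was applied to each violation,
# rather than forcing every case into a remove/keep binary.
print(list(RemedyCategory))
```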

Eric asks: once we have listed remedies, how could we possibly choose among them? Eric talks about different theories for choosing, and he doesn’t think those models are useful for this conversation. Furthermore, conversations about government-imposed remedies are different from conversations about remedies for internet content violations.

Unlike internet content policies, says Eric, government remedies:

  • are determined by elected officials
  • are funded by taxes
  • are enforced through police power in cases of non-compliance
  • include remedies available only to the government (like jail or death)
  • are subject to constitutional limits

Finally, Eric shares some early thoughts about how to choose among possible remedies:

  • Remedy selection manifests a service’s normative priorities, which differ from service to service
  • Possible questions to ask when choosing among remedies:
    • How bad is the rule violation?
    • How confident is the service that the rule was actually violated?
    • How open is the community?
    • How will the remedy affect other community members?
    • How should the service balance behavior conformance against user engagement?
  • Site design can prevent violations
    • Educate and socialize contributors (for example)
  • Services with only binary remedies aren’t well-positioned to solve problems, and maybe other actors are in a better position
  • Typically, private remedies are better than judicially imposed remedies, but at a cost to due process
  • Remedies should be necessary & proportionate
  • Remedies should empower users to choose for themselves what to do

OpenPrecincts: Can Citizen Science Improve Gerrymandering Reform?

How can the American public understand gerrymandering and collect data that could lead to fairer, more representative voting districts across the US?

Speaking today at CITP are Ben Williams and Hannah Wheelen of the Princeton Gerrymandering Project, part of a team with Sam Wang, William Adler, Steve Birnbaum, Rick Ober, and James Turk. Ben is the lead coordinator for the Princeton Gerrymandering Project’s research and organizational partnerships. Hannah, who is also speaking, coordinates the collection of voting precinct boundary information.

What’s Gerrymandering and Why Does it Matter?

Ben opens by explaining what gerrymandering is and why it matters. Reapportionment is the process by which congressional districts are allocated to the states after each decennial census. The process of redrawing those lines is called redistricting. When redistricting happens, politicians sometimes engage in gerrymandering, the practice of redrawing the lines to benefit a particular party, and this behavior is common to all parties.

Who has the power to redraw federal district lines? Depending on state law, redistricting is handled by different bodies:

  • independent commissions, which make the decisions independently of the politicians affected by them
  • advisory commissions who advise a legislature but have no decision-making power
  • politician or political appointees
  • state legislatures

Ben tells us that gerrymandering has been part of US democracy ever since the First Congress. He tells us about Patrick Henry, governor of Virginia, who redrew the lines to try to favor James Monroe over James Madison. The term came into use in the 19th century, and it has remained common since then.

Why Do People Care About Gerrymandering And What Can We Do About It?

Ben tells us about the Tea Party Wave in 2010, when Republicans announced in the Wall Street Journal a systematic plan, called REDMAP, to redraw districts and establish a Republican majority in the US for a decade. Democrats have also done similar things on a smaller scale. Since then, the designer of the REDMAP plan has become an advocate for reform, says Ben.

How do we solve gerrymandering if the point is that politicians use it to establish their power and are unlikely to give it up? Ben describes three structures:

  • Create independent commissions to draw the lines. Ballot initiatives in MI, CO, UT, and MO and state legislative action (VA) have put commissions in place.
  • Require governors to approve the plan, and give the governor the capacity to refer district lines to courts (WI, MD)
  • State supreme courts (PA, NC?)

These structures have been achieved in some states through a variety of means, including litigation and political campaigns. Ben also hopes that if citizens learn to recognize gerrymandering, they can spot it and organize to respond as needed.

Decisions and controversies about gerrymandering need reliable evidence, especially at times when different sides bring their own experts to a conversation. Ben describes the projects that have been done so far, summarizing the recent paper, “An Antidote for Gobbledygook: Organizing the Judge’s Partisan Gerrymandering Toolkit into a Two-Part Framework.” He also mentions the Metric Geometry and Gerrymandering Group at Tufts and MIT and work by Procaccia and Pegden at Carnegie Mellon.
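None of these tests is defined in the talk writeup, but one widely discussed metric from this literature is the efficiency gap, which compares the two parties' "wasted" votes across a district plan. Here is a minimal sketch of the standard formula (an illustration only, not code from any of the projects mentioned):

```python
def efficiency_gap(district_results):
    """Efficiency gap = (wasted_A - wasted_B) / total votes cast.

    district_results: list of (votes_a, votes_b) tuples, one per district.
    A vote is "wasted" if it was cast for the losing party, or cast for
    the winning party beyond the bare majority needed to win the district.
    """
    wasted_a = wasted_b = total = 0
    for votes_a, votes_b in district_results:
        district_total = votes_a + votes_b
        needed = district_total // 2 + 1  # bare majority to win
        if votes_a > votes_b:
            wasted_a += votes_a - needed  # winner's surplus votes
            wasted_b += votes_b           # all of the loser's votes
        else:
            wasted_b += votes_b - needed
            wasted_a += votes_a
        total += district_total
    return (wasted_a - wasted_b) / total

# A plan where party A wins three narrow seats and party B one packed seat:
plan = [(55, 45), (55, 45), (55, 45), (20, 80)]
print(f"efficiency gap: {efficiency_gap(plan):+.2%}")  # the sign shows who benefits
```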

Citizen Science Solutions to the Bad Data Problem in Redistricting Accountability

These many tools have opened new capacities for citizens to have an informed voice in redistricting conversations. Unfortunately, all of these projects rely on precinct-level data: the geography of voting precincts and the vote counts within them. Hannah talks to us about the challenge of contacting thousands of counties for precinct-level voting data. In many cases, national datasets of voter behavior are simply inaccurate: when you check the paper records held by local offices, you find that the recorded boundaries are often wrong. Worse, errors are so common that analyses built on these datasets could easily produce mistaken outcomes. With too many errors for researchers to untangle on their own, how can these data tools be useful?
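To make the data problem concrete, here is a hedged sketch of the kind of consistency check involved, with hypothetical file and column names (real precinct returns vary in format by state and county):

```python
import pandas as pd

# Hypothetical layouts: per-precinct returns and certified county totals.
precincts = pd.read_csv("precinct_returns.csv")   # columns: county, precinct, votes
county_totals = pd.read_csv("county_totals.csv")  # columns: county, certified_votes

# Roll precinct-level returns up to the county level...
rollup = precincts.groupby("county")["votes"].sum().rename("precinct_sum")

# ...and flag counties where they disagree with the certified totals,
# the kind of mismatch volunteers would then chase down in paper records.
check = county_totals.set_index("county").join(rollup)
check["mismatch"] = check["certified_votes"] != check["precinct_sum"]
print(check[check["mismatch"]])
```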

Might local citizens be able to contribute to a high quality national dataset about voting precincts, and then use that data to hold politicians accountable? Hannah tells us about OpenPrecincts, a citizen science project by the Princeton Gerrymandering Project to organize the public to create accurate datasets about voter records. Hannah tells us about the many grassroots organizations that they are hoping to empower to collect data for their entire state.

The Trust Architecture of Blockchain: Kevin Werbach at CITP

In 2008, bitcoin inventor Satoshi Nakamoto argued that bitcoin was “a system for electronic transactions without relying on trust.”

That’s not true, according to today’s CITP speaker Kevin Werbach (@kwerb), a professor of Legal Studies and Business Ethics at the Wharton School at UPenn. Kevin is the author of a new book with MIT Press, The Blockchain and the New Architecture of Trust.

A world-renowned expert on emerging technology, Kevin examines business and policy implications of developments such as broadband, big data, gamification, and blockchain. Kevin served on the Obama Administration’s Presidential Transition Team, founded the Supernova Group (a technology conference and consulting firm) and helped develop the U.S. approach to internet policy during the Clinton Administration.

Blockchain does actually rely on trust, says Kevin. He tells us the story of the cryptocurrency exchange QuadrigaCX, which claimed that millions of dollars in cryptocurrency were lost when its CEO passed away. While the whole story was more complex, Kevin says, it reveals how much bitcoin transactions rely on many kinds of trust.

Rather than removing the need for trust, blockchain offers a new architecture of trust compared to previous models. Peer-to-peer trust is based on personal relationships. Leviathan trust, described by Hobbes, is a social contract with the state, which then has the power to enforce private agreements between people; the power of the state makes us more trusting in private relationships, provided you trust the state and the legal system works. Intermediary trust involves a central entity that manages transactions between people.

Blockchain is a new kind of trust, says Kevin. With blockchain trust, you can trust the ledger without (so it seems) trusting any actor to validate it. For this to work, transactions need to be very hard to change without central control – if anyone had the power to make changes, you would have to trust them.
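To make "very hard to change" concrete, here is a minimal hash-chain sketch. It illustrates the general tamper-evidence idea only; bitcoin's actual data structures, mining, and consensus rules are far more involved:

```python
import hashlib

def block_hash(prev_hash, transactions):
    """Each block commits to its predecessor's hash and its own contents."""
    payload = prev_hash + "|" + "|".join(transactions)
    return hashlib.sha256(payload.encode()).hexdigest()

# Build a tiny three-block chain.
chain = []
prev = "0" * 64  # genesis
for txs in [["alice->bob:5"], ["bob->carol:2"], ["carol->dave:1"]]:
    prev = block_hash(prev, txs)
    chain.append({"txs": txs, "hash": prev})

# Tampering with an early block changes its hash, which then fails to match
# what every later block committed to, so anyone can detect the edit.
chain[0]["txs"] = ["alice->bob:500"]
recomputed = block_hash("0" * 64, chain[0]["txs"])
print(recomputed == chain[0]["hash"])  # False: the ledger is tamper-evident
```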

Why would anyone value the blockchain? Blockchain minimizes the need for certain kinds of trust: it removes single points of failure, reduces risks of monopoly, and reduces friction from intermediation. Blockchain also expands trust by minimizing reconciliation, automating execution, and increasing the auditability of records.

What could possibly go wrong? Even if the blockchain ledger is auditable and trustworthy, the transaction record isn’t the whole system. Kevin points out that 80% of all bitcoin users rely on centralized key storage. He also cites figures suggesting that 20-80% of all Initial Coin Offerings were fraudulent.

Kevin tells us about “Vlad’s conundrum”: there’s a direct conflict between the design of the blockchain system and any regulatory model. The blockchain doesn’t know the difference between transactions, and there’s no entity that can say “no, that’s not okay.” Kevin tells us about the use of the blockchain for money laundering and financing terrorism. He also tells us about the challenge of moderating child pornography data that has been distributed across the blockchain, exposing every bitcoin node to legal risk.

None of these risks are as simple as they seem. Legal enforcement is carried out by humans who often consider intent. Simply possessing digital bits that represent child pornography data will not doom bitcoin. Furthermore, systems are less decentralized or anonymous than they appear. Regulations about parts of the system at the edges and endpoints of the blockchain can promote trust and innovation. Regulators have often been able to pull systems apart, find the involved parties, and hold actors accountable.

Kevin argues that designers of blockchain systems have to manage a three-way trade-off among trust, freedom of action, and convenience. Any designer of a system will have to make hard choices about the tradeoffs among these factors.

Citing Vili Lehdonvirta’s blockchain paradox, Kevin tells us several stories about ways that centralized governance processes managed serious problems and fraud in blockchain systems, problems that purely decentralized governance could not have resolved. Kevin also describes technical mechanisms for governance: voting systems, special kinds of contracts, arbitration schemes, and dispute resolution processes.

Overall, Kevin tells us that blockchain governance comes back to trust– which shapes how we act with confidence in circumstances of uncertainty and vulnerability.

Do Mobile News Alerts Undermine Media’s Role in Democracy? Madelyn Sanfilippo at CITP

Why do different people sometimes get different articles about the same event, sometimes from the same news provider? What might that mean for democracy?

Speaking at CITP today is Dr. Madelyn Rose Sanfilippo, a postdoctoral research associate here at CITP. Madelyn empirically studies the governance of sociotechnical systems, as well as outcomes, inequality, and consequences within these systems, using mixed-methods research designs.

Today, Madelyn tells us about a large scale project with Yafit Lev-Aretz to examine how push notifications and personalized distribution and consumption of news might influence readers and democracy. The project is funded by the Tow Center for Digital Journalism at Columbia University and the Knight Foundation.

Why Do Push Notifications Matter for Democracy?

Americans’ trust in media has been diverging in recent years, even as society worries about the risks to democracy from echo chambers. Madelyn also tells us about changes in how Americans get their news.

Push notifications are one of those changes: news organizations send alerts to people’s computers and mobile phones about news they think is important. And we get a lot of them. In 2017, Tow Center researcher Pete Brown found that people get almost one push notification per minute on their phones, interrupting us with news.

In 2017, 85% of Americans were getting news via their mobile devices, and while it’s not clear how much of that came from push notifications, mobile phones tend to come with news apps that have push notifications enabled by default.

When Madelyn and Yafit started to analyze push notifications, they noticed something fascinating: the same publisher often pushes different headlines to different platforms. They also found that news publishers use less objective, more subjective and emotional language in those notifications.

Madelyn and Yafit especially wanted to know if media outlets covered breaking news differently based on political affiliation of their readers. Comparing notifications of disasters, gun violence, and terrorism, they found differences in the number of push notifications published by publishers with higher and lower affiliation. They also found differences in the machine-coded subjectivity and objectivity of how these publishers covered those stories.

Figure: Composite subjectivity of different sources (higher is more subjective)
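The writeup doesn't say which tool produced these subjectivity scores. One common off-the-shelf option for this kind of machine coding is TextBlob, whose sentiment analyzer scores text from 0.0 (objective) to 1.0 (subjective); a minimal sketch with invented headlines:

```python
from textblob import TextBlob  # pip install textblob

# Invented example notifications, one drier and one more emotional.
notifications = [
    "Officials confirm three dead after storm hits coastal town.",
    "Horrifying scenes as monster storm devastates helpless town!",
]

for text in notifications:
    score = TextBlob(text).sentiment.subjectivity  # 0.0 objective .. 1.0 subjective
    print(f"{score:.2f}  {text}")
```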

Do Push Notifications Create Political Filter Bubbles?

Finally, Madelyn and Yafit wanted to know if the personalization of push notifications shaped what people might be aware of. First, Madelyn explains to us that personalization takes multiple forms:

  • Curation: sometimes which articles we see is curated by personalized algorithms (like Google News)
  • Content: sometimes the content itself is personalized, so two people see very different text even though they’re reading the “same” article

Together, they found that location-based personalization is common. Madelyn tells us about three different notifications that NBC News sent to people the morning after the Democratic primary. Not only did national audiences get different notifications, but different cities received notes that mentioned Democratic and Republican candidates differently. Aside from the midterms, Madelyn and her colleagues found that sports news is often location-personalized.
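As a rough illustration of how location-keyed variants might be served (the headline copy and metro names below are invented; the talk doesn't describe NBC's actual system):

```python
# Hypothetical notification variants keyed by metro area, with a national default.
VARIANTS = {
    "philadelphia": "Polls close at 8pm: live Pennsylvania primary results",
    "austin": "Texas primary: follow tonight's Senate race results live",
    None: "Primary night: live results from races across the country",
}

def pick_notification(user_metro):
    """Serve the local variant if one exists, else the national default."""
    return VARIANTS.get(user_metro, VARIANTS[None])

print(pick_notification("austin"))   # local variant
print(pick_notification("toledo"))   # no local variant -> national default
```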

Behavioral Personalization

Madelyn tells us that many news publishers also personalize news articles based on information about their readers, including their reading behavior and surveys. They found that some news publishers personalize messages based on what they consider to be a person’s reading level. They also found evidence that publishers tailor news based on personal information that readers never provided to the publisher.

Governing News Personalization

How can we ensure that news publishers are serving democracy in the decisions they make and the knowledge they contribute to society? At many publishers, decisions about the structure of news personalization are made by the business side of the organization.

Madelyn tells us about future research she hopes to do. She’s looking at the means available to news readers to manage these notifications as well as policy avenues for governing news personalization.

Madelyn also thanks her funders for supporting this collaboration with Yafit Lev-Aretz: the Knight Foundation and the Tow Center for Digital Journalism.

Bridging Tech-Military AI Divides in an Era of Tech Ethics: Sharif Calfee at CITP

In a time when U.S. tech employees are organizing against corporate-military collaborations on AI, how can the ethics and incentives of military, corporate, and academic research be more closely aligned on AI and lethal autonomous weapons?

Speaking today at CITP was Captain Sharif Calfee, a U.S. Naval Officer who serves as a surface warfare officer. He is a graduate of the U.S. Naval Academy and U.S. Naval Postgraduate School and a current MPP student at the Woodrow Wilson School.

Afloat, Sharif most recently served as the commanding officer of USS McCAMPBELL (DDG 85), an Aegis guided missile destroyer. Ashore, Sharif was most recently selected for the Federal Executive Fellowship program and served as the U.S. Navy fellow to the Center for Strategic & Budgetary Assessments (CSBA), a non-partisan, national security policy analysis think-tank in Washington, D.C.

Sharif spoke to CITP today with some of his own views (not speaking for the U.S. government) about how research and defense can more closely collaborate on AI.

Over the last two years, Sharif has been working on ways for the Navy to accelerate AI and adopt commercial systems to get more unmanned systems into the fleet. Toward this goal, he recently interviewed 160 people at 50 organizations. His talk today is based on that research.

Sharif next tells us about a rift between the U.S. government and companies/academia in AI. This rift is a symptom, he tells us, of a growing “civil-military divide” in the US. In previous generations, big tech companies worked closely with the U.S. military, and a majority of elected representatives in Congress had prior military experience. That’s no longer true: there is now a bifurcation between the experiences of Americans who serve in the military and those who haven’t. This lack of familiarity, he says, complicates moments when companies and academics discuss the potential of working with and for the U.S. military.

Next, Sharif says that conversations about tech ethics in the technology industry are creating a conflict that makes it difficult for the U.S. military to work with tech companies. He tells us about Project Maven, a project that Google and the Department of Defense worked on together to analyze drone footage using AI. Its purpose was to reduce casualties among civilians who are not battlefield combatants. This project, which wasn’t secret, burst into public awareness after a New York Times article and a letter from over three thousand employees. Google declined to renew the DOD contract and updated its motto.

Figure: U.S. Predator drone (via Wikimedia Commons)

On the heels of the Project Maven decision, Google also faced criticism for working with the Chinese government to provide services in China in ways that enabled certain kinds of censorship. Suddenly, Google found itself answering questions about why it was collaborating with China on AI and not with the U.S. military.

How do we resolve this impasse in collaboration? First, Sharif outlines the reasons companies hesitate to work with the DOD:

  • The defense acquisition process is hard for small, nimble companies to engage in
  • Defense contracts are too slow, too expensive, too bureaucratic, and not profitable
  • Companies aren’t necessarily interested in the same types of R&D products as the DOD wants
  • National security partnerships with gov’t might affect opportunities in other international markets.
  • The Cold War is “ancient history” for the current generation
  • Global, international corporations don’t want to take sides on conflicts
  • Companies and employees seek to create good. Government R&D may conflict with that ethos

Academics also have reasons not to work for the government:

  • Worried about how their R&D will be utilized
  • Schools or faculty may philosophically disagree with the government
  • Universities are incubators of international talent, and government R&D could be divisive, not inclusive
  • Government R&D is sometimes kept secret, which hurts academic careers

Faced with this, according to Sharif, the U.S. government is sometimes baffled by people’s ideological concerns. Many in the government remember the Cold War and knew people who lived and fought in World War Two. They can sometimes be resentful about a cold shoulder from academics and companies, especially since the military funded the foundational work in computer science and AI.

Sharif tells us that R&D reached an inflection point in the 1990s. During the Cold War, new technologies were developed through defense funding (the internet, GPS, nuclear technology) and then reached industry. Now the flow is reversed: technologies like AI are developed in the commercial sector and then reach government. That flow is not very nimble. DOD acquisition systems are designed for projects that take 91 months to complete (like a new airplane), while companies adopt AI technologies in 6-9 months (see this report by the Congressional Research Service).

Conversations about policy and law also constrain the U.S. government from developing and adopting lethal autonomous weapons systems, says Sharif. Even as we ask important questions about the ethical risks of AI, Sharif tells us that other governments don’t operate under the same restrictions. He asks us to imagine what would have happened if nuclear weapons hadn’t been developed first by the U.S.

How can divides between the U.S. government and companies/academia be bridged? Sharif suggests:

  • The U.S. government must substantially increase R&D funding to help regain influence
  • Establish a prestigious one-year DOD/Government R&D fellowship program for top-notch STEM grads before they join the commercial sector
  • Expand on the Defense Innovation Unit
  • Elevate the Defense Innovation Board in prominence and expand the project to create conversations that bridge between ideological divides. Organize conversations at high levels and middle management levels to accelerate this familiarization.
  • Increase DARPA and other collaborations with commercial and academic sectors
  • Establish joint DOD and Commercial Sector exchange programs
  • Expand the number of DOD research fellows and scientists present on university campuses in fellowship programs
  • Continue to reform DOD acquisition processes to streamline for sectors like AI

Sharif has also recommended to the U.S. Navy that they create an Autonomy Project Office to enable the Navy to better leverage R&D. The U.S. Navy has used structures like this for previous technology transformations on nuclear propulsion, the Polaris submarine missiles, naval aviation, and the Aegis combat system.

At the end of the day, says Sharif, what happens in a conflict where the U.S. does not have technological overmatch and is instead overmatched by someone else? What are the real-life consequences? That’s what’s at stake in collaborations between researchers, companies, and the U.S. Department of Defense.