By Nia Brazzell and Mihir Kshirsagar
In Gonzalez v. Google, a case under review at the Supreme Court, the families of individuals killed in ISIS terrorist attacks in Paris allege that YouTube aided and abetted those attacks by radicalizing recruits through personalized recommendations of videos. CITP’s Tech Policy Clinic filed its first amicus brief before the Supreme Court to help the Court analyze how the interpretation of Section 230 of the Communications Decency Act would affect platform accountability.* As many readers of this blog know, that provision has increasingly been a topic of discussion as voices on all sides call for amendment, repeal, or greater direction from the courts.
At issue in Gonzalez is the appropriate interpretation of Section 230(c)(1), which states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Congress was motivated to pass the law, in part, by a state court decision in the Stratton Oakmont case, in which a brokerage firm sued Prodigy for defamation over anonymous third-party posts on a Prodigy bulletin board that accused the firm of engaging in criminal conduct. (The Wolf of Wall Street movie is based on that firm’s activities.) The court found that Prodigy could be liable because it acted more like a publisher of those posts than a passive distributor like a bookstore. It reasoned that Prodigy’s use of content moderation tools, such as an automated screening program to detect offensive language, gave it a more active, publisher-like role. Through Section 230, Congress sought to clarify that platforms could continue to engage in content moderation without being exposed to liability for serving as a conduit for third-party content.
The Supreme Court has not yet had a chance to weigh in on how to interpret the law, and many lower court decisions have interpreted Section 230 expansively. The Ninth Circuit dismissed the Gonzalez case on the ground that Section 230 immunized Google for its content recommendations. It explained, “[t]his system is certainly more sophisticated than a traditional search engine, which requires users to type in textual queries, but the core principle is the same: Google’s algorithms select the particular content provided to a user based on that user’s inputs.” The court thus reasoned that YouTube had used “neutral” algorithmic tools protected by Section 230 and should not have to answer allegations that its algorithms may have contributed to illegal conduct.
Our brief takes a neutral perspective. We focus on a technical explanation of how the underlying recommendation systems work and on why different interpretations of the law might affect platform accountability. Specifically, we explain how platforms have a much greater role in shaping a user’s internet experience in 2022 than they did in 1996. If you take a spin through Instagram, TikTok, YouTube, or Spotify, you see dynamically generated and heavily personalized content. Platforms’ ability to prioritize or de-prioritize what users interact with gives them significant power to shape those experiences.
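To make the prioritization point concrete, here is a minimal, hypothetical sketch of personalized ranking: each candidate item is scored against a profile built from a user’s past interactions, and the highest-scoring items are surfaced first. The function and data names are illustrative assumptions, not any platform’s actual system, which would involve far richer signals and models.

```python
# A minimal illustrative sketch of personalized ranking (hypothetical, not any
# platform's real recommender): score each candidate item by how closely its
# topic weights match a user's interaction profile, then show the best matches
# first. This is only meant to illustrate the prioritization step described above.

def rank_for_user(user_profile: dict[str, float],
                  candidates: list[dict]) -> list[dict]:
    """Order candidate items by a simple dot-product relevance score."""
    def score(item: dict) -> float:
        return sum(weight * user_profile.get(topic, 0.0)
                   for topic, weight in item["topics"].items())
    return sorted(candidates, key=score, reverse=True)

# Example: a user whose history skews heavily toward cooking sees cooking first.
user = {"cooking": 0.9, "news": 0.1}
videos = [
    {"id": "a", "topics": {"news": 1.0}},
    {"id": "b", "topics": {"cooking": 0.8, "news": 0.2}},
]
print([v["id"] for v in rank_for_user(user, videos)])  # ['b', 'a']
```

Even in this toy version, the ordering users see depends entirely on choices the platform makes about what to score and how, which is the shaping power our brief describes.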
While our brief did not take sides in the case, we emphasized that if the Court adopted the expansive, content-neutral approach taken by the Ninth Circuit, it could immunize a wide range of platform conduct with significant implications for society. For example, a platform could be immunized for its role in the discriminatory distribution of content, or for improperly demoting third-party content. Briefing in the case will be completed by January, and oral argument is scheduled for February 21, 2023.
* Nia Brazzell, Klaudia Jaźwińska, and Varun Rao contributed to the Clinic’s amicus brief.