March 24, 2018

Artificial Intelligence and the Future of Online Content Moderation

Yesterday in Berlin, I attended a workshop on the use of artificial intelligence in governing communication online, hosted by the Humboldt Institute for Internet and Society.


In the United States and Europe, many platforms that host user content, such as Facebook, YouTube, and Twitter, have enjoyed safe harbor protections for the content they host, under laws such as Section 230 of the Communications Decency Act (CDA), the Digital Millennium Copyright Act (DMCA), and, in Europe, Articles 12–15 of the eCommerce Directive. Some of these laws, such as the DMCA, shield platforms from copyright damages provided that they remove content once they have knowledge that it is unlawful. Section 230 of the CDA provides broad immunity to platforms, with the express goals of promoting economic development and free expression. Daphne Keller has a good summary of the legal landscape on intermediary liability.

Platforms are now facing increasing pressure to detect and remove illegal (and, in some cases, legal-but-objectionable) content. In the United States, for example, bills in the House and Senate would remove safe harbor protection from platforms that do not remove illegal content related to sex trafficking. The European Union has also been considering laws that would limit the immunity of platforms that do not remove illegal content, which in the EU includes four categories: child sex abuse, incitement to terrorism, certain types of hate speech, and intellectual property or copyright infringement.

The mounting pressure on platforms to moderate online content coincides with increasing attention to algorithms that can automate the process of content moderation (“AI”) for the detection and ultimate removal of illegal (or unwanted) content.

The focus of yesterday’s workshop was to explore the role of AI in moderating content online, and its implications both for the practice of content moderation and for how that moderation is governed.

Setting the Tone: Challenges for Automated Filtering

Malavika Jayaram from Digital Asia Hub and I delivered the two opening “impulse statements” for the day. Malavika talked about some of the inherent limitations of AI for automated detection (with a reference to the infamous “Not Hot Dog” app) and pointed out some of the automated content moderation tools that platforms are being pressured to adopt.

I spoke about our long line of research on applying machine learning to detect a wide range of unwanted traffic, ranging from spam to botnets to bulletproof scam hosting sites. I then talked about how the dialog has in some ways used the technical community’s past success in spam filtering to suggest that automated filtering of other types of content should be as easy as flipping a switch. Since spam detection was something we knew how to do, then surely the platforms could also ferret out everything from copyright violations to hate speech, right?
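To make that comparison concrete, the kind of spam filtering the technical community got good at can be sketched in a few dozen lines: a naive Bayes classifier over word counts. This is a toy illustration (the function names and training data are mine), not any production filter, but it shows why spam detection felt tractable: the signal lives in the words themselves.

```python
from collections import Counter
import math

def train(messages):
    """Count word frequencies per class from (text, label) pairs."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in messages:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Pick the class with the higher log posterior, with add-one smoothing."""
    vocab = set(counts["spam"]) | set(counts["ham"])
    best_label, best_score = None, float("-inf")
    for label in ("spam", "ham"):
        # log prior for the class
        score = math.log(totals[label] / sum(totals.values()))
        n = sum(counts[label].values())
        for w in text.lower().split():
            # smoothed log likelihood of each word under this class
            score += math.log((counts[label][w] + 1) / (n + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```

A model like this works for spam precisely because spam is high-volume, repetitive, and adversaries must still include the telltale vocabulary; none of those properties hold for hate speech or copyright infringement.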

In practice, automated filtering is far from a solved problem. Evan Engstrom and I previously wrote about the difficulty of applying automated filtering algorithms to copyrighted content. In short, even with a database that matches fingerprints of audio and video content against fingerprints of known copyrighted content, the methods are imperfect. When the problem is framed in terms of incitement to violence or hate speech, automated detection becomes even more challenging, due to “corner cases” such as parody, fair use, irony, and so forth. A recent article from James Grimmelmann summarizes some of these challenges.
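To see why even fingerprint matching is imperfect, here is a toy sketch of the basic shape of a content-ID lookup (my own drastic simplification, not how any deployed system actually works): content is reduced to a short bit string, and a query matches if it falls within a small Hamming distance of a known fingerprint.

```python
def fingerprint(samples, window=4):
    """Toy perceptual fingerprint: one bit per window of samples, set when
    the window's average exceeds the overall average (a crude sketch of
    the robust-hashing idea behind audio/video fingerprints)."""
    overall = sum(samples) / len(samples)
    bits = []
    for i in range(0, len(samples) - window + 1, window):
        chunk = samples[i:i + window]
        bits.append(1 if sum(chunk) / window > overall else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return sum(x != y for x, y in zip(a, b))

def matches(query, known, threshold=1):
    """Flag the query if its fingerprint is within `threshold` bits of
    any known fingerprint."""
    return any(hamming(fingerprint(query), fp) <= threshold for fp in known)
```

Small perturbations of the original content survive the match, which is the point of the design, but the same tolerance means sufficiently transformed content escapes and unrelated content can land near a known fingerprint. Tuning the threshold trades one failure mode against the other.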

What I Learned

Over the course of the day, I learned many things about automated filtering that I hadn’t previously thought about.

  • Regulators and platforms are under tremendous pressure to act, based on the assumption that the technical problems are easy. Part of this pressure comes from a perception that detection of unwanted content is a solved problem, a myth sometimes perpetuated by the designers of the original content fingerprinting technologies, some of which are now in widespread use. But there is a big difference between testing fingerprints of content against a database of known offending content and building detection algorithms that can classify the semantics of content that has never been seen before. One area where technologists can contribute to this dialog is in studying and demonstrating the capabilities and limitations of automated filtering, both in terms of scale and accuracy, whether by studying existing techniques or by designing new ones entirely.
  • Takedown requests are a frequent instrument for censorship. I learned about the prevalence of “snitching”, whereby one user may request that a platform take down objectionable content by flagging the content or otherwise complaining about it—in such instances, oppressed groups (e.g., Rohingya Muslims) can be disproportionately targeted by large campaigns of takedown requests. (It was not known whether such campaigns to flag content have been automated on a large scale, but my intuition is that they likely are.) In such cases, the platforms err on the side of removing content, and the process for “remedy” (i.e., restoring the content) can be slow and tedious. This process creates a lever for censorship and suppression of speech. The trends are troubling: according to a recent article, a year ago Facebook removed 50 percent of the content that Israel requested be removed; now that figure is 95 percent. Jillian York runs a site where users can report these types of takedowns, but these reports and statistics are all self-reported. A useful project might be to automate the measurement of takedowns for some portion of the ecosystem or group of users.
  • The larger platforms share content hashes of unwanted content, but the database and process are opaque. About nine months ago, Twitter, Facebook, YouTube, and Microsoft formed the Global Internet Forum to Counter Terrorism. Essentially, the project relies on something called the Shared Industry Hash Database. It is very challenging to find anything about this database online aside from a few blog posts from the companies, although it does seem to be in some way associated with Tech Against Terrorism. The secretive nature of the shared hash database and the process itself has a couple of implications. First, the database is difficult to audit: if content is wrongly placed in the database, removing it would appear next to impossible. Second, only the member companies can check content against the database, essentially preventing smaller companies (e.g., startups) from benefiting from the information. Such limits in knowledge could ultimately prove to be a significant disadvantage if the platforms are ultimately held liable for the content that they host. As I discovered throughout the day, the opaque nature of commercial content moderation is a recurring theme, which I’ll return to later.
  • Different countries have very different definitions of unlawful content. The patchwork of laws governing speech on the Internet makes regulation complicated, as different countries have different laws and restrictions on speech. For example, “incitement to violence” or “hate speech” might mean different things in Germany (where Nazi propaganda is illegal) than in Spain (where it is illegal to insult the king) or France (which recently vowed to ferret out racist content on social media). Applying this observation to automated detection of illegal content complicates matters: it becomes impossible to train a single classifier that can be applied generally; essentially, each jurisdiction needs its own classifier.
  • Norms and speech evolve over time, often rapidly. Several attendees observed that most of the automated filtering techniques today boil down to flagging content based on keywords. Such a model can be incredibly difficult to maintain, particularly when it comes to detecting certain types of content such as hate speech. For one, norms and language evolve; a word that is innocuous or unremarkable today could take on an entirely new meaning tomorrow. Complicating matters further, sometimes people try to regain control in an online discussion by co-opting a slur; therefore, a model that bases classification on the presence of certain keywords can produce unexpected false positives, especially in the absence of context.
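The keyword-based approach the attendees described can be caricatured in a few lines. The blocklisted term below is a mild stand-in chosen for illustration, and the whole sketch is mine, not any platform’s actual filter:

```python
BLOCKLIST = {"scum"}  # stand-in for a real (and much larger) keyword list

def keyword_flag(text):
    """Naive keyword filter: flag any post containing a blocklisted term.
    It has no notion of speaker, target, quotation, or irony."""
    words = {w.strip(".,!?\"'").lower() for w in text.split()}
    return bool(words & BLOCKLIST)
```

The filter flags an attack and the victim’s quotation of that attack alike (a false positive against counter-speech), while missing hateful content phrased without the keyword (a false negative). Both failure modes follow directly from matching words rather than meaning.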


Aside from the information I learned above, I also took away a few themes about the state of online content moderation:

  • There will likely always be a human in the loop. We must figure out what role the human should play. Detection algorithms are only as good as their input data. If the data is biased, if norms and language evolve, or if data is mislabeled (an even more likely occurrence, since a label like “hate speech” could differ by country), then the outputs will be incorrect. Additionally, algorithms can only detect proxies for semantics and meaning (e.g., an ISIS flag, a large swath of bare skin) but have much more difficulty assessing context, fair use, parody, and other nuance. In short, on a technical front, we have our work cut out for us. It was widely held that humans will always need to be in the loop, and that AI should merely be an assistive technology, for triage, scale, and improving human effectiveness and efficiency when making decisions about moderation. Figuring out the appropriate division of labor between machines and humans is a challenging technical, social, and legal problem.
  • Governance and auditing are currently challenging because decision-making is secretive. The online platforms currently control all aspects of content moderation and governance. They have the data; nobody else has it. They know the classification algorithms they use and the features they use as input to those algorithms; nobody else knows them. They are also the only ones with insight into the ultimate decision-making process. This situation is different from other unwanted traffic detection problems that the computer science research community has worked on, where it was relatively easy to get a trace of email spam or denial of service traffic, either by generating it or by working with an ISP. In this situation, everything is under lock and key. This lack of public access to data and information makes it difficult to audit the process that platforms are currently using, and it also raises important questions about governance:
    • Should the platforms be the ultimate arbiter in takedown and moderation?
    • Is that an acceptable situation, even if we don’t know the rules that they are using to make those decisions?
    • Who trains the algorithms, and with what data?
    • Who gets access to the models and algorithms? How does disclosure work?
    • How does a user learn that his or her content was taken down, as well as why it was taken down?
    • What are the steps to remedy an incorrect, unlawful, or unjust takedown request?
    • How can we trust the platforms to make the right decisions when in some cases it is in their financial interests to suppress speech? History has suggested that trusting the platforms to do the right thing in these situations can lead to restrictions on speech.

Many of the above questions are regulatory. Yet, technologists can play a role for some aspects of these questions. For example, measurement tools might detect and evaluate removal and takedowns of content for some well-scoped forum or topic. A useful starting point for the design of such a measurement system could be a platform such as Politwoops, which monitors tweets that politicians have deleted.
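The measurement idea can be sketched very simply: crawl the publicly visible content of a well-scoped forum periodically and diff successive snapshots. The function name is mine, and a real monitor would need far more machinery around this core:

```python
def detect_removals(previous, current):
    """Return IDs of posts visible in the previous crawl but absent from
    the current one. Disappearance is only a signal: it may reflect a
    platform takedown, a voluntary deletion, or a crawl failure, so a
    real system would corroborate with other evidence (e.g., account
    status, platform notices) before labeling anything censorship."""
    return sorted(set(previous) - set(current))
```

Politwoops works roughly this way for politicians’ tweets; the open question is how far the pattern generalizes to forums where the set of content to watch is not as cleanly scoped.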


The workshop was enlightening. I came as a technologist wanting to learn more about how computer science might be applied to the social and legal problems concerning content moderation; I came away with a few ideas, fueled by exciting discussion. The attendees were a healthy mix of computer scientists, regulators, practitioners, legal scholars, and human rights activists. I’ve worked on censorship of Internet protocols for many years, but in some sense measuring censorship can feel a little bit like looking for one’s keys under the lamppost—my sense is that the real power over speech is now held by the platforms, and as a community we need new mechanisms—technical, legal, economic, and social—to hold them to account.

What’s new with BlockSci, Princeton’s blockchain analysis tool

Six months ago we released the initial version of BlockSci, a fast and expressive tool to analyze public blockchains. In the accompanying paper we explained how we used it to answer scientific questions about security, privacy, miner behavior, and economics using blockchain data. BlockSci has a number of other applications including forensics and as an educational tool.

Since then we’ve heard from a number of researchers and developers who’ve found it useful, and there’s already a published paper on ransomware that has made use of it. We’re grateful for the pull requests and bug reports on GitHub from the community. We’ve also used it to deep-dive into some of the strange corners of blockchain data. We’ve made enhancements including a 5x speed improvement over the initial version (which was already several hundred times faster than previous tools).

Today we’re happy to announce BlockSci 0.4.5, which has a large number of feature enhancements and bug fixes. As just one example, Bitcoin’s SegWit update introduces the concept of addresses that have different representations but are equivalent; tools that are unaware of this equivalence return incorrect (or at least unexpected) values for the balance held by such addresses. BlockSci handles these nuances correctly. We think BlockSci is now ready for serious use, although it is still beta software. Here are a number of ideas on how you can use it in your projects or contribute to its development.
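The SegWit issue boils down to resolving every textual representation of an address to its underlying script before aggregating. Here is a toy model of that idea; the addresses, amounts, and mapping are all made up for illustration, and this is not BlockSci’s internal representation:

```python
# Two textual representations of the same underlying script, plus an
# unrelated legacy address (all values are illustrative).
EQUIVALENT = {
    "3AbcP2SHwrapped": "script_42",   # P2SH-wrapped SegWit form
    "bc1qnativeform":  "script_42",   # native bech32 form
    "1LegacyAddr":     "script_7",
}

def naive_balance(outputs, address):
    """Keyed on the raw string: misses funds sent to an equivalent form."""
    return sum(v for a, v in outputs if a == address)

def normalized_balance(outputs, address):
    """Resolve every representation to its underlying script first,
    then aggregate across all equivalent forms."""
    script = EQUIVALENT[address]
    return sum(v for a, v in outputs if EQUIVALENT[a] == script)

outputs = [("3AbcP2SHwrapped", 5), ("bc1qnativeform", 3), ("1LegacyAddr", 2)]
```

A string-keyed lookup reports only part of the balance; normalizing first recovers the total, which is the behavior a blockchain analysis tool needs.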

We plan to release talks and tutorials on BlockSci, and improve its documentation. I’ll give a brief talk about it at the MIT Bitcoin Expo this Saturday; then Harry Kalodner and Malte Möser will join me for a BlockSci tutorial/workshop at MIT on Monday, March 19, organized by the Digital Currency Initiative and Fidelity Labs. Videos of both events will be available.

We now have two priorities for the development of BlockSci. The first is to make it possible to implement almost all analyses in Python with the speed of C++. To enable this we are building a function composition interface to automatically translate Python to C++. The second is to better support graph queries and improved clustering of the transaction graph. We’ve teamed up with our colleagues in the theoretical computer science group to adapt sophisticated graph clustering algorithms to blockchain data. If this effort succeeds, it will be a foundational part of how we understand blockchains, just as PageRank is a fundamental part of how we understand the structure of the web. Stay tuned!

New Jersey Takes Up Net Neutrality: A Summary, and My Experiences as a Witness

On Monday afternoon, I testified before the New Jersey State Assembly Committee on Science, Technology, and Innovation, which is chaired by Assemblyman Andrew Zwicker, who also happens to represent Princeton’s district.

On the committee agenda were three bills related to net neutrality.

Let’s quickly review the recent events. In December 2017, the Federal Communications Commission (FCC) rolled back the now-famous 2015 Open Internet Order, which required Internet service providers (ISPs) to abide by several so-called “bright line” rules, which can be summarized as: (1) no blocking lawful Internet traffic; (2) no throttling or degrading the performance of lawful Internet traffic; (3) no paid prioritization of one type of traffic over another; and (4) transparency about network management practices that may affect the forwarding of traffic. In addition to these rules, the FCC order also re-classified Internet service as a “Title II” telecommunications service—placing it under the jurisdiction of the FCC’s rulemaking authority—overturning the “Title I” information services classification that ISPs previously enjoyed.

The distinction of Title I vs. Title II classification is nuanced and complicated, as I’ve previously discussed. Re-classification of ISPs as a Title II service certainly comes with a host of complicated regulatory strings attached.  It also places the ISPs in a different regulatory regime than the content providers (e.g., Google, Facebook, Amazon, Netflix).

The rollback of the Open Internet Order reverted not only the ISPs’ classification as a Title II service, but also the four “bright line” rules. In response, many states have recently been considering and enacting their own net neutrality legislation, including Washington, Oregon, California, and now New Jersey. Generally speaking, these state laws are far less complicated than the original FCC order. They typically reinstate the FCC’s bright-line rules, but entirely avoid the question of Title II classification.

On Monday, the New Jersey State Assembly considered three bills relating to net neutrality. Essentially, all three bills amount to providing financial and other incentives to ISPs to abide by the bright line rules.  The bills require ISPs to follow the bright line rules as a condition for:

  1.  securing any contract with the state government (which can often be a significant revenue source);
  2. gaining access to utility poles (which is necessary for deploying infrastructure);
  3. municipal consent (which is required to occupy a city’s right-of-way).

I testified at the hearing, and I also submitted written testimony, which you can read here. This was my first experience testifying before a legislative committee; it was an interesting and rewarding experience.  Below, I’ll briefly summarize the hearing and my testimony (particularly in the context of the other testifying witnesses), as well as my experience as a testifying witness (complete with some lessons learned).

My Testimony

Before I wrote my testimony, I thought hard about what a computer scientist with my expertise could bring to the table as a testifying expert. I focused my testimony on three points:

  • No blocking and no throttling are technically simple to implement. One of the arguments made by those opposed to the legislation is that different state laws on blocking and throttling could become exceedingly difficult to implement, particularly if each state has its own laws. In short, the argument is that state laws could create a complex regulatory “patchwork” that is burdensome to implement. If we were considering a version of the several-hundred-page FCC Open Internet Order in each state, I might tend to agree. But the New Jersey bills are simple and concise: each is only a couple of pages. They basically say “don’t block or throttle lawful content”, with clear carve-outs for illegal traffic, attack traffic, and so forth. My comments focused on the simplicity of implementation, and on the point that we need not fear a patchwork of laws if the default is a simple rule that prohibits blocking and throttling. In my oral testimony, I added (mostly for color) that the Internet is already a patchwork of tens of thousands of independently operated networks spanning countries around the world, and that our protocols support carrying Internet traffic over a variety of physical media, from optical networks to wireless networks to carrier pigeon. I also took the opportunity to point out that ISPs are, in a relative sense, pretty good actors in this space right now, in contrast to content providers who have regularly blocked access to content either for anti-competitive reasons or as a condition for doing business in certain countries.
  • Prioritization can be useful for certain types of traffic, but it is distinct from paid prioritization. Some ISPs have been arguing recently that prohibiting paid prioritization would prohibit (among other things) the deployment of high-priority emergency services over the Internet. Of course, anyone who has taken an undergraduate networking course will have learned about prioritization (e.g., Weighted Fair Queueing), as well as how prioritization (and even shaping) can improve application performance, by ensuring that interactive, delay-sensitive applications such as gaming are not queued behind lower-priority bulk transfers, such as a cloud backup. Yet, prioritization of certain classes of applications over others is a different matter from paid prioritization, whereby one customer might pay an ISP for higher priority over competing traffic. I discussed the differences at length. I also talked about prioritization more generally: it’s not just about what a router does, but about who has access to what infrastructure. The bills address “prioritization” merely as a packet scheduling exercise, where a router services one queue of packets at a faster rate than another queue. But there are plenty of other ways that some content can be made to “go faster” than other content; one example is deployment across a so-called Content Delivery Network (CDN), a distributed network of content caches placed close to users. Some application or content providers may enjoy an unfair advantage (“priority”) over others merely by virtue of the infrastructure they have access to. Neither the repealed FCC rules nor the state bills say anything about this type of prioritization, which could be applied in anti-competitive ways. Finally, I talked about how prioritization is a bit of a red herring as long as there is spare capacity. Again, in an undergraduate networking course, we talk about resource allocation concepts such as max-min fairness, where every sender gets the capacity it requires as long as capacity exceeds total demand. Thus, it is also important to ensure that ISPs and application providers continue to add capacity, both in their networks and at the interconnects between their networks.
  • Transparency is important for consumers, but figuring out exactly what ISPs should disclose, in a way that’s meaningful to consumers and not unduly burdensome, is technically challenging. Consumers have a right to know about the service they are purchasing from their ISP, as well as whether (and how well) that service can support different applications. Disclosure of network management practices and performance certainly makes good sense on the surface, but here the devil is in the details. An ISP could be very specific in its disclosure. It could say, for example, that it has deployed a token bucket filter of a certain size, fill rate, and drain rate, and detail the places in its network where such mechanisms are deployed. This would constitute a disclosure of a network management practice, but it would be meaningless to consumers. On the other hand, other disclosures might be so vague as to be meaningless; a statement from the ISP that it might throttle certain types of high-volume traffic at times of high demand would not help a consumer figure out how particular applications will perform. In this sense, paragraph 226 of the Restoring Internet Freedom Order, which talks about consumers’ need to understand how the network delivers service for the applications they care about, is spot on. There’s only one problem with that provision: technically, ISPs would have a hard time doing this without direct access to the client or server side of an application. In short: transparency is challenging. To be continued.
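For concreteness, the token bucket filter from the disclosure example can be sketched as follows (parameter names mirror that example; the values in the test are illustrative). The sketch shows why such a disclosure is precise yet unhelpful: a handful of parameters fully determine the mechanism’s behavior, but say nothing a consumer can map to how a video call or a backup will actually perform.

```python
class TokenBucket:
    """Toy token-bucket policer: `size` caps the burst, and `fill_rate`
    is the sustained rate in tokens per second. Each admitted packet
    spends tokens; packets arriving to an empty bucket are dropped
    (or, in a shaper, delayed)."""

    def __init__(self, size, fill_rate):
        self.size = size
        self.fill_rate = fill_rate
        self.tokens = size   # start with a full bucket
        self.last = 0.0      # timestamp of the last decision

    def allow(self, now, cost=1):
        """Refill tokens for the elapsed time, then admit the packet if
        enough tokens remain."""
        elapsed = now - self.last
        self.tokens = min(self.size, self.tokens + elapsed * self.fill_rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False
```

Knowing the bucket size and fill rate is a complete description of this mechanism, yet translating those numbers into “will my application work at 8pm?” still requires knowing the application’s traffic pattern, which is the gap the disclosure debate keeps running into.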

The Hearing and Vote

The hearing itself was interesting. The testifying witnesses opposing the bills included Jon Leibowitz of Davis Polk (retained by Internet service providers) and a representative from US Telecom. The arguments against the bills were primarily legal and business-oriented. Essentially, the legal argument against the bills is that the states should leave this problem to the federal government. The arguments run (roughly) as follows: (1) the Restoring Internet Freedom Order pre-empts state action; (2) the Federal Trade Commission has this well in hand, now that ISPs are back in Title I territory (and as a former commissioner, Leibowitz would know well the types of authority the FTC has to bring such cases, as well as the many cases it has brought against Google, Facebook, and others); (3) the state laws will create a patchwork of laws and introduce regulatory uncertainty, making it difficult for the ISPs to operate efficiently and creating uncertainty for future investment.

The arguments in opposition to the bills are orthogonal to the points I made in my own testimony. In particular, I disclaimed any legal expertise on pre-emption. I was, however, able to comment on whether the second and third arguments held water from a technical perspective. While the second point, about FTC authority, is mostly a legal question, I understood enough about the FTC Act, and the circumstances under which the FTC brings cases, to comment on whether the bills technically give consumers more power than they would otherwise have with just the FTC rules in place. My view was that they do, although this point is a really interesting case of the muddy distinction between technology and the law: to really dive into the arguments, it helps to know a bit about both. I was also able to comment on the “patchwork” assertion from a technical perspective, as I discussed above.

At the end of the hearing, there was a committee vote on all three bills. It was interesting to see both the voting process, and the commentary that each committee member made with their votes.  In the end, there were two abstentions, with the rest in favor. The members who abstained did so largely on the legal question concerning state pre-emption—perhaps foreshadowing the next round of legal battles.

Lessons Learned

Through this experience, I once again saw the value in having technologists at the table in these forums, where the laws that govern the future of the Internet are being written and decided on. I learned a couple of important lessons, which I’ve briefly summarized below.

My job was to bring technical clarity, not to advocate policy. As a witness, technically I am picking a side; in these settings, even when making technical points, one is typically doing so in service of one side of a policy or legal argument. Naturally, given my arguments, I registered as a witness in favor of the legislation.

However, and importantly: that doesn’t mean my job was to advocate policy. As a technologist, my role as a witness was to explain to the lawmakers technical concepts that could help them make better sense of the various arguments from others in the room. Additionally, I steered clear of rendering legal opinions, and where my comments did rely on legal frameworks, I made it clear that I was not an expert in those matters, but was speaking on technical points within the context of the laws as I understood them. Finally, when figuring out how to frame my testimony, I consulted many people: the lawmakers, my colleagues at Princeton, and even the ISPs themselves. In all cases, I asked these stakeholders about the topics I might focus on, as opposed to asking what, specifically, I should say. I thought hard about what a computer scientist could bring to the discussion, and about ensuring that what I said was technically accurate and correct.

A simple technical explanation is of utmost importance. In such a committee hearing, advocates and lobbyists abound (on both sides); technologists are rare. I suspect I was the only technologist in the room. Additionally, most of the people in the room are there to make arguments that serve a particular stakeholder. In doing so, they may muddy the waters, either accidentally or intentionally. To advance their arguments, some people may even say things that are blatantly false (thankfully that didn’t happen on Monday, but I’ve seen it happen in similar forums). Perhaps surprisingly, such discourse can fly by completely unnoticed, because the people in the room—especially the decision-makers—don’t have as deep an understanding of the technology as the technologists. Technologists need to be in the room, to shed light and to call foul—and, importantly, to do so using accessible language and examples that non-technical policy-makers can understand.