March 22, 2018

SESTA May Encourage the Adoption of Broken Automated Filtering Technologies

The Senate is currently considering the Stop Enabling Sex Traffickers Act (SESTA, S. 1693), with a hearing scheduled for tomorrow. In brief, the proposed legislation threatens to roll back aspects of Section 230 of the Communications Decency Act (CDA), which relieves content providers, or so-called “intermediaries” (e.g., Google, Facebook, Twitter), of liability for the content that is hosted on their platforms. Section 230 protects these platforms from liability in federal civil and state courts for the activities of their customers.

One of the corollaries of SESTA is that, with increased liability, content providers might feel compelled to rely more on automated classification filters and algorithms to detect and eliminate unwanted content on the Internet. Having spent more than ten years developing these types of classifiers to detect “unwanted traffic” ranging from spam to phishing attacks to botnets, I am deeply familiar with the potential and the limitations of automated filtering algorithms for identifying such content. Existing algorithms can be effective for detecting and predicting certain types of “unwanted traffic”, most notably attack traffic, but the current approaches to detecting unwanted speech fall far short of being able to reliably detect illegal speech.

Content filters are inaccurate. Notably, the oft-referenced technologies for detecting illegal speech or imagery (e.g., PhotoDNA, EchoPrint) rely on matching content that is posted online against a corpus of known illegal content (e.g., text, audio, imagery). Unfortunately, because these technologies rely on analyzing the content of the posted material, both false positives (i.e., mistakenly identifying innocuous content as illegal) and false negatives (i.e., failing to detect illegal content entirely) are possible. The network security community has been through this scenario before, in the context of spam filtering: years ago, spam filters would analyze the text of messages to determine whether a particular message was legitimate or spam; it wasn’t long before spammers developed tens of thousands of ways to spell “Rolex” and “Viagra” to evade these filters. They also came up with other creative ways to evade them, stuffing messages with Shakespeare and delivering their messages in a variety of formats, ranging from compressed audio to images to spreadsheets.
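To illustrate how brittle this kind of matching is, consider the toy sketch below. It is not how PhotoDNA or EchoPrint actually work (those use robust perceptual fingerprints), but it captures the underlying failure mode: a filter that matches posted content against a blocklist of known-bad material can be evaded by a single character substitution.

```python
import hashlib

# Toy content filter: flag a message if its normalized hash appears in a
# blocklist of known-bad content. (Illustrative only; real systems such as
# PhotoDNA use robust perceptual hashes, not cryptographic ones.)
BLOCKLIST = {hashlib.sha256(b"buy cheap rolex now").hexdigest()}

def is_flagged(message: str) -> bool:
    # Normalize case and whitespace before hashing.
    normalized = " ".join(message.lower().split())
    return hashlib.sha256(normalized.encode()).hexdigest() in BLOCKLIST

print(is_flagged("Buy  cheap ROLEX now"))  # True: matches after normalization
print(is_flagged("buy cheap r0lex now"))   # False: one character evades the filter
```

The second call shows the false-negative problem in miniature: the "r0lex" spelling trick defeats exact matching entirely, which is precisely the arms race spam filters lost.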

In short, content-based filters have largely failed to keep up with the agility of spammers. Evaluation of EchoPrint, for example, suggests that its false positive rates are far too high for it to be used in an automated filtering context: depending on the length of the file and the type of encoding, error rates are around 1–2%, where an error could be either a false negative or a false positive. By contrast, when we were working on spam filters, our discussions with online email service providers suggested that any spam filtering algorithm whose false positive rate exceeded 0.01% would be far too error-prone to avoid raising free speech questions and concerns. In other words, some of the existing automated fingerprinting services that providers might rely on as a result of SESTA have false positive rates roughly two orders of magnitude greater than what might otherwise be considered acceptable. We have written extensively about the limitations of these automated filters in the context of copyright.
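A back-of-the-envelope calculation shows why that gap in error rates matters at platform scale. The daily post volume below is hypothetical, chosen only to make the arithmetic concrete.

```python
# Expected daily false positives at a given error rate, for a platform
# processing a (hypothetical) hundred million posts per day.
posts_per_day = 100_000_000

for name, fp_rate in [("fingerprinting (~1% errors)", 0.01),
                      ("spam-filter target (0.01%)", 0.0001)]:
    wrongly_flagged = int(posts_per_day * fp_rate)
    print(f"{name}: ~{wrongly_flagged:,} wrongly flagged posts/day")
```

At a 1% false positive rate, such a platform would wrongly flag on the order of a million legitimate posts every day; at the 0.01% target, ten thousand. Both are nonzero, but the difference is the difference between a nuisance and a systematic suppression of lawful speech.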

Content filters cannot identify context. Moreover, users who post content online today have many tools at their disposal to evade these relatively brittle content-based filters, and detecting unwanted or illegal content on intermediary platforms is even more challenging. Instead of simply classifying unwanted email traffic such as spam (which is typically readily apparent, as it contains links to advertisers, phishing sites, and so forth), the challenge on intermediary platforms entails detecting copyright infringement, hate speech, terrorist speech, sex trafficking, and so forth. Yet simply detecting that a post matches content in a database cannot account for considerations such as fair use, parody, or coercion. Relying on automated content filters will not only produce mistakes in classifying content; these filters also have no hope of classifying context.

A possible step forward: Classifiers based on network traffic and sending patterns. About ten years ago, we recognized the failure of content filters and began exploring how network traffic patterns might produce a stronger signal for illicit activity. We observed that while it was fairly easy for a spammer to change the content of a message, it was potentially much more costly for a spammer to change sending patterns, such as email volumes and where messages were originating from and going to. We devised classifiers for email traffic that relied on “network-level features”, which now form the basis of many online spam filters. I think several grand challenges lie ahead in determining whether similar approaches could be used to identify unwanted or illegal posts on intermediary content platforms. For example, it might be the case that certain types of illegal speech are characterized by high volumes of re-tweets, short reply times, many instances of repeated messages, or some other feature that is characteristic of the traffic or the accounts that post those messages.
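As a rough illustration of the idea (not our actual classifier; the feature names and thresholds below are invented for the sketch), a network-level scorer looks at how a sender behaves rather than what a message says:

```python
# Sketch of a network-level classifier: score a sender on traffic features
# (burst rate, fan-out, reply latency) instead of message content.
# All thresholds and weights are hypothetical.
def spam_score(msgs_per_hour: float, distinct_recipients: int,
               avg_reply_time_s: float) -> int:
    """Return a score from 0 to 100; higher means more spam-like."""
    score = 0
    if msgs_per_hour > 500:          # legitimate senders rarely burst this high
        score += 50
    if distinct_recipients > 1000:   # wide fan-out is a classic spam signal
        score += 30
    if avg_reply_time_s < 1.0:       # near-instant responses suggest automation
        score += 20
    return score

print(spam_score(2000, 5000, 0.2))  # 100: strongly spam-like sending pattern
print(spam_score(3, 10, 300.0))     # 0: typical human sending pattern
```

The point of the design is that evading such a filter is expensive: a spammer can trivially respell "Rolex", but sending fewer messages to fewer recipients more slowly directly undermines the economics of spamming.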

Unfortunately, the reality is that we are far from having automated filtering technology that can reliably detect a wide range of illegal content. Determining how and whether various types of illegal content could be identified remains an open research problem. To suggest that “Any start-up has access to low cost and virtually unlimited computing power and to advanced analytics, artificial intelligence and filtering software”, as a recent letter to Congress on the question of SESTA did, vastly overstates the current state of the art. The bottom line is that whether we can design automated filters to detect illegal content on today’s online platforms is an open research question. A potentially unwanted side effect of SESTA is that intermediaries might feel compelled to deploy these imperfect technologies on their platforms for fear of liability, potentially resulting in over-blocking of legal, legitimate content while failing to effectively deter or prevent the illegal speech that can easily evade today’s content-based filters.

Summary: Automated filters are not “there yet”. Automated filters today often cannot reliably do even the simple task of matching content against known offending content, because content-based filters are so easily evaded. An interesting question is whether other “signals”, such as network traffic and posting patterns, or other characteristics of user accounts (e.g., age of account, number and characteristics of followers), might help us identify illegal content of various types. But much research is needed before we can comfortably say that these algorithms are even remotely effective at curbing illegal speech. And even as we work to improve the effectiveness of these automated fingerprinting and filtering technologies, they will likely remain, at best, an aid that intermediaries might opt to use; I cannot foresee false positive rates ever reaching zero. By no means should we require intermediaries to use these algorithms and technologies in hopes that doing so will eliminate all illegal speech; doing so would undoubtedly curb legal and legitimate speech as well.

Innovation in Network Measurement Can and Should Affect the Future of Internet Privacy

As most readers are likely aware, the Federal Communications Commission (FCC) issued a rule last fall governing how Internet service providers (ISPs) can gather and share data about consumers; that rule was recently rolled back through the Congressional Review Act. The media stoked consumer fear with headlines such as “For Sale: Your Private Browsing History” and comments about how ISPs can now “sell your Web browsing history to advertisers”. We also saw large ISPs such as Comcast promising not to do exactly that. What’s next is anyone’s guess, but technologists need not stand idly by.

Technologists can and should play an important role in this discussion. In particular, conveying knowledge about the capabilities and uses of network monitoring, and developing both new monitoring technologies and privacy-preserving capabilities, can shape this debate in three important ways: (1) level-setting on the data collection capabilities of various parties; (2) understanding and limiting the power of inference; and (3) developing new monitoring technologies that facilitate network operations and security while protecting consumer privacy.

1. Level-setting on data collection uses and capabilities. Before entering a debate about privacy, it helps to have a firm understanding of who can collect what types of data, both in theory and in practice, as well as the myriad ways that data might be used for good (and bad). For example, in practice, if anyone has your browsing history, your ISP is a less likely culprit than an online service provider such as Google, which operates a browser and (perhaps more importantly) runs analytics scripts on a large fraction of the Internet’s web pages. Your browsing is also likely being logged by many of the countless online trackers that follow your activity across sites, often without your knowledge or consent. In contrast, the network monitoring technology available in routers and switches today makes it much more difficult to extract “browsing history”; doing so requires a technology commonly referred to as “deep packet inspection” (DPI), or complete capture of network traffic data, which is expensive to deploy and even more costly once data storage and analysis are concerned. Most ISPs will tell you that DPI is deployed on only a small fraction of the links in their networks, and that fraction is shrinking as speeds increase; it is simply too expensive to collect and analyze all of that data.

ISPs do, of course, collect other types of traffic statistics, such as lookups to domain names via the Domain Name System (DNS) and coarse-grained traffic volume statistics via IPFIX. That data can, of course, be revealing. At the same time, ISPs will correctly point out that monitoring DNS and IPFIX is critical to securing and operating the network. DNS traffic, for example, is central to detecting denial of service attacks or infected devices. IPFIX statistics are critical for monitoring and mitigating network congestion. DNS is a quintessential example of data that is both incredibly sensitive (because it reveals the domains and websites we visit, among other things, and is typically unencrypted) and incredibly useful for detecting attacks, ranging from phishing to denial of service attacks.
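To make the operational value of this data concrete, here is a minimal sketch of one way DNS logs support security monitoring: flagging clients whose query volume is wildly out of line with the rest of the population, a coarse signal of infection or attack participation. The log format, domain names, and threshold are all hypothetical.

```python
from collections import Counter

# Toy DNS log: (client IP, queried domain) pairs. A compromised host
# often issues orders of magnitude more lookups than its neighbors.
dns_log = ([("10.0.0.1", "example.com")] * 4 +
           [("10.0.0.2", "evil-c2.test")] * 500 +
           [("10.0.0.3", "news.test")] * 6)

counts = Counter(ip for ip, _ in dns_log)

# Flag clients whose query count exceeds 20x the median volume.
volumes = sorted(counts.values())
median = volumes[len(volumes) // 2]
suspects = [ip for ip, n in counts.items() if n > 20 * median]

print(suspects)  # ['10.0.0.2']
```

Note that this analysis needs only aggregate per-client counts, not the queried domains themselves, which foreshadows the data-minimization point below: the operational question often requires far less data than is typically collected.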

The long line of security and traffic engineering research illustrates both the importance of data collection, as well as the limitations of current network monitoring capabilities in performing these tasks. Take, for example, research on botnet detection, which has shown the power of using DNS lookup data and IPFIX statistics for detecting compromise and intrusion. Or, the development of traffic engineering capabilities in the data center and in the wide area, which depend on the collection and analysis of IPFIX records and in some cases packet traces.

2. Understanding (and mitigating) the power of inference. While most of the focus in the privacy debate thus far concerns data collection (specifically, a focus on DPI, which is somewhat misguided per the discussion above), we would be wise to also consider what can be inferred from any data that is collected. For example, various aspects of “browsing history” could be evident from datasets ranging from DNS to DPI, but as discussed above, all of these datasets also have legitimate operational uses. Furthermore, “browsing history” is evident from a wide range of datasets that many parties beyond ISPs are privy to without our consent. Such inference capabilities are only going to increase with the proliferation of data-producing Internet-connected devices, coupled with advances in machine learning. If prescriptive rules specify which types of data can be collected, we risk over-prescribing rules while failing to protect the higher-level information that we really want to protect.

While asking questions about collection is a fine place to start a discussion, we should be at least as concerned with how the data is used, what it can be used to infer, and who it is shared with. We should likely be asking: (1) What data do we think should be protected or private? (2) What types of network data permit inference of that private data? (3) Who has access to that data, and under what circumstances? Suppose that I am interested in protecting information about whether I am at home. My ISP could learn this information from my traffic patterns, simply based on the decline in traffic volume from individual devices, even if all of my web traffic were encrypted, and even if I used a virtual private network (VPN) for all of my traffic. Such inference will become increasingly possible as more devices in our homes connect to the Internet. But online service providers could also come to know the same information without my consent, based on different data; Google, for example, would know that I’m browsing the web at my office, rather than at home, through the use of technologies such as cookies, browser fingerprinting, and other online device tracking mechanisms.
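A minimal sketch of this kind of occupancy inference, assuming synthetic hourly traffic volumes and a made-up idle threshold, shows how little data is needed; no packet contents are inspected at all:

```python
# Infer likely occupancy from hourly traffic volume alone, even when every
# payload is encrypted. Volumes (in MB) and the threshold are synthetic.
hourly_mb = {9: 2.0, 10: 1.8, 11: 2.1,   # daytime: idle IoT chatter only
             19: 250.0, 20: 310.0}       # evening: streaming and browsing

IDLE_THRESHOLD_MB = 10.0  # hypothetical cutoff between idle and active use

occupied_hours = sorted(h for h, mb in hourly_mb.items()
                        if mb > IDLE_THRESHOLD_MB)
print(occupied_hours)  # [19, 20] -- someone is likely home in the evening
```

This is exactly why rules scoped to "collection of browsing history" miss the point: coarse volume counters, which no one would describe as browsing history, still reveal when a home is occupied.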

Past and ongoing research, such as the Web Transparency and Accountability Project, as well as the “What They Know” series from the Wall Street Journal, shed important light on what can be inferred from various digital data sources. The Upturn report last year was similarly illuminating with respect to ISP data. More recently, researchers at Princeton including Noah Apthorpe and Dillon Reisman have been developing techniques to mitigate the power of inference using various traffic shaping and camouflaging techniques to limit what an ISP can infer from traffic patterns coming from a home network.

3. Facilitating purpose-driven network measurement and data minimization. Part of the tension surrounding network measurement and privacy is that current network monitoring technology is very crude; in fact, it hasn’t changed considerably in nearly 30 years. It at once gathers too much data and yet, for many purposes, too little. Consider that with current network monitoring technology, an ISP (or content provider) has great difficulty determining a user’s quality of experience for a given application, such as video streaming, simply because the wrong kind of data is collected, at the wrong granularity. As a result, ISPs (and many other parties in the Internet ecosystem) adopt a post hoc “collect first, ask questions later” approach, simply because current network monitoring technology (1) is oriented towards offline processing of warehoused data and (2) does not make it easy to figure out what data is needed to answer a particular analysis question.

Instead, network data collection could be driven by the questions operators are asking; data could be collected if—and only if—it were pertinent to a specific question or network operations task, such as monitoring application performance or detecting attacks. For example, suppose that an operator could ask a query such as “tell me the average packet loss rate of all Netflix video streams for subscribers in Seattle”. Answering such a query with today’s tools is challenging: one would have to collect all packet traces and all DNS queries and somehow identify post hoc which streams correspond to the application of interest. In short, it is difficult, if not impossible, to answer such an operational query today without large-scale collection and storage of (very sensitive) data—all to find what is essentially a needle in a haystack.
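The query-driven alternative can be sketched as a filter-then-aggregate loop: the filter derived from the query runs over the traffic, and only a single running aggregate is ever retained. The record fields and toy packet stream below are hypothetical, and in a real system the filtering would be pushed down into the network hardware rather than run in software after the fact.

```python
# Sketch of "collect only what the query needs": apply the query's filter
# to each record and keep one aggregate, discarding everything else.
packets = [
    {"dst_service": "video-cdn", "city": "Seattle", "lost": 1, "total": 100},
    {"dst_service": "video-cdn", "city": "Boston",  "lost": 5, "total": 100},
    {"dst_service": "email",     "city": "Seattle", "lost": 0, "total": 50},
    {"dst_service": "video-cdn", "city": "Seattle", "lost": 3, "total": 100},
]

# Query: average packet loss rate of video streams for Seattle subscribers.
lost = total = 0
for p in packets:  # in a real system this filter would run in the data plane
    if p["dst_service"] == "video-cdn" and p["city"] == "Seattle":
        lost += p["lost"]
        total += p["total"]

print(f"loss rate: {lost / total:.1%}")  # only this aggregate is retained
```

The privacy benefit falls out of the design: the non-matching records (the Boston stream, the email traffic) never need to be stored at all.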

Over the past year, my Ph.D. student Arpit Gupta at Princeton has been leading the design and development of a system called Sonata that may ultimately resolve this dichotomy and give us the best of both worlds. Two emerging technologies—(1) in-band network measurement, as supported by Barefoot’s Tofino chipset; (2) scalable streaming analytics platforms such as Spark—make it possible to write a high-level query in advance and only collect the data that is needed to satisfy the query. Such technology allows a network operator to write a query in a high-level language (in this case, Scala), specifying only the question, but allowing the runtime to figure out the minimal set of raw data that is needed to satisfy the operator’s query.

Our goal in the design and implementation of Sonata was to address the operational and scaling limitations of network measurement, but achieving such scalability also has data minimization effects that benefit privacy. Data that is collected can also be a liability; it may, for example, become the target of law enforcement requests or subpoenas, to which ISPs, as well as online providers such as Google, are regularly subject. Minimizing the collected data to only that which is pertinent to operational queries can ultimately help reduce this risk.

Sonata is open source, and we welcome contributions and suggestions from the community about how we can better support specific types of network queries and tasks.

Summary. Network monitoring and analytics technology is advancing rapidly in its capability to help network operators answer important questions about performance and security without coming at the cost of consumer privacy. Technologists should devote attention to developing new technologies that can achieve the best of both worlds, and to helping educate policymakers about the capabilities (and limitations) of existing network monitoring technology. Policymakers should be aware that network monitoring technology continues to advance, and should focus discussion on protecting what can be inferred, rather than only on who can collect a packet trace.

Dissecting the (Likely) Forthcoming Repeal of the FCC’s Privacy Rulemaking

Last week, the House and Senate both passed a joint resolution that prevents the new privacy rules from the Federal Communications Commission (FCC) from taking effect; the rules were released by the FCC last November and would have bound Internet Service Providers (ISPs) in the United States to a set of practices concerning the collection and sharing of data about consumers. The rules were widely heralded by consumer advocates, and several researchers in the computer science community, including myself, played a role in helping to shape aspects of the rules. I provided input that helped preserve the use of ISP traffic data for research and protocol development.

How much should we be concerned? Consumers have cause for concern, but almost certainly not as much as the media would have you believe. The joint resolution is expected to be signed by the President, whereupon it will become law. Many articles in the news last week announced the joint resolution as a watershed moment, saying effectively that Internet service providers can “now” sell your data to the highest bidder. Yet the first thing to realize is that Internet service providers were never prevented from doing this; in some sense, the Congressional repeal simply preserves the status quo with respect to ISPs and data sharing. That is, the privacy rule that was released last November never went into effect. That said, there is one thing that consumers might be more concerned about: the resolution also prevents the FCC from making similar rules in the future, which has the effect of removing the threat of regulatory action on privacy. Previously, even though it was legal for ISPs to share your data without your consent, they might not have done so simply for fear of regulatory action from the FCC. If this resolution becomes law, there is no longer such a threat, and we will have to rely on market forces for ISPs to be good stewards of our data.

With these high-order bits in mind, the rest of this post will dissect the events over the past year or so in more detail.

Who regulates privacy? Part of the complication surrounding the debates on privacy is that there are currently two agencies in our government who are primarily responsible for protecting consumer privacy. The Federal Trade Commission (FTC) operates under the FTC Act and regulates consumer protection for businesses that are not “common carriers”; this includes most businesses, with the exception of public utilities, and—recently, with the passage of the Open Internet Order (the so-called “net neutrality” rule) in 2015—ISPs. One of the landmark decisions in the Open Internet Order was to classify ISPs under “Title II” (telecommunications providers), whereas previously they were classified under Title I. This action effectively moved the jurisdiction for regulating ISP privacy from the FTC (where Google, Facebook, and other Internet companies are regulated) to the FCC.

Essentially, there is a firewall of sorts between the two agencies when it comes to privacy rulemaking: the FTC is prohibited by federal law from regulating common carriers, and the FCC has a statutory mandate (under Section 222 of the Telecommunications Act) to protect customer data that is collected by common carriers.

Are the FCC’s privacy rules “fair”? Part of the debate from the ISPs surrounds whether this separation is fair: ISPs like Comcast and online service providers (so called “edge providers” in Washington) like Google are increasingly competing in the same markets, and regulating them under different rules can in some sense create an uneven playing field. Depending on your viewpoint and orientation, there is some merit to this argument: The FCC’s privacy rules are stronger than the FTC’s rules, as the FCC’s rules govern additional information that cannot be shared without user consent, such as browsing history, application usage history, and geolocation. Companies who are regulated by the FTC (Google, Facebook, etc.) have no such restrictions on sharing your data without your consent. Whether this situation is “fair” depends in some sense on your perspective about whether edge providers like Google and ISPs like Comcast should be subject to the same rules.

  • The ISP viewpoint (and the Republican rationale behind the joint resolution) is that for the Googles and Facebooks of the world, your data is not considered sensitive; they can already gather this information about your browsing history and sell it to third-party marketers. The ISPs and Republicans argue that if ISPs and edge providers are really in the same market (or should be allowed to be), then they shouldn’t be subject to different rules. That sounds good, except there are a couple of hangups. The first is that, as mentioned, the FTC cannot regulate ISPs; it is prohibited from doing so by federal law. Unless the ISPs are reclassified again under Title I, they may end up in a situation where nobody can legally regulate them, since the FTC is already prevented from doing so and it increasingly looks like the FCC will be prevented from doing so as well. The charitable view of the situation is that the goal appears to be not to get rid of privacy rules entirely, but rather to shift everything concerning consumer privacy back to the FTC, where ISPs and edge providers are subject to the same rules. But in the meantime, the situation may be suspended in a strange limbo.
  • The consumer advocate viewpoint is that, in the current market for ISPs in the United States, many consumers do not have a choice of ISP. Therefore, the ISPs are in a position of power that the edge providers do not have. In many senses, that is true: in many parts of the United States, studies from the FCC and elsewhere have shown that consumers have only one choice of broadband ISP. This places the ISP in a position of great power, because we can’t just rely on “market forces” to encourage good behavior towards consumers if consumers can’t vote with their feet. Effectively, in contrast to edge providers such as Google or Facebook, in certain markets in the US, one cannot simply “opt out” of one’s ISP. There are also some arguments that ISPs can see a lot more data than edge providers can; that point is certainly arguable, given the level of instrumentation that a company like Google has on everything from the trackers they place on just about every website on the Internet to their command over our browser, mobile operating system, etc. More likely, we should be equally concerned about both edge providers and ISPs.

The repeal, and the status quo. In essence, the repeal that is likely to come in the coming weeks should cause concern, but it is not quite as simple as “ISPs can now sell your data to the highest bidder”. Keep in mind that ISPs have always legally been able to do so, and they haven’t done so yet. In fact, on Friday, Comcast committed to not selling your data to third-party marketers, which provides some hope that the market will, in fact, induce behavior that is good for consumers. In some sense, the repeal will do nothing except preserve the status quo. Ultimately, time will tell. I do expect that ISPs may increasingly come to look like advertisers; after all, they have been trying to get into the business of advertising for years. Without the threat of regulatory enforcement that has existed until now, ISPs may be more likely to enter these markets (or at least try to do so). In the coming years, there may not be much we can do about this except hope that the market enforces good behavior. It should be noted that, despite the widespread attention to Virtual Private Networks as a possible defense against ISP data collection over the past week, these offer scant protection against the kinds of data that would or could be collected about you, as I and others have previously explained.

Privacy is a red herring. The real problem is lack of competition. The prospect of relying on the market brings me to a final point. One of the oft-forgotten provisions of the Open Internet Order’s reclassification of the ISPs under Title II is that the FCC can compel the ISPs to “unbundle the local loop”—a technical term for letting competing ISPs share the underlying physical infrastructure. We used to have this situation in the United States (older readers probably remember the days of “mom and pop” DSL providers who leased infrastructure from the telcos), and many countries in Europe still have competitive markets by virtue of this structure. One possible path forward that could give more leverage to market forces would be to unbundle the local loop under Title II. This outcome is widely viewed as highly unlikely.

Part of the reason this is unlikely is that Title II reclassification may itself be walked back, with ISPs ending up in the Title I regime once again. Oddly, though we are likely to hear much uproar over the “repeal” of the net neutrality rules, one silver lining is that if and when such a rollback occurs, the ISPs will be bound by some privacy rules. If the current resolution passes, they’ll be bound by none at all.

Finally, it is worth remembering that there are other uses of customer data besides selling it to advertisers. My biggest role in helping shape the FCC’s original privacy rules was to help preserve the use of this data for Internet engineers and researchers who continue to develop new algorithms and protocols to help the Internet perform better, and to keep us safe from attacks ranging from denial of service to phishing. While none of us may be excited at the prospect of having our data shared with advertisers without our consent, we all benefit from other operational uses of this data, and those uses should certainly be preserved.