
Dissecting the (Likely) Forthcoming Repeal of the FCC’s Privacy Rulemaking

Last week, the House and Senate both passed a joint resolution that prevents the Federal Communications Commission's (FCC) new privacy rules from taking effect; the rules were released by the FCC last November and would have bound Internet Service Providers (ISPs) in the United States to a set of practices concerning the collection and sharing of data about consumers. The rules were widely heralded by consumer advocates, and several researchers in the computer science community, including myself, played a role in helping to shape aspects of the rules. I provided input that helped preserve the use of ISP traffic data for research and protocol development.

How much should we be concerned? Consumers have cause for concern, but almost certainly not as much as the media would have you believe. The joint resolution is expected to be signed by the President, whereupon it will become law. Many articles in the news last week portrayed the joint resolution as a watershed moment, saying effectively that Internet service providers can “now” sell your data to the highest bidder. Yet the first thing to realize is that Internet service providers were never prevented from doing this, and in some sense the Congressional repeal simply preserves the status quo with respect to ISPs and data sharing: the privacy rules that were released last November never went into effect. That said, there is one thing that consumers might be more concerned about: the resolution also prevents the FCC from making similar rules in the future, which has the effect of removing the threat of regulatory action on privacy. Previously, even though it was legal for ISPs to share your data without your consent, they might not have done so simply for fear of regulatory action from the FCC. If this resolution becomes law, there is no longer such a threat, and we will have to rely on market forces for ISPs to be good stewards of our data.

With these high-order bits in mind, the rest of this post will dissect the events over the past year or so in more detail.

Who regulates privacy? Part of the complication surrounding the debates on privacy is that there are currently two agencies in our government that are primarily responsible for protecting consumer privacy. The Federal Trade Commission (FTC) operates under the FTC Act and regulates consumer protection for businesses that are not “common carriers”; this covers most businesses, with the exception of public utilities and—since the passage of the Open Internet Order (the so-called “net neutrality” rule) in 2015—ISPs. One of the landmark decisions in the Open Internet Order was to classify ISPs under “Title II” (telecommunications providers), whereas previously they were classified under Title I. This action effectively moved the jurisdiction for regulating ISP privacy from the FTC (where Google, Facebook, and other Internet companies are regulated) to the FCC.

Essentially, there is a firewall of sorts between the two agencies when it comes to privacy rulemaking: the FTC is prohibited by federal law from regulating common carriers, and the FCC has a statutory mandate (under Section 222 of the Telecommunications Act) to protect customer data that is collected by common carriers.

Are the FCC’s privacy rules “fair”? Part of the debate from the ISPs concerns whether this separation is fair: ISPs like Comcast and online service providers (so-called “edge providers” in Washington parlance) like Google are increasingly competing in the same markets, and regulating them under different rules can in some sense create an uneven playing field. Depending on your viewpoint, there is some merit to this argument: the FCC’s privacy rules are stronger than the FTC’s, as they govern additional information that cannot be shared without user consent, such as browsing history, application usage history, and geolocation. Companies that are regulated by the FTC (Google, Facebook, etc.) face no such restrictions on sharing your data without your consent. Whether this situation is “fair” depends in some sense on whether you believe edge providers like Google and ISPs like Comcast should be subject to the same rules.

  • The ISP viewpoint (and the Republican rationale behind the joint resolution) is that for the Googles and Facebooks of the world, your data is not considered sensitive; they can already gather information about your browsing history and sell it to third-party marketers. The ISPs and Republicans argue that if ISPs and edge providers are really in the same market (or should be allowed to be), then they shouldn’t be subject to different rules. That sounds good, except there are a couple of hangups. The first is that, as mentioned, the FTC cannot regulate ISPs; it is prohibited from doing so by federal law. So unless ISPs are reclassified under Title I, we may end up in a situation where nobody can legally regulate them: the FTC is already prevented from doing so, and it increasingly looks like the FCC will be prevented from doing so as well. The charitable view of the situation is that the goal appears to be not to get rid of privacy rules entirely, but rather to shift everything concerning consumer privacy back to the FTC, where ISPs and edge providers would be subject to the same rules. In the meantime, though, consumer privacy may be suspended in a strange limbo.
  • The consumer advocate viewpoint is that, in the current market for ISPs in the United States, many consumers do not have a choice of ISP, which puts ISPs in a position of power that edge providers do not have. In many respects, that is true: studies from the FCC and elsewhere have shown that in many parts of the United States, consumers have only one choice of broadband ISP. This places the ISP in a position of great power, because we can’t rely on “market forces” to encourage good behavior toward consumers if consumers can’t vote with their feet. In contrast to edge providers such as Google or Facebook, in certain markets in the US one cannot simply “opt out” of one’s ISP. There are also arguments that ISPs can see much more data than edge providers can; that point is certainly arguable, given the level of instrumentation that a company like Google has, from the trackers it places on just about every website on the Internet to its command over our browsers, mobile operating systems, and so on. More likely, we should be equally concerned about both edge providers and ISPs.

The repeal, and the status quo. In essence, the repeal that is likely in the coming weeks should cause concern, but it is not quite as simple as “ISPs can now sell your data to the highest bidder”. Keep in mind that ISPs have always legally been able to do so, and they haven’t done so yet. In fact, on Friday, Comcast committed to not selling your data to third-party marketers, which provides some hope that the market will, in fact, induce behavior that is good for consumers. In some sense, the repeal will do nothing except preserve the status quo. Ultimately, time will tell. I do expect that ISPs may increasingly look like advertisers—after all, they have been trying to get into the business of advertising for years. Without the threat of regulatory enforcement that has existed until now, ISPs may be more likely to enter these markets (or at least try to do so). In the coming years, there may not be much we can do about this except hope that the market enforces good behavior. It should be noted that, despite the widespread attention to Virtual Private Networks (VPNs) as a possible defense against ISP data collection over the past week, they offer scant protection against the kinds of data collection at issue, as I and others have previously explained.

Privacy is a red herring. The real problem is lack of competition. The prospect of relying on the market brings me to a final point. One of the oft-forgotten provisions of the Open Internet Order’s reclassification of ISPs under Title II is that the FCC can compel ISPs to “unbundle the local loop”—a technical term for letting competing ISPs share the underlying physical infrastructure. We used to have this situation in the United States (older readers probably remember the days of “mom and pop” DSL providers who leased infrastructure from the telcos), and many countries in Europe still have competitive markets by virtue of this structure. One possible path forward that could give more leverage to market forces would be to unbundle the local loop under Title II. This outcome is widely viewed as highly unlikely.

Part of the reason this is unlikely is that Title II reclassification may itself be walked back, leaving ISPs in the Title I regime once again. Oddly, though we are likely to hear much uproar over the “repeal” of the net neutrality rules, one silver lining is that if and when such a rollback occurs, ISPs will return to the FTC’s jurisdiction and thus be bound by some privacy rules. If the current resolution becomes law and no such rollback occurs, they’ll be bound by none at all.

Finally, it is worth remembering that there are other uses of customer data besides selling it to advertisers. My biggest role in helping shape the FCC’s original privacy rules was to help preserve the use of this data for Internet engineers and researchers who continue to develop new algorithms and protocols to help the Internet perform better, and to keep us safe from attacks ranging from denial of service to phishing. While none of us may be excited at the prospect of having our data shared with advertisers without our consent, we all benefit from other operational uses of this data, and those uses should certainly be preserved.
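
To make the operational value concrete, here is a minimal, hypothetical sketch in Python of one such use: flagging a possible denial-of-service flood from flow records. The records, names, and thresholds are all illustrative assumptions; real analyses run over sampled, anonymized flow logs at far larger scale.

```python
# A minimal, hypothetical sketch of one operational use of traffic data:
# flagging a possible denial-of-service flood from flow records.
from collections import Counter

# Each record: (source, destination, packet count). In practice these would
# come from sampled, anonymized flow logs collected by the ISP.
flows = [
    ("srcA", "victim.example", 900),
    ("srcB", "victim.example", 870),
    ("srcC", "victim.example", 910),
    ("srcD", "other.example", 12),
]

packets_to = Counter()
sources_to = {}
for src, dst, pkts in flows:
    packets_to[dst] += pkts
    sources_to.setdefault(dst, set()).add(src)

# Crude heuristic: many distinct sources sending heavy traffic to one host.
PACKET_THRESHOLD, SOURCE_THRESHOLD = 1000, 3
for dst, pkts in packets_to.items():
    if pkts > PACKET_THRESHOLD and len(sources_to[dst]) >= SOURCE_THRESHOLD:
        print(f"possible DoS flood against {dst}: "
              f"{pkts} packets from {len(sources_to[dst])} sources")
```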

Questions for the FBI on Encryption Mandates

I wrote on Monday about how to analyze a proposal to mandate access to encrypted data. FBI Director James Comey, at the University of Texas last week, talked about encryption policy and his hope that some kind of exceptional access for law enforcement will become available. (Here’s a video.)  Let’s look at what Director Comey said about how a mandate might work.

Here is an extended quote from Director Comey’s answer to an audience question (starting at 51:02 in the video):

The technical thing, look, I really do think we haven’t given this the shot it deserves. President Obama commissioned some work at the end of his Administration because he’d heard a lot from people on device encryption, [that] it’s too hard.  [No], it’s not too hard. It’s not too hard. It requires a change in business model but it is, according to experts inside the U.S. government and a lot of people who will meet with us privately in the private sector, no one actually wants to be seen with us but we meet them out behind the 7/11, they tell us, look, it’s a business model decision.

Take the FBI’s business model. We equip our agents with mobile devices that I think are great mobile devices and we’ve worked hard to make them secure. We have designed it so that we have the ability to access the content. And so I don’t think we have a fatally flawed mobile system in the FBI, and I think nearly every enterprise that is represented here probably has the same. You retain the ability to access the content. So look, one of the worlds I could imagine, I don’t know whether this makes sense, one of the worlds I could imagine is a requirement that if you’re going to sell a device or market a device in the United States, you must be able to comply with judicial process. You figure out how to do it.

And maybe that doesn’t make sense, absent an international component to it, but I just don’t think we, and look, I get it, the makers of devices and the makers of fabulous apps that are riding on top of our devices, on top of our networks, really don’t have an incentive to deal with, to internalize the public safety harm. And I get that. My job is to worry about public safety. Their job is to worry about innovating and selling more units, I totally get that. Somehow we have to bring together, and see if we can’t optimize those two things. And really, given my role, I should not be the one to say, here’s what the technology should look like, nor should they say, no I don’t really care about that public safety aspect.

And what I don’t want to have happen, and I know you agree with me no matter what you think about this, now I think you’re going to agree with what I’m about to say, is we can’t have this conversation after something really bad happens. And look, I don’t want to be a pessimist, but bad things are going to happen. And even I, the Director of the FBI, do not believe that we can have thoughtful conversations about optimizing things we care about in the wake of a serious, serious attack of any kind.

The key passage is Comey’s suggestion of “a requirement that if you’re going to sell a device or market a device in the United States, you must be able to comply with judicial process.” That is the closest Director Comey came to describing how he imagines a mandate working. He doesn’t suggest that it’s anything like a complete proposal, and anyway that would be too much to ask from an off-the-cuff answer to an audience question. But let’s look at what would be required to turn it into a proposal that can be analyzed. In other words, let’s extrapolate from Director Comey’s answer and try to figure out how he and his team might build out a specific proposal based on what he suggested.

The notional mandate would apply at least to retailers (“if you’re going to sell … or market a device”) who sell smartphones to the public “in the United States.” That would include Apple (for sales in Apple Stores), big box retailers like Best Buy, mobile phone carriers’ shops, online retailers like Amazon, and the smaller convenience stores and kiosks that sell cheap smartphones.

Retailers would be required to “comply with judicial process.” At a minimum, that would presumably mean that if presented with a smartphone that they had sold, they could extract from it any data encrypted by the user. Which data, and under what circumstances? That would have to be specified, but it’s worth noting that there is a limited amount the retailer can do to control how a user encrypts data on the device. So unless we require retailers to prevent the installation of new software onto the device (and thereby put app stores, and most app sellers, out of business), there would need to be major carve-outs limiting the mandate’s reach to cases where the retailer had some control. For example, the mandate might apply only to data encrypted by the software present on the device at the time of sale. That would create an easy loophole for users who wanted to prevent extraction of their encrypted data (they could install encryption software post-sale), but at least it would avoid imposing an impossible requirement on the retailer. (Veterans of the 1990s crypto wars will remember how U.S. software products often shipped without strong crypto, to comply with export controls, while post-sale plug-ins adding crypto were widely available.)
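
To see how low the bar for post-sale encryption is, here is a minimal sketch using the widely used Python cryptography package; the data and key handling are purely illustrative. The point is that a few lines of user-installed code suffice to encrypt data under a key the retailer never sees.

```python
# Minimal sketch: encryption a user could add to a device after purchase.
# Uses the Python "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

# The key is generated on the device and never shared with the retailer,
# so the retailer cannot comply with a demand to decrypt this data.
key = Fernet.generate_key()
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"data the user wants to keep private")

# Only someone holding `key` can recover the plaintext.
assert cipher.decrypt(ciphertext) == b"data the user wants to keep private"
```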

Other classes of devices, such as laptops, tablets, smart devices, and server computers, would either have to be covered, with careful consideration of how they are sold and configured, or excluded, limiting the reach of the rule. There would need to be rules about devices brought into the United States by their user-owners; if those devices were not covered, some law enforcement value would be lost. And the treatment of used devices would have to be specified, including both devices made before the mandate took effect (which would probably need to be exempted, creating another loophole) and post-mandate devices re-sold by a user or merchant: would the original seller or the re-seller be responsible, and what if the re-seller is an individual?

Notice that we had to make all of these decisions, and face the attendant unpleasant tradeoffs, before we even reached the question of how to design the technical mechanism to implement key escrow, and how that mechanism would affect the security and privacy interests of law-abiding users. The crypto policy discussion often gets hung up on this one issue, the security implications of key escrow, but it is far from the only challenge that needs to be addressed, and the security implications of a key escrow mechanism are far from the only potential drawbacks to be considered.
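
For readers unfamiliar with the mechanism under debate, here is a minimal sketch of one common key-escrow design, again in Python with the cryptography package. This is not anyone’s concrete proposal; the escrow agent, key sizes, and data are illustrative assumptions.

```python
# Minimal, illustrative sketch of hybrid key escrow: data is encrypted
# under a symmetric data key, and a copy of that data key is "wrapped"
# (encrypted) under an escrow agent's RSA public key.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# In a real system this key pair would belong to the escrow agent; we
# generate one here only to keep the sketch self-contained.
escrow_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
escrow_public = escrow_private.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# On the device: encrypt data under a fresh data key, then escrow the key.
data_key = Fernet.generate_key()
ciphertext = Fernet(data_key).encrypt(b"device contents")
escrowed_key = escrow_public.encrypt(data_key, oaep)

# At the escrow agent, under judicial process: unwrap the key and decrypt.
recovered_key = escrow_private.decrypt(escrowed_key, oaep)
assert Fernet(recovered_key).decrypt(ciphertext) == b"device contents"
```

Even granting that such a mechanism can be engineered securely, notice that it answers none of the questions above about who must deploy it, on which devices, and what happens when software changes after sale.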

Director Comey didn’t go to Austin to present an encryption mandate proposal.  But if he or others do decide to push seriously for a mandate, they ought to be able to lay out the details of how they would do it.


How to Analyze An Encryption Access Proposal

It looks like the idea of requiring law enforcement access to encrypted data is back in the news, with the UK government apparently pushing for access in the wake of the recent London attack. With that in mind, let’s talk about how one can go about analyzing a proposed access mandate.

The first thing to recognize is that although law enforcement is often clear about the result it wants (getting access to encrypted data), it is often far from clear about how it proposes to get that result. There is no magic wand that can give encrypted data to law enforcement and nobody else, while leaving everything else about the world unchanged. If a mandate were to be imposed, it would happen via regulation of companies’ products or behavior.

The operation of a mandate would necessarily be a three-stage process: the government imposes specific mandate language, which induces changes in product design and behavior by companies and users, thereby leading to consequences that affect the public good.

Expanding this a bit, we can lay out some questions that a mandate proposal should be prepared to answer:

  1. mandate language: What requirements are imposed, and on whom? Which types of devices and products are covered and which are not? What specifically is required of a device maker? Of an operating system developer? Of a network provider? Of a retailer selling devices? Of an importer of devices? Of a user?
  2. changes in product design and behavior:  How will companies and users react to the mandate? For example, how will companies change the design of their products to comply with the mandate while maintaining their competitive position and serving their customers? How will criminals and terrorists change their behavior? How will law-abiding users adapt? What might foreign governments do to take advantage of these changes?
  3. consequences: What consequences will result from the design and behavioral changes that are predicted? How will the changes affect public safety? Cybersecurity? Personal privacy? The competitiveness of domestic companies? Human rights and free expression?

These questions are important because they expose the kinds of tradeoffs that would have to be made in imposing a mandate. For example, covering a broad range of devices might allow recovery of more encrypted data (with a warrant), but it might be difficult to write requirements that make sense across such a wide spectrum of device types. As another example, all of the company types one might regulate come with challenges: some are mostly located outside your national borders, others lack technical sophistication, others touch only a subset of the devices of interest, and so on. Difficult choices abound, and if you haven’t thought about how you would make those choices, then you aren’t in a position to assert that the benefits of a mandate are worth the downsides.

To date, the FBI has not put forward any specific approach. Nor has the UK government, to my knowledge. All they have offered in their public statements are vague assertions that a good approach must exist.

If our law enforcement agencies want to have a grown-up conversation about encryption mandates, they can start by offering a specific proposal, at least for purposes of discussion. Then the serious policy discussion can begin.