Utah’s Republican presidential primary was conducted today by Internet. If you have your voter-registration PIN, or even if you don’t, visit https://ivotingcenter.gop and you will learn something about Internet voting!
Measuring the performance of broadband networks is an important area of research, and efforts to characterize the performance of these networks continue to evolve. Measurement efforts to date have largely relied on in-home devices and are primarily designed to characterize access network performance. Yet a user’s experience also depends on factors that lie upstream of ISP access networks, which is why measuring interconnection is so important. Unfortunately, as I have previously written, visibility into performance at the interconnection points to ISPs has been extremely limited, and efforts to date to characterize interconnection have largely been indirect, relying on inferences made at network endpoints.
Today, I am pleased to release an analysis based on direct measurement of Internet interconnection points, which represents an advance in this important field of research. To this end, I am releasing a working paper that includes data from seven Internet Service Providers (ISPs) who collectively serve approximately half of all US broadband subscribers.
Each ISP has installed a common measurement system from DeepField Networks to provide an aggregated and anonymized picture of interconnection capacity and utilization. Collectively, the measurement system captures data from 99% of the interconnection capacity for these participating ISPs, comprising more than 1,200 link groups. I have worked with these ISPs to surface interesting insights about this very important aspect of the Internet. Analysis and views of the dataset are available in my working paper, which also includes a full review of the method used.
The research community has long recognized the need for this foundational information, which will help us understand how capacity is provisioned across a number of ISPs and how content traverses the links that connect broadband networks together.
Naturally, the proprietary nature of Internet interconnection prevents us from revealing everything that the public would like to see—notably, we can’t expose information about individual interconnects because both the existence and the capacity of individual interconnects are confidential. Yet, even the aggregate views yield many interesting insights.
One of the most significant findings from the initial analysis of five months of data—from October 2015 through February 2016—is that aggregate capacity is roughly 50% utilized during peak periods (and never exceeds 66% for any individual participating ISP), as shown in the figure below. Moreover, aggregate capacity at the interconnects continues to grow to offset the growth of broadband data consumption.
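To make the aggregate figure concrete, here is a toy sketch of how peak utilization can be rolled up across link groups. All capacities and peak rates below are invented for illustration; they are not taken from the paper’s dataset, and the real measurement pipeline is far more involved.

```python
# Hypothetical link groups: each has a provisioned capacity and an
# observed peak traffic rate (both in Gbps, and both invented here).
link_groups = [
    {"capacity_gbps": 100.0, "peak_gbps": 48.0},
    {"capacity_gbps": 40.0,  "peak_gbps": 22.5},
    {"capacity_gbps": 400.0, "peak_gbps": 195.0},
]

total_capacity = sum(g["capacity_gbps"] for g in link_groups)
total_peak = sum(g["peak_gbps"] for g in link_groups)
aggregate_utilization = total_peak / total_capacity

# Individual groups can run hotter than the aggregate figure suggests:
max_utilization = max(g["peak_gbps"] / g["capacity_gbps"] for g in link_groups)

print(f"aggregate: {aggregate_utilization:.0%}, busiest group: {max_utilization:.0%}")
# prints: aggregate: 49%, busiest group: 56%
```

The gap between the aggregate and the busiest group is why the paper reports both an overall utilization figure and a per-ISP maximum.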
I am very excited to provide this unique and unprecedented view into the Internet. It is in everyone’s interest to advance this field of research in a rigorous and thoughtful way.
The Apple versus FBI showdown has quickly become a crucial flashpoint of the “new Crypto War.” On February 16 the FBI invoked the All Writs Act of 1789, a catch-all authority for assistance of law enforcement, demanding that Apple create a custom version of its iOS to help the FBI decrypt an iPhone used by one of the San Bernardino shooters. The fact that the FBI allowed Apple to disclose the order publicly, on the same day, represents a rare exception to the government’s normal penchant for secrecy.
The reasons behind the FBI’s unusually loud entrance are important – but even more so is the risk that after the present flurry concludes, the FBI and other government agencies will revert to more shadowy methods of compelling companies to backdoor their software. This blog post explores these software transparency risks, and how new technical measures could help ensure that the public debate over software backdoors remains public.
This week I signed the Electronic Frontier Foundation’s amicus (friend-of-the-court) brief in the Apple/FBI iPhone-unlocking lawsuit. Many prominent computer scientists and cryptographers signed: Josh Aas, Hal Abelson, Judy Anderson, Andrew Appel, Tom Ball (the Google one, not the Microsoft one), Boaz Barak, Brian Behlendorf, Rich Belgard, Dan Bernstein, Matt Bishop, Josh Bloch, Fred Brooks, Mark Davis, Jeff Dean, Peter Deutsch, David Dill, Les Earnest, Brendan Eich, David Farber, Joan Feigenbaum, Michael Fischer, Bryan Ford, Matt Franklin, Matt Green, Alex Halderman, Martin Hellman, Nadia Heninger, Miguel de Icaza, Tanja Lange, Ed Lazowska, George Ledin, Patrick McDaniel, David Patterson, Vern Paxson, Thomas Ristenpart, Ron Rivest, Phillip Rogaway, Greg Rose, Guido van Rossum, Tom Shrimpton, Barbara Simons, Gene Spafford, Dan Wallach, Nickolai Zeldovich, Yan Zhu, Phil Zimmerman. (See also the EFF’s blog post.)
The technical and legal argument is based on the First Amendment: (1) Computer programs are a form of speech; (2) the Government cannot compel you to “say” something any more than it can prohibit you from expressing something. Also, (3) digital signatures are a form of signature; (4) the government cannot compel or coerce you to sign a statement that you don’t believe, a statement that is inconsistent with your values. Each of these four statements has ample precedent in Federal law. Combined together, (1) and (2) mean that Apple cannot be compelled to write a specific computer program. (3) and (4) mean that even if the FBI wrote the program (instead of forcing Apple to write it), Apple could not be compelled to sign it with its secret signing key. The brief argues,
By compelling Apple to write and then digitally sign new code, the Order forces Apple to first write a message to the government’s specifications, and then adopt, verify and endorse that message as its own, despite its strong disagreement with that message. The Court’s Order is thus akin to the government dictating a letter endorsing its preferred position and forcing Apple to transcribe it and sign its unique and forgery-proof name at the bottom.
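The mechanics behind points (3) and (4) can be sketched in code. Apple signs iOS builds with a private key and each device verifies updates against the corresponding public key; Python’s standard library has no asymmetric signatures, so this sketch uses an HMAC with a shared secret purely as a simplified stand-in (in the real scheme, devices hold only a verification key, never the signing secret). The key and firmware bytes are invented.

```python
import hashlib
import hmac

# Simplified stand-in for asymmetric code signing: whoever holds this
# secret can produce signatures that devices will accept. In reality
# only Apple holds the signing key; devices hold a public key.
APPLE_SIGNING_KEY = b"hypothetical-secret-signing-key"

def sign(firmware: bytes) -> bytes:
    """What only the signing-key holder (Apple) can do."""
    return hmac.new(APPLE_SIGNING_KEY, firmware, hashlib.sha256).digest()

def device_accepts(firmware: bytes, signature: bytes) -> bool:
    """A device installs an update only if the signature verifies."""
    expected = hmac.new(APPLE_SIGNING_KEY, firmware, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

official = b"iOS build 9.2.1"
ok = device_accepts(official, sign(official))                  # accepted
forged = device_accepts(b"custom unlock build", sign(official))  # rejected
```

This is why the FBI cannot simply write the unlocking firmware itself: without a signature made with Apple’s key, no iPhone will install it, which is what makes compelling Apple’s signature the crux of the dispute.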
Earlier this week, I came across a working paper from Professor Peter Swire—a highly respected attorney, professor, and policy expert. Swire’s paper, entitled “Online Privacy and ISPs”, argues that ISPs have limited visibility into users’ online activity, for three reasons: (1) users increasingly use many devices and connections, so any single ISP is the conduit for only a fraction of a typical user’s activity; (2) end-to-end encryption is becoming more pervasive, which limits ISPs’ ability to glean information about user activity; and (3) users are increasingly shifting to VPNs to send traffic.
An informed reader might surmise that this writeup relates to the reclassification of Internet service providers under Title II of the Telecommunications Act, which gives the FCC a mandate to protect private information that ISPs learn about their customers. This private information includes both personal information and information about a customer’s use of the service that the provider learns as a result of providing that service—sometimes called Customer Proprietary Network Information, or CPNI. One possible conclusion a reader might draw from this white paper is that ISPs have limited capability to learn information about customers’ use of their service and hence should not be subject to additional privacy regulations.
I am not taking a position in this policy debate, nor do I intend to make any normative statements about whether an ISP’s ability to see this type of user information is inherently “good” or “bad” (in fact, one might even argue that an ISP’s ability to see this information might improve network security, network management, or other services). Nevertheless, these debates should be based on a technical picture that is as accurate as possible. In this vein, it is worth examining Professor Swire’s “factual description of today’s online ecosystem” that claims to offer the reader an “up-to-date and accurate understanding of the facts”. The report certainly contains many facts, but it also omits important details about the “online ecosystem”. Below, I fill in what I see as some important missing pieces. Much of what I discuss below I have also sent verbatim in a letter to the FCC Chairman. I hope that the original report will ultimately incorporate some of these points.
[Update (March 9): Swire notes in a response that the report itself doesn’t contain technical inaccuracies. Although there are certainly many points that are arguable, they are hard to disprove without better data, so it is difficult to “prove” the inaccuracies. Even if we take it as a given that there are no inaccuracies, that’s a very different thing than saying that the report tells the whole story.]
After my previous blog post about the FBI, Apple, and the San Bernardino iPhone, I’ve been reading many other bloggers and news articles on the topic. What seems to be missing is a decent analogy to explain the unusual nature of the FBI’s demand and the importance of Apple’s stance in opposition to it. Before I dive in, it’s worth understanding what the FBI’s larger goals are. Cyrus Vance Jr., the Manhattan DA, states it clearly: “no smartphone lies beyond the reach of a judicial search warrant.” That’s the FBI’s real goal. The San Bernardino case is just a vehicle toward achieving that goal. With this in mind, it’s less important to focus on the specific details of the San Bernardino case, the subtle improvements Apple has made to the iPhone since the 5c, or the apparent mishandling of the iCloud account behind the San Bernardino iPhone.
Our Analogy: TSA Luggage Locks
When you check your bags in the airport, you may well want to lock them, to keep baggage handlers and other interlopers from stealing your stuff. But, of course, baggage inspectors have a legitimate need to look through bags. Your bags don’t have any right of privacy in an airport. To satisfy these needs, we now have “TSA locks”. You get a combination you can enter, and the TSA gets their own secret key that allows airport staff to open any TSA lock. That’s a “backdoor”, engineered into the lock’s design.
What’s the alternative? If you want the TSA to have the technical capacity to search a large percentage of bags, then there really isn’t an alternative. After all, if we used “real” locks, then the TSA would be “forced” to cut them open. But consider the hypothetical case where these sorts of searches were exceptionally rare. At that point, the local TSA could keep hundreds of spare locks, of all makes and models. They could cut off your super-duper strong lock, inspect your bag, and then replace the cut lock with a brand new one of the same variety. They could extract the PIN or key cylinder from the broken lock and install it in the new one. They could even rough up the new one so it looks just like the original. Needless to say, this would be a specialized skill and it would be expensive to use. That’s pretty much where we are in terms of hacking the newest smartphones.
Another area where this analogy holds up is all the people who will “need” access to the backdoor keys. Who gets the backdoor keys? Sure, it might begin with the TSA, but every baggage inspector in every airport, worldwide, will demand access to those keys. And they’ll even justify it, because their inspectors work together with ours to defeat smuggling and other crimes. We’re all in this together! Next thing you know, the backdoor keys are everywhere. Is that a bad thing? Well, the TSA backdoor lock scheme is only as secure as their ability to keep the keys a secret. And what happened? The TSA mistakenly allowed the Washington Post to publish a photo of all the keys, which makes it trivial for anyone to fabricate those keys. (CAD files for them are now online!) Consequently, anybody can take advantage of the TSA locks’ designed-in backdoor, not just all the world’s baggage inspectors.
For San Bernardino, the FBI wants Apple to retrofit a backdoor mechanism where there wasn’t one previously. The legal precedent the FBI seeks would create a capability to convert any luggage lock into a TSA backdoor lock. This would only be necessary if they wanted access to lots of phones, at a scale where their specialized phone-cracking team becomes too expensive to operate. This no doubt becomes all the more pressing for the FBI as modern smartphones get better and better at resisting physical attacks.
Where the analogy breaks down: If you travel with expensive stuff in your luggage, you know well that those locks have very limited resistance to an attacker with bolt cutters. If somebody steals your luggage, they’ll get your stuff, whereas that’s not necessarily the case with a modern iPhone. These phones are akin to luggage having some kind of self-destruct charge inside. You force the luggage open and the contents will be destroyed. Another important difference is that much of the data that the FBI presumably wants from the San Bernardino phone can be obtained elsewhere, e.g., phone call metadata and cellular tower usage metadata. We have very little reason to believe that the FBI needs anything on that phone whatsoever, relative to the mountain of evidence that it already has.
Why this analogy is important: The capability to access the San Bernardino iPhone, as the court order describes it, is a one-off thing—a magic wand that converts precisely one traditional luggage lock into a TSA backdoor lock, having no effect on any other lock in the world. But as Vance makes clear in his New York Times opinion, the stakes are much higher than that. The FBI wants this magic wand, in the form of judicial orders and a bespoke Apple engineering process, to gain backdoor access to any phone in their possession. If the FBI can go to Apple to demand this, then so can any other government. Apple will quickly want to get itself out of the business of adjudicating these demands, so it will engineer in the backdoor feature once and for all, albeit under duress, and will share the necessary secrets with the FBI and with every other nation-state’s police and intelligence agencies. In other words, Apple will be forced to install a TSA backdoor key in every phone they make, and so will everybody else.
While this would be lovely for helping the FBI gather the evidence it wants, it would be especially lovely for foreign intelligence officers, operating on our shores, or going after our citizens when they travel abroad. If they pickpocket a phone from a high-value target, our FBI’s policies will enable any intel or police organization, anywhere, to trivially exercise any phone’s TSA backdoor lock and access all the intel within. Needless to say, we already have a hard time defending ourselves from nation-state adversaries’ cyber-exfiltration attacks. Hopefully, sanity will prevail, because it would be a monumental error for the government to require that all our phones be engineered with backdoors.
Apple just posted a remarkable “customer letter” on its web site. To understand it, let’s take a few steps back.
In a nutshell, one of the San Bernardino shooters had an iPhone. The FBI wants to root through it as part of their investigation, but they can’t do this effectively because of Apple’s security features. How, exactly, does this work?
- Modern iPhones (and also modern Android devices) encrypt their internal storage. If you were to just cut the Flash chips out of the phone and read them directly, you’d learn nothing.
- But iPhones need to decrypt that internal storage in order to actually run software. The necessary cryptographic key material is protected by the user’s password or PIN.
- The FBI wants to be able to exhaustively try all the possible PINs (a “brute force search”), but the iPhone was deliberately engineered with a “rate limit” to make this sort of attack difficult.
- The only other option, the FBI claims, is to replace the standard copy of iOS with something custom-engineered to defeat these rate limits, but an iPhone will only accept an update to iOS if it’s digitally signed by Apple. Consequently, the FBI convinced a judge to compel Apple to create a custom version of iOS, just for them, solely for this investigation.
- I’m going to ignore the legal arguments on both sides, and focus on the technical and policy aspects. It’s certainly technically possible for Apple to do this. They could even engineer their customized iOS build to measure the serial number of the iPhone on which it’s installed, such that the backdoor would only work on the San Bernardino suspect’s phone, without being a general-purpose skeleton key for all iPhones.
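The storage-encryption and brute-force points above can be sketched in a few lines. This is a deliberately simplified model: the salt, iteration count, and PIN below are invented, and real iOS key derivation is entangled with a hardware-bound key and rate-limited below the operating system.

```python
import hashlib

# Invented parameters for illustration; real iOS key derivation is
# hardware-entangled and far more expensive per guess.
SALT = b"device-unique-salt"
ITERATIONS = 10_000  # makes each guess cost real CPU time

def derive_key(pin: str) -> bytes:
    """Stretch a PIN into a storage-encryption key with PBKDF2."""
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), SALT, ITERATIONS)

# Suppose the storage was encrypted under the key for PIN "0137":
true_key = derive_key("0137")

def brute_force(target: bytes):
    """Try all 10,000 four-digit PINs -- feasible only without rate limits."""
    for n in range(10_000):
        pin = f"{n:04d}"
        if derive_key(pin) == target:
            return pin
    return None

print(brute_force(true_key))  # prints 0137: a 4-digit space falls quickly
```

With the escalating delays (and optional wipe-after-ten-failures) that iOS enforces, the same 10,000-guess search becomes intractable; disabling exactly those limits is what the FBI’s custom firmware would do.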
With all that as background, it’s worth considering a variety of questions.
On Monday, the Telecom Regulatory Authority of India (TRAI) released a decision that effectively bans “zero-rated” Internet services in the country. While the notion of zero-rating might be somewhat new to many readers in the United States, the practice is common in many developing economies. Essentially, it is an arrangement whereby a carrier does not charge its customers normal data rates for accessing certain content.
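Mechanically, zero-rating amounts to excluding certain destinations from the metered byte count. The toy model below illustrates the idea; the hostnames and per-megabyte rate are invented, and real carrier billing classifies traffic at the network layer rather than by a simple hostname list.

```python
# Hypothetical zero-rated services and billing rate (invented values).
ZERO_RATED = {"freebasics.example", "zero.wikipedia.example"}
RATE_PER_MB = 0.02  # hypothetical currency units per megabyte

def monthly_bill(sessions):
    """sessions: iterable of (hostname, megabytes) pairs."""
    metered_mb = sum(mb for host, mb in sessions if host not in ZERO_RATED)
    return metered_mb * RATE_PER_MB

usage = [
    ("zero.wikipedia.example", 500),  # free to the subscriber
    ("news.example", 10),             # billed normally
]
print(monthly_bill(usage))  # only the 10 MB of metered traffic is charged
```

The incentive question at the heart of the debate falls out directly: 500 MB of zero-rated browsing costs the subscriber nothing, while a fraction of that volume anywhere else on the Internet is billed.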
High-profile instances of zero-rating include Facebook’s “Free Basics” (formerly “Internet.org“) and Wikipedia Zero. But many readers might be surprised to learn that the practice is impressively widespread. Although comprehensive documentation is hard to come by, experience and conventional wisdom affirm that mobile carriers in regions across the world regularly partner with content providers to offer services that are effectively free to the consumer, and these offerings tend to change frequently.
I experienced zero-rating first-hand during a research trip to South Africa last summer, where I learned that Cell C, a mobile telecom provider, had partnered with Internet.org to offer its subscribers free access to a limited set of sites through the Internet.org mobile application. I immediately wondered whether a citizen’s socioeconomic class could affect Internet usage—and, as a consequence, their access to information.
Zero-rating evokes a wide range of (strong) opinions (emphasis on “opinion”). Mark Zuckerberg would have us believe that Free Basics is a way to bring the Internet to the next billion people, where the alternative might be that this demographic might not have access to the Internet at all. This, of course, presumes that we equate “access to Facebook” with “access to the Internet”—something which at least one study has shown can occur (and is perhaps even more cause for concern). Others have argued that zero-rated services violate network neutrality principles and could also result in the creation of walled gardens where citizens’ Internet access might be brokered by a few large and powerful organizations.
And yet, while the arguments over zero-rating are loud, emotional, and increasingly high-stakes, the opinions on either side have yet to be supported by any actual data.
The first complete draft of the Princeton Bitcoin textbook is now freely available. We’re very happy with how the book turned out: it’s comprehensive, at over 300 pages, but has a conversational style that keeps it readable.
If you’re looking to truly understand how Bitcoin works at a technical level and have a basic familiarity with computer science and programming, this book is for you. Researchers and advanced students will find the book useful as well — starting around Chapter 5, most chapters have novel intellectual contributions.
Princeton University Press is publishing the official, peer-reviewed, polished, and professionally done version of this book. It will be out this summer. If you’d like to be notified when it comes out, you should sign up here.
Several courses have already used an earlier draft of the book in their classes, including Stanford’s CS 251. If you’re an instructor looking to use the book in your class, we welcome you to get in touch, and we’d be happy to share additional teaching materials with you.
Online course and supplementary materials. The Coursera course accompanying this book had 30,000 students in its first version, and it was a success based on engagement and end-of-course feedback.
We plan to offer a version with some improvements shortly. Specifically, we’ll be integrating the programming assignments developed for the Stanford course with our own, with Dan Boneh’s gracious permission. We also have tentative plans to record a lecture on Ethereum (we’ve added a discussion of Ethereum to the book in Chapter 10).
Finally, graduate students at Princeton have been leading the charge on several exciting research projects in this space. Watch this blog or my Twitter for updates.
Despite statements to the contrary by sponsors and supporters in April 2014, August 2015, and October 2015, backers of the Defend Trade Secrets Act (DTSA) now aver that “cyber espionage is not the primary focus” of the legislation. At last month’s Senate Judiciary Committee hearing, the DTSA was instead supported by two different primary reasons: the rise of trade secret theft by rogue employees and the need for uniformity in trade secret law.
While a change in a policy argument is not inherently bad, the alteration of the core justification for a bill should be considered when assessing it. Perhaps the new position of DTSA proponents acknowledges the arguments by over 40 academics, including me, that the DTSA will not reduce cyberespionage. However, we also disputed these new rationales in that letter: the rogue employee is more than adequately addressed by existing trade secret law, and there will be less uniformity in trade secrecy under the DTSA because of the lack of federal jurisprudence.
The downsides — including weakened industry cybersecurity, abusive litigation against small entities, and resurrection of the anti-employee inevitable disclosure doctrine — remain. As such, I continue to oppose the DTSA as a giant trade secrecy policy experiment with little data to back up its benefits and much evidence of its costs.