September 19, 2017

Engineering around social media border searches

The latest news is that the U.S. Department of Homeland Security is considering a requirement that prospective visitors, while passing through a border checkpoint, submit to an inspection of their “online presence”. That means immigration officials would require travelers to divulge their passwords to Facebook and other such services, which an agent might then inspect, right there, at the border crossing. This raises a variety of concerns, from its chilling impact on freedom of speech to its being an unreasonable search or seizure, never mind whether an airport border agent has the necessary training to make such judgments, much less the time to do it while hundreds of people are waiting in line to get through.

Rather than conduct a serious legal analysis, however, I want to talk about technical countermeasures. What might Facebook or other such services do to help defend their users as they pass a border crossing?

Fake accounts. It’s certainly feasible today to create multiple accounts for yourself, giving up the password to a fake account rather than your real account. Most users would find this unnecessarily cumbersome, and the last thing Facebook or anybody else wants is to have a bunch of fake accounts running around. It’s already a concern when somebody tries to borrow a real person’s identity to create a fake account and “friend” their actual friends.

Duress passwords. Years ago, my home alarm system had the option of two separate PINs. One of them would disable the alarm as normal. The other would sound a silent alarm, summoning the police immediately while making it seem like I had disabled the alarm. Let’s say Facebook supported something similar. You enter the duress password, and Facebook locks your account or switches to your fake account, as above.
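
Here’s a minimal sketch of how a duress check might sit inside a password verifier. Everything below is my own illustration, not anything Facebook actually implements: the account stores salted hashes of both the real and the duress password, and the verifier reports which one matched.

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes) -> bytes:
    # PBKDF2 with SHA-256; a real service would tune the iteration count.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

class Account:
    def __init__(self, real_pw: str, duress_pw: str):
        self.salt = os.urandom(16)
        self.real_hash = hash_password(real_pw, self.salt)
        self.duress_hash = hash_password(duress_pw, self.salt)

    def login(self, attempt: str) -> str:
        attempt_hash = hash_password(attempt, self.salt)
        # Constant-time comparisons, so timing doesn't reveal which hash matched.
        if hmac.compare_digest(attempt_hash, self.duress_hash):
            return "duress"  # hypothetical: lock the account or switch to a decoy
        if hmac.compare_digest(attempt_hash, self.real_hash):
            return "ok"
        return "denied"

account = Account("my real password", "my duress password")
print(account.login("my duress password"))  # -> "duress"
```

The crucial design point is that, to anyone watching over your shoulder, a duress login must be indistinguishable from a normal one.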

Temporary lockouts. If you know you’re about to go through a border crossing, you could give a duress password, as above, or you could arrange an account lockout in advance. You might, for example, designate ten trusted friends, any five of whom must declare that the lockout is over. Absent those declarations, your account would remain locked, and you could not be coerced into giving access to your own account, because you would have no way to do so.
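
The unlock condition here is just a k-of-n threshold over designated trustees. A minimal sketch of that bookkeeping, with hypothetical names throughout (Facebook offers no such feature today):

```python
class TravelLock:
    """Account lockout that only a threshold of trusted friends can lift."""

    def __init__(self, trustees: set[str], threshold: int):
        assert threshold <= len(trustees)
        self.trustees = trustees
        self.threshold = threshold
        self.approvals: set[str] = set()

    def declare_all_clear(self, friend: str) -> None:
        # Only designated trustees count toward the threshold.
        if friend in self.trustees:
            self.approvals.add(friend)

    def is_locked(self) -> bool:
        return len(self.approvals) < self.threshold

lock = TravelLock(trustees={f"friend{i}" for i in range(10)}, threshold=5)
for name in ["friend0", "friend1", "friend2", "friend3", "friend4"]:
    lock.declare_all_clear(name)
print(lock.is_locked())  # -> False, once five trustees have declared
```

A stronger variant would split a recovery secret among the trustees with a threshold secret-sharing scheme, so that not even the service operator could be compelled to lift the lock early.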

Temporary sanitization. Absent any action from Facebook, the best advice today for somebody about to go through a border crossing is to sanitize their account beforehand. That means attempting to second-guess what border agents are looking for and deleting it in advance. Facebook might assist this by providing search features that let users temporarily drop friends, temporarily hide comments or posts containing particular keywords, and so on. As with temporary lockouts, temporary sanitization would need a restoration process that could be delegated to trusted friends. Once you give the all-clear, everything comes back again.
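
A sketch of what keyword-driven temporary hiding might look like, with hidden items retained out of public view for later restoration. The Timeline class and its methods are hypothetical illustrations, not anything in Facebook’s actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Timeline:
    posts: list[str]
    hidden: list[str] = field(default_factory=list)

    def sanitize(self, keywords: list[str]) -> None:
        # Move matching posts out of public view, but keep them for later.
        matches = [p for p in self.posts
                   if any(k.lower() in p.lower() for k in keywords)]
        self.hidden.extend(matches)
        self.posts = [p for p in self.posts if p not in matches]

    def restore(self) -> None:
        # Called only after the trusted-friend all-clear, as above.
        self.posts.extend(self.hidden)
        self.hidden.clear()

t = Timeline(posts=["kitten video", "my protest photos", "lunch"])
t.sanitize(keywords=["protest"])
print(t.posts)  # -> ['kitten video', 'lunch']
t.restore()     # everything comes back after the all-clear
```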

User defense in bulk. Every time a user, going through a border crossing, exercises a duress password, that’s an unambiguous signal to Facebook. Even absent such signals, Facebook would observe highly unusual login behavior coming from those specific browsers and IP addresses. Facebook could simply deny access to its services from government IP address blocks. While it’s entirely possible for the government to circumvent this, whether using Tor or whatever else, there’s no reason that Facebook needs to be complicit in the process.
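
Checking logins against known address blocks is the easy part; the hard part is curating the list. A sketch using Python’s standard ipaddress module, with placeholder documentation ranges standing in for real government blocks:

```python
import ipaddress

# Placeholder CIDR blocks (RFC 5737 documentation ranges); a real
# deployment would curate actual government address allocations.
BLOCKED_NETWORKS = [
    ipaddress.ip_network("198.51.100.0/24"),
    ipaddress.ip_network("203.0.113.0/24"),
]

def is_blocked(client_ip: str) -> bool:
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in BLOCKED_NETWORKS)

print(is_blocked("198.51.100.7"))  # -> True
print(is_blocked("192.0.2.1"))     # -> False
```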

So is there a reasonable alternative?

While it’s technically feasible for the government to require that Facebook give it full “backdoor” access to each and every account so it can render threat judgments in advance, this would constitute the most unreasonable search and seizure in the history of that phrase. Furthermore, if it became widely known that such unreasonable seizures were commonplace, that would be the end of the company. Facebook users have an expectation of privacy and will switch to other services if Facebook cannot protect them.

Wouldn’t it be nice if there were some less invasive way to support the government’s desire for “extreme vetting”? Can we protect ordinary users’ privacy while still enabling the government to intercept people who intend harm to our country? We certainly must assume that an actual bona fide terrorist will have no trouble creating a completely clean online persona to use while crossing a border. They can invent wholesome friends with healthy children sharing silly videos of cute kittens. While we don’t know much about the government’s existing strategies for distinguishing tourists from terrorists, we have to assume that the process involves the accumulation of signals, human intelligence, and other painstaking efforts by professional investigators to protect our country from harm. It’s entirely possible that they’re already doing a good job.

An analogy to understand the FBI’s request of Apple

After my previous blog post about the FBI, Apple, and the San Bernardino iPhone, I’ve been reading many other bloggers and news articles on the topic. What seems to be missing is a decent analogy to explain the unusual nature of the FBI’s demand and the importance of Apple’s stance in opposition to it. Before I dive in, it’s worth understanding what the FBI’s larger goals are. Cyrus Vance Jr., the Manhattan DA, states it clearly: “no smartphone lies beyond the reach of a judicial search warrant.” That’s the FBI’s real goal. The San Bernardino case is just a vehicle toward achieving that goal. With this in mind, it’s less important to focus on the specific details of the San Bernardino case, the subtle improvements Apple has made to the iPhone since the 5c, or the apparent mishandling of the iCloud account behind the San Bernardino iPhone.

Our Analogy: TSA Luggage Locks

When you check your bags in the airport, you may well want to lock them, to keep baggage handlers and other interlopers from stealing your stuff. But, of course, baggage inspectors have a legitimate need to look through bags. Your bags don’t have any right of privacy in an airport. To satisfy these needs, we now have “TSA locks”. You get a combination you can enter, and the TSA gets their own secret key that allows airport staff to open any TSA lock. That’s a “backdoor”, engineered into the lock’s design.

What’s the alternative? If you want the TSA to have the technical capacity to search a large percentage of bags, then there really isn’t an alternative. After all, if we used “real” locks, then the TSA would be “forced” to cut them open. But consider the hypothetical case where these sorts of searches were exceptionally rare. At that point, the local TSA could keep hundreds of spare locks, of all makes and models. They could cut off your super-duper strong lock, inspect your bag, and then replace the cut lock with a brand new one of the same variety. They could extract the PIN or key cylinder from the broken lock and install it in the new one. They could even rough up the new one so it looks just like the original. Needless to say, this would be a specialized skill and it would be expensive to use. That’s pretty much where we are in terms of hacking the newest smartphones.

Another area where this analogy holds up is all the people who will “need” access to the backdoor keys. Who gets the backdoor keys? Sure, it might begin with the TSA, but every baggage inspector in every airport, worldwide, will demand access to those keys. And they’ll even justify it, because their inspectors work together with ours to defeat smuggling and other crimes. We’re all in this together! Next thing you know, the backdoor keys are everywhere. Is that a bad thing? Well, the TSA backdoor lock scheme is only as secure as their ability to keep the keys a secret. And what happened? The TSA mistakenly allowed the Washington Post to publish a photo of all the keys, which makes it trivial for anyone to fabricate those keys. (CAD files for them are now online!) Consequently, anybody can take advantage of the TSA locks’ designed-in backdoor, not just all the world’s baggage inspectors.

For San Bernardino, the FBI wants Apple to retrofit a backdoor mechanism where there wasn’t one previously. The legal precedent that the FBI seeks would create the capability to convert any luggage lock into a TSA backdoor lock. This would only be necessary if they wanted access to lots of phones, at a scale where their specialized phone-cracking team becomes too expensive to operate. This no doubt becomes all the more pressing for the FBI as modern smartphones get better and better at resisting physical attacks.

Where the analogy breaks down: If you travel with expensive stuff in your luggage, you know well that those locks have very limited resistance to an attacker with bolt cutters. If somebody steals your luggage, they’ll get your stuff, whereas that’s not necessarily the case with a modern iPhone. These phones are akin to luggage with some kind of self-destruct charge inside: force the luggage open, and the contents are destroyed. Another important difference is that much of the data the FBI presumably wants from the San Bernardino phone can be obtained elsewhere, e.g., phone call metadata and cellular tower usage metadata. We have very little reason to believe that the FBI needs anything on that phone whatsoever, relative to the mountain of evidence it already has.

Why this analogy is important: The capability to access the San Bernardino iPhone, as the court order describes it, is a one-off thing—a magic wand that converts precisely one traditional luggage lock into a TSA backdoor lock, having no effect on any other lock in the world. But as Vance makes clear in his New York Times opinion piece, the stakes are much higher than that. The FBI wants this magic wand, in the form of judicial orders and a bespoke Apple engineering process, to gain backdoor access to any phone in its possession. If the FBI can go to Apple to demand this, then so can any other government. Apple will quickly want to get itself out of the business of adjudicating these demands, so it will engineer in the backdoor feature once and for all, albeit under duress, and will share the necessary secrets with the FBI and with every other nation-state’s police and intelligence agencies. In other words, Apple will be forced to install a TSA backdoor key in every phone it makes, and so will everybody else.

While this would be lovely for helping the FBI gather the evidence it wants, it would be especially lovely for foreign intelligence officers, operating on our shores, or going after our citizens when they travel abroad. If they pickpocket a phone from a high-value target, our FBI’s policies will enable any intel or police organization, anywhere, to trivially exercise any phone’s TSA backdoor lock and access all the intel within. Needless to say, we already have a hard time defending ourselves from nation-state adversaries’ cyber-exfiltration attacks. Hopefully, sanity will prevail, because it would be a monumental error for the government to require that all our phones be engineered with backdoors.

Supreme Court Takes Important GPS Tracking Case

This morning, the Supreme Court agreed to hear an appeal next term of United States v. Jones (formerly United States v. Maynard), a case in which the D.C. Circuit Court of Appeals suppressed evidence of a criminal defendant’s travels around town, which the police collected using a tracking device they attached to his car. For more background on the case, consult the original opinion and Orin Kerr’s previous discussions about the case.

No matter what the Court says or holds, this case will probably prove to be a landmark. Watch it closely.

(1) Even if the Court says nothing else, it will face the constitutionality of the police’s use of tracking beepers to follow criminal suspects. In a pair of cases from the mid-1980s, the Court held that the police did not need a warrant to use a tracking beeper to follow a car around on public, city streets (Knotts) but did need a warrant to follow a beeper that was moved indoors (Karo) because it “reveal[ed] a critical fact about the interior of the premises.” By direct application of these cases, the warrantless tracking in Jones seems constitutional, because it was restricted to movement on public, city streets.

Not so fast, said the D.C. Circuit. In Jones, the police tracked the vehicle 24 hours a day for four weeks. Citing the “mosaic theory often invoked by the Government in cases involving national security information,” the court held that the whole can sometimes be more than the parts: tracking a car continuously for a month is constitutionally different in kind, not just in degree, from tracking a car along a single trip. This is a new approach to the Fourth Amendment, one arguably at odds with opinions from other Courts of Appeals.

(2) This case gives the Court the opportunity to speak generally about the Fourth Amendment and location privacy. Depending on what it says, it may provide hints for lower courts struggling with the government’s use of cell phone location information, for example.

(3) For support of its embrace of the mosaic theory, the D.C. Circuit cited a 1989 Supreme Court case, U.S. Department of Justice v. Reporters Committee for Freedom of the Press. In that case, which involved the Freedom of Information Act (FOIA), not the Fourth Amendment, the Court allowed the FBI to refuse to release compiled “rap sheets” about organized crime suspects, even though the rap sheets were compiled mostly from “public” information obtainable from courthouse records. In agreeing that the rap sheets nevertheless fell within a “personal privacy” exemption from FOIA, the Court embraced, for the first time, the idea that the whole may be worth more than the parts. The Court noted the difference “between scattered disclosure of the bits of information contained in a rap-sheet and revelation of the rap-sheet as a whole,” and found a “vast difference between the public records that might be found after a diligent search of courthouse files, county archives, and local police stations throughout the country and a computerized summary located in a single clearinghouse of information.” (FtT readers will see the parallels to the debates on this blog about PACER and RECAP.) In summary, it found that “practical obscurity” could amount to privacy.

Practical obscurity is an idea that hasn’t gotten much traction in the courts since Reporters Committee. But it is an idea well-loved by many privacy scholars, including myself, because it helps explain concerns about the privacy implications of aggregating and mining supposedly “public” data.

The Court, of course, may choose a narrow route for affirming or reversing the D.C. Circuit. But if it instead speaks broadly or categorically about the viability of practical obscurity as a legal theory, this case might set a standard that we will be debating for years to come.