November 24, 2024

Going to the doctor and worrying about cybersecurity

For most people, going to the doctor means thinking about co-pays and when they’ll feel better. For me, though, it means thinking about those plus the cybersecurity of the computer systems the medical professionals are using. I’ve spent more time than usual visiting doctors recently. I broke my hand – sure, I’ll tell […]

DHS OIG study of scanners silent on computer threats

The U.S. Department of Homeland Security Office of Inspector General (DHS OIG) released its report on the safety of airport backscatter machines on February 29. The report has received criticism from ProPublica, among others, for what it says as well as what it doesn’t say, mostly focusing on the incremental risk to the traveling public, the large number of repair services, and the lack of data analyzing whether the machines serve their claimed purpose. (The report does not address millimeter wave machines, which most scientists believe are safer.)

But what’s surprising about both the report and the critiques of it is that they discuss only the radiation aspects of the machines when used as intended – not the information systems embedded in the devices, or what happens if the scanners are used in unintended ways, as could happen with a computer malfunction. Like any modern system, the scanners almost certainly contain a plethora of computer systems controlling the scanning beam, analyzing what the beam finds, and so on. It’s pretty likely that there are Windows and Linux systems embedded in the device, and it’s certain that the different parts of the device are networked together – for example, so that a technician in a separate room can see the images without seeing the person being scanned (a measure TSA adopted to head off complaints about invasion of privacy).

The computer systems are the parts that concern me the most. We should be concerned about security, safety, and privacy in such complex systems. But the report doesn’t use the word “software” even once, and the word “computer” appears twice in reference to training but never in reference to the devices themselves.

On the safety front, we know that improperly designed software/hardware interaction can lead to serious and even fatal results – Nancy Leveson’s report on the failure of the Therac-25 system should be required reading for anyone considering building a software-controlled radiation management system, or anyone assessing the safety of such a system. We can hope that the hardware design of the scanners is such that even malicious software would be unable to cause the kind of failures that occurred with the Therac-25, but the OIG report gives no indication whether that risk was considered.
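
To make the software/hardware interaction risk concrete, here is a minimal, hypothetical sketch in Python of the kind of check-then-act race that figured in the Therac-25 accidents. None of this is actual scanner code; the class, mode names, and timings are invented for illustration.

```python
# Hypothetical sketch of a Therac-25-style check-then-act race.
# Not actual scanner code; names and timings are invented for illustration.
import threading
import time

class BeamController:
    def __init__(self):
        self.mode = "low_power"        # operator-selected mode
        self._lock = threading.Lock()

    def operator_edit(self, new_mode):
        # Operator edits arrive asynchronously from the console thread.
        with self._lock:
            self.mode = new_mode

    def fire_beam_unsafe(self):
        # BUG: the safety check and the beam activation are not atomic,
        # so a concurrent operator edit can land between steps 1 and 3.
        if self.mode == "low_power":    # 1. check passes...
            time.sleep(0.01)            # 2. ...an edit sneaks in here...
            activate_beam(self.mode)    # 3. ...and the beam fires with a stale check

    def fire_beam_safe(self):
        # Fix: check and activation happen under the same lock, so no edit
        # can interleave. An independent hardware interlock should still
        # back this up; software interlocks alone failed in the Therac-25.
        with self._lock:
            if self.mode == "low_power":
                activate_beam(self.mode)

def activate_beam(mode):
    print(f"beam fired in {mode} mode")
```

As Leveson’s analysis emphasizes, the deeper lesson is defense in depth: even correct-looking software checks should be backed by hardware interlocks that make the dangerous state physically unreachable.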

On the security and privacy front, we know that the devices have software update capabilities – that became clear when they were “upgraded” to obscure the person’s face as a privacy measure, and when further upgrades were planned to show only a body outline highlighting items of concern, rather than an actual image of the person. So what protections are in place to ensure that insiders or outsiders can’t install “custom” upgrades that leak images or, worse yet, change the radiation characteristics of the machines? Consider the recent case of the Air Force drone control facility that was infected by malware despite being on a closed classified network – we should not assume that closed networks will remain closed, especially given the ease of carrying in USB devices.
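
One baseline safeguard against “custom” upgrades is to have the device accept only updates signed with the vendor’s private key. Here is a minimal sketch using the Python cryptography library’s Ed25519 API; the key bytes and the flash_firmware routine are placeholders I’ve invented, not anything from a real scanner.

```python
# Sketch: refuse firmware updates that aren't signed by the vendor.
# Assumes the `cryptography` library; key and flashing routine are placeholders.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

# Placeholder: in a real device, this 32-byte key would be baked into firmware.
VENDOR_PUBLIC_KEY_BYTES = bytes(32)

def flash_firmware(image: bytes) -> None:
    """Device-specific write routine; stubbed out here."""
    ...

def install_update(firmware_image: bytes, signature: bytes) -> bool:
    """Install the update only if the vendor's signature verifies."""
    vendor_key = Ed25519PublicKey.from_public_bytes(VENDOR_PUBLIC_KEY_BYTES)
    try:
        # verify() raises InvalidSignature on any tampering or wrong key.
        vendor_key.verify(signature, firmware_image)
    except InvalidSignature:
        return False  # reject "custom" upgrades from insiders or outsiders
    flash_firmware(firmware_image)
    return True
```

Signing alone doesn’t address a compromised vendor key, but it raises the bar well above “anyone with USB access can reflash the machine.”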

Since we know that the scanners include networks, what measures are in place to protect those networks and to prevent them from being attacked, just as the networks used by government and private industry have been? Yes, it’s possible to build the devices as closed networks protected by encryption – and it’s also possible to accidentally or intentionally subvert those networks by connecting them to wireless routers.

Yes, I know that the government has extensive processes in place to approve any computer system, using a process known as Certification and Accreditation. Unfortunately, C&A processes tend to focus too much on the paperwork and not enough on real-world threat assessments. Perhaps the C&A process used for the scanners really is good enough – but we just don’t know, and by neglecting to discuss the computer side of the scanners, the OIG report offers no reassurance.

Over the past few years, Stuxnet and research into embedded devices such as those used in cars and medical devices have taught us that embedded systems software can affect the real world in surprising ways. And with software-controlled radiation devices potentially causing unseen damage, the risks to the traveling public are too great for the OIG to ignore this critical aspect of the machines.

United States v. Jones is a Near-Optimal Result

This morning, the Supreme Court handed down its decision in United States v. Jones, the GPS tracking case, deciding unanimously that the government violated the defendant’s Fourth Amendment rights when it installed a wireless GPS tracking device on the undercarriage of his car and used it to monitor his movements around town for four weeks without a search warrant.

Despite the unanimous result, the court was not unified in its reasoning. Five Justices signed the majority opinion, authored by Justice Scalia, finding that the Fourth Amendment “at bottom . . . assure[s] preservation of that degree of privacy against government that existed when the Fourth Amendment was adopted” and thus analyzing the case under “common-law trespassory” principles.

Justice Alito wrote a concurring opinion, signed by Justices Ginsburg, Breyer, and Kagan, faulting the majority for “decid[ing] the case based on 18th-century tort law” and arguing instead that the case should be decided under Katz’s “reasonable expectations of privacy” test. Applying Katz, the four concurring Justices would have found that the government violated the Fourth Amendment because “long-term tracking” implicated a reasonable expectation of privacy and thus required a warrant.

Justice Sotomayor, who signed the majority opinion, wrote a separate concurring opinion, but more on that in a second.

I think the Jones court reached the correct result, and I think that the three opinions in this case represent a near-optimal result for those who want the Court to recognize how its present Fourth Amendment jurisprudence does far too little to protect privacy and limit unwarranted government power in light of recent advances in surveillance technology. This might seem counter-intuitive. I predict that many news stories about Jones will pitch it as an epic battle between Scalia’s property-centric and Alito’s privacy-centric approaches to the Fourth Amendment, quoting people who regret that Justice Alito didn’t win the day. That framing focuses on the wrong thing, underplaying how today’s three opinions – all of them – represent a significant advance for Constitutional privacy, for several reasons:

1. Justice Alito? Maybe I’m not a savvy court watcher, but I did not see this coming. The fact that Justice Alito wrote such a strong privacy-centric opinion suggests that future Fourth Amendment litigants will see a well-defined path to five votes, especially since it seems like Justice Sotomayor will likely provide the fifth vote in the right future case.

2. Justices Scalia and Thomas showed restraint. The majority opinion goes out of its way to highlight that its focus on property is not meant to foreclose privacy-based analyses in the future. It uses the phrases “at bottom” and “at a minimum” to hammer home the idea that it is supplementing Katz, not replacing it. Maybe Justice Scalia did this to win Justice Sotomayor’s vote, but even if so, I am heartened that neither Justice Scalia nor Justice Thomas thought it necessary to write a separate concurrence arguing that Katz’s privacy focus should be replaced with a focus only on property rights.

3. Justice Sotomayor does not like the third-party doctrine. It’s probably best here just to quote from the opinion:

More fundamentally, it may be necessary to reconsider the premise that an individual has no reasonable expectation of privacy in information voluntarily disclosed to third parties. E.g., Smith, 442 U.S., at 742; United States v. Miller, 425 U.S. 435, 443 (1976). This approach is ill suited to the digital age, in which people reveal a great deal of information about themselves to third parties in the course of carrying out mundane tasks. People disclose the phone numbers that they dial or text to their cellular providers; the URLs that they visit and the e-mail addresses with which they correspond to their Internet service providers; and the books, groceries, and medications they purchase to online retailers. Perhaps, as JUSTICE ALITO notes, some people may find the “tradeoff” of privacy for convenience “worthwhile,” or come to accept this “diminution of privacy” as “inevitable,” post, at 10, and perhaps not. I for one doubt that people would accept without complaint the warrantless disclosure to the Government of a list of every Web site they had visited in the last week, or month, or year. But whatever the societal expectations, they can attain constitutionally protected status only if our Fourth Amendment jurisprudence ceases to treat secrecy as a prerequisite for privacy. I would not assume that all information voluntarily disclosed to some member of the public for a limited purpose is, for that reason alone, disentitled to Fourth Amendment protection.

Wow. And Amen. Set your stopwatches: the death watch for the third-party doctrine has finally begun.

4. This was the wrong case for a privacy overhaul of the Fourth Amendment. Most importantly, I’ve had misgivings about using Jones as the vehicle for fixing what is broken with the Fourth Amendment. GPS vehicle tracking comes laden with lots of baggage – practical, jurisprudential, and atmospheric – that other actively litigated areas of modern surveillance do not. GPS vehicle tracking happens on public streets, meaning it runs into dozens of Supreme Court pronouncements about assumption of risk and voluntary disclosure. It faces two prior precedents, Karo and Knotts, that need to be distinguished or possibly overturned. It does not suffer (as far as we know) from a long history of use against innocent people, but instead seems mostly used to track fugitives and drug dealers.

For all of these reasons, even the most privacy-minded Justice is likely to recognize caveats and exceptions in crafting a new rule for GPS tracking. Imagine if Justice Sotomayor had signed Justice Alito’s opinion instead of Justice Scalia’s. We would’ve been left with a holding that allowed short-term monitoring but not long-term monitoring, without a precise delineation between the two. We would’ve been left with the possible new caveat that the rules change when the police investigate “extraordinary offenses,” also undefined. These unsatisfying, vague new rules would have had downstream negative effects on lower court opinions analyzing URL or search query monitoring, or cell phone tower monitoring, or packet sniffing.

Better that we have the big “reinventing Katz” debate in a case that isn’t so saddled with the confusions of following cars on public streets. I hope the Supreme Court next faces a surveillance technique born purely on the Internet, one in which “classic trespassory search is not involved.” If the votes hold from Jones, we might end up with what many legal scholars have urged: a retrenchment or reversal of the third-party doctrine; a Fourth Amendment jurisprudence better tailored to the rise of the Internet; and a better Constitutional balance in this country between privacy and security.

Supreme Court Takes Important GPS Tracking Case

This morning, the Supreme Court agreed to hear an appeal next term of United States v. Jones (formerly United States v. Maynard), a case in which the D.C. Circuit Court of Appeals suppressed evidence of a criminal defendant’s travels around town, which the police collected using a tracking device they attached to his car. For more background on the case, consult the original opinion and Orin Kerr’s previous discussions about the case.

No matter what the Court says or holds, this case will probably prove to be a landmark. Watch it closely.

(1) Even if the Court says nothing else, it will face the constitutionality of the police use of tracking beepers to follow criminal suspects. In a pair of cases from the mid-1980s, the Court held that the police did not need a warrant to use a tracking beeper to follow a car around on public city streets (Knotts) but did need a warrant to follow a beeper that was moved indoors (Karo) because it “reveal[ed] a critical fact about the interior of the premises.” By direct application of these cases, the warrantless tracking in Jones seems constitutional, because it was restricted to movement on public city streets.

Not so fast, said the D.C. Circuit. In Jones, the police tracked the vehicle 24 hours a day for four weeks. Citing the “mosaic theory often invoked by the Government in cases involving national security information,” the court held that the whole can sometimes be more than the sum of the parts: tracking a car continuously for a month is constitutionally different in kind, not just in degree, from tracking a car along a single trip. This is a new approach to the Fourth Amendment, one arguably at odds with opinions from other Courts of Appeals.

(2) This case gives the Court the opportunity to speak generally about the Fourth Amendment and location privacy. Depending on what it says, it may provide hints for lower courts struggling with the government’s use of cell phone location information, for example.

(3) For support of its embrace of the mosaic theory, the D.C. Circuit cited a 1989 Supreme Court case, U.S. Department of Justice v. Reporters Committee for Freedom of the Press. In that case, which involved the Freedom of Information Act (FOIA), not the Fourth Amendment, the Court allowed the FBI to refuse to release compiled “rap sheets” about organized crime suspects, even though the rap sheets were compiled mostly from “public” information obtainable from courthouse records. In agreeing that the rap sheets nevertheless fell within a “personal privacy” exemption from FOIA, the Court embraced, for the first time, the idea that the whole may be worth more than the sum of the parts. The Court noted the difference “between scattered disclosure of the bits of information contained in a rap-sheet and revelation of the rap-sheet as a whole,” and found a “vast difference between the public records that might be found after a diligent search of courthouse files, county archives, and local police stations throughout the country and a computerized summary located in a single clearinghouse of information.” (FtT readers will see the parallels to the debates on this blog about PACER and RECAP.) In short, it found that “practical obscurity” could amount to privacy.

Practical obscurity is an idea that hasn’t gotten much traction in the courts since Reporters Committee. But it is an idea well-loved by many privacy scholars, myself included, because it helps explain concerns about the privacy implications of aggregating and mining supposedly “public” data.

The Court, of course, may choose a narrow route for affirming or reversing the D.C. Circuit. But if it instead speaks broadly or categorically about the viability of practical obscurity as a legal theory, this case might set a standard that we will be debating for years to come.

Deceptive Assurances of Privacy?

Earlier this week, Facebook expanded the roll-out of its facial recognition software to tag people in photos uploaded to the social networking site. Many observers and regulators responded with privacy concerns; EFF offered a video showing users how to opt out.

Tim O’Reilly, however, takes a different tack:

Face recognition is here to stay. My question is whether to pretend that it doesn’t exist, and leave its use to government agencies, repressive regimes, marketing data mining firms, insurance companies, and other monolithic entities, or whether to come to grips with it as a society by making it commonplace and useful, figuring out the downsides, and regulating those downsides.

…We need to move away from a Maginot-line like approach where we try to put up walls to keep information from leaking out, and instead assume that most things that used to be private are now knowable via various forms of data mining. Once we do that, we start to engage in a question of what uses are permitted, and what uses are not.

O’Reilly’s point – and face-recognition technology – is bigger than Facebook. Even if Facebook swore off the technology tomorrow, it would be out there, and likely used against us unless regulated. Yet we can’t decide on the proper scope of regulation without understanding the technology and its social implications.

By taking these latent capabilities (Riya was demonstrating them years ago; the NSA probably had them decades earlier) and making them visible, Facebook gives us more feedback on the privacy consequences of the technology. If part of that feedback is “ick, creepy” or worse, we should feed that into regulation of the technology’s use everywhere, not just in Facebook’s interface. Merely hiding the feature in the interface while leaving it active in the background would be deceptive: it would give us a false assurance of privacy. For all its blundering, Facebook seems to be blundering in the right direction now.

Compare the furor around Dropbox’s disclosure “clarification”. Dropbox had claimed that “All files stored on Dropbox servers are encrypted (AES-256) and are inaccessible without your account password,” but recently updated that to the weaker assertion: “Like most online services, we have a small number of employees who must be able to access user data for the reasons stated in our privacy policy (e.g., when legally required to do so).” Dropbox’s “encrypted” signaled absolutely private, when it meant only relatively private. Users who acted on the assurance of complete secrecy were deceived; now those who know the true level of relative secrecy can update their assumptions and adapt their behavior accordingly.
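
The technical distinction behind the furor is who holds the keys. If encryption happens client-side with a key only the user knows, employees genuinely cannot read the files; if the provider encrypts server-side with keys it holds, “encrypted” is compatible with employee access. Here is a minimal sketch of the client-side alternative, using the Python cryptography library’s Fernet recipe (the filenames are illustrative):

```python
# Sketch: encrypt client-side before upload, so the storage provider
# only ever sees ciphertext. Assumes the `cryptography` library.
from cryptography.fernet import Fernet

# The key never leaves the user's machine; losing it means losing the files.
key = Fernet.generate_key()
f = Fernet(key)

with open("taxes.pdf", "rb") as src:
    ciphertext = f.encrypt(src.read())

with open("taxes.pdf.enc", "wb") as dst:
    dst.write(ciphertext)  # upload this file, not the plaintext

# Later, only a holder of `key` can recover the document:
plaintext = f.decrypt(ciphertext)
```

The trade-off is exactly the one Dropbox’s wording papered over: provider-held keys allow password resets and server-side features, while user-held keys deliver the “inaccessible without your password” guarantee users thought they had.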

Privacy-invasive technology and the limits of privacy-protection should be visible. Visibility feeds more and better-controlled experiments to help us understand the scope of privacy, publicity, and the space in between (which Woody Hartzog and Fred Stutzman call “obscurity” in a very helpful draft). Then, we should implement privacy rules uniformly to reinforce our social choices.