
Crowdsourcing State Secrets

Those who regularly listen to Fresh Air may have heard a recent interview with journalist Dana Priest about the dramatic expansion of the intelligence community over the past ten years. She described how the government had paid contractors several times what its own intelligence officials would be paid to perform the same analysis tasks, and how unwieldy the massive network of contractors had become (to the point where even deciding who gets top secret clearance had been contracted out). At the same time, in this age of Wikileaks and #Antisec, leaks and break-ins are becoming all the more common. It’s only a matter of time before thousands of military intelligence reports show up on Pastebin.

However, what if we didn’t have to pay this mass of analysts? What if we stopped worrying so much about leaks and embraced them? What if we could bring in anyone who wanted to analyze the insane amount of information by simply dumping large amounts of the raw data to a publicly-accessible location? What if we crowdsourced intelligence analysis?

Granted, we wouldn’t be able to just dump everything, since some items (such as “al-Qaeda’s number 5 may be in house X in Waziristan, according to informant Y, who lives in Taliban-controlled territory”) would be damaging if released. But (at least according to the interview) many of the items classified as top secret would not actually cause “exceptionally grave damage.” As for information that is particularly sensitive but could still benefit from analysis, we could simply use pseudonyms and keep the pseudonym-to-real-name mapping top secret.
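To make the pseudonym idea concrete, here is a toy sketch (purely illustrative, and not any agency’s actual practice): sensitive names are replaced with stable keyed pseudonyms, so outside analysts can correlate reports about the same source while the real identities stay locked away with the secret key and mapping.

    # Toy sketch of keyed pseudonymization; the key and names are made up.
    import hmac, hashlib

    SECRET_KEY = b"kept-top-secret"  # the key and the pseudonym-to-real-name mapping stay classified

    def pseudonym(real_name: str) -> str:
        """Derive a stable pseudonym from a real name using an HMAC under a secret key."""
        digest = hmac.new(SECRET_KEY, real_name.encode(), hashlib.sha256).hexdigest()
        return "SOURCE-" + digest[:8].upper()

    # The same informant always maps to the same pseudonym, so analysts can
    # correlate reports without ever learning who the informant actually is.
    print(pseudonym("informant Y"))

Because the pseudonyms are deterministic, redacted reports about the same source remain linkable to one another without revealing who that source is.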

Adversaries would almost certainly attempt to inject false analyses. This simply becomes an instance of the Byzantine generals problem, but with a twist: because the mainstream media is always looking for the next sensational story, it would be performing much of the analysis. That creates a common goal between the public and the news outlets, and with it a level of trust that other (potentially adversarial) actors would not necessarily enjoy.

In an era when the talking heads in Washington and the media want to cut everything from the tiny National Endowment for the Arts to gigantic Social Security, the last thing we need is to pay people to do work that many would do for free. Applying open government principles to data that do not necessarily need to be kept secret could go a long way toward reducing the part of government that most politicians are unwilling to touch.

Did NJ election officials fail to respect court order to improve security of elections?

Part 2 of 4
The Gusciora case was filed in 2004 by the Rutgers Constitutional Litigation Clinic on behalf of Reed Gusciora and other public-interest plaintiffs. The Plaintiffs sought to end the use of paperless direct-recording electronic voting machines, which are very vulnerable to fraud and manipulation via replacement of their software. The defendant was the Governor of New Jersey, and as governors came and went the case was variously titled Gusciora v. McGreevey, Gusciora v. Corzine, and Gusciora v. Christie.

In 2010 Judge Linda Feinberg issued an Opinion. She did not ban the machines, but ordered the State to implement several kinds of security measures: some to improve the security of the computers on which ballots are programmed (and results are tabulated), and some to improve the security of the computers inside the voting machines themselves.

The Plaintiffs had shown evidence that ballot-programming computers (the so-called “WinEDS laptops”) in Union County had been used to surf the Internet even on election day in 2008. This, combined with many other security vulnerabilities in the configuration of Microsoft Windows, left the computers open to intrusion by outsiders, who could then interfere with and manipulate the programming of ballots before their installation on the voting machines, or manipulate the aggregation of results after the elections. Judge Feinberg also heard testimony that so-called “Hardening Guidelines”, which had previously been prepared by Sequoia Voting Systems at the request of the State of California, would help close some of these vulnerabilities. Basically, one wipes the hard drive clean on the “WinEDS laptop”, installs a fresh copy of Microsoft Windows, runs a script to shut down Internet access and generally tighten the Windows security configuration, and finally installs a fresh copy of the WinEDS ballot software. The Court also heard testimony (from me) that installing these Guidelines requires experience in Windows system administration, and would likely be beyond the capability of some election administrators.
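To give a sense of why this is nontrivial, here is a rough spot-check sketch (not the actual Hardening Guidelines; the two checks are only illustrative) of the kind of verification an administrator might run afterward, confirming that network interfaces are disabled and the Windows firewall is turned on:

    # Rough spot-check sketch, to be run on the ballot-programming computer itself.
    # It calls only standard Windows commands; the parsing is deliberately crude.
    import subprocess

    def run(cmd: str) -> str:
        return subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout

    interfaces = run("netsh interface show interface")        # lists adapters and their Admin State
    firewall = run("netsh advfirewall show allprofiles state")

    print("Some network interface still enabled?", "Enabled" in interfaces)
    print("Firewall profile states:")
    print(firewall)

Even a spot-check like this presumes comfort with the Windows command line, which is exactly the point of that testimony.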

Among the several steps the Court ordered in 2010 was the installation of these Hardening Guidelines on every WinEDS ballot-programming computer used in public elections, within 120 days.

Two years after I testified in the Gusciora case, I served as an expert witness in a different case, Zirkle v. Henry, in a different Court, before Judge David Krell. I wanted to determine whether an anomaly in the June 2011 Cumberland County primary election could have been caused by an intruder from the Internet, or whether such intrusion could reasonably be ruled out. It therefore became relevant whether Cumberland County’s WinEDS laptop was in compliance with Judge Feinberg’s Order. That is, had the Hardening Guidelines been installed before the ballot programming was done for the election in question? If so, what would the event logs say about the use of that machine as the ballot cartridges were programmed?

One of the components of the Hardening Guidelines is to turn on certain Event Logs in the Windows operating system. So, during my examination of the WinEDS laptop on August 17, I opened the Windows Event Viewer and photographed the screens showing the logs. To my surprise, the logs commenced on the afternoon of August 16, 2011, the day before my examination. At the very least someone had wiped the logs clean; possibly, on August 16, someone had wiped the entire hard drive clean while installing the Hardening Guidelines. In either case, evidence in a pending court case (files on a computer that the State of New Jersey and the County of Cumberland had been ordered to produce for examination) was erased. I’m told that evidence-tampering is a crime. In an affidavit dated August 24, Jason Cossaboon, a Computer Systems Analyst employed by Cumberland County, stated that he erased the event logs on August 16.
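Readers who want to run the same kind of check on a Windows machine of their own can do so programmatically. The following is a minimal sketch (it assumes the third-party pywin32 package; nothing here comes from the case record) that prints the timestamp of the oldest surviving System-log entry. A log that “begins” only a day or two earlier has been cleared, or sits on a freshly installed operating system.

    # Minimal sketch, assuming the pywin32 package is installed (pip install pywin32).
    import win32evtlog

    log = win32evtlog.OpenEventLog(None, "System")  # None means the local machine
    flags = win32evtlog.EVENTLOG_FORWARDS_READ | win32evtlog.EVENTLOG_SEQUENTIAL_READ
    records = win32evtlog.ReadEventLog(log, flags, 0)  # first batch, oldest records first
    if records:
        print("Oldest surviving System event:", records[0].TimeGenerated)
    else:
        print("The System event log is empty.")
    win32evtlog.CloseEventLog(log)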

Robert Giles, Director of the New Jersey Division of Elections, was present during my examination on August 17. Mr. Giles submitted to Judge David Krell an affidavit dated August 25 describing the steps he had taken to achieve compliance with Judge Feinberg’s Order. He writes, “The Sequoia hardening manual was sent, by email, to the various county election offices on March 29, 2010. To my knowledge, the hardening process was completed by the affected counties by the required deadline of June 1, 2010.” Mr. Giles does not say anything about how he acquired the “knowledge” that the process was completed.

Mr. Giles was present in Judge Feinberg’s courtroom in 2009 when I testified that the Hardening Guidelines are not simple to install and would typically require someone with technical training or experience. And yet he then pretended to discharge the State’s duty of compliance with Judge Feinberg’s Order by simply sending a mass e-mail to county election officials. Judge Feinberg herself said that sending an e-mail was not enough; a year later, Mr. Giles has done nothing more. In my opinion, this is disrespectful to the Court, and to the voters of New Jersey.

DigiNotar Hack Highlights the Critical Failures of our SSL Web Security Model

This past week, the Dutch company DigiNotar admitted that their servers were hacked in June of 2011. DigiNotar is no ordinary company, and this was no ordinary hack. DigiNotar is one of the “certificate authorities” that have been entrusted by web browsers to certify to users that they are securely connecting to web sites. Without this assurance, users could have their communications intercepted by any nefarious entity that managed to insert itself in the network between the user and the web site they seek to reach.

It appears that DigiNotar did not deserve to be trusted with the responsibility of issuing SSL certificates, because their systems allowed an outside hacker to break in and issue himself certificates for any web site domain he wished. He did so for dozens of domain names, including *.google.com and www.cia.gov. Anyone with possession of these certificates and control over the network path between you and the outside world could, for example, view all of your traffic to Gmail. The attacker in this case seems to be the same person who similarly compromised certificate-issuing servers for the company Comodo back in March. He has posted a new manifesto, and he claims to have compromised four other certificate authorities. All signs point to the conclusion that this person is an Iranian national who supports the current regime, or is a member of the regime itself.
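A short sketch makes the trust relationship concrete: a TLS client accepts a site’s certificate as long as some authority in its local trust store signed it, and nothing ties a particular domain to a particular authority. (The hostname below is just an example; this is an illustration of ordinary certificate validation, not the attack itself.)

    # Sketch: connect to an HTTPS site and print which authority vouches for it.
    import socket, ssl

    hostname = "www.google.com"             # example target
    context = ssl.create_default_context()  # validates against the local trusted-CA store

    with socket.create_connection((hostname, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()

    subject = dict(pair for rdn in cert["subject"] for pair in rdn)
    issuer = dict(pair for rdn in cert["issuer"] for pair in rdn)
    print("Certificate for:", subject.get("commonName"))
    print("Vouched for by :", issuer.get("commonName"))

Any of the hundreds of authorities in the trust store could have been the signer; that is why a single compromised authority endangers every domain on the web.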

The Comodo breach was deeply troubling, and the DigiNotar compromise is far worse. First, this new break-in affected all of DigiNotar’s core certificate servers, as opposed to Comodo’s more contained breach. Second, it gave the attacker the ability to issue not only baseline “domain validated” certificates but also higher-security “extended validation” certificates and even special certificates used by the Dutch government to secure itself (see the Dutch government’s fact sheet on the incident). However, the damage was by no means limited to the Netherlands, because any certificate authority can issue certificates for any domain. The third difference from the Comodo breach is that we have actual evidence of these certificates being deployed against users in the real world. In this case, it appears that they were used widely against Iranian users on many different Iranian internet service providers. Finally, and perhaps most damning for DigiNotar, the break-in was not detected for a whole month, and was then not disclosed to the public for almost two more months (see the timeline at the end of this incident report by Fox-IT). The public’s security was put at risk, and browser vendors were prevented from implementing fixes, because they were kept in the dark. Indeed, DigiNotar seems to have intended never to disclose the problem, and was only forced to do so after a perceptive Iranian Google user noticed that their connections were being hijacked.

The most frightening thing about this episode is not just that a particular certificate authority allowed a hacker to critically compromise its operations, or that the company did not disclose this to the affected public. More fundamentally, it reminds us that our web security model is prone to failure across the board, as I noted at the time of the Comodo breach.

I recently spoke on the subject at USENIX Security 2011 as part of the panel “SSL/TLS Certificates: Threat or Menace?” (video and audio here if you scroll down to Friday at 11:00 a.m., and slides here.)

Supreme Court Takes Important GPS Tracking Case

This morning, the Supreme Court agreed to hear an appeal next term of United States v. Jones (formerly United States v. Maynard), a case in which the D.C. Circuit Court of Appeals suppressed evidence of a criminal defendant’s travels around town, which the police collected using a tracking device they attached to his car. For more background on the case, consult the original opinion and Orin Kerr’s previous discussions about the case.

No matter what the Court says or holds, this case will probably prove to be a landmark. Watch it closely.

(1) Even if the Court says nothing else, it will face the constitutionality of the police’s use of tracking beepers to follow criminal suspects. In a pair of cases from the mid-1980s, the Court held that the police did not need a warrant to use a tracking beeper to follow a car around on public city streets (Knotts) but did need a warrant to follow a beeper that was moved indoors (Karo), because it “reveal[ed] a critical fact about the interior of the premises.” By direct application of these cases, the warrantless tracking in Jones seems constitutional, because it was restricted to movement on public city streets.

Not so fast, said the D.C. Circuit. In Jones, the police tracked the vehicle 24 hours a day for four weeks. Citing the “mosaic theory often invoked by the Government in cases involving national security information,” the Court held that the whole can sometimes be more than the parts. Tracking a car continuously for a month is constitutionally different in kind, not just in degree, from tracking a car along a single trip. This is a new approach to the Fourth Amendment, one arguably at odds with opinions from other Courts of Appeals.

(2) This case gives the Court the opportunity to speak generally about the Fourth Amendment and location privacy. Depending on what it says, it may provide hints for lower courts struggling with the government’s use of cell phone location information, for example.

(3) For support of its embrace of the mosaic theory, the D.C. Circuit cited a 1989 Supreme Court case, U.S. Department of Justice v. Reporters Committee for Freedom of the Press. In that case, which involved the Freedom of Information Act (FOIA), not the Fourth Amendment, the Court allowed the FBI to refuse to release compiled “rap sheets” about organized crime suspects, even though the rap sheets were compiled mostly from “public” information obtainable from courthouse records. In agreeing that the rap sheets nevertheless fell within a “personal privacy” exemption from FOIA, the Court embraced, for the first time, the idea that the whole may be worth more than the parts. The Court noted the difference “between scattered disclosure of the bits of information contained in a rap-sheet and revelation of the rap-sheet as a whole,” and found a “vast difference between the public records that might be found after a diligent search of courthouse files, county archives, and local police stations throughout the country and a computerized summary located in a single clearinghouse of information.” (FtT readers will see the parallels to the debates on this blog about PACER and RECAP.) In summary, it found that “practical obscurity” could amount to privacy.

Practical obscurity is an idea that hasn’t gotten much traction in the Courts since Reporters Committee. But it is an idea well-loved by many privacy scholars, including myself, because it helps explain concerns about the privacy implications of aggregating and mining supposedly “public” data.

The Court, of course, may choose a narrow route for affirming or reversing the D.C. Circuit. But if it instead speaks broadly or categorically about the viability of practical obscurity as a legal theory, this case might set a standard that we will be debating for years to come.

Deceptive Assurances of Privacy?

Earlier this week, Facebook expanded the roll-out of its facial recognition software to tag people in photos uploaded to the social networking site. Many observers and regulators responded with privacy concerns; EFF offered a video showing users how to opt-out.

Tim O’Reilly, however, takes a different tack:

Face recognition is here to stay. My question is whether to pretend that it doesn’t exist, and leave its use to government agencies, repressive regimes, marketing data mining firms, insurance companies, and other monolithic entities, or whether to come to grips with it as a society by making it commonplace and useful, figuring out the downsides, and regulating those downsides.

…We need to move away from a Maginot-line like approach where we try to put up walls to keep information from leaking out, and instead assume that most things that used to be private are now knowable via various forms of data mining. Once we do that, we start to engage in a question of what uses are permitted, and what uses are not.

O’Reilly’s point, and face-recognition technology itself, is bigger than Facebook. Even if Facebook swore off the technology tomorrow, it would be out there, and likely used against us unless regulated. Yet we can’t decide on the proper scope of regulation without understanding the technology and its social implications.

By taking these latent capabilities (Riya was demonstrating them years ago; the NSA probably had them decades earlier) and making them visible, Facebook gives us more feedback on the privacy consequences of the tech. If part of that feedback is “ick, creepy” or worse, we should feed that into regulation for the technology’s use everywhere, not just in Facebook’s interface. Merely hiding the feature in the interface while leaving it active in the background would be deceptive: it would give us a false assurance of privacy. For all its blundering, Facebook seems to be blundering in the right direction now.

Compare the furor around Dropbox’s disclosure “clarification”. Dropbox had claimed that “All files stored on Dropbox servers are encrypted (AES-256) and are inaccessible without your account password,” but recently updated that to the weaker assertion: “Like most online services, we have a small number of employees who must be able to access user data for the reasons stated in our privacy policy (e.g., when legally required to do so).” Dropbox had signaled “encrypted”: absolutely private, when it meant only relatively private. Users who acted on the assurance of complete secrecy were deceived; now those who know the true level of relative secrecy can update their assumptions and adapt behavior more appropriately.
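The technical difference is easy to illustrate. A provider that encrypts files on its own servers with keys it controls can always read them; the stronger reading of Dropbox’s original claim would require client-side encryption with a key only the user holds. Here is a hypothetical sketch of that alternative (it assumes the third-party “cryptography” package, and the function names are just illustrative):

    # Hypothetical sketch of client-side encryption with AES-256-GCM.
    # The provider only ever stores ciphertext; without the user's key,
    # its employees cannot read the file contents.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def encrypt_for_upload(plaintext: bytes, key: bytes) -> bytes:
        nonce = os.urandom(12)  # unique per file
        return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

    def decrypt_after_download(blob: bytes, key: bytes) -> bytes:
        nonce, ciphertext = blob[:12], blob[12:]
        return AESGCM(key).decrypt(nonce, ciphertext, None)

    key = AESGCM.generate_key(bit_length=256)  # never leaves the user's machine
    blob = encrypt_for_upload(b"contents of my tax return", key)
    assert decrypt_after_download(blob, key) == b"contents of my tax return"

The tradeoff, of course, is that a provider holding no keys cannot reset a forgotten password or deduplicate files, which is part of why most services choose the weaker model.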

Privacy-invasive technology and the limits of privacy protection should be visible. Visibility feeds more and better-controlled experiments to help us understand the scope of privacy, publicity, and the space in between (which Woody Hartzog and Fred Stutzman call “obscurity” in a very helpful draft). Then, we should implement privacy rules uniformly to reinforce our social choices.