May 23, 2017

Classified material in the public domain: what's a university to do?

Yesterday I posted some thoughts about Purdue University’s decision to destroy a video recording of my keynote address at its Dawn or Doom colloquium. The organizers had gone dark, and a promised public link was not forthcoming. After a couple of weeks of hoping to resolve the matter quietly, I did some digging and decided to write up what I learned. I posted on the web site of the Century Foundation, my main professional home:

It turns out that Purdue has wiped all copies of my video and slides from university servers, on grounds that I displayed classified documents briefly on screen. A breach report was filed with the university’s Research Information Assurance Officer, also known as the Site Security Officer, under the terms of Defense Department Operating Manual 5220.22-M. I am told that Purdue briefly considered, among other things, whether to destroy the projector I borrowed, lest contaminants remain.

I was, perhaps, naive, but pretty much all of that came as a real surprise.

Let’s rewind. Information Assurance? Site Security?

These are familiar terms elsewhere, but new to me in a university context. I learned that Purdue, like a number of its peers, has a “facility security clearance” to perform classified U.S. government research. The manual of regulations runs to 141 pages. (Its terms forbid uncleared trustees to ask about the work underway on their campus, but that’s a subject for another day.) The pertinent provision here, spelled out at length in a manual called Classified Information Spillage, requires “sanitization, physical removal, or destruction” of classified information discovered on unauthorized media.

Two things happened in rapid sequence around the time I told Purdue about my post.

First, the university broke a week-long silence and expressed a measure of regret:

UPDATE: Just after posting this item I received an email from Julie Rosa, who heads strategic communications for Purdue. She confirmed that Purdue wiped my video after consulting the Defense Security Service, but the university now believes it went too far.

“In an overreaction while attempting to comply with regulations, the video was ordered to be deleted instead of just blocking the piece of information in question. Just FYI: The conference organizers were not even aware that any of this had happened until well after the video was already gone.”

“I’m told we are attempting to recover the video, but I have not heard yet whether that is going to be possible. When I find out, I will let you know and we will, of course, provide a copy to you.”

Then Edward Snowden tweeted the link, and the Century Foundation’s web site melted down. It now redirects to Medium, where you can find the full story.

I have not heard back from Purdue today about recovery of the video. It is not clear to me how recovery is even possible, if Purdue followed Pentagon guidelines for secure destruction. Moreover, although the university seems to suggest it could have posted most of the video, it does not promise to do so now. Most importantly, the best that I can hope for here is that my remarks and slides will be made available in redacted form — with classified images removed, and some of my central points therefore missing. There would be one version of the talk for the few hundred people who were in the room on Sept. 24, and for however many watched the live stream, and another version left as the only record.

For our purposes here, the most notable questions have to do with academic freedom in the context of national security. How did a university come to “sanitize” a public lecture it had solicited, on the subject of NSA surveillance, from an author known to possess the Snowden documents? How could it profess to be shocked to find that spillage is going on at such a talk? The beginning of an answer came, I now see, in the question and answer period after my Purdue remarks. A post-doctoral research engineer stood up to ask whether the documents I had put on display were unclassified. “No,” I replied. “They’re classified still.” Eugene Spafford, a professor of computer science there, later attributed that concern to “junior security rangers” on the faculty and staff. But the display of Top Secret material, he said, “once noted, … is something that cannot be unnoted.”

Someone reported my answer to Purdue’s Research Information Assurance Officer, who reported in turn to Purdue’s representative at the Defense Security Service. By the terms of its Pentagon agreement, Purdue decided it was now obliged to wipe the video of my talk in its entirety. I regard this as a rather devout reading of the rules, which allowed Purdue to “realistically consider the potential harm that may result from compromise of spilled information.” The slides I showed had been viewed already by millions of people online. Even so, federal funding might be at stake for Purdue, and the notoriously vague terms of the Espionage Act hung over the decision. For most lawyers, “abundance of caution” would be the default choice. Certainly that kind of thinking is commonplace, and sometimes appropriate, in military and intelligence services.

But universities are not secret agencies. They cannot lightly wear the shackles of a National Industrial Security Program, as Purdue agreed to do. The values at their core, in principle and often in practice, are open inquiry and expression.

I do not claim I suffered any great harm when Purdue purged my remarks from its conference proceedings. I do not lack for publishers or public forums. But the next person whose talk is disappeared may have fewer resources.

More importantly, to my mind, Purdue has compromised its own independence and that of its students and faculty. It set an unhappy precedent, even if the people responsible thought they were merely following routine procedures.

One can criticize the university for its choices, and quite a few have since I published my post. What interests me is how nearly the results were foreordained once Purdue made itself eligible for Top Secret work.

Think of it as a classic case of mission creep. Purdue invited the secret-keepers of the Defense Security Service into one cloistered corner of campus (“a small but significant fraction” of research in certain fields, as the university counsel put it). The trustees accepted what may have seemed a limited burden, confined to the precincts of classified research.

Now the security apparatus claims jurisdiction over the campus (“facility”) at large. The university finds itself “sanitizing” a conference that has nothing to do with any government contract.

I am glad to see that Princeton takes the view that “[s]ecurity regulations and classification of information are at variance with the basic objectives of a University.” It does not permit faculty members to do classified work on campus, which avoids Purdue’s “facility” problem. And even so, at Princeton and elsewhere, there may be an undercurrent of self-censorship and informal restraint against the use of documents derived from unauthorized leaks.

Two of my best students nearly dropped a course I taught a few years back, called “Secrecy, Accountability and the National Security State,” when they learned the syllabus would include documents from Wikileaks. Both had security clearances, for summer jobs, and feared losing them. I told them I would put the documents on Blackboard, so they need not visit the Wikileaks site itself, but the readings were mandatory. Both, to their credit, stayed in the course. They did so against the advice of some of their mentors, including faculty members. The advice was purely practical. The U.S. government will not give a clear answer when asked whether this sort of exposure to published secrets will harm job prospects or future security clearances. Why take the risk?

Every student and scholar must decide for him- or herself, but I think universities should push back harder, and perhaps in concert. There is a treasure trove of primary documents in the archives made available by Snowden and Chelsea Manning. The government may wish otherwise, but that information is irretrievably in the public domain. Should a faculty member ignore the Snowden documents when designing a course on network security architecture? Should a student write a dissertation on modern U.S.-Saudi relations without consulting the numerous diplomatic cables on Wikileaks? To me, those would be abdications of the basic duty to seek out authoritative sources of knowledge, wherever they reside.

I would be interested to learn how others have grappled with these questions. I expect to write about them in my forthcoming book on surveillance, privacy and secrecy.

Building a better CA infrastructure

As several Tor project authors, Ben Adida and many others have written, our certificate authority infrastructure has the flaw that any one CA, anywhere on the planet, can issue a certificate for any web site, anywhere else on the planet. This was tolerable when the only game in town was VeriSign, but now that’s just untenable. So what solutions are available?

First, some non-solutions: Extended validation certs do nothing useful. Will users be properly trained to look for the extra cues in browser behavior, and to raise the alarm when those cues are absent because a site presented only a normal cert? Fat chance. Similarly, certificate revocation lists buy you nothing if you can’t actually download them (a notable issue if you’re stuck behind the firewall of somebody who wants to attack you).

A straightforward idea is to track the certs you see over time and generate a prominent warning if you see something anomalous. This is available as a fully-functioning Firefox extension, Certificate Patrol. This should be built into every browser.
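For concreteness, here is a minimal sketch of that idea in Python: remember the SHA-256 fingerprint of each site’s leaf certificate in a local file and warn when it changes. The hostname, store path, and warning behavior are illustrative choices of mine, not a description of how Certificate Patrol itself is implemented.

    # A minimal sketch of cert tracking: remember the fingerprint you saw last
    # time and warn loudly if it changes. Store path and hostname are examples.
    import hashlib
    import json
    import socket
    import ssl
    from pathlib import Path

    STORE = Path("seen_certs.json")

    def observed_fingerprint(hostname, port=443):
        """Fetch the site's leaf certificate and return its SHA-256 fingerprint."""
        ctx = ssl.create_default_context()
        with socket.create_connection((hostname, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
                der = tls.getpeercert(binary_form=True)
        return hashlib.sha256(der).hexdigest()

    def check(hostname):
        """Compare against the previously seen fingerprint; warn on any change."""
        seen = json.loads(STORE.read_text()) if STORE.exists() else {}
        fingerprint = observed_fingerprint(hostname)
        if hostname in seen and seen[hostname] != fingerprint:
            print(f"WARNING: certificate for {hostname} changed since last visit")
            print(f"  before: {seen[hostname]}")
            print(f"  now:    {fingerprint}")
        else:
            seen[hostname] = fingerprint
            STORE.write_text(json.dumps(seen, indent=2))

    if __name__ == "__main__":
        check("example.com")

Note that a perfectly legitimate key rotation also triggers the warning, which is exactly the usability question a built-in browser version would have to answer.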

In addition to your first-hand personal observations, why not leverage other resources on the network to make their own observations? For example, while Google is crawling the web, it can easily save SSL/TLS certificates when it sees them, and browsers could use a real-time API much like Google SafeBrowsing. A research group at CMU has already built something like this, which they call a network notary. In essence, you can have multiple network services, running from different vantage points in the network, all telling you whether the cryptographic credentials you got match what others are seeing. Of course, if you’re stuck behind an attacker’s firewall, the attacker will similarly filter out all these sites.
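The comparison logic itself is simple; the hard part is having trustworthy vantage points. The sketch below assumes a set of hypothetical notary endpoints that return a JSON body with a "sha256" field for the queried host; the real Perspectives protocol and any Google API would look different, but the majority-vote idea is the same.

    # Compare the fingerprint observed locally with what several notaries report.
    # The notary URLs and their JSON response format are hypothetical.
    import json
    import urllib.request
    from collections import Counter

    NOTARIES = [
        "https://notary1.example.net/query",
        "https://notary2.example.net/query",
        "https://notary3.example.net/query",
    ]

    def notary_fingerprint(notary_url, hostname):
        """Ask one notary which certificate fingerprint it sees for a host."""
        url = f"{notary_url}?host={hostname}"
        with urllib.request.urlopen(url, timeout=10) as resp:
            return json.loads(resp.read())["sha256"]

    def consensus_agrees(hostname, local_fingerprint):
        """True if a majority of reachable notaries saw the same certificate we did."""
        votes = Counter()
        for url in NOTARIES:
            try:
                votes[notary_fingerprint(url, hostname)] += 1
            except OSError:
                continue  # unreachable, or filtered by the attacker's firewall
        total = sum(votes.values())
        return total > 0 and votes.get(local_fingerprint, 0) > total / 2

The last point in the paragraph above is why the sketch treats “no reachable notaries” as disagreement rather than silence: an adversary who controls your whole network path can simply block every notary.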

UPDATE: Google is now doing almost exactly what I suggested.


There are a variety of other proposals out there, notably trying to leverage DNSSEC to enhance or supplant the need for SSL/TLS certificates. Since DNSSEC provides more control over your DNS records, it also provides more control over who can issue SSL/TLS certificates for your web site. If and when DNSSEC becomes universally supported, this would be a bit harder for attacker firewalls to filter without breaking everything, so I certainly hope this takes off.
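To make the DNSSEC idea concrete: the domain owner publishes a hash of its legitimate certificate in a signed DNS record, and the client checks the certificate presented in the TLS handshake against it. The sketch below is in the spirit of what later became DANE TLSA records; the record format, the SHA-256 choice, and the stubbed-out lookup are assumptions for illustration, and a real client would fetch and validate the record through a DNSSEC-aware resolver.

    # Illustrative only: check a presented certificate against a hash published
    # in DNS. Fetching and DNSSEC-validating the record is stubbed out here.
    import hashlib

    def fetch_published_hash(hostname):
        """Placeholder for looking up the certificate hash a domain publishes in
        a DNSSEC-signed record; a real client would use a validating resolver."""
        raise NotImplementedError("requires a DNSSEC-aware resolver")

    def dns_pinned_match(hostname, presented_cert_der):
        """Compare the certificate presented in the TLS handshake against the
        hash published in DNS for that host."""
        published = fetch_published_hash(hostname)
        return hashlib.sha256(presented_cert_der).hexdigest() == published.lower()

Because the pin travels inside signed DNS responses, an attacker’s firewall can’t quietly strip it without breaking DNSSEC resolution as a whole, which is the property the paragraph above is counting on.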

Let’s say that future browsers properly use all of these tricks and can unquestionably determine for you, with perfect accuracy, when you’re getting a bogus connection. Your browser will display an impressive error dialog and refuse to load the web site. Is that sufficient? This will certainly break all the hotel WiFi systems that want to redirect you to an internal site where they can charge you to access the network. (Arguably, this sort of functionality belongs elsewhere in the software stack, such as through IEEE 802.21, notably used to connect AT&T iPhones to the WiFi service at Starbucks.) Beyond that, though, should the browser just steadfastly refuse to allow the connection? I’ve been to at least one organization whose internal WiFi network insists that it proxy all of your https sessions and, in fact, issues fabricated certificates that you’re expected to configure your browser to trust. We need to support that sort of thing when it’s required, but again, it would perhaps best be supported by some kind of side-channel protocol extension, not by doing a deliberate MITM attack on the crypto protocol.

Corner cases aside, what if you’re truly in a hostile environment and your browser has genuinely detected a network adversary? Should the browser refuse the connection, or should there be some other option? And if so, what would that be? Should the browser perhaps allow the connection (with much gnashing of teeth and throbbing red borders on the window)? Should previous cookies and saved state be hidden away? Should web sites like Gmail and Facebook allow users to have two separate passwords, one for “genuine” login and a separate one for “Yes, I’m in a hostile location, but I need to send and receive email in a limited but still useful fashion?”

[Editor’s note: you may also be interested in the many prior posts on this topic by Freedom to Tinker contributors: 1, 2, 3, 4, 5, 6, 7, 8 — as well as the “Emerging Threats to Online Trust: The Role of Public Policy and Browser Certificates” event that CITP hosted in DC last year with policymakers, industry, and activists.]

Breathalyzer Source Code Secrecy Endangers Minnesota Drunk Driving Convictions

The Minnesota Supreme Court ruled recently that defendants accused of drunk driving in the state are entitled to have their experts inspect the source code for the software in the Intoxilyzer breath-testing machines used by police to gauge the defendants’ blood alcohol levels. The defendants argued, successfully, that they were entitled to examine and challenge the evidence against them, including the design and functioning of devices used to generate that evidence.

The ruling puts many of the state’s drunk driving prosecutions on thin ice, because CMI, the Intoxilyzer’s maker, is withholding the source code and the state apparently has no way to force CMI to provide the code.

Eric Rescorla argues, reasonably, that breath testers have many potential failure modes unrelated to software, and that source code analysis can be labor-intensive and might not turn up any clear problems. Both arguments are valid, as far as they go.

I’m not a lawyer, so I won’t try to guess whether the court’s ruling was correct as a matter of law. But the ruling does seem right as a matter of policy. If we are troubled by criminal convictions relying on secret evidence, then we should also be troubled by convictions relying on evidence generated by a secret process. To the extent that the Intoxilyzer functions as a secret process, the state should not be relying on it in criminal prosecutions.

(Though I haven’t thought carefully about the question, I might potentially draw a different policy conclusion in a civil case, where the standard of proof is preponderance of evidence, rather than guilt beyond a reasonable doubt.)

The problem is illustrated nicely by a contradiction in the arguments that CMI and the state are making. On the one hand, they argue that the machine’s source code contains valuable trade secrets — I’ll call them the “secret sauce” — and that CMI’s business would be substantially harmed if its competitors learned about the secret sauce. On the other hand, they argue that there is no need to examine the source code because it operates straightforwardly, just reading values from some sensors and doing simple calculations to derive a blood alcohol estimate.
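For a sense of scale, here is roughly what the “straightforward” version the state describes would look like if it really were just a unit conversion from a breath reading to a blood alcohol estimate, using the conventional 2100:1 blood-breath partition ratio. This is purely illustrative and is not CMI’s code; the whole dispute is that nobody outside CMI knows what calibrations, corrections, and edge cases the real software applies.

    # Purely illustrative: a textbook breath-to-blood conversion using the
    # conventional 2100:1 partition ratio. Not CMI's code.
    PARTITION_RATIO = 2100  # conventional assumption; real subjects vary

    def estimated_bac(breath_alcohol_g_per_liter):
        """Convert breath alcohol (grams per liter of breath) to an estimated
        blood alcohol concentration (grams per 100 mL of blood)."""
        grams_per_liter_of_blood = breath_alcohol_g_per_liter * PARTITION_RATIO
        return grams_per_liter_of_blood / 10  # g/L of blood -> g per 100 mL

    # Example: a reading of 0.00038 g/L of breath comes out to about 0.08.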

It’s hard to see how both arguments can be correct. If the software contains secret sauce, then by definition it has aspects that are neither obvious nor straightforward, and those aspects are important for the software’s operation. In other words, the secret sauce — whatever it is — must be relevant to the defendants’ claims.

As in electronic voting, where we have seen similar secrecy arguments, one can’t help suspecting that the real “secret” is that the software quality is not what it should be. A previous study of source code from New Jersey breath testers did appear to find some embarrassing errors.

Let’s hope that breath tester companies can do better than e-voting companies. A rigorous, independent evaluation of the breath tester source code would either determine that the code is sound, or it would uncover problems that could then be fixed, to restore confidence in the machines. Either way, the police in Minnesota would end up with a reliable tool for giving drunk drivers the punishment they deserve.