October 9, 2015


Classified material in the public domain: what’s a university to do?

Yesterday I posted some thoughts about Purdue University’s decision to destroy a video recording of my keynote address at its Dawn or Doom colloquium. The organizers had gone dark, and a promised public link was not forthcoming. After a couple of weeks of hoping to resolve the matter quietly, I did some digging and decided to write up what I learned. I posted on the web site of the Century Foundation, my main professional home:

It turns out that Purdue has wiped all copies of my video and slides from university servers, on grounds that I displayed classified documents briefly on screen. A breach report was filed with the university’s Research Information Assurance Officer, also known as the Site Security Officer, under the terms of Defense Department Operating Manual 5220.22-M. I am told that Purdue briefly considered, among other things, whether to destroy the projector I borrowed, lest contaminants remain.

I was, perhaps, naive, but pretty much all of that came as a real surprise.

Let’s rewind. Information Assurance? Site Security?

These are familiar terms elsewhere, but new to me in a university context. I learned that Purdue, like a number of its peers, has a “facility security clearance” to perform classified U.S. government research. The manual of regulations runs to 141 pages. (Its terms forbid uncleared trustees to ask about the work underway on their campus, but that’s a subject for another day.) The pertinent provision here, spelled out at length in a chapter called “Classified Information Spillage,” requires “sanitization, physical removal, or destruction” of classified information discovered on unauthorized media.

Two things happened in rapid sequence around the time I told Purdue about my post.

First, the university broke a week-long silence and expressed a measure of regret:

UPDATE: Just after posting this item I received an email from Julie Rosa, who heads strategic communications for Purdue. She confirmed that Purdue wiped my video after consulting the Defense Security Service, but the university now believes it went too far.

“In an overreaction while attempting to comply with regulations, the video was ordered to be deleted instead of just blocking the piece of information in question. Just FYI: The conference organizers were not even aware that any of this had happened until well after the video was already gone.”

“I’m told we are attempting to recover the video, but I have not heard yet whether that is going to be possible. When I find out, I will let you know and we will, of course, provide a copy to you.”

Then Edward Snowden tweeted the link, and the Century Foundation’s web site melted down. It now redirects to Medium, where you can find the full story.

I have not heard back from Purdue today about recovery of the video. It is not clear to me how recovery is even possible, if Purdue followed Pentagon guidelines for secure destruction. Moreover, although the university seems to suggest it could have posted most of the video, it does not promise to do so now. Most importantly, the best that I can hope for here is that my remarks and slides will be made available in redacted form — with classified images removed, and some of my central points therefore missing. There would be one version of the talk for the few hundred people who were in the room on Sept. 24, and for however many watched the live stream, and another version left as the only record.

For our purposes here, the most notable questions have to do with academic freedom in the context of national security. How did a university come to “sanitize” a public lecture it had solicited, on the subject of NSA surveillance, from an author known to possess the Snowden documents? How could it profess to be shocked to find that spillage is going on at such a talk? The beginning of an answer came, I now see, in the question and answer period after my Purdue remarks. A post-doctoral research engineer stood up to ask whether the documents I had put on display were unclassified. “No,” I replied. “They’re classified still.” Eugene Spafford, a professor of computer science there, later attributed that concern to “junior security rangers” on the faculty and staff. But the display of Top Secret material, he said, “once noted, … is something that cannot be unnoted.”

Someone reported my answer to Purdue’s Research Information Assurance Officer, who reported in turn to Purdue’s representative at the Defense Security Service. By the terms of its Pentagon agreement, Purdue decided it was now obliged to wipe the video of my talk in its entirety. I regard this as a rather devout reading of the rules, which allowed Purdue to “realistically consider the potential harm that may result from compromise of spilled information.” The slides I showed had been viewed already by millions of people online. Even so, federal funding might be at stake for Purdue, and the notoriously vague terms of the Espionage Act hung over the decision. For most lawyers, “abundance of caution” would be the default choice. Certainly that kind of thinking is commonplace, and sometimes appropriate, in military and intelligence services.

But universities are not secret agencies. They cannot lightly wear the shackles of a National Industrial Security Program, as Purdue agreed to do. The values at their core, in principle and often in practice, are open inquiry and expression.

I do not claim I suffered any great harm when Purdue purged my remarks from its conference proceedings. I do not lack for publishers or public forums. But the next person whose talk is disappeared may have fewer resources.

More importantly, to my mind, Purdue has compromised its own independence and that of its students and faculty. It set an unhappy precedent, even if the people responsible thought they were merely following routine procedures.

One can criticize the university for its choices, and quite a few have since I published my post. What interests me is how nearly the results were foreordained once Purdue made itself eligible for Top Secret work.

Think of it as a classic case of mission creep. Purdue invited the secret-keepers of the Defense Security Service into one cloistered corner of campus (“a small but significant fraction” of research in certain fields, as the university counsel put it). The trustees accepted what may have seemed a limited burden, confined to the precincts of classified research.

Now the security apparatus claims jurisdiction over the campus (“facility”) at large. The university finds itself “sanitizing” a conference that has nothing to do with any government contract.

I am glad to see that Princeton takes the view that “[s]ecurity regulations and classification of information are at variance with the basic objectives of a University.” It does not permit faculty members to do classified work on campus, which avoids Purdue’s “facility” problem. And even so, at Princeton and elsewhere, there may be an undercurrent of self-censorship and informal restraint against the use of documents derived from unauthorized leaks.

Two of my best students nearly dropped a course I taught a few years back, called “Secrecy, Accountability and the National Security State,” when they learned the syllabus would include documents from Wikileaks. Both had security clearances, for summer jobs, and feared losing them. I told them I would put the documents on Blackboard, so they need not visit the Wikileaks site itself, but the readings were mandatory. Both, to their credit, stayed in the course. They did so against the advice of some of their mentors, including faculty members. The advice was purely practical. The U.S. government will not give a clear answer when asked whether this sort of exposure to published secrets will harm job prospects or future security clearances. Why take the risk?

Every student and scholar must decide for him- or herself, but I think universities should push back harder, and perhaps in concert. There is a treasure trove of primary documents in the archives made available by Snowden and Chelsea Manning. The government may wish otherwise, but that information is irretrievably in the public domain. Should a faculty member ignore the Snowden documents when designing a course on network security architecture? Should a student write a dissertation on modern U.S.-Saudi relations without consulting the numerous diplomatic cables on Wikileaks? To me, those would be abdications of the basic duty to seek out authoritative sources of knowledge, wherever they reside.

I would be interested to learn how others have grappled with these questions. I expect to write about them in my forthcoming book on surveillance, privacy and secrecy.


British Court Blocks Publication of Car Security Paper

Recently a British court ordered researchers to withdraw a paper, “Dismantling Megamos Security: Wirelessly Lockpicking a Vehicle Immobiliser,” from next week’s USENIX Security Symposium. This is a blow not only to academic freedom but also to progress in vehicle security. And for those of us who have worked in security for a long time, it raises bad memories of past attempts to silence researchers, which have touched many of us over the years.
[Read more…]


Are There Countries Whose Situations Worsened with the Arrival of the Internet?

Are there countries whose situations worsened with the arrival of the internet?  I’ve been arguing that there are lots of examples of countries where technology diffusion has helped democratic institutions deepen.  And there are several examples of countries where technology diffusion has been part of the story of rapid democratic transition.  But there are no good examples of countries where technology diffusion has been high, and the dictators got nastier as a result.

Over Twitter, Eric Schmidt of Google recently offered the same opinion.  Evgeny Morozov, professional naysayer, asked for a graph.

So here is a graph and a list.  I used Polity IV’s democratization scores from 2002 and 2011.  I used the World Bank/ITU data on internet users.  I merged the data and made a basic graph.  On the vertical axis is the change in the percent of a country’s population online over the last decade.  The horizontal axis reflects any change in the democratization score–any slide towards authoritarianism is represented by a negative number.  For Morozov to be right, the top left corner of this graph needs to have some cases in it.

Change in Percentage Internet Users and Democracy Scores, By Country, 2002-2011


Look at the raw data.
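
For readers who want to reproduce the chart, a minimal sketch of the merge-and-plot step follows. The file names and column labels are hypothetical stand-ins for whatever the actual Polity IV and World Bank/ITU downloads contain, so treat it as an outline rather than a script keyed to those releases.

    # Sketch: merge Polity IV democratization scores with World Bank/ITU
    # internet-user data and plot change against change, one point per
    # country. File and column names are hypothetical placeholders.
    import pandas as pd
    import matplotlib.pyplot as plt

    polity = pd.read_csv("polity_iv.csv")          # country, year, polity2
    users = pd.read_csv("itu_internet_users.csv")  # country, year, pct_online

    def decade_change(df, col):
        """Per-country change in `col` between 2002 and 2011."""
        wide = df.pivot(index="country", columns="year", values=col)
        return (wide[2011] - wide[2002]).rename("d_" + col)

    merged = pd.concat([decade_change(polity, "polity2"),
                        decade_change(users, "pct_online")],
                       axis=1).dropna()

    plt.scatter(merged["d_polity2"], merged["d_pct_online"])
    plt.xlabel("Change in Polity IV democracy score, 2002-2011")
    plt.ylabel("Change in percent of population online, 2002-2011")
    plt.axvline(0, linewidth=0.5)  # left of this line: slid toward authoritarianism
    plt.show()

Any country that got nastier as connectivity spread would appear left of the vertical line and high on the chart.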

[Read more…]


Singapore Punishes Net Freedom Advocate

Over the last few days my activist self has come out.  I was a tenure reviewer for Dr. Cherian George at Nanyang Technological University, one of Singapore’s most high-profile universities.  His tenure case was overturned at the top, where university administration meets the country’s political elites.

It is difficult to dismiss George on the basis of academic merit. With degrees from Cambridge, Columbia, and Stanford, his pedigree is admirable. He has three books under his belt: the eviscerating “Air Conditioned Nation”, the evocative “Freedom From the Press”, and a scholarly tome comparing independent online journalism in Singapore and Malaysia that was actually published at home by Singapore University Press. Through a string of academic articles, George has been equally critical of the government and the press, so it is not surprising that the country’s journalists have not rushed to his defense. He has revealed to colleagues that the decision to deny his tenure was solely because of “non-academic factors”—the university administrators told him as much. He’s had positive teaching evaluations. This wasn’t a merit-based decision.
[Read more…]


Predictions for 2013

After a year’s hiatus, our annual predictions post is back! As usual, these predictions reflect the results of brainstorming among many affiliates and friends of the blog, so you should not attribute any prediction to any individual (including me–I’m just the scribe). Without further ado, the tech policy predictions for 2013:

[Read more…]


Zuckerberg Goes to Russia as the Global Network Initiative Turns 4

The Global Network Initiative (GNI) was founded in October 2008 to help technology firms navigate the political implications of their success. Engineers at the world’s leading technology firms have been incredibly innovative, but they do not always understand the global dynamics of their innovation. Moreover, they do not always acknowledge the ways in which politicians get involved with the design process. The creation of the GNI signaled that some in the technology sector were ready to start having more open conversations. Facebook recently joined the GNI–now four years old–as an “observer”. But the company’s founder Mark Zuckerberg also traveled to Moscow to meet with that country’s tech-savvy second-in-command, Dmitri Medvedev. With the anniversary of the GNI in mind, let’s consider the different ways of interpreting the Zuckerberg-Medvedev summit.

Zuckerberg Meets Medvedev / Nixon Meets Mao

[Read more…]


Stopping SOPA's Anticircumvention

The House’s Stop Online Piracy Act is in Judiciary Committee markup today. As numerous protests, open letters, and advocacy campaigns across the Web have made clear, this is a seriously flawed bill. The OPEN Act proposed by Sen. Ron Wyden and Rep. Darrell Issa highlights, by contrast, some of the procedural problems.

Here, I analyze just one of the problematic provisions of SOPA: a new “anticircumvention” provision (different from the still-problematic anti-circumvention of section 1201). SOPA’s anticircumvention authorizes injunctions against the provision of tools to bypass the court-ordered blocking of domains. Although it is apparently aimed at MAFIAAfire, the Firefox add-on that offered redirection for seized domains in the wake of ICE seizures, [1] the provision as drafted sweeps much more broadly. Ordinary security and connectivity tools could fall within its scope. If enacted, it would weaken Internet security and reduce the robustness and resilience of Internet connections.

The anticircumvention section, which is not present in the Senate’s companion PROTECT-IP measure, provides for injunctions, on the action of the Attorney General:

(ii) against any entity that knowingly and willfully provides or offers to provide a product or service designed or marketed by such entity or by another in concert with such entity for the circumvention or bypassing of measures described in paragraph (2) [blocking DNS responses, search query results, payments, or ads] and taken in response to a court order issued under this subsection, to enjoin such entity from interfering with the order by continuing to provide or offer to provide such product or service. § 102(c)(3)(A)(ii)

As an initial problem, the section is unclear. Could it cover someone who designs a tool for “the circumvention or bypassing of” DNS blockages in general — even if such a person did not specifically intend or market the tool to be used to frustrate court orders issued under SOPA? Resilience in the face of technological failure is a fundamental software design goal. As DNS experts Steve Crocker, et al. say in their Dec. 9 letter to the House and Senate Judiciary Chairs, “a secure application expecting a secure DNS answer will not give up after a timeout. It might retry the lookup, it might try a backup DNS server, it might even restart the lookup through a proxy service.” Would the providers of software that looked to a proxy for answers – products “designed” to be resilient to transient DNS lookup failures – be subject to injunction? Where the answer is unclear, developers might choose not to offer such lawful features rather than risk legal attack. Indeed, the statute as drafted might chill the development of anti-censorship tools funded by our State Department.
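
To make the ambiguity concrete, here is a short Python sketch (using the third-party dnspython package) of exactly the kind of resilient lookup the Crocker letter describes: try the system resolver, then fall back to alternate resolvers. The resolver addresses are illustrative. Nothing in it is specific to frustrating a court order, yet a lawyer could argue it is “designed” for “the circumvention or bypassing of” DNS blocking.

    # Sketch of a resilient DNS lookup, mirroring the behavior the Crocker
    # letter describes: try the default resolver, then backup resolvers.
    # Requires the third-party dnspython package; addresses are illustrative.
    import dns.exception
    import dns.resolver

    BACKUP_RESOLVERS = ["8.8.8.8", "1.1.1.1"]  # well-known public resolvers

    def resilient_lookup(name: str) -> list[str]:
        # First attempt: whatever resolver the system is configured to use.
        try:
            answer = dns.resolver.resolve(name, "A", lifetime=2.0)
            return [rr.to_text() for rr in answer]
        except dns.exception.DNSException:
            pass
        # Fall back to alternates: ordinary robustness engineering, but
        # arguably "circumvention or bypassing" under SOPA's language.
        for server in BACKUP_RESOLVERS:
            fallback = dns.resolver.Resolver(configure=False)
            fallback.nameservers = [server]
            try:
                answer = fallback.resolve(name, "A", lifetime=2.0)
                return [rr.to_text() for rr in answer]
            except dns.exception.DNSException:
                continue
        raise RuntimeError(f"all lookups failed for {name}")

    print(resilient_lookup("example.com"))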

Some of those anti-censorship tools are explicitly designed to circumvent censorship in repressive regimes whose authorities engage in DNS manipulation to prevent citizens from accessing sites with dissident messages, alternate sources of news, or human rights reporting. (See Rebecca MacKinnon’s NYT Op-Ed, Stop the Great Firewall of America. Censorship-circumvention tools include Psiphon, which describes itself as an “Open source web proxy designed to help Internet users affected by Internet censorship securely bypass content-filtering systems,” and The Tor Project.) These tools cannot distinguish between Chinese censorship of Tiananmen Square mentions and U.S. copyright protection where their impacts — blocking access to Web content — and their methods — local blocking of domain resolution — are the same.

Finally, the paragraph may encompass mere knowledge-transfer. Does telling someone about alternate DNS resolvers, or noting that a blocked domain can still be found at its IP address — a matter of historical record and necessary to third-party evaluation of the claims against that site — constitute willfully “providing a service designed … [for] bypassing” DNS-blocking? Archives of historic DNS information are often important to legal or technical network investigations, but they might become scarce if providers had to ascertain the reasons their information was being sought.

For these reasons among many others (such as those identified by my CITP colleague Nick), SOPA should be stopped.


Telex and Ethan Zuckerman's "Cute Cat Theory" of Internet Censorship

A few years ago, Ethan Zuckerman gave a talk at CITP on his “cute cat theory” of internet censorship (see also NY Times article), which goes something like this:

Most internet users use the internet and social media tools for harmless activities, like looking at pictures of kittens online. However, an open social media site is open to political content as well as pictures of kittens. Repressive governments might attempt to block this political content by blocking access to, say, all of Blogspot or all of Twitter, but in doing so they also block people from looking at non-political content, like pictures of cute kittens. This both brings more attention to the political causes the government is trying to suppress through the Streisand effect, and can politicize users who previously just wanted unfettered access to cute kittens.

This is great for Web 2.0, and suggests that activists should host their blogs on sites where a lot of kittens would be taken down as collateral damage should they be blocked.

However, what happens when a government is perfectly willing to block all social media? What if a user wants to do more than produce political content on the web?

Telex (blog post) can be seen as a technological method of implementing the cute cat theory for the entire internet: the system allows a user to circumvent internet censorship by executing a secret knock on potentially any web site outside of the censor’s network. When any web site, no matter how innocuous or critical to business or political infrastructure, can be used for a political goal in this fashion, the censorship/anti-censorship cat-and-mouse game is elevated beyond single proxies and lists of blockable Tor nodes, and beyond kittens, to the entire internet.


Anticensorship in the Internet's Infrastructure

I’m pleased to announce a research result that Eric Wustrow, Scott Wolchok, Ian Goldberg, and I have been working on for the past 18 months: Telex, a new approach to circumventing state-level Internet censorship. Telex is markedly different from past anticensorship efforts, and we believe it has the potential to shift the balance of power in the censorship arms race.

What makes Telex different from previous approaches:

  • Telex operates in the network infrastructure — at any ISP between the censor’s network and non-blocked portions of the Internet — rather than at network end points. This approach, which we call “end-to-middle” proxying, can make the system robust against countermeasures (such as blocking) by the censor.
  • Telex focuses on avoiding detection by the censor. That is, it allows a user to circumvent a censor without alerting the censor to the act of circumvention. It complements anonymizing services like Tor (which focus on hiding with whom the user is communicating, not on hiding the fact that the user is attempting to have an anonymous conversation) rather than replacing them.
  • Telex employs a form of deep-packet inspection — a technology sometimes used to censor communication — and repurposes it to circumvent censorship.
  • Other systems require distributing secrets, such as encryption keys or IP addresses, to individual users. If the censor discovers these secrets, it can block the system. With Telex, there are no secrets that need to be communicated to users in advance, only the publicly available client software.
  • Telex can provide a state-level response to state-level censorship. We envision that friendly countries would create incentives for ISPs to deploy Telex.

For more information, keep reading, or visit the Telex website.

The Problem

Government Internet censors generally use firewalls in their network to block traffic bound for certain destinations, or containing particular content. For Telex, we assume that the censor government desires generally to allow Internet access (for economic or political reasons) while still preventing access to specifically blacklisted content and sites. That means Telex doesn’t help in cases where a government pulls the plug on the Internet entirely. We further assume that the censor allows access to at least some secure HTTPS websites. This is a safe assumption, since blocking all HTTPS traffic would cut off practically every site that uses password logins.


Many anticensorship systems work by making an encrypted connection (called a “tunnel”) from the user’s computer to a trusted proxy server located outside the censor’s network. This server relays requests to censored websites and returns the responses to the user over the encrypted tunnel. This approach leads to a cat-and-mouse game, where the censor attempts to discover and block the proxy servers. Users need to learn the address and login information for a proxy server somehow, and it’s very difficult to broadcast this information to a large number of users without the censor also learning it.

How Telex Works

Telex turns this approach on its head to create what is essentially a proxy server without an IP address. In fact, users don’t need to know any secrets to connect. The user installs a Telex client app (perhaps by downloading it from an intermittently available website or by making a copy from a friend). When the user wants to visit a blacklisted site, the client establishes an encrypted HTTPS connection to a non-blacklisted web server outside the censor’s network, which could be a normal site that the user regularly visits. Since the connection looks normal, the censor allows it, but this connection is only a decoy.

The client secretly marks the connection as a Telex request by inserting a cryptographic tag into the headers. We construct this tag using a mechanism called public-key steganography. This means anyone can tag a connection using only publicly available information, but only the Telex service (using a private key) can recognize that a connection has been tagged.
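
Our technical paper gives the real construction; purely to illustrate the idea, here is a toy Python sketch using the pyca/cryptography library, with X25519 and an ad hoc byte layout standing in for the curve and encoding Telex actually uses. The client sends an ephemeral public key plus a hash that only the holder of the station’s private key can recompute. Note what the sketch omits: a deployable tag must also be indistinguishable from the random bytes it replaces, which this toy does not attempt.

    # Toy illustration of public-key steganographic tagging; NOT Telex's
    # actual construction. X25519 and this byte layout are stand-ins, and a
    # real tag must also look indistinguishable from random data.
    # Requires the pyca/cryptography package.
    import hashlib
    import os
    from cryptography.hazmat.primitives.asymmetric.x25519 import (
        X25519PrivateKey, X25519PublicKey)
    from cryptography.hazmat.primitives.serialization import (
        Encoding, PublicFormat)

    PROOF_LEN = 16  # truncated hash length, arbitrary for this sketch

    def make_tag(station_public: X25519PublicKey) -> bytes:
        """Client side: anyone can build a tag from public information."""
        eph = X25519PrivateKey.generate()
        shared = eph.exchange(station_public)  # ECDH with the station's key
        proof = hashlib.sha256(b"telex-tag" + shared).digest()[:PROOF_LEN]
        eph_bytes = eph.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
        return eph_bytes + proof

    def check_tag(station_private: X25519PrivateKey, blob: bytes) -> bool:
        """Station side: only the private key can recognize a tagged flow."""
        eph_pub = X25519PublicKey.from_public_bytes(blob[:32])
        shared = station_private.exchange(eph_pub)
        proof = hashlib.sha256(b"telex-tag" + shared).digest()[:PROOF_LEN]
        return proof == blob[32:]

    station = X25519PrivateKey.generate()
    assert check_tag(station, make_tag(station.public_key()))  # tagged flow
    assert not check_tag(station, os.urandom(32 + PROOF_LEN))  # ordinary flow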

As the connection travels over the Internet en route to the non-blacklisted site, it passes through routers at various ISPs in the core of the network. We envision that some of these ISPs would deploy equipment we call Telex stations. These devices hold a private key that lets them recognize tagged connections from Telex clients and decrypt these HTTPS connections. The stations then divert the connections to anticensorship services, such as proxy servers or Tor entry points, which clients can use to access blocked sites. This creates an encrypted tunnel between the Telex user and Telex station at the ISP, redirecting connections to any site on the Internet.


Telex doesn’t require active participation from the censored websites, or from the non-censored sites that serve as the apparent connection destinations. However, it does rely on ISPs to deploy Telex stations on network paths between the censor’s network and many popular Internet destinations. Widespread ISP deployment might require incentives from governments.

Development so Far

At this point, Telex is a concept rather than a production system. It’s far from ready for real users, but we have developed proof-of-concept software for researchers to experiment with. So far, there’s only one Telex station, on a mock ISP that we’re operating in our lab. Nevertheless, we have been using Telex for our daily web browsing for the past four months, and we’re pleased with the performance and stability. We’ve even tested it using a client in Beijing and streamed HD YouTube videos, in spite of YouTube being censored there.

Telex illustrates how it is possible to shift the balance of power in the censorship arms race, by thinking big about the problem. We hope our work will inspire discussion and further research about the future of anticensorship technology.

You can find more information and prototype software at the Telex website, or read our technical paper, which will appear at Usenix Security 2011 in August.


In DHS Takedown Frenzy, Mozilla Refuses to Delete MafiaaFire Add-On

Not satisfied with seizing domain names, the Department of Homeland Security asked Mozilla to take down the MafiaaFire add-on for Firefox. Mozilla, through its legal counsel Harvey Anderson, refused. Mozilla deserves thanks and credit for a principled stand for its users’ rights.

MafiaaFire is a simple plugin that, as its author describes, provides a redirection service for a list of domains: “We plan to maintain a list of URLs, and their duplicate sites (for example Demoniod.com and Demoniod.de) and painlessly redirect you to the correct site.” The service provides redundancy, so that domain resolution — especially at a registry in the United States — isn’t a single point of failure between a website and its would-be visitors. After several rounds of ICE seizures of domain names on allegations of copyright infringement — many of which have been questioned as to both procedural validity and effectiveness — redundancy is a sensible precaution for site owners who are well within the law, as well as for those pushing its limits.
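
The mechanism itself is simple enough to sketch in a few lines: a maintained mapping from unreachable domains to known duplicates. The Python below is a hedged illustration with hypothetical domains; the add-on’s actual list format and fail-over logic may differ.

    # Sketch of the MafiaaFire idea: a maintained map from a seized or
    # unreachable domain to known duplicate sites. Domains are hypothetical.
    MIRRORS = {
        "example-seized.com": ["example-mirror.net", "example-mirror.org"],
    }

    def redirect_target(host: str) -> str:
        """Return the first known mirror for a host, or the host unchanged."""
        mirrors = MIRRORS.get(host, [])
        # A real add-on might probe each mirror for reachability first.
        return mirrors[0] if mirrors else host

    assert redirect_target("example-seized.com") == "example-mirror.net"
    assert redirect_target("example.org") == "example.org"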

DHS seemed poised to repeat those procedural errors here. As Mozilla’s Anderson blogged: “Our approach is to comply with valid court orders, warrants, and legal mandates, but in this case there was no such court order.” DHS simply “requested” the takedown with no such procedural back-up. Instead of pulling the add-on, Anderson responded with a set of questions, including:

  1. Have any courts determined that MAFIAAfire.com is unlawful or illegal in any way? If so, on what basis? (Please provide any relevant rulings)

  2. Have any courts determined that the seized domains related to MAFIAAfire.com are unlawful, illegal or liable for infringement in any way? (please provide relevant rulings)
  3. Is Mozilla legally obligated to disable the add-on or is this request based on other reasons? If other reasons, can you please specify.

Unless and until the government can explain its authority for takedown of code, Mozilla is right to resist DHS demands. Mozilla’s hosting of add-ons, and the Firefox browser itself, facilitate speech. They, like the domain name system registries ICE targeted earlier, are sometimes intermediaries necessary to users’ communication. While these private actors do not have First Amendment obligations toward us, their users, we rely on them to assert our rights (and we suffer when some, like Facebook, are less vigilant guardians of speech).

As Congress continues to discuss the ill-considered COICA, it should take note of the problems domain takedowns are already causing. Kudos to Mozilla for bringing these latest errors to public attention — and, as Tom Lowenthal suggests in the do-not-track context, standing up for its users.

cross-posted at Legal Tags