April 24, 2014

Census of Files Available via BitTorrent

BitTorrent is popular because it lets anyone distribute large files at low cost. Which kinds of files are available on BitTorrent? Sauhard Sahi, a Princeton senior, decided to find out. Sauhard’s independent work last semester, under my supervision, set out to measure what was available on BitTorrent. This post, summarizing his results, was co-written by Sauhard and me.

Sauhard chose a (uniform) random sample of files available via the trackerless variant of BitTorrent, using the Mainline DHT. The sample comprised 1021 files. He classified the files in the sample by file type, language, and apparent copyright status.
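A sampler of this kind starts by drawing uniformly from BitTorrent's 160-bit infohash keyspace, then walks the Mainline DHT toward each target to find a stored torrent. The sketch below (Python) shows only the uniform draw over the 2^160 keyspace; the DHT-walking step is omitted, and this is an illustration of the idea, not the code used in the study.

```python
import os

def random_infohash() -> str:
    """Return a uniformly random 160-bit target as 40 hex characters.

    A DHT sampler would generate targets of this form, then query
    Mainline DHT nodes progressively closer to each target and record
    the nearest stored torrent. (That network step is omitted here.)
    """
    return os.urandom(20).hex()

# Draw a handful of sample targets.
sample_targets = [random_infohash() for _ in range(5)]
assert all(len(h) == 40 for h in sample_targets)
```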

Before describing the results, we need to offer two caveats. First, the results apply only to the Mainline trackerless BitTorrent system that we surveyed. Other parts of the BitTorrent ecosystem might be different. Second, all files that were available were equally likely to appear in the sample — the sample was not weighted by number of downloads, and it probably contains files that were never downloaded at all. So we can’t say anything about the characteristics of BitTorrent downloads, or even of files that are downloaded via BitTorrent, only about files that are available on BitTorrent.

With that out of the way, here’s what Sauhard found.

File types

46% movies and shows (non-pornographic)
14% games and software
14% pornography
10% music
1% books and guides
1% images
14% could not classify

Movies/Shows

For the movies and shows category, the predominant file format was AVI, and other formats included RMVB (a proprietary format for RealPlayer), MPEG, raw DVD, and some multi-part RAR archives. Interestingly, this category was heavily biased toward recent movies, rather than being spread evenly across years. In descending order of frequency, we found that 60% of the randomly selected movies and shows were in English, 8% were in Spanish, 7% were in Russian, 5% were in Polish, 5% were in Japanese, 4% were in Chinese, 4% could not be determined, 3% were in French, 1% were in Italian, and other infrequent languages accounted for 2% of the distribution.

Games/Software

For the games and software category, there was no clearly dominant file type, but common formats included ISO disc images, multi-part RAR archives, and EXE (Windows executables). The games targeted a variety of platforms, such as the Xbox 360, Nintendo Wii, and Windows PCs. In descending order, we found that 74% of games and software in the sample were in English, 12% were in Japanese, 5% were in Spanish, 4% were in Chinese, 2% were in Polish, and 1% each were in Russian and French.

Pornography

For the pornography category, the predominant encoding format was AVI, as in the movies category, though there were significantly more MPG and WMV (Windows Media Video) files. Most pornography torrents included the full pornographic video, a sample (a 1-5 minute extract of the video), and posters or images of the performers in JPEG format. Because these videos are not typically dated the way movies are, it is difficult to assess recency bias for pornographic torrents. Our assumption would be that demand for pornography is less time-sensitive than demand for movies, so these videos likely span a broader range of time than the movies do. In descending order, we found that 53% of pornography in our sample was in English, 16% was in Chinese, 15% was in Japanese, 6% was in Russian, 3% was in German, 2% was in French, 2% was unclassifiable, and Italian, Hindi, and Spanish appeared infrequently (1% each).

Music

For the music category, the predominant encoding format was MP3; some albums were ripped to WMA (Windows Media Audio, a Microsoft codec), and there were also ISO images and multi-part RAR archives. There is still a bias toward recent albums and songs, but it is not as strong as for movies—perhaps because people are more willing to continue seeding music even after it is no longer new, so these torrents stay alive longer in the DHT. In descending order, we found that 78% of music torrents in our sample were in English, 6% were in Russian, 4% were in Spanish, 2% each were in Japanese and Chinese, and other infrequent languages appeared at 1% each.

Books/Guides

The books/guides and images categories were fairly minor. We classified 15 torrents under books and guides—13 were in English, 1 was in French, and 1 was in Russian. We classified 3 image torrents—one was a set of national park wallpapers, one was a set of pictures of BMW cars (both of these are English), and one was a Japanese comic strip.

Apparent Copyright Infringement

Our final assessment involved determining whether or not each file seemed likely to be copyright-infringing. We classified a file as likely non-infringing if it appeared to be (1) in the public domain, (2) freely available through legitimate channels, or (3) user-generated content. These were judgment calls on our part, based on the contents of the files, together with some external research.

By this definition, all of the 476 movies or TV shows in the sample were found to be likely infringing. We found seven of the 148 files in the games and software category to be likely non-infringing, including two Linux distributions, free plug-in packs for games, and free and beta software. In the pornography category, one of the 145 files claimed to be an amateur video, and we gave it the benefit of the doubt as likely non-infringing. All of the 98 music torrents were likely infringing. Two of the fifteen files in the books/guides category seemed to be likely non-infringing.

Overall, we classified ten of the 1021 files, or approximately 1%, as likely non-infringing. This result should be interpreted with caution, as we may have missed some non-infringing files, and our sample is of files available, not files actually downloaded. Still, the result strongly suggests that copyright infringement is widespread among BitTorrent users.

A Free Internet, If We Can Keep It

“We stand for a single internet where all of humanity has equal access to knowledge and ideas. And we recognize that the world’s information infrastructure will become what we and others make of it.”

These two sentences, from Secretary of State Clinton’s groundbreaking speech on Internet freedom, sum up beautifully the challenge facing our Internet policy. An open Internet can advance our values and support our interests; but we will only get there if we make some difficult choices now.

One of these choices relates to anonymity. Will it be easy to speak anonymously on the Internet, or not? This was the subject of the first question in the post-speech Q&A:

QUESTION: You talked about anonymity on line and how we have to prevent that. But you also talk about censorship by governments. And I’m struck by – having a veil of anonymity in certain situations is actually quite beneficial. So are you looking to strike a balance between that and this emphasis on censorship?

SECRETARY CLINTON: Absolutely. I mean, this is one of the challenges we face. On the one hand, anonymity protects the exploitation of children. And on the other hand, anonymity protects the free expression of opposition to repressive governments. Anonymity allows the theft of intellectual property, but anonymity also permits people to come together in settings that gives them some basis for free expression without identifying themselves.

None of this will be easy. I think that’s a fair statement. I think, as I said, we all have varying needs and rights and responsibilities. But I think these overriding principles should be our guiding light. We should err on the side of openness and do everything possible to create that, recognizing, as with any rule or any statement of principle, there are going to be exceptions.

So how we go after this, I think, is now what we’re requesting many of you who are experts in this area to lend your help to us in doing. We need the guidance of technology experts. In my experience, most of them are younger than 40, but not all are younger than 40. And we need the companies that do this, and we need the dissident voices who have actually lived on the front lines so that we can try to work through the best way to make that balance you referred to.

Secretary Clinton’s answer is trying to balance competing interests, which is what good politicians do. If we want A, and we want B, and A is in tension with B, can we have some A and some B together? Is there some way to give up a little A in exchange for a lot of B? That’s a useful way to start the discussion.

But sometimes you have to choose — sometimes A and B are profoundly incompatible. That seems to be the case here. Consider the position of a repressive government that wants to spy on a citizen’s political speech, as compared to the position of the U.S. government when it wants to eavesdrop on a suspect’s conversations under a valid search warrant. The two positions are very different morally, but they are pretty much the same technologically. Which means that either both governments can eavesdrop, or neither can. We have to choose.

Secretary Clinton saw this tension, and, being a lawyer, she saw that law could not resolve it. So she expressed the hope that technology, the aspect she understood least, would offer a solution. This is a common pattern: given a difficult technology policy problem, lawyers will tend to seek technology solutions and technologists will tend to seek legal solutions. (Paul Ohm calls this “Felten’s Third Law”.) It’s easy to reject non-solutions in your own area, because you have the knowledge to recognize why they will fail; surely, the thinking goes, a solution must be lurking somewhere in the unexplored wilderness of the other area.

If we’re forced to choose — and we will be — what kind of Internet will we have? In Secretary Clinton’s words, “the world’s information infrastructure will become what we and others make of it.” We’ll have a free Internet, if we can keep it.

No Warrant Necessary to Seize Your Laptop

U.S. Customs may search your laptop and copy your hard drive when you cross the border, according to their policy. They may do this even if they have no particularized suspicion of wrongdoing on your part, and they claim that the Fourth Amendment protection against warrantless search and seizure does not apply. Customs justifies this policy on the grounds that “examinations of documents and electronic devices are a crucial tool for detecting information concerning” all sorts of bad things, including terrorism, drug smuggling, contraband, and so on.

Historically the job of Customs was to control the flow of physical goods into the country, and their authority to search you for physical goods is well established. I am certainly not a constitutional lawyer, but to me a Customs exemption from Fourth Amendment restrictions is more clearly justified for physical contraband than for generalized searches of information.

The American Civil Liberties Union is gathering data about how this Customs enforcement policy works in practice, and they request your help. If you’ve had your laptop searched, or if you have altered your own practices to protect your data when crossing the border, staff attorney Catherine Crump would be interested in hearing about it.

Meanwhile, the ACLU has released a stack of documents they got by FOIA request.
The documents are here, and their spreadsheets analyzing the data are here. They would be quite interested to know what F-to-T readers make of these documents.

ACLU Queries for F-to-T readers:
If the answer to any of the questions below is yes, please briefly describe your experience and e-mail your response to laptopsearch at aclu.org. The ACLU promises confidentiality to anyone responding to this request.
(1) When entering or leaving the United States, has a U.S. official ever examined or browsed the contents of your laptop, PDA, cell phone, or other electronic device?

(2) When entering or leaving the United States, has a U.S. official ever detained your laptop, PDA, cell phone, or other electronic device?

(3) In light of the U.S. government’s policy of conducting suspicionless searches of laptops and other electronic devices, have you taken extra steps to safeguard your electronic information when traveling internationally, such as using encryption software or shipping a hard drive ahead to your destination?

(4) Has the U.S. government’s policy of conducting suspicionless searches of laptops and other electronic devices affected the frequency with which you travel internationally or your willingness to travel with information stored on electronic devices?

Information Technology Policy in the Obama Administration, One Year In

[Last year, I wrote an essay for Princeton's Woodrow Wilson School, summarizing the technology policy challenges facing the incoming Obama Administration. This week they published my follow-up essay, looking back on the Administration's first year. Here it is.]

Last year I identified four information technology policy challenges facing the incoming Obama Administration: improving cybersecurity, making government more transparent, bringing the benefits of technology to all, and bridging the culture gap between techies and policymakers. On these issues, the Administration’s first-year record has been mixed. Hopes were high that the most tech-savvy presidential campaign in history would lead to an equally transformational approach to governing, but bold plans were ground down by the friction of Washington.

Cybersecurity: The Administration created a new national cybersecurity coordinator (or “czar”) position but then struggled to fill it. Infighting over the job description — reflecting differences over how to reconcile security with other economic goals — left the czar relatively powerless. Cyberattacks on U.S. interests increased as the Administration struggled to get its policy off the ground.

Government transparency: This has been a bright spot. The White House pushed executive branch agencies to publish more data about their operations, and created rules for detailed public reporting of stimulus spending. Progress has been slow — transparency requires not just technology but also cultural changes within government — but the ship of state is moving in the right direction, as the public gets more and better data about government, and finds new ways to use that data to improve public life.

Bringing technology to all: On the goal of universal access to technology, it’s too early to tell. The FCC is developing a national broadband plan, in hopes of bringing high-speed Internet to more Americans, but this has proven to be a long and politically difficult process. Obama’s hand-picked FCC chair, Julius Genachowski, inherited a troubled organization but has done much to stabilize it. The broadband plan will be his greatest challenge, with lobbyists on all sides angling for advantage as our national network expands.

Closing the culture gap: The culture gap between techies and policymakers persists. In economic policy debates, health care and the economic crisis have understandably taken center stage, but there seems to be little room even at the periphery for the innovation agenda that many techies had hoped for. The tech policy discussion seems to be dominated by lawyers and management consultants, as in past Administrations. Too often, policymakers still see techies as irrelevant, and techies still see policymakers as clueless.

In recent days, creative thinking on technology has emerged from an unlikely source: the State Department. On the heels of Google’s surprising decision to back away from the Chinese market, Secretary of State Clinton made a rousing speech declaring Internet freedom and universal access to information to be important goals of U.S. foreign policy. This will lead to friction with the Chinese and other authoritarian governments, but our principles are worth defending. The Internet can be a powerful force for transparency and democratization, around the world and at home.

Software in dangerous places

Software increasingly manages the world around us, in subtle ways that are often hard to see. Software helps fly our airplanes (in some cases, particularly military fighter aircraft, software is the only thing keeping them in the air). Software manages our cars (fuel/air mixture, among other things). Software manages our electrical grid. And, closer to home for me, software runs our voting machines and manages our elections.

Sunday’s NY Times Magazine has an extended piece about faulty radiation delivery for cancer treatment. The article details two particular fault modes: procedural screwups and software bugs.

The procedural screwups (e.g., treating a patient with stomach cancer using a radiation plan intended for somebody else’s breast cancer) are heartbreaking because they could be completely eliminated through fairly simple mechanisms. How about putting barcodes on patient armbands that are read by the radiation machine? “Oops, you’re patient #103 and this radiation plan is loaded for patient #319.”
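A minimal sketch of the barcode interlock suggested above, in Python. All names and the workflow are hypothetical; a real machine would need far more than this, but the core check is one comparison.

```python
def safe_to_irradiate(armband_patient_id: str, plan_patient_id: str) -> bool:
    """Interlock sketch: refuse to start treatment unless the patient ID
    scanned from the armband matches the ID embedded in the loaded plan.
    (Illustrative only; identifiers and workflow are invented.)"""
    return armband_patient_id == plan_patient_id

assert safe_to_irradiate("103", "103")
assert not safe_to_irradiate("103", "319")  # the mix-up described above
```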

The software bugs are another matter entirely. Supposedly, medical device manufacturers, and software correctness people, have all been thoroughly indoctrinated in the history of Therac-25, a radiation machine from the mid-1980s whose poor software engineering (and user interface design) directly led to several deaths. This article seems to indicate that those lessons were never properly absorbed.

What’s perhaps even more disturbing is that nobody seems to have been deeply bothered when the radiation planning software crashed on them! Did it save their work? Maybe you should double-check. Ultimately, the radiation machine just does what it’s told, and the software that plans out the precise dosing pattern is responsible for getting it right. Well, if that software is unreliable (which the article clearly indicates), you shouldn’t use it again until it’s fixed!

What I’d like to know more about, and which the article didn’t discuss at all, is what engineering processes, third-party review processes, and certification processes were used. If there’s anything we’ve learned about voting systems, it’s that the federal and state certification processes were not up to the task of identifying security vulnerabilities, and that the vendors had demonstrably never intended their software to resist the sorts of attacks that you would expect on an election system. Instead, we’re told that we can rely on poll workers following procedures correctly. Which, of course, is exactly what the article indicates is standard practice for these medical devices. We’re relying on the device operators to do the right thing, even when the software is crashing on them, and that’s clearly inappropriate.

Writing “correct” software, and further ensuring that it’s usable, is a daunting problem. In the voting case, we can at least come up with procedures based on auditing paper ballots, or using various cryptographic techniques, that allow us to detect and correct flaws in the software (although getting such procedures adopted is a daunting problem in its own right, but that’s a story for another day). In the aviation case, which I admit to not knowing much about, I do know they put in sanity-checking software that will detect when the more detailed algorithms are asking for something insane and will override it. For medical devices like radiation machines, we clearly need a similar combination of mechanisms, both to ensure that operators don’t make avoidable mistakes, and to ensure that the software they’re using is engineered properly.
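As a rough illustration of that kind of independent sanity check: a hard-coded safe envelope that rejects any request outside it, no matter what the planning software asked for. The ceiling below is an invented placeholder, not a clinical value.

```python
MAX_FRACTION_DOSE_GY = 5.0  # hypothetical per-session ceiling, NOT a clinical value

def check_dose(planned_dose_gy: float) -> float:
    """Independent check between planner and machine: refuse any dose
    outside a fixed safe envelope, regardless of what the (possibly
    buggy) planning software requested."""
    if not (0.0 < planned_dose_gy <= MAX_FRACTION_DOSE_GY):
        raise ValueError(f"dose {planned_dose_gy} Gy outside safe envelope")
    return planned_dose_gy
```

The point of the design is that this check lives outside the planning software, so a planner bug cannot silently disable it.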

Cyber Détente Part III: American Procedural Negotiation

The first post in this series rebutted the purported Russian motive for renewed cybersecurity negotiations and the second advanced more plausible self-interested rationales. This third and final post of the series examines the U.S. negotiating position through both substantive and procedural lenses.

——————————

American interest in a substantive cybersecurity deal appears limited, and the U.S. is rightly skeptical of Russian motives (perhaps for the reasons detailed in the prior two posts). Negotiators have publicly expressed support for institutional cooperation on the closely related issue of cybercrime, but firmly oppose an arms control or cyberterrorism treaty. The tenuousness of this commitment is further reflected in the U.S. delegation’s composition: representation from the NSA, State, DoD, and DHS suggests only a preliminary willingness to hear the Russians out, and minimal consideration of a full-on bilateral negotiation.

While the cybersecurity talks may thus be substantively vacuous, they have great procedural merit when viewed in the context of shifting Russian relations and perceptions of cybersecurity.

The Bush administration’s Russia policy was marked by antagonism; proposed missile defense installations in Poland and the Czech Republic and NATO membership for Georgia and Ukraine particularly rankled the Kremlin. Upon taking office the Obama administration committed to “press[ing] the reset button” on U.S.-Russia relations by recommitting to cooperation in areas of shared interest.

Cybersecurity talks may best be evaluated as a facet of this systemic “reset.” Earnest discussions – including fruitless ones – may contribute towards a collegial relationship and further other more substantively promising negotiations between the two powers. The cybersecurity topic is particularly well suited for this role in that it brings often less-than-friendly defense, intelligence, and law enforcement agencies to the same table.

Inside-the-beltway perceptions of cybersecurity have also experienced a sea change. In the early Bush administration cybersecurity problems were predominantly construed as cybercrime problems, and consequently within the purview of law enforcement. For example, one of the first “major actions” advocated by the White House’s 2003 National Strategy to Secure Cyberspace was, “[e]nhance law enforcement’s capabilities for preventing and prosecuting cyberspace attacks.” But by the Obama administration cybersecurity was perceived as a national security issue; the 2009 Cyberspace Policy Review located primary responsibility for cybersecurity in the National Security Council.

This shift suggests additional procedural causes for renewed U.S.-Russia and UN cybersecurity talks. Not only do the discussions reflect the new perception of cybersecurity as a national security issue, but also they nudge other nations towards that view. And directly engaging defense and intelligence agencies accustoms them to viewing cybersecurity as an international issue within their domain.

The U.S. response of simultaneously substantively balking at and procedurally engaging with Russia on cybersecurity appears well-calibrated. Where meager opportunity exists for concluding a meaningful cybersecurity instrument given the Russian motives discussed earlier, the U.S. is nonetheless generating value.

While this favorable outcome is reassuring, it is by no means guaranteed for future cybersecurity talks. There is already a noxious atmosphere of often unwarranted alarmism about cyberwarfare and free-form parallels drawn between cyberattack and weapons of mass destruction. Admix the recurrently prophesied “Digital Pearl Harbor” and it is easy to imagine how an international compact on cybersecurity could look all-too-appealing. This pitfall can only be avoided by training an informed, critical eye on states’ motives to develop the appropriate – if any – cybersecurity negotiating position.

Cyber Détente Part II: Russian Diplomatic and Strategic Self-Interest

The first post in this series rebutted the purported Russian motive for negotiations, avoiding a security dilemma. This second post posits two alternative self-interested Russian inducements for rapprochement: legitimizing use of force and strategic advantage.

——————————

An alternative rationale for talks advanced by the Russians is fear of “cyberterror” – not the capacity for offensive cyberwarfare, but its use against civilians. A weapons use treaty of this sort could have value in establishing a norm against civilian cyberattack… but there are already strong international treaties and norms against attacks aimed at civilians. And at any rate the untraceability of most cyberattacks will take the teeth out of any use-banning treaty of this sort.

The U.S. delegation is rightly skeptical of this motive; the Russians may well be raising cyberterror in the interest of legitimating use of conventional force. The Russians have repeatedly likened political dissidence to cyberterror, and Russia may invoke a substantive cyberterrorism treaty as license to pursue political vendettas with conventional force. To probe how such a treaty might function, consider first a hypothetical full-blown infrastructure-crippling act of cyberterror where the perpetrator is known – Russia already need not restrain itself in retaliating. On the other hand, consider the inevitable website defacements by Chechen separatists or Georgian sympathizers in the midst of increasing hostilities – acts of cyberterrorism in violation of a treaty will assuredly be added to the list of provocations should Russia elect to engage in armed conflict.

This simple thought experiment reveals the deep faultlines that will emerge in negotiating any cyberterrorism treaty. Where is the boundary between vandalism (and other petty cybercrime) and cyberterror? What if acts are committed, as is often the case, by nationals of a state but not its government? What proof is required to sustain an allegation of cyberterror? Doubtless the Russian delegation would advance a broad definition of cyberterror, while the Americans would propose a narrowly circumscribed one. Even if, improbably, the U.S. and Russia negotiated to a shared definition of cyberterror, I fail to see how it could be articulated in a manner not prone to later manipulation. It is not difficult to imagine, for example, how trivial defacement of a bank’s website might be shoehorned into a narrow definition: “destructive acts targeting critical civilian infrastructure.”

Another compelling motive for the Russians is realist self-interest: the Russians may believe they will gain a strategic advantage from a capacity-limiting cyberwarfare treaty. At first blush this seems an implausible reading – the U.S., with its technologically advanced and integrated armed forces, appears a far richer target for cyberattack than Russia, which still relies on decrepit Soviet equipment. Moreover, anecdotally the U.S. military has proven highly vulnerable: multiple unattributed attacks have penetrated defense-related systems (most prominently in 2007), and late last year the Wall Street Journal reported that Iraqi militants trivially intercepted live video from Predator drones. But looking ahead, a Russian self-interest motive is more plausible. Russia has made no secret of its attempts to rapidly stand up modern, professional armed forces, and in 2009 alone increased military spending by over 25% (projects include a revamped navy and a satellite positioning system, among many others). To accomplish this end the Russians may rely to a large degree on information technology, and particularly on commercial off-the-shelf hardware and software. Lacking time and finances, the Russians may be unable to secure their new military systems against cyberattack. Thus while at present the U.S. is more vulnerable, in the future Russia may have greater weaknesses. Locking in a cyberwarfare arms control agreement now, while the U.S. is more likely to sign on, could therefore be in Russia’s long-term strategic self-interest.

The specific offensive capabilities Russia has reportedly sought to ban strongly corroborate this self-interest rationale. In prior negotiations the Russian delegation has signaled particular concern about deliberately planted software and hardware that would allow military equipment to be disabled or co-opted. The U.S. will likely have far greater success in developing assets of this sort, given the at times close relationship between intelligence agencies and commercial IT firms (e.g. the NSA warrantless wiretapping scandal) and the worldwide prevalence of American IT in military applications (think Windows). Russia, on the other hand, would likely have to rely on human intelligence to place assets of this sort.

Russia’s renewed interest in bilateral cybersecurity negotiations also belies its purported security dilemma rationale. Russian interest in talks lapsed between 1996 and 2009, suggesting a novel stimulus is at work, not some long-standing fear of a security dilemma. The recent rise of alleged “cyberterror” and attempts to modernize Russian armed forces – especially in the wake of the 2008 South Ossetia War with Georgia – far better correlate with Russia’s eagerness to come to the table.

To put a point on these two posts, I submit legitimization of use of force and strategic self-interest are far more plausible Russian motives for cybersecurity negotiations than the purported rationale of avoiding a security dilemma and consequent arms race or destabilization. In the following post I will explore the U.S. delegation’s position and argue the American response to Russia’s proposals is well-calibrated.

Google Attacks Highlight the Importance of Surveillance Transparency

Ed posted yesterday about Google’s bombshell announcement that it is considering pulling out of China in the wake of a sophisticated attack on its infrastructure. People more knowledgeable than me about China have weighed in on the announcement’s implications for the future of US-Sino relations and the evolution of the Chinese Internet. Rebecca MacKinnon, a China expert who will be a CITP visiting scholar beginning next month, says that “Google has taken a bold step onto the right side of history.” She has a roundup of Chinese reactions here.

One aspect of Google’s post that hasn’t received a lot of attention is Google’s statement that “only two Gmail accounts appear to have been accessed, and that activity was limited to account information (such as the date the account was created) and subject line, rather than the content of emails themselves.” A plausible explanation for this is provided by this article (via James Grimmelmann) at PC World:

Drummond said that the hackers never got into Gmail accounts via the Google hack, but they did manage to get some “account information (such as the date the account was created) and subject line.”

That’s because they apparently were able to access a system used to help Google comply with search warrants by providing data on Google users, said a source familiar with the situation, who spoke on condition of anonymity because he was not authorized to speak with the press.

Obviously, this report should be taken with a grain of salt since it’s based on a single anonymous source. But it fits a pattern identified by our own Jen Rexford and her co-authors in an excellent 2007 paper: when communications systems are changed to make it easier for US authorities to conduct surveillance, it necessarily increases the vulnerability of those systems to attacks by other parties, including foreign governments.

Rexford and her co-authors point to a 2006 incident in which unknown parties exploited vulnerabilities in Vodafone’s network to tap the phones of dozens of senior Greek government officials. According to news reports, these attacks were made possible because Greek telecommunications carriers had deployed equipment with built-in surveillance capabilities, but had not paid the equipment vendor, Ericsson, to activate this “feature.” This left the equipment in a vulnerable state. The attackers surreptitiously switched on the surveillance capabilities and used them to intercept the communications of senior government officials.

It shouldn’t surprise us that systems built to give law enforcement access to private communications could become vectors for malicious attacks. First, these interfaces are often backwaters in the system design. The success of any consumer product is going to depend on its popularity with customers. Therefore, a vendor or network provider is going to deploy its talented engineers to work on the public-facing parts of the product. It is likely to assign a smaller team of less-talented engineers to work on the law-enforcement interface, which is likely to be both less technically interesting and less crucial to the company’s bottom line.

Second, the security model of a law enforcement interface is likely to be more complex and less well-specified than the user-facing parts of the service. For the mainstream product, the security goal is simple: the customer should be able to access his or her own data and no one else’s. In contrast, determining which law enforcement officials are entitled to which information, and how those officials are to be authenticated, can become quite complex. Greater complexity means a higher likelihood of mistakes.

Finally, the public-facing portions of a consumer product benefit from free security audits by “white hat” security experts like our own Bill Zeller. If a public-facing website, cell phone network, or other consumer product has a security vulnerability, the company is likely to hear about the problem first from a non-malicious source. This means that at least the most obvious security problems will be noticed and fixed quickly, before the bad guys have a chance to exploit them. In contrast, if an interface is shrouded in secrecy, and only accessible to law enforcement officials, then even obvious security vulnerabilities are likely to go unnoticed and unfixed. Such an interface will be a target-rich environment if a malicious hacker ever does get the opportunity to attack it.

This is an added reason to insist on rigorous public and judicial oversight of our domestic surveillance capabilities in the United States. There has been a recent trend, cemented by the 2008 FISA Amendments, toward law enforcement and intelligence agencies conducting eavesdropping without meaningful judicial (to say nothing of public) scrutiny. Last month, Chris Soghoian uncovered new evidence suggesting that government agencies are collecting much more private information than has been publicly disclosed. Many people, myself included, oppose this expansion of domestic surveillance on civil liberties grounds. But even if you’re unmoved by those arguments, you should still be concerned about these developments on national security grounds.

As long as these eavesdropping systems are shrouded in secrecy, there’s no way for “white hat” security experts to even begin evaluating them for potential security risks. And that, in turn, means that voters and policymakers will be operating in the dark. Programs that risk exposing our communications systems to the bad guys won’t be identified and shut down. Which means the culture of secrecy that increasingly surrounds our government’s domestic spying programs not only undermines the rule of law, it’s a danger to national security as well.

Update: Props to my colleague Julian Sanchez, who made the same observation 24 hours ahead of me.


Google Threatens to Leave China

The big news today is Google’s carefully worded statement changing its policy toward China. Up to now, Google has run a China-specific site, google.cn, which censors results consistent with the demands of the Chinese government. Google now says it plans to offer only unfiltered service to Chinese customers. Presumably the Chinese government will not allow this and will respond by setting the Great Firewall to block Google. Google says it is willing to close its China offices (three offices, with several hundred employees, according to a Google spokesman) if necessary.

This looks like a significant turning point in relations between U.S. companies and the Chinese government.

Before announcing the policy change, the statement discusses a series of cyberattacks against Google which sought access to Google-hosted accounts of Chinese dissidents. Indeed, most of the statement is about the attacks, with the policy change tacked on the end.

Though the statement adopts a measured tone, it’s hard to escape the conclusion that Google is angry, presumably because it knows or strongly suspects that the Chinese government is responsible for the attacks. Perhaps there are other details, which aren’t public at this time, that further explain Google’s reaction.

Or maybe the attacks are just the straw that broke the camel’s back — that Google had already concluded that the costs of engagement in China were higher than expected, and the revenue lower.

Either way, the Chinese are unlikely to back down from this kind of challenge. Expect the Chinese government, backed by domestic public opinion, to react with defiance. Already the Chinese search engine Baidu has issued a statement fanning the flames.

We’ll see over the coming days and weeks how the other U.S. Internet companies react. It will be interesting, too, to see how the U.S. government reacts — it can’t be happy with the attacks, but how far will the White House be willing to go?

Please, chime in with your own opinions.

[UPDATE (Jan. 13): I struck the sentence about Baidu's statement, because I now have reason to believe the translated statement I saw may not be genuine.]


Cyber Détente Part I: A Security Dilemma?

Late last year the Obama administration reopened talks with Russia over the militarization of cyberspace and assented to cybersecurity discussion in the United Nations First Committee (Disarmament and National Security). My intention in this three-part series is to probe Russian and American foreign policy on cyberwarfare and advance the thesis that the Russians are negotiating for specific strategic or diplomatic gains, while the Americans are primarily procedurally invested owing to the “reset” in Russian relations and changing perceptions of cyberwarfare.

This first post rebuts the Russians’ purported rationale for talks: avoiding a security dilemma.

——————————

The Russians seek a cyberwarfare arms control instrument ostensibly to avoid a security dilemma and arms race, in the vein of past arrangements for nuclear weapons (i.e. SALT I/II, START I/II, and SORT) and anti-ballistic missile technology (ABM), among others. This basis for negotiations does not withstand scrutiny.

A security dilemma may arise where a state has the opportunity to develop a game-changing new weapons system, even if for purely defensive purposes. For fear of strategic disadvantage other powers may elect to develop the weapon – an arms race – resulting in none gaining a strategic advantage and all bearing a significant cost. Alternatively, technologically incapable of matching or unable to afford the development, other states may take destabilizing offensive steps. Arms control treaties resolve this form of security dilemma by committing states to not developing certain weapons.

Cyberwarfare lacks the necessary elements of a security dilemma. First and foremost, cyberwarfare capabilities defy quantification. Consider the Cold War nuclear arms race, for example, and the strategic fixation on differences in the number and type of nuclear warheads and delivery systems (the “missile gap”). In the absence of such a metric, the two powers have no means of calibrating their activities, and there is no persistent pressure to match or surpass some specific capability the other side maintains.

Intelligence might give each power a rough indication of the other’s cyberwarfare capabilities, but it will be harder to come by than for other military operations. Unlike with other weapons systems, cyberwarfare does not require special installations or resources. There are no centrifuge sites to inspect or uranium shipments to track – just talented programmers and generic computer hardware.

A related issue is that a successful arms control agreement on cyberwarfare would require monitoring and enforcement provisions (“trust but verify”). But, as discussed above, intelligence on cyberwarfare capabilities is harder to come by than for other weapons systems. The Biological Weapons Convention illustrates how ineffective an arms control treaty can be without effective monitoring: until a 1989 defection, the West was unaware of the scope of Russia’s secret biological weapons program.

Supposing, arguendo, that cyberwarfare capabilities did form an avoidable security dilemma, the negative results that make a security dilemma worth avoiding – excessive expenditures and destabilization – do not arise.

Cyberwarfare is cheap. Developing the F-22 aircraft, for example, cost roughly $65 billion; the annual Air Force cyberspace budget, on the other hand, appears to be in the low billions and consists primarily of personnel and basing expenditures (Strategic Command Press Release; FY2010 budget).

As for destabilization, there is minimal marginal strategic gain from cyberwarfare capabilities. In the Cold War nuclear arms race there was a perception that if the other side achieved even a slight advantage the bipolar strategic equilibrium would collapse. Cyberwarfare is neither perceived to be – nor is it, in actuality – so effective on the margin. While specific capabilities are not public, it is difficult to imagine cyberattacks will be consistently more effective than conventional strikes. Moreover, given the United States’ enormous strategic advantages in the whole, even significant marginal strategic gains would do little to tip the balance of power to Russia.

Having deconstructed the alleged Russian rationale for talks, I will explore alternate, viable Russian rationales in the next post in this series.