Archives for 2010

Software in dangerous places

Software increasingly manages the world around us, in subtle ways that are often hard to see. Software helps fly our airplanes (in some cases, particularly military fighter aircraft, software is the only thing keeping them in the air). Software manages our cars (fuel/air mixture, among other things). Software manages our electrical grid. And, closer to home for me, software runs our voting machines and manages our elections.

Sunday’s NY Times Magazine has an extended piece about faulty radiation delivery for cancer treatment. The article details two particular fault modes: procedural screwups and software bugs.

The procedural screwups (e.g., treating a patient with stomach cancer using a radiation plan intended for somebody else’s breast cancer) are heartbreaking because they could be completely eliminated through fairly simple mechanisms. How about putting barcodes on patient armbands that are read by the radiation machine? “Oops, you’re patient #103 and this radiation plan is loaded for patient #319.”
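To make the idea concrete, here is a minimal sketch of such an interlock in Python (all names, IDs, and values are hypothetical, invented for illustration): the machine refuses to turn the beam on unless the scanned wristband ID matches the patient ID recorded in the loaded treatment plan.

```python
# Hypothetical safety interlock (illustrative sketch, not a real device API):
# refuse to treat unless the scanned wristband matches the patient the
# loaded plan was prescribed for.
from dataclasses import dataclass

@dataclass
class TreatmentPlan:
    patient_id: str   # patient the plan was prescribed for
    dose_gray: float  # total prescribed dose (made-up value below)

def verify_patient(scanned_wristband_id: str, plan: TreatmentPlan) -> None:
    """Fail closed: raise before any beam-on if the IDs disagree."""
    if scanned_wristband_id != plan.patient_id:
        raise RuntimeError(
            f"Interlock: wristband {scanned_wristband_id!r} does not match "
            f"plan for patient {plan.patient_id!r}; treatment blocked."
        )

plan = TreatmentPlan(patient_id="319", dose_gray=45.0)
try:
    verify_patient("103", plan)  # patient #103, plan loaded for patient #319
except RuntimeError as err:
    print(err)
```

The crucial design choice is that the check fails closed: a mismatch halts treatment outright rather than merely logging a warning that a busy operator might dismiss.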

The software bugs are another matter entirely. Supposedly, medical device manufacturers and software correctness people have all been thoroughly indoctrinated in the history of the Therac-25, a radiation machine from the mid-1980s whose poor software engineering (and user interface design) directly led to several deaths. This article seems to indicate that those lessons were never properly absorbed.

What’s perhaps even more disturbing is that nobody seems to have been deeply bothered when the radiation planning software crashed on them! Did it save their work? Maybe you should double-check? Ultimately, the radiation machine just does what it’s told, and the software that plans out the precise dosing pattern is responsible for getting it right. Well, if that software is unreliable (which the article clearly indicates), you shouldn’t use it again until it’s fixed!

What I’d like to know more about, and what the article didn’t discuss at all, is what engineering processes, third-party review processes, and certification processes were used. If there’s anything we’ve learned about voting systems, it’s that the federal and state certification processes were not up to the task of identifying security vulnerabilities, and that the vendors had demonstrably never intended their software to resist the sorts of attacks that you would expect on an election system. Instead, we’re told that we can rely on poll workers following procedures correctly. Which, of course, is exactly what the article indicates is standard practice for these medical devices. We’re relying on the device operators to do the right thing, even when the software is crashing on them, and that’s clearly inappropriate.

Writing “correct” software, and further ensuring that it’s usable, is a daunting problem. In the voting case, we can at least come up with procedures based on auditing paper ballots, or on various cryptographic techniques, that allow us to detect and correct flaws in the software (getting such procedures adopted is a daunting problem in its own right, but that’s a story for another day). In the aviation case, which I admit to not knowing much about, I do know they put in sanity-checking software that will detect when the more detailed algorithms are asking for something insane, and will override them. For medical devices like radiation machines, we clearly need a similar combination of mechanisms, both to ensure that operators don’t make avoidable mistakes, and to ensure that the software they’re using is engineered properly.
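As a rough illustration of that aviation-style sanity check, an independent guard could validate every dosing plan against hard limits before the machine accepts it. This is only a sketch; the limit and the dose values below are invented for illustration and are not real clinical numbers.

```python
# Illustrative envelope check: an independent guard that rejects any
# per-fraction dose outside a hard limit, no matter what the planning
# software computed. The ceiling below is made up for illustration only.
MAX_DOSE_PER_FRACTION_GRAY = 5.0  # hypothetical hard ceiling

def check_plan(fraction_doses: list[float]) -> list[str]:
    """Return a list of violations; an empty list means the plan passes."""
    violations = []
    for i, dose in enumerate(fraction_doses):
        if dose <= 0 or dose > MAX_DOSE_PER_FRACTION_GRAY:
            violations.append(
                f"fraction {i}: {dose} Gy outside (0, "
                f"{MAX_DOSE_PER_FRACTION_GRAY}] Gy"
            )
    return violations

problems = check_plan([1.8, 1.8, 25.0])  # third fraction is clearly insane
if problems:
    # Fail closed: refuse the plan rather than trust the planner.
    print("Plan rejected:")
    for p in problems:
        print(" ", p)
```

Because the guard is separate from the planning software, a bug in the planner cannot also disable the check, which is the same independence that makes the aviation version valuable.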

Cyber Détente Part III: American Procedural Negotiation

The first post in this series rebutted the purported Russian motive for renewed cybersecurity negotiations and the second advanced more plausible self-interested rationales. This third and final post of the series examines the U.S. negotiating position through both substantive and procedural lenses.

——————————

American interest in a substantive cybersecurity deal appears limited, and the U.S. is rightly skeptical of Russian motives (perhaps for the reasons detailed in the prior two posts). Negotiators have publicly expressed support for institutional cooperation on the closely related issue of cybercrime, but firmly oppose an arms control or cyberterrorism treaty. This tenuous commitment is further evidenced by the composition of the U.S. delegation: representation of the NSA, State, DoD, and DHS suggests only a preliminary willingness to hear the Russians out, and minimal consideration of a full-on bilateral negotiation.

While the cybersecurity talks may thus be substantively vacuous, they have great procedural merit when viewed in the context of shifting Russian relations and perceptions of cybersecurity.

The Bush administration’s Russia policy was marked by antagonism; proposed missile defense installations in Poland and the Czech Republic and NATO membership for Georgia and Ukraine particularly rankled the Kremlin. Upon taking office the Obama administration committed to “press[ing] the reset button” on U.S.-Russia relations by recommitting to cooperation in areas of shared interest.

Cybersecurity talks may best be evaluated as a facet of this systemic “reset.” Earnest discussions – including fruitless ones – may contribute towards a collegial relationship and further other more substantively promising negotiations between the two powers. The cybersecurity topic is particularly well suited for this role in that it brings often less-than-friendly defense, intelligence, and law enforcement agencies to the same table.

Inside-the-beltway perceptions of cybersecurity have also experienced a sea change. In the early Bush administration, cybersecurity problems were predominantly construed as cybercrime problems, and consequently fell within the purview of law enforcement. For example, one of the first “major actions” advocated by the White House’s 2003 National Strategy to Secure Cyberspace was to “[e]nhance law enforcement’s capabilities for preventing and prosecuting cyberspace attacks.” By the time of the Obama administration, cybersecurity was perceived as a national security issue; the 2009 Cyberspace Policy Review located primary responsibility for cybersecurity in the National Security Council.

This shift suggests additional procedural causes for renewed U.S.-Russia and UN cybersecurity talks. Not only do the discussions reflect the new perception of cybersecurity as a national security issue, but they also nudge other nations towards that view. And directly engaging defense and intelligence agencies accustoms them to viewing cybersecurity as an international issue within their domain.

The U.S. response of balking at substance while engaging on procedure with Russia on cybersecurity appears well-calibrated. While meager opportunity exists for concluding a meaningful cybersecurity instrument, given the Russian motives discussed earlier, the U.S. is nonetheless generating value.

While this favorable outcome is reassuring, it is by no means guaranteed for future cybersecurity talks. There is already a noxious atmosphere of often unwarranted alarmism about cyberwarfare, with free-form parallels drawn between cyberattack and weapons of mass destruction. Admix the recurrently prophesied “Digital Pearl Harbor” and it is easy to imagine how an international compact on cybersecurity could look all too appealing. This pitfall can only be avoided by training an informed, critical eye on states’ motives in order to develop the appropriate cybersecurity negotiating position – if any.

Cyber Détente Part II: Russian Diplomatic and Strategic Self-Interest

The first post in this series rebutted the purported Russian motive for negotiations – avoiding a security dilemma. This second post posits two alternative, self-interested Russian inducements for rapprochement: legitimizing use of force and gaining strategic advantage.

——————————

An alternative rationale for talks advanced by the Russians is fear of “cyberterror” – not the capacity for offensive cyberwarfare, but its use against civilians. A weapons-use treaty of this sort could have value in establishing a norm against civilian cyberattack… but there are already strong international treaties and norms against attacks aimed at civilians. And at any rate, the untraceability of most cyberattacks would take the teeth out of any use-banning treaty of this sort.

The U.S. delegation is rightly skeptical of this motive; the Russians may well be raising cyberterror in the interest of legitimating the use of conventional force. The Russians have repeatedly likened political dissidence to cyberterror, and a substantive cyberterrorism treaty could serve Russia as license to pursue political vendettas with conventional force. To probe how such a treaty might function, consider first a hypothetical full-blown, infrastructure-crippling act of cyberterror where the perpetrator is known – Russia already need not restrain itself in retaliating. On the other hand, consider the inevitable website defacements by Chechen separatists or Georgian sympathizers in the midst of increasing hostilities – acts of cyberterrorism in violation of a treaty would assuredly be added to the list of provocations should Russia elect to engage in armed conflict.

This simple thought experiment reveals the deep faultlines that will emerge in negotiating any cyberterrorism treaty. Where is the boundary between vandalism (and other petty cybercrime) and cyberterror? What if acts are committed, as is often the case, by nationals of a state but not by its government? What proof is required to sustain an allegation of cyberterror? Doubtless the Russian delegation would advance a broad definition of cyberterror, while the Americans would propose a narrowly circumscribed one. Even if, improbably, the U.S. and Russia negotiated their way to a shared definition of cyberterror, I fail to see how it could be articulated in a manner not prone to later manipulation. It is not difficult to imagine, for example, how trivial defacement of a bank’s website might be shoehorned into even a narrow definition such as “destructive acts targeting critical civilian infrastructure.”

Another compelling motive for the Russians is realist self-interest: the Russians may believe they will gain a strategic advantage from a capacity-limiting cyberwarfare treaty. At first blush this seems an implausible reading – the U.S., with its technologically advanced and integrated armed forces, appears a far richer target for cyberattack than Russia, which still relies on decrepit Soviet-era equipment. Moreover, anecdotally the U.S. military has proven highly vulnerable: multiple unattributed attacks have penetrated defense-related systems (most prominently in 2007), and late last year the Wall Street Journal reported that Iraqi militants trivially intercepted live video from Predator drones. But looking ahead, a Russian self-interest motive is more plausible. Russia has made no secret of its attempts to rapidly stand up modern, professional armed forces, and in 2009 alone increased military spending by over 25% (projects include a revamped navy and a satellite positioning system, among many others). To accomplish this end the Russians may rely to a large degree on information technology, and particularly on commercial off-the-shelf hardware and software. Lacking time and money, the Russians may be unable to secure their new military systems against cyberattack. Thus while at present the U.S. is more vulnerable, in the future Russia may have greater weaknesses. Locking in a cyberwarfare arms control agreement now, while the U.S. is more likely to sign on, could therefore be in Russia’s long-term strategic self-interest.

The specific offensive capabilities Russia has reportedly sought to ban strongly corroborate this self-interest rationale. In prior negotiations the Russian delegation has signaled particular concern about deliberately planted software and hardware that would allow military equipment to be disabled or co-opted. The U.S. would likely have far greater success in developing assets of this sort, given the at-times close relationship between intelligence agencies and commercial IT firms (e.g., the NSA warrantless wiretapping scandal) and the worldwide prevalence of American IT in military applications (think Windows). Russia, on the other hand, would likely have to rely on human intelligence to place assets of this sort.

Russia’s renewed interest in bilateral cybersecurity negotiations also belies its purported security dilemma rationale. Russian interest in talks lapsed between 1996 and 2009, suggesting a novel stimulus is at work, not some long-standing fear of a security dilemma. The recent rise of alleged “cyberterror” and attempts to modernize Russian armed forces – especially in the wake of the 2008 South Ossetia War with Georgia – far better correlate with Russia’s eagerness to come to the table.

To put a point on these two posts: I submit that legitimizing the use of force and strategic self-interest are far more plausible Russian motives for cybersecurity negotiations than the purported rationale of avoiding a security dilemma and a consequent arms race or destabilization. In the following post I will explore the U.S. delegation’s position and argue that the American response to Russia’s proposals is well-calibrated.

Google Attacks Highlight the Importance of Surveillance Transparency

Ed posted yesterday about Google’s bombshell announcement that it is considering pulling out of China in the wake of a sophisticated attack on its infrastructure. People more knowledgeable than me about China have weighed in on the announcement’s implications for the future of US-Sino relations and the evolution of the Chinese Internet. Rebecca MacKinnon, a China expert who will be a CITP visiting scholar beginning next month, says that “Google has taken a bold step onto the right side of history.” She has a roundup of Chinese reactions here.

One aspect of Google’s post that hasn’t received a lot of attention is Google’s statement that “only two Gmail accounts appear to have been accessed, and that activity was limited to account information (such as the date the account was created) and subject line, rather than the content of emails themselves.” A plausible explanation for this is provided by this article (via James Grimmelmann) at PC World:

Drummond said that the hackers never got into Gmail accounts via the Google hack, but they did manage to get some “account information (such as the date the account was created) and subject line.”

That’s because they apparently were able to access a system used to help Google comply with search warrants by providing data on Google users, said a source familiar with the situation, who spoke on condition of anonymity because he was not authorized to speak with the press.

Obviously, this report should be taken with a grain of salt since it’s based on a single anonymous source. But it fits a pattern identified by our own Jen Rexford and her co-authors in an excellent 2007 paper: when communications systems are changed to make it easier for US authorities to conduct surveillance, it necessarily increases the vulnerability of those systems to attacks by other parties, including foreign governments.

Rexford and her co-authors point to a 2006 incident in which unknown parties exploited vulnerabilities in Vodafone’s network to tap the phones of dozens of senior Greek government officials. According to news reports, these attacks were made possible because Greek telecommunications carriers had deployed equipment with built-in surveillance capabilities, but had not paid the equipment vendor, Ericsson, to activate this “feature.” This left the equipment in a vulnerable state. The attackers surreptitiously switched on the surveillance capabilities and used them to intercept the communications of senior government officials.

It shouldn’t surprise us that systems built to give law enforcement access to private communications could become vectors for malicious attacks. First, these interfaces are often backwaters in the system design. The success of any consumer product is going to depend on its popularity with customers. Therefore, a vendor or network provider is going to deploy its talented engineers to work on the public-facing parts of the product. It is likely to assign a smaller team of less-talented engineers to work on the law-enforcement interface, which is likely to be both less technically interesting and less crucial to the company’s bottom line.

Second, the security model of a law enforcement interface is likely to be more complex and less well-specified than the user-facing parts of the service. For the mainstream product, the security goal is simple: the customer should be able to access his or her own data and no one else’s. In contrast, determining which law enforcement officials are entitled to which information, and how those officials are to be authenticated, can become quite complex. Greater complexity means a higher likelihood of mistakes.
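That difference in complexity is easy to see in code. The following is a purely hypothetical sketch, not any vendor’s actual interface: the consumer check is a single comparison, while the lawful-access check conjoins several conditions (credentials, target, validity window, scope, issuing court), each of which must be specified and implemented correctly.

```python
# Hypothetical contrast between the two security models described above.
# All names (Warrant, TRUSTED_COURTS, scope strings) are invented.
from dataclasses import dataclass, field
from datetime import date

TRUSTED_COURTS = {"District Court A"}  # hypothetical issuer whitelist

@dataclass
class Agent:
    credentials_valid: bool

@dataclass
class Warrant:
    target_id: str
    not_before: date
    expires: date
    scope: set = field(default_factory=set)
    issuing_court: str = ""

def user_may_read(requester_id: str, mailbox_owner_id: str) -> bool:
    """Consumer model: one rule -- you may read only your own data."""
    return requester_id == mailbox_owner_id

def agent_may_read(agent: Agent, warrant: Warrant,
                   mailbox_owner_id: str, today: date) -> bool:
    """Lawful-access model: many conjoined rules, each a chance for error."""
    return (
        agent.credentials_valid                             # who is asking?
        and warrant.target_id == mailbox_owner_id           # for this account?
        and warrant.not_before <= today <= warrant.expires  # still in force?
        and "subject_lines" in warrant.scope                # which data?
        and warrant.issuing_court in TRUSTED_COURTS         # issued by whom?
    )

w = Warrant("alice", date(2010, 1, 1), date(2010, 6, 30),
            scope={"subject_lines"}, issuing_court="District Court A")
print(user_may_read("alice", "alice"))                            # True
print(agent_may_read(Agent(True), w, "alice", date(2010, 3, 1)))  # True
```

Every additional conjunct is another opportunity for a specification gap or an implementation bug, which is exactly the complexity concern described above.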

Finally, the public-facing portions of a consumer product benefit from free security audits from “white hat” security experts like our own Bill Zeller. If a publicly-facing website, cell phone network or other consumer product has a security vulnerability, the company is likely to hear about the problem first from a non-malicious source. This means that at least the most obvious security problems will be noticed and fixed quickly, before the bad guys have a chance to exploit them. In contrast, if an interface is shrouded in secrecy, and only accessible to law enforcement officials, then even obvious security vulnerabilities are likely to go unnoticed and unfixed. Such an interface will be a target-rich environment if a malicious hacker ever does get the opportunity to attack it.

This is an added reason to insist on rigorous public and judicial oversight of our domestic surveillance capabilities in the United States. There has been a recent trend, cemented by the 2008 FISA Amendments, toward law enforcement and intelligence agencies conducting eavesdropping without meaningful judicial (to say nothing of public) scrutiny. Last month, Chris Soghoian uncovered new evidence suggesting that government agencies are collecting much more private information than has been publicly disclosed. Many people, myself included, oppose this expansion of domestic surveillance on civil liberties grounds. But even if you’re unmoved by those arguments, you should still be concerned about these developments on national security grounds.

As long as these eavesdropping systems are shrouded in secrecy, there’s no way for “white hat” security experts to even begin evaluating them for potential security risks. And that, in turn, means that voters and policymakers will be operating in the dark. Programs that risk exposing our communications systems to the bad guys won’t be identified and shut down. Which means the culture of secrecy that increasingly surrounds our government’s domestic spying programs not only undermines the rule of law, it’s a danger to national security as well.

Update: Props to my colleague Julian Sanchez, who made the same observation 24 hours ahead of me.

Google Threatens to Leave China

The big news today is Google’s carefully worded statement changing its policy toward China. Up to now, Google has run a China-specific site, google.cn, which censors results consistent with the demands of the Chinese government. Google now says it plans to offer only unfiltered service to Chinese customers. Presumably the Chinese government will not allow this and will respond by setting the Great Firewall to block Google. Google says it is willing to close its China offices (three offices, with several hundred employees, according to a Google spokesman) if necessary.

This looks like a significant turning point in relations between U.S. companies and the Chinese government.

Before announcing the policy change, the statement discusses a series of cyberattacks against Google which sought access to Google-hosted accounts of Chinese dissidents. Indeed, most of the statement is about the attacks, with the policy change tacked on the end.

Though the statement adopts a measured tone, it’s hard to escape the conclusion that Google is angry, presumably because it knows or strongly suspects that the Chinese government is responsible for the attacks. Perhaps there are other details, which aren’t public at this time, that further explain Google’s reaction.

Or maybe the attacks are just the straw that broke the camel’s back — that Google had already concluded that the costs of engagement in China were higher than expected, and the revenue lower.

Either way, the Chinese are unlikely to back down from this kind of challenge. Expect the Chinese government, backed by domestic public opinion, to react with defiance. Already the Chinese search engine Baidu has issued a statement fanning the flames.

We’ll see over the coming days and weeks how the other U.S. Internet companies react. It will be interesting, too, to see how the U.S. government reacts — it can’t be happy with the attacks, but how far will the White House be willing to go?

Please, chime in with your own opinions.

[UPDATE (Jan. 13): I struck the sentence about Baidu’s statement, because I now have reason to believe the translated statement I saw may not be genuine.]