Archives for January 2010

Information Technology Policy in the Obama Administration, One Year In

[Last year, I wrote an essay for Princeton’s Woodrow Wilson School, summarizing the technology policy challenges facing the incoming Obama Administration. This week they published my follow-up essay, looking back on the Administration’s first year. Here it is.]

Last year I identified four information technology policy challenges facing the incoming Obama Administration: improving cybersecurity, making government more transparent, bringing the benefits of technology to all, and bridging the culture gap between techies and policymakers. On these issues, the Administration’s first-year record has been mixed. Hopes were high that the most tech-savvy presidential campaign in history would lead to an equally transformational approach to governing, but bold plans were ground down by the friction of Washington.

Cybersecurity: The Administration created a new national cybersecurity coordinator (or “czar”) position but then struggled to fill it. Infighting over the job description — reflecting differences over how to reconcile security with other economic goals — left the czar relatively powerless. Cyberattacks on U.S. interests increased as the Administration struggled to get its policy off the ground.

Government transparency: This has been a bright spot. The White House pushed executive branch agencies to publish more data about their operations, and created rules for detailed public reporting of stimulus spending. Progress has been slow — transparency requires not just technology but also cultural changes within government — but the ship of state is moving in the right direction, as the public gets more and better data about government, and finds new ways to use that data to improve public life.

Bringing technology to all: On the goal of universal access to technology, it’s too early to tell. The FCC is developing a national broadband plan, in hopes of bringing high-speed Internet to more Americans, but this has proven to be a long and politically difficult process. Obama’s hand-picked FCC chair, Julius Genachowski, inherited a troubled organization but has done much to stabilize it. The broadband plan will be his greatest challenge, with lobbyists on all sides angling for advantage as our national network expands.

Closing the culture gap: The culture gap between techies and policymakers persists. In economic policy debates, health care and the economic crisis have understandably taken center stage, but there seems to be little room even at the periphery for the innovation agenda that many techies had hoped for. The tech policy discussion seems to be dominated by lawyers and management consultants, as in past Administrations. Too often, policymakers still see techies as irrelevant, and techies still see policymakers as clueless.

In recent days, creative thinking on technology has emerged from an unlikely source: the State Department. On the heels of Google’s surprising decision to back away from the Chinese market, Secretary of State Clinton made a rousing speech declaring Internet freedom and universal access to information to be important goals of U.S. foreign policy. This will lead to friction with the Chinese and other authoritarian governments, but our principles are worth defending. The Internet can be a powerful force for transparency and democratization, around the world and at home.

Software in dangerous places

Software increasingly manages the world around us, in subtle ways that are often hard to see. Software helps fly our airplanes (in some cases, particularly military fighter aircraft, software is the only thing keeping them in the air). Software manages our cars (fuel/air mixture, among other things). Software manages our electrical grid. And, closer to home for me, software runs our voting machines and manages our elections.

Sunday’s NY Times Magazine has an extended piece about faulty radiation delivery for cancer treatment. The article details two particular fault modes: procedural screwups and software bugs.

The procedural screwups (e.g., treating a patient with stomach cancer with a radiation plan intended for somebody else’s breast cancer) are heartbreaking because they’re something that could be completely eliminated through fairly simple mechanisms. How about putting barcodes on patient armbands that are read by the radiation machine? “Oops, you’re patient #103 and this radiation plan is loaded for patient #319.”
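To make that concrete, here is a minimal sketch (in Python, with invented identifiers and function names, not taken from any real device) of the kind of cross-check I have in mind. A real system would, of course, have to be wired into the machine’s hardware interlocks and the clinic’s workflow.

```python
# Illustrative sketch only: hypothetical patient-ID cross-check before treatment.

class MismatchError(Exception):
    """Raised when the scanned patient does not match the loaded plan."""

def verify_patient(scanned_armband_id: str, loaded_plan_patient_id: str) -> None:
    # Refuse to proceed unless the armband and the treatment plan agree.
    if scanned_armband_id != loaded_plan_patient_id:
        raise MismatchError(
            f"Armband says patient {scanned_armband_id}, "
            f"but the loaded plan is for patient {loaded_plan_patient_id}."
        )

# The scenario from above: patient #103 on the table, plan loaded for #319.
# The mismatch is caught before any dose is delivered.
try:
    verify_patient("103", "319")
except MismatchError as err:
    print("Treatment blocked:", err)
```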

The software bugs are another matter entirely. Supposedly, medical device manufacturers and software correctness people have all been thoroughly indoctrinated in the history of Therac-25, a radiation machine from the mid-1980s whose poor software engineering (and user interface design) directly led to several deaths. This article seems to indicate that those lessons were never properly absorbed.

What’s perhaps even more disturbing is that nobody seems to have been deeply bothered when the radiation planning software crashed on them! Did it save their work? Maybe you should double-check? Ultimately, the radiation machine just does what it’s told, and the software that plans out the precise dosing pattern is responsible for getting it right. Well, if that software is unreliable (which the article clearly indicates), you shouldn’t use it again until it’s fixed!

What I’d like to know more about, and which the article didn’t discuss at all, is what engineering processes, third-party review processes, and certification processes were used. If there’s anything we’ve learned about voting systems, it’s that the federal and state certification processes were not up to the task of identifying security vulnerabilities, and that the vendors had demonstrably never intended their software to resist the sorts of attacks that you would expect on an election system. Instead, we’re told that we can rely on poll workers following procedures correctly. Which, of course, is exactly what the article indicates is standard practice for these medical devices. We’re relying on the device operators to do the right thing, even when the software is crashing on them, and that’s clearly inappropriate.

Writing “correct” software, and further ensuring that it’s usable, is a daunting problem. In the voting case, we can at least come up with procedures based on auditing paper ballots, or using various cryptographic techniques, that allow us to detect and correct flaws in the software (getting such procedures adopted is a daunting problem in its own right, but that’s a story for another day). In the aviation case, which I admit to not knowing much about, I do know they put in sanity-checking software that will detect when the more detailed algorithms are asking for something insane and will override it. For medical devices like radiation machines, we clearly need a similar combination of mechanisms, both to ensure that operators don’t make avoidable mistakes, and to ensure that the software they’re using is engineered properly.
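For illustration only, here is a rough sketch of what such an independent sanity-check layer might look like. The dose limits, field names, and numbers below are invented for the example, not drawn from any real radiotherapy standard; the point is simply that a separately developed checker gets the final veto before the machine fires.

```python
# Hypothetical, independently developed sanity checks on a proposed dose plan.

MAX_DOSE_PER_FRACTION_GY = 5.0   # invented per-session ceiling
MAX_TOTAL_DOSE_GY = 80.0         # invented cumulative ceiling

def sanity_check_plan(per_fraction_doses_gy):
    """Return a list of reasons to reject the plan (empty if it passes)."""
    problems = []
    for i, dose in enumerate(per_fraction_doses_gy):
        if dose <= 0 or dose > MAX_DOSE_PER_FRACTION_GY:
            problems.append(f"fraction {i}: dose {dose} Gy outside allowed range")
    if sum(per_fraction_doses_gy) > MAX_TOTAL_DOSE_GY:
        problems.append("cumulative dose exceeds allowed total")
    return problems

# The planning software proposes a schedule; the checker vetoes insane output.
proposed = [2.0, 2.0, 50.0]   # an obviously wrong third fraction
issues = sanity_check_plan(proposed)
if issues:
    print("Plan rejected:", "; ".join(issues))
```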

Cyber Détente Part III: American Procedural Negotiation

The first post in this series rebutted the purported Russian motive for renewed cybersecurity negotiations and the second advanced more plausible self-interested rationales. This third and final post of the series examines the U.S. negotiating position through both substantive and procedural lenses.

——————————

American interest in a substantive cybersecurity deal appears limited, and the U.S. is rightly skeptical of Russian motives (perhaps for the reasons detailed in the prior two posts). Negotiators have publicly expressed support for institutional cooperation on the closely related issue of cybercrime, but firmly oppose an arms control or cyberterrorism treaty. This tenuous commitment is further reflected in the U.S. delegation’s composition. Representation from the NSA, State, DoD, and DHS suggests only a preliminary willingness to hear the Russians out and minimal consideration of a full-on bilateral negotiation.

While the cybersecurity talks may thus be substantively vacuous, they have great procedural merit when viewed in the context of shifting Russian relations and perceptions of cybersecurity.

The Bush administration’s Russia policy was marked by antagonism; proposed missile defense installations in Poland and the Czech Republic and NATO membership for Georgia and Ukraine particularly rankled the Kremlin. Upon taking office the Obama administration committed to “press[ing] the reset button” on U.S.-Russia relations by recommitting to cooperation in areas of shared interest.

Cybersecurity talks may best be evaluated as a facet of this systemic “reset.” Earnest discussions – including fruitless ones – may contribute towards a collegial relationship and further other, more substantively promising negotiations between the two powers. The cybersecurity topic is particularly well suited for this role in that it brings often less-than-friendly defense, intelligence, and law enforcement agencies to the same table.

Inside-the-beltway perceptions of cybersecurity have also experienced a sea change. In the early Bush administration cybersecurity problems were predominantly construed as cybercrime problems, and consequently within the purview of law enforcement. For example, one of the first “major actions” advocated by the White House’s 2003 National Strategy to Secure Cyberspace was, “[e]nhance law enforcement’s capabilities for preventing and prosecuting cyberspace attacks.” But by the Obama administration cybersecurity was perceived as a national security issue; the 2009 Cyberspace Policy Review located primary responsibility for cybersecurity in the National Security Council.

This shift suggests additional procedural causes for renewed U.S.-Russia and UN cybersecurity talks. Not only do the discussions reflect the new perception of cybersecurity as a national security issue, but they also nudge other nations towards that view. And directly engaging defense and intelligence agencies accustoms them to viewing cybersecurity as an international issue within their domain.

The U.S. response of simultaneously substantively balking at and procedurally engaging with Russia on cybersecurity appears well-calibrated. Where meager opportunity exists for concluding a meaningful cybersecurity instrument given the Russian motives discussed earlier, the U.S. is nonetheless generating value.

While this favorable outcome is reassuring, it is by no means guaranteed for future cybersecurity talks. There is already a noxious atmosphere of often unwarranted alarmism about cyberwarfare and free-form parallels drawn between cyberattack and weapons of mass destruction. Admix the recurrently prophesied “Digital Pearl Harbor” and it is easy to imagine how an international compact on cybersecurity could look all-too-appealing. This pitfall can only be avoided by training an informed, critical eye on states’ motives to develop the appropriate – if any – cybersecurity negotiating position.