
Archives for May 2009

Sizing Up "Code" with 20/20 Hindsight

Code and Other Laws of Cyberspace, Larry Lessig’s seminal work on Internet regulation, turns ten years old this year. To mark the occasion, the online magazine Cato Unbound (full disclosure: I’m a Cato adjunct scholar) invited Lessig and three other prominent Internet scholars to weigh in on Code’s legacy: what it got right, where it went wrong, and what implications it has for the future of Internet regulation.

The final chapter of Code was titled “What Declan Doesn’t Get,” a jab at libertarians like CNet’s Declan McCullagh who believed that government regulation of the Internet was likely to do more harm than good. It’s fitting, then, that Declan got to kick things off with an essay titled (what else?) “What Larry Didn’t Get.” There were responses from Jonathan Zittrain (largely praising Code) and my co-blogger Adam Thierer (mostly criticizing it), and then Lessig got the last word. I think each contributor will be posting a follow-up essay in the coming days.

My ideological sympathies are with Declan and Adam, but rather than pile on to their ideological critiques, I want to focus on some of the specific technical predictions Lessig made in Code. People tend to forget that in addition to describing some key theoretical insights about the nature of Internet regulation, Lessig also made some pretty specific predictions about how cyberspace would evolve in the early years of the 21st Century. I think that enough time has elapsed that we can now take a careful look at those predictions and see how they’ve panned out.

Lessig’s key empirical claim was that as the Internet became more oriented around commerce, its architecture would be transformed in ways that undermined free speech and privacy. He thought that e-commerce would require the use of increasingly sophisticated public-key infrastructure that would allow any two parties on the net to easily and transparently exchange credentials. And this, in turn, would make anonymous browsing much harder, undermining privacy and making the Internet easier to regulate.

This didn’t happen, although for a couple of years after the publication of Code, it looked like a real possibility. At the time, Microsoft was pushing a single sign-on service called Passport that could have been the foundation of the kind of client authentication facility Lessig feared. But then Passport flopped. Consumers weren’t enthusiastic about entrusting their identities to Microsoft, and businesses found that lighter-weight authentication processes were sufficient for most transactions. By 2005, companies like eBay had started dropping Passport from their sites. The service has been rebranded Windows Live ID and is still limping along, but no one seriously expects it to become the kind of comprehensive identity-management system Lessig feared.

Lessig concedes that he was “wrong about the particulars of those technologies,” but he points to the emergence of a new generation of surveillance technologies—IP geolocation, deep packet inspection, and cookies—as evidence that his broader thesis was correct. I could quibble about whether any of these are really new technologies. Lessig discusses cookies in Code, and the other two are straightforward extensions of technologies that existed a decade ago. But the more fundamental problem is that these examples don’t really support Lessig’s original thesis. Remember that Lessig’s prediction was that changes to Internet architecture—such as the introduction of robust client authentication to web browsers—would transform the previously anarchic network into one that’s more easily regulated. But that doesn’t describe these technologies at all. Cookies, DPI, and geolocation all work with vanilla TCP/IP, using browser technologies that were widely deployed in 1999. These technologies made cyberspace more susceptible to regulation without any change to the Internet’s architecture.
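
To make that concrete: a cookie is just a header line in an ordinary HTTP exchange over a plain TCP connection. Here’s a minimal sketch in Python (example.com and the session value are stand-ins, and a given server may not set any cookie at all), showing that nothing about the network’s architecture is involved:

    # A cookie is nothing more than an HTTP header carried over an ordinary
    # TCP connection; no change to the network's architecture is involved.
    # (example.com and the session value are placeholders for illustration.)
    import socket

    request = (
        "GET / HTTP/1.1\r\n"
        "Host: example.com\r\n"
        "Cookie: session=abc123\r\n"   # the client echoes back a stored cookie
        "Connection: close\r\n"
        "\r\n"
    )

    with socket.create_connection(("example.com", 80)) as sock:
        sock.sendall(request.encode("ascii"))
        response = b""
        while chunk := sock.recv(4096):
            response += chunk

    # The server assigns cookies the same way: a "Set-Cookie:" line in the
    # response headers, which the browser stores and returns on later visits.
    headers = response.split(b"\r\n\r\n", 1)[0].decode("ascii", "replace")
    for line in headers.splitlines():
        if line.lower().startswith("set-cookie:"):
            print(line)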

Indeed, it’s hard to think of any policy or architectural change that could have forestalled the rise of these technologies. The web would be extremely inconvenient if we didn’t have something like cookies. The engineering constraints on backbone routers make roughly geographical IP assignment almost unavoidable, and if IP addresses are tied to geography it’s only a matter of time before someone builds a database of the mapping. Finally, any unencrypted networking protocol is susceptible to deep packet inspection. Short of mandating that all traffic be encrypted, no conceivable regulatory intervention could have prevented the development of DPI tools.
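
On the geolocation point in particular, the core of such a database is nothing more than a table mapping address prefixes to places plus a longest-prefix lookup. Here’s a minimal sketch in Python; the table is invented (the prefixes are reserved documentation ranges), and real databases are simply much larger versions of the same idea, compiled from registry allocations and network measurements:

    # Minimal sketch of IP geolocation by longest-prefix match.
    # The table below is invented for illustration; real databases are built
    # from regional registry allocations and network measurements.
    import ipaddress

    PREFIX_TABLE = {
        ipaddress.ip_network("192.0.2.0/24"): "Country A",
        ipaddress.ip_network("198.51.100.0/24"): "Country B",
        ipaddress.ip_network("203.0.113.0/24"): "Country C",
    }

    def locate(ip_string: str) -> str:
        """Return the country for the longest matching prefix, if any."""
        addr = ipaddress.ip_address(ip_string)
        matches = [net for net in PREFIX_TABLE if addr in net]
        if not matches:
            return "unknown"
        longest = max(matches, key=lambda net: net.prefixlen)
        return PREFIX_TABLE[longest]

    print(locate("203.0.113.7"))   # -> Country C
    print(locate("8.8.8.8"))       # -> unknown (not in the toy table)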

Of course, now that these technologies exist, we can have a debate about whether to regulate their use. But Lessig was making a much stronger claim in 1999: that the Internet’s architecture (and, therefore, its susceptibility to regulation) circa 2009 would be dramatically different depending on the choices policymakers made in 1999. I think we can now say that this wasn’t right. Or, at least, the technologies he points to now aren’t good examples of that thesis.

It seems to me that the Internet is rather less malleable than Lessig imagined a decade ago. We would have gotten more or less the Internet we got regardless of what Congress or the FCC did over the last decade. And therefore, Lessig’s urgent call to action—his argument that we must act in 1999 to ensure that we have the kind of Internet we want in 2009—was misguided. In general, it works pretty well to wait until new technologies emerge and then debate whether to regulate them after the fact, rather than trying to regulate preemptively to shape the kinds of technologies that are developed.

As I wrote a few months back, I think Jonathan Zittrain’s The Future of the Internet and How to Stop It makes the same kind of mistake Lessig made a decade ago: overestimating regulators’ ability to shape the evolution of new technologies and underestimating the robustness of open platforms. The evolution of technology is mostly shaped by engineering and economic constraints. Government policies can sometimes force new technologies underground, but regulators rarely have the kind of fine-grained control they would need to promote “generative” technologies over sterile ones, any more than they could have stopped the emergence of cookies or DPI if they’d made different policy choices a decade ago.

A Modest Proposal: Three-Strikes for Print

Yesterday the French parliament adopted a proposal to create a “three-strikes” system that would kick people off the Internet if they are accused of copyright infringement three times.

This is such a good idea that it should be applied to other media as well. Here is my modest proposal to extend three-strikes to the medium of print, that is, to words on paper.

My proposed system is simplicity itself. The government sets up a registry of accused infringers. Anybody can send a complaint to the registry, asserting that someone is infringing their copyright in the print medium. If the government registry receives three complaints about a person, that person is banned for a year from using print.
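
For the benefit of would-be implementers, here is the entire enforcement logic, sketched in Python with names I’ve invented. Notice how elegantly it avoids checking whether any accusation is actually true:

    # The complete "due process" of the proposed print registry, sketched in
    # Python. Nothing here checks whether an accusation is true; three
    # complaints from anyone at all suffice for a one-year ban.
    from collections import defaultdict

    STRIKES_TO_BAN = 3
    complaints = defaultdict(int)   # accused person -> number of accusations
    banned = {}                     # accused person -> ban duration in days

    def file_complaint(complainant: str, accused: str) -> None:
        """Record an accusation. No evidence is required, and the
        complainant's identity is never verified."""
        complaints[accused] += 1
        if complaints[accused] >= STRIKES_TO_BAN:
            banned[accused] = 365   # no reading or writing for a year

    file_complaint("Anyone", "Alice")
    file_complaint("Anyone", "Alice")
    file_complaint("Anyone Else", "Alice")
    print(banned)   # {'Alice': 365}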

As in the Internet case, the ban applies to both reading and writing, and to all uses of print, including informal ones. In short, a banned person may not write or read anything for a year.

A few naysayers may argue that print bans might be hard to enforce, and that banning communication based on mere accusations of wrongdoing raises some minor issues of due process and free speech. But if those issues don’t trouble us in the Internet setting, why should they trouble us here?

Yes, if banned from using print, some students will be unable to do their school work, some adults will face minor inconvenience in their daily lives, and a few troublemakers will not be allowed to participate in — or even listen to — political debate. Maybe they’ll think more carefully the next time, before allowing themselves to be accused of copyright infringement.

In short, a three-strikes system is just as good an idea for print as it is for the Internet. Which country will be the first to adopt it?

Once we have adopted three-strikes for print, we can move on to other media. Next on the list: three-strikes systems for sound waves, and light waves. These media are too important to leave unprotected.

[Français]

Recovery Act Spending: Getting to the Bottom Line

Under most circumstances, government spending is slow and deliberate—a key fact that helps reduce the chances of waste and fraud. But the recently passed Recovery Act is a special case: spending the money quickly is understood to be essential to the success of the Act. We all know that shoppers in a hurry tend to get less value for their money. But, ironically, the overall macroeconomic impact of the stimulus (and hence the average stimulative effect per dollar spent) may be maximized by quick spending, even if the speed premium does increase the total amount of waste and abuse.

This situation creates a paradox for transparency and oversight efforts. On the one hand, the quicker pace of spending makes it all the more important to provide for public scrutiny, and to provide information in ways that will rapidly enable as many people as possible to take advantage of the stimulus opportunities available to them. On the other, the same rush that makes transparency important also reduces the time available for those within government to design and build an infrastructure for stimulus transparency.

One of the troubling tradeoffs that has been made thus far involves information about stimulus funds that flow from the federal government to states and then from states to localities. This pattern is rarer than you might think, since much of the Recovery Act spending flows more directly from federal agencies to end recipients. But for funds that do follow a path from federal to state to local officials, recent guidance issued April 3 by the Office of Management and Budget (OMB) makes clear that the federal reporting infrastructure being created for Recovery.gov will not collect information about what the localities ultimately do with the funds.

OMB says that it does have the legal authority to require detailed reporting on “all levels of subawards,” reaching end recipients (Acme Concrete or whoever gets a contract or grant from the municipality at the end of the governmental chain). But in the context of its sprint to get at least some system into place as soon as possible (with the debut date for the Recovery.gov system already pushed back to October), OMB has left this deep-level reporting out of its immediate plans. The office says that it “plans to expand the reporting model in the future to also obtain this information, once the system capabilities and processes have been established.”
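
To make the question concrete, here’s a toy sketch in Python, with all names and amounts invented, of an award chain and of the difference between cutting off reporting after the first tier of subawards and following the money down to the end recipient. It illustrates only the general idea of a reporting depth cutoff, not the precise tiers the OMB guidance covers:

    # Invented example of a Recovery Act award chain; names and amounts
    # are made up for illustration.
    award = {
        "recipient": "State Department of Transportation",
        "amount": 10_000_000,
        "subawards": [
            {
                "recipient": "City Public Works Department",
                "amount": 2_500_000,
                "subawards": [
                    {"recipient": "Acme Concrete", "amount": 800_000, "subawards": []},
                ],
            },
        ],
    }

    def report(node, depth=0, max_depth=None):
        """Print the award chain down to max_depth tiers of subawards (None = all tiers)."""
        print("  " * depth + f"{node['recipient']}: ${node['amount']:,}")
        if max_depth is None or depth < max_depth:
            for sub in node["subawards"]:
                report(sub, depth + 1, max_depth)

    report(award, max_depth=1)   # stop after the first tier of subawards
    print("---")
    report(award)                # follow the money down to the end recipient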

On Monday, ten congressmen sent a letter to OMB urging it to collect this detailed information “as early as possible.” One reason for OMB to formulate detailed operational plans in this area, as I argued in recent testimony before the House Committee on Oversight and Government Reform, is that clarity from the top will help states make competent choices about what, if anything, they should do to support or supplement the federal reporting. As the members of Congress write:

While it is positive that OMB goes on to reserve the right in the guidance to expand this reporting model in the future, it would seem exercising this right and requiring this level of reporting as early as possible would help entities prepare for the disclosures before projects begin and provide clarification for states as they begin investing in new infrastructure to track ARRA funds.

In the end, everyone agrees that this detailed information about subawards is important to have—OMB “plans to collect” it and the signatories to Monday’s letter want collection to start “as early as possible.” But how soon is that? We don’t really know. The details of the hard choices facing OMB as it races to implement the Recovery.gov reporting system are themselves not public, and making them public might (or might not) itself slow down the development of the site. If no system were permitted to launch without fully detailed reporting of subawards, we might wait longer for the web site’s launch. How much longer? OMB might not itself be sure, since software development times are notoriously difficult to forecast, and OMB has never before been asked to build a system of this kind. OMB asserts that it’s moving as fast as it can to collect as much information as possible, and without slowing it down to ask for explanations, we can’t really check that assertion.

Transparency often reduces the degree to which citizens must trust public officials. But in this case, ironically, it seems most reasonable to operate on the optimistic but realistic assumption that the people working on Recovery Act transparency are doing their jobs well, and to hope for good results.

Breathalyzer Source Code Secrecy Endangers Minnesota Drunk Driving Convictions

The Minnesota Supreme Court ruled recently that defendants accused of drunk driving in the state are entitled to have their experts inspect the source code for the software in the Intoxilyzer breath-testing machines used by police to gauge the defendants’ blood alcohol levels. The defendants argued, successfully, that they were entitled to examine and challenge the evidence against them, including the design and functioning of devices used to generate that evidence.

The ruling puts many of the state’s drunk driving prosecutions on thin ice, because CMI, the Intoxilyzer’s maker, is withholding the source code and the state apparently has no way to force CMI to provide the code.

Eric Rescorla argues, reasonably, that breath testers have many potential failure modes unrelated to software, and that source code analysis can be labor-intensive and might not turn up any clear problems. Both arguments are valid, as far as they go.

I’m not a lawyer, so I won’t try to guess whether the court’s ruling was correct as a matter of law. But the ruling does seem right as a matter of policy. If we are troubled by criminal convictions relying on secret evidence, then we should also be troubled by convictions relying on evidence generated by a secret process. To the extent that the Intoxilyzer functions as a secret process, the state should not be relying on it in criminal prosecutions.

(Though I haven’t thought carefully about the question, I might draw a different policy conclusion in a civil case, where the standard of proof is preponderance of the evidence rather than guilt beyond a reasonable doubt.)

The problem is illustrated nicely by a contradiction in the arguments that CMI and the state are making. On the one hand, they argue that the machine’s source code contains valuable trade secrets — I’ll call them the “secret sauce” — and that CMI’s business would be substantially harmed if its competitors learned about the secret sauce. On the other hand, they argue that there is no need to examine the source code because it operates straightforwardly, just reading values from some sensors and doing simple calculations to derive a blood alcohol estimate.
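
For illustration, here’s roughly what that “straightforward” version might look like, sketched in Python. The 2100:1 breath-to-blood conversion is a widely used convention in breath testing, but the calibration constant and sensor reading below are invented, and the sensor-conversion step is exactly where any secret sauce (and any bugs) would live in a real device:

    # A deliberately naive sketch of the "simple calculation" the state describes.
    # The 2100:1 breath-to-blood ratio is a common convention; the calibration
    # constant and raw sensor value below are invented for illustration.

    BREATH_TO_BLOOD_RATIO = 2100   # liters of breath assumed to carry as much alcohol as 1 liter of blood
    CALIBRATION_FACTOR = 1e-6      # invented: grams of alcohol per raw sensor count

    def grams_from_sensor(raw_counts: int) -> float:
        """Convert a raw sensor reading to grams of alcohol in the breath sample.
        In a real device, this step (calibration, drift correction, interference
        rejection) is where the interesting, and secret, code would live."""
        return raw_counts * CALIBRATION_FACTOR

    def estimate_bac(alcohol_grams: float, breath_volume_liters: float) -> float:
        """Estimate blood alcohol concentration in g/dL from a breath sample."""
        breath_g_per_liter = alcohol_grams / breath_volume_liters
        blood_g_per_liter = breath_g_per_liter * BREATH_TO_BLOOD_RATIO
        return blood_g_per_liter / 10   # g/L -> g/dL (grams per 100 mL)

    raw = 381   # invented raw sensor counts
    print(round(estimate_bac(grams_from_sensor(raw), breath_volume_liters=1.0), 3))  # -> 0.08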

It’s hard to see how both arguments can be correct. If the software contains secret sauce, then by definition it has aspects that are neither obvious nor straightforward, and those aspects are important for the software’s operation. In other words, the secret sauce — whatever it is — must be relevant to the defendants’ claims.

As in electronic voting, where we have seen similar secrecy arguments, one can’t help suspecting that the real “secret” is that the software quality is not what it should be. A previous study of source code from New Jersey breath testers did appear to find some embarrassing errors.

Let’s hope that breath tester companies can do better than e-voting companies. A rigorous, independent evaluation of the breath tester source code would either determine that the code is sound, or it would uncover problems that could then be fixed, to restore confidence in the machines. Either way, the police in Minnesota would end up with a reliable tool for giving drunk drivers the punishment they deserve.

Sunlight on NASED ITA Reports

Short version: we now have gobs of voting system ITA reports, publicly available and hosted by the NSF ACCURATE e-voting center. As I explain below, ITAs were the Independent Testing Authority laboratories that tested voting systems for many years.

Long version: Before the Election Assistance Commission (EAC) took over the testing and certification of voting systems under the Help America Vote Act (HAVA), this critical function was performed by volunteers. The National Association of State Election Directors (NASED) recognized a need for voting system testing and partnered with the Federal Election Commission (FEC) to establish a qualification program that would qualify systems as having met or exceeded the requirements of the 1990 and 2002 Voting System Standards.*

However, as I’ve lamented many, many times over the years, the input, output and intermediate work product of the NASED testing regime were completely secret, due to proprietary concerns on the part of the manufacturers. Once a system completed testing, members of the public could see that an entry was made in a publicly-available spreadsheet listing the tested components and a NASED qualification number for the system. But the public was permitted no other insight into the NASED qualification regime.

Researchers were convinced, from what evidence was available, that the quality of the testing was highly inadequate: the testing laboratories (called Independent Testing Authorities, or ITAs) lacked the expertise to perform adequate testing, and the NASED technical committee lacked the expertise to competently review the test reports the laboratories submitted. Naturally, when reports of problems started to crop up, like the various Hursti vulnerabilities with Diebold memory cards, the NASED system scrambled to figure out what went wrong.

I now have more moderate views with respect to the NASED regime: sure, it was pretty bad, and a lot of serious vulnerabilities slipped through the cracks, but I’m not yet convinced that just having the right people or a different process in place would have resulted in fewer problems in the field. Fixing the NASED system would have required improvements on all fronts: the technology, the testing paradigms, the people involved and the testing and certification process.

The EAC has since taken over testing and certification. Its process is notable for its much higher level of openness and accountability: the test plans are published (previously claimed as proprietary by the testing labs), the test reports are published (previously claimed as proprietary by the vendors) and the process is specified in detail with a program manual, a laboratory manual, notices of clarification, etc.

This is all great and it helps to increase the transparency of the EAC certification program. But, what about the past? What about the testing that NASED did? Well, we don’t know much about it for a number of reasons, chief among them that we never saw any of the materials mentioned above that are now available in the new EAC system.

Through a fortunate FOIA request made of the EAC on behalf of election sleuth Susan Greenhalgh, we now have available a slew of ITA reports from one of the ITAs, Ciber.

The reports are available at the following location (hosted by our NSF ACCURATE e-voting center):

http://accurate-voting.org/docs/ita-reports/

These reports cover the Software ITA testing performed by Ciber for the following voting systems:

  • Automark AIMS 1.0.9
  • Diebold GEMS 1.18.19
  • Diebold GEMS 1.18.22
  • Diebold GEMS 1.18.24
  • Diebold AccuVote-TSx Model D
  • Diebold AccuVote-TSx Model D w/ AccuView Printer
  • Diebold Assure 1.0
  • Diebold Assure 1.1
  • Diebold Election Media Processor 4.6.2
  • Diebold Optical Scan Accumulator Adapter
  • Hart System 4.0
  • Hart System 4.1
  • Hart System 6.0
  • Hart System 6.2
  • Hart System 6.2.1

I’ll be looking at these at my leisure over the coming weeks and pointing out interesting features of these reports and the associated correspondence included in the FOIA production.

*The distinction between certification and qualification, although vague, appears to be that under the NASED system, states did the ultimate certification of a voting system for fitness in future elections.