NJ Voting-machine Trial: Defense Witnesses

I’ve previously summarized my own testimony and other plaintiffs’ witnesses’ testimony in the New Jersey voting machines trial, Gusciora v. Corzine.

The defendant is the State of New Jersey (Governor and Secretary of State). The defense case comprised the following witnesses:

Defense witness James Clayton, the Ocean County voting machine warehouse supervisor, is a well-intentioned official who tries to have good procedures to secure the Ocean County voting machines. Still, it became apparent in his testimony that there are security gaps regarding transport of the machines, keys to the machines, and security at polling places before and after election day.

Richard Woodbridge is a patent attorney who has chaired the NJ Voting Machine Examination Committee for more than 20 years. It’s not clear why the defendants called him as a witness, because they conducted only a 15-minute direct examination in which he didn’t say much. On cross-examination he confirmed that his committee does not conduct an independent analysis of software and does not consult with any computer security experts.

Robert Giles, Director of Elections of the State of New Jersey, testified about experimenting with different forms of seals and locks that New Jersey might apply to its AVC Advantage voting machines. On cross-examination, it became clear that there is no rhyme or reason to how the State chooses seals and other security measures, and that it is not getting expert advice on these matters. He also admitted that there is no statewide control, or even supervision, of the procedures that counties use to safeguard the voting machines, the results cartridges, keys, and so on. He confirmed that several counties use the cartridges as the official tally, in preference to paper printouts witnessed and signed (at the close of the polls) by election workers.

Edwin Smith testified as an expert witness for the State defendants. Mr. Smith is vice-president and part owner of Sequoia Voting Systems. He stands to gain financially depending on the verdict in this trial: NJ represents 20% of Sequoia’s market, and his bonuses depend on sales. Mr. Smith testified to rebut my testimony about fake Z80 processors. (Wayne Wolf, who testified for plaintiffs about fake Z80s, testified after Mr. Smith, as a rebuttal witness.) Even though Mr. Smith repeatedly referred to replacement of Z80s as “science fiction”, he then offered lengthy testimony about methods to try to detect fake Z80s. This lent credence to the claim that fraudulent CPUs are not merely a theoretical possibility but a real threat.

Mr. Smith also confirmed that it is a security risk to connect WinEDS computers (which prepare electronic ballot definitions and tabulate results) to the Internet, and that those counties in NJ that do so are making a mistake.

Paul Terwilliger testified as a witness for the defense. Mr. Terwilliger is a longtime employee and/or contractor for Sequoia, who has had primary responsibility over the development of the AVC Advantage for the last 15 years. Mr. Terwilliger admitted that in 2003 the WIPO found that he’d acted in bad faith by cybersquatting on the Diebold.com domain name at the request of Sequoia. Mr. Terwilliger testified that it is indeed possible to program an FPGA to make a “fake Z80” that cheats in elections. But, he said, there are some methods for detecting FPGAs installed on AVC Advantage voting machines in place of the legitimate Z80. (Some of these methods are impractical, others are ineffective, others are speculative; see Wayne Wolf’s report.) This testimony had the effect of underscoring the seriousness of the fake-Z80 threat.

Originally the defendants were going to rely on Professor Michael Shamos of Carnegie Mellon University as their only expert witness. But the Court never recognized him as an expert witness. The Court ruled that he could not testify about the security and accuracy of the AVC Advantage, because he had not offered an opinion about security and accuracy in his expert report or his deposition.

The Court did permit him to testify in general terms. He said that in real life, we have no proof that a “hacked election” has ever occurred; and that in real life, such a hack would somehow come to light. He offered no studies that support this claim.

Professor Shamos attempted to cast doubt in the Court’s mind about the need for software independence, and to disparage precinct-count optical-scan voting (PCOS). But he offered no concrete examples and no studies regarding PCOS.

On many issues, Professor Shamos agreed with the plaintiffs’ expert: it’s straightforward to replace a ROM chip, plastic-strap seals provide only a veneer of protection, the transformed machine can cheat, and pre-election logic-and-accuracy testing would be ineffective in detecting the fraud. He does not dispute many of the bugs and user-interface design flaws that we found, and recommends that those should be fixed.

Professor Shamos admitted that he is alone among computer scientists in his support of paperless DREs. He tried to claim that other computer scientists, such as Ted Selker, Douglas W. Jones, and Joseph Lorenzo Hall, also supported paperless DREs because they supported parallel testing, implying that those scientists would consider paperless DREs secure enough with parallel testing; but during cross-examination he backed off a bit from this claim. (In fact, as I testified in my rebuttal testimony, Drs. Jones and Hall both consider PCOS to have substantially stronger security, and to be substantially better overall, than DREs with parallel testing.)

Parallel testing is Professor Shamos’s proposed method to detect fraudulent software in electronic voting machines. In order to catch software that cheats only on election day, Professor Shamos proposes to cordon off a machine and cast a known list of test votes on it all day. He said that no state has ever implemented a satisfactory parallel testing protocol, however.
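To make the idea concrete, here is a minimal sketch of the core of such a protocol, in Python. It is purely illustrative: the candidate names, the vote script, and the tally format are all hypothetical, and a real protocol would also have to specify who casts the votes, on what schedule, and how the machine’s results report is read.

    # Illustrative core of a parallel-testing protocol (hypothetical data).
    # A sequestered machine receives a scripted day of test votes; any
    # divergence between its reported tally and the script's known totals
    # is evidence of cheating firmware.
    from collections import Counter

    # Scripted test votes, cast throughout election day so that cheating
    # firmware cannot easily distinguish the test from a real election.
    TEST_SCRIPT = ["Smith", "Jones", "Smith", "Garcia", "Jones", "Smith"]
    expected = Counter(TEST_SCRIPT)

    # In practice this would be transcribed from the machine's results report.
    reported = Counter({"Smith": 3, "Jones": 2, "Garcia": 1})

    if reported == expected:
        print("Tally matches the script; no discrepancy detected.")
    else:
        print("DISCREPANCY - expected:", expected, "reported:", reported)

The comparison itself is trivial; the hard part is procedural. The scripted votes must be statistically indistinguishable from real voting in timing, volume, and pattern, or firmware that cheats selectively can simply behave itself on the test machine, which may be why no state has yet gotten this right.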

Summary of the defendant’s case

One of the plaintiffs’ most important claims–which they demonstrated on video to the Court–is that one can replace the firmware of the AVC Advantage voting machine with fraudulent firmware that changes votes before the polls close. No defense witness contradicted this. To the extent that the defense put up a case, it hinged on proposed methods for detecting such fraudulent firmware, or on proposed methods for slowing down the attack by putting tamper-evident seals in the way. On both of these issues, defense witnesses contradicted each other, and plaintiffs presented rebuttal witnesses.

NJ Voting-machine trial: Plaintiffs' witnesses

Both sides in the NJ voting-machines lawsuit, Gusciora v. Corzine, have finished presenting their witnesses. Briefs (in which each side presents proposed conclusions) are due June 15 (plaintiffs) and July 15 (defendants), then the Court will eventually issue a decision.

In summary, the plaintiffs argue that New Jersey’s voting machines (Sequoia AVC Advantage) can’t be trusted to count the votes, because they’re so easily hacked to make them cheat. Thus, using them is unconstitutional (under the NJ state constitution), and the machines must be abandoned in favor of a method that provides software independence, for example precinct-count optical-scan voting.

The plaintiffs’ first witness was Stephanie Harris, who testified for half an hour about her experience voting on an AVC Advantage: the pollworker asked her to go back and recast her ballot a total of three or four times, because the pollworker wasn’t sure that it had registered. Ms. Harris testified that to this day she’s not sure whether her vote registered 0 times, or 1, or 2, or 3, or 4.

I testified second, as I’ve described. I testified about many things, but the most important is that you can easily replace the firmware of an AVC Advantage voting machine to make it cheat in elections (but not cheat when it’s being tested outside of elections).

The third witness was Ed Felten, who testified for about an hour that on several different occasions he found unattended voting machines in Princeton, on weekends before elections, and he took pictures. (Of course, as the Court was well aware by this time in the trial, a hacker could take advantage of an unattended voting machine to install vote-stealing firmware.) Ed wrote about this in several posts on Freedom-to-Tinker; he brought all those pictures with him to show the Court.

Next were Elisa Gentile, Hudson County voting machine warehouse supervisor, and Daryl Mahoney, Bergen County voting machine warehouse supervisor. Mr. Mahoney also serves on the NJ Voting Machine Examination committee (which recommends certification of voting machines for use in NJ). These witnesses were originally proposed by the defense, but in their depositions before trial, they said things so helpful to the plaintiffs that the plaintiffs called them instead! They testified about lax security with regard to transport and storage of voting machines, lax handling of keys to the voting machines, and no security at polling places where the machines are delivered several days before the election. They didn’t seem to have a clue about information security and how it affects the integrity of elections conducted using computers.

Next the plaintiffs called the County Clerk of Union County, Joanne Rajoppi, who had the sophistication to notice a discrepancy in the results reported by AVC Advantage voting machines, the integrity to alert the newspapers and the public, and the courage to testify about all the things that have been going wrong with AVC Advantage voting machines in her county. Ms. Rajoppi testified about (among other things):

  • Soon after the February 5, 2008 Super Tuesday presidential primary, she noticed inconsistencies in AVC Advantage results-reports printouts (and cartridge data): the number of votes in some primaries was higher than the number of voters. (See Section 56 of my report, or Ed Felten’s analysis on Freedom-to-Tinker.)
  • She brought this to the attention of State election officials, but the State officials made no move at all to investigate the problem. She arranged for Professor Felten of Princeton University to examine the Union County voting machines, but she stopped when she was threatened with a lawsuit by Edwin Smith, vice president of Sequoia Voting Systems.
  • In a different election, the Sequoia AVC voting system refused to accept a candidate’s name containing an ñ (n with a tilde). Sequoia technicians produced a hand-edited ballot definition file; she was uneasy about turning control of the ballot definition file over to Sequoia.
  • Results Cartridges get locked in the machines sometimes (when pollworkers forget to bring them back from the polling places for tabulation). (During this time they are vulnerable to vote-changing manipulation; see Section 40 of my report.)
  • Union County considers the vote data in the cartridges to be the official election results, not the vote data printed out at the close of the polls (and then signed by witnesses). (This is unwise for several reasons; see Sections 40 and 57 of my report.)

The defendant (the State of New Jersey) presented several witnesses. I’ll summarize them in my next post. After the defense witnesses, the plaintiffs called rebuttal witnesses.

Plaintiffs’ rebuttal witness Roger Johnston is an expert on physical security at the U.S. government’s Argonne National Laboratory (testifying as a pro bono expert on his own behalf, not representing the views of the U.S. government). Dr. Johnston testified that supposedly tamper-evident seals and tape can be defeated; that it does no good to have seals without a rigorous protocol for inspecting them (which NJ does not have); that such a protocol (and the training it requires) would be very expensive to implement and execute; that AVC Advantage’s design makes it impractical to really secure using seals; and that in general New Jersey’s “security culture” and its proposed methods for securing these voting machines are incoherent and dysfunctional. He demonstrated for the Court one defeat of each seal, and testified about other defeats of these kinds of seals.

The last plaintiffs’ witness was Wayne Wolf, professor of Electrical Engineering at Georgia Tech. Professor Wolf testified (and wrote in his expert report) that it’s straightforward to build a fake computer processor chip and install it to replace the Z80 computer chip in the AVC Advantage voting machine. (See also Section 12 of my report.) This fake chip could (from time to time) ignore the instructions in the AVC Advantage ROM memory about how to add up votes, and instead transfer votes from one candidate to another. It can cheat just like the ROM-replacement hack that I testified about, but it can’t be detected by examining the ROM chips. Professor Wolf also testified about the difficulty (or impossibility) of detecting fake Z80 chips by some of the methods proposed by defense witnesses.
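To see why this attack is so hard to detect, consider a minimal sketch of the idea in Python. It is purely illustrative: the memory addresses, candidate layout, and election-day test are all hypothetical, and a real fake Z80 would be implemented in hardware on an FPGA, not in software.

    # Illustrative sketch of a "fake Z80": a processor that executes the
    # legitimate ROM faithfully, except that it tampers with writes to the
    # vote counters. All addresses and conditions here are hypothetical.
    import datetime

    VOTE_BASE = 0x4000            # hypothetical RAM region holding tallies
    CANDIDATE_A = VOTE_BASE + 0
    CANDIDATE_B = VOTE_BASE + 1

    def looks_like_election_day() -> bool:
        # Cheat only on plausible election days, so that pre-election
        # logic-and-accuracy testing sees perfectly correct behavior.
        today = datetime.date.today()
        return today.month == 11 and today.weekday() == 1  # a November Tuesday

    class FakeZ80:
        def __init__(self, memory: list[int]):
            self.memory = memory
            self.writes_to_a = 0   # hidden state the ROM never sees

        def write_byte(self, addr: int, value: int) -> None:
            if addr == CANDIDATE_A and looks_like_election_day():
                self.writes_to_a += 1
                if self.writes_to_a % 10 == 0:
                    # Every tenth vote for candidate A is silently diverted:
                    # A keeps its old tally and B gets the increment instead.
                    self.memory[CANDIDATE_B] += 1
                    return
            # All other writes pass straight through to memory.
            self.memory[addr] = value

Because the ROM itself is never modified, removing and examining the ROM chips reveals nothing; the cheating lives entirely in how the processor handles memory writes.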

European Antitrust Fines Against Intel: Possibly Justified

Last week the European Commission competition authorities charged Intel with anticompetitive behavior in the market for microprocessor chips, and levied a €1.06 billion ($1.45 billion) fine on the company. Some commentators attacked the ruling as ridiculous on its face. I disagree. Let me explain why the European action, though not conclusively justified at this point, is at least plausible.

The starting point of any competition analysis is to recall the purpose of competition law: not to protect rival firms (such as AMD in this case), but to protect competition for the benefit of consumers. The key is to understand what is fair competition and what is not. If a firm dominates a market, and even drives other firms out, but does so by producing better products at better prices, they deserve applause. If a dominant firm takes steps that are aimed more at undermining competition than at serving customers, then they may be crossing the line into anticompetitive behavior.

To do even a superficial analysis in a single blog post, we’re going to have to make some assumptions. First, for the sake of this post let’s accept as true the EC’s claims about Intel’s specific actions. Second, let’s set aside the details of European law and instead ask whether Intel’s actions were fair and justified. Third, let’s assume that there is a single market for processor chips, in the sense that any processor chip can be used in any system. A serious analysis would have to consider carefully all of these factors, but these assumptions will help us get started.

With all that in mind, does the EC have a plausible case against Intel?

First we have to ask whether Intel has monopoly power. Economists define monopoly power as the ability to raise prices above the competitive level without losing money as a result. We know that Intel has high market share, but that by itself does not imply monopoly power. Presumably the EC will argue that there is a significant barrier to entry which keeps new firms out of the microprocessor market, and that this barrier to entry plus Intel’s high market share adds up to monopoly power. This is at least plausible, and there isn’t space here to dissect that argument in detail, so let’s accept it for the sake of our analysis.

Now: having monopoly power, did Intel abuse that power by acting anticompetitively?

The EC accused Intel of two anticompetitive strategies. First, the EC says that Intel gave PC makers discounts if they agreed to ship Intel chips in 100% of their systems (or, in some deals, 80%). Is this anticompetitive? It’s hard to say. Volume discounts are common in many industries, but this is not a typical volume discount. The price goes down when the customer buys more Intel chips — that’s a typical volume discount — but the price of Intel chips also goes up when the customer buys more competing chips — which is unusual and might have anticompetitive effects. Whether Intel has a competitive justification for this remains to be seen.
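A toy example, with made-up numbers, shows why this structure can be more coercive than an ordinary volume discount. Suppose (hypothetically) that Intel’s list price is $100, a rival’s chip costs $80, and a 10% rebate applies to all of an OEM’s Intel chips only if at least 95% of its purchases are Intel:

    # Hypothetical numbers, purely illustrative of the alleged rebate structure.
    INTEL_LIST = 100.0
    RIVAL_PRICE = 80.0    # the competing chip is cheaper at list price
    REBATE = 0.10         # 10% off ALL Intel chips...
    THRESHOLD = 0.95      # ...but only if >= 95% of purchases are Intel

    def total_cost(total_chips: int, rival_chips: int) -> float:
        intel_chips = total_chips - rival_chips
        price = INTEL_LIST
        if intel_chips / total_chips >= THRESHOLD:
            price *= 1 - REBATE   # loyalty rebate applies to every Intel chip
        return intel_chips * price + rival_chips * RIVAL_PRICE

    print(total_cost(1000, 0))    # all-Intel:              90000.0
    print(total_cost(1000, 100))  # 10% rival, rebate lost: 98000.0

Shifting 10% of volume to the cheaper rival raises the OEM’s total bill by $8,000: the $2,000 saved at list prices is swamped by the $10,000 rebate forfeited on the whole order. The marginal price of the rival’s chips is thus far above their list price, which is what makes this look less like a discount and more like a penalty for buying from a competitor.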

Second, and more troubling, the EC says that “Intel awarded computer manufacturers payments – unrelated to any particular purchases from Intel – on condition that these computer manufacturers postponed or cancelled the launch of specific AMD-based products and/or put restrictions on the distribution of specific AMD-based products.” This one seems hard for Intel to justify. A firm with monopoly power, spending money to block competitor’s distribution channels, is a classic anticompetitive strategy.

None of this establishes conclusively that Intel broke the law, or that the EC’s fine is justified. We made a lot of assumptions along the way, and we would have to reconsider each of them carefully, before we could conclude that the EC’s argument is correct. We would also need to give Intel a chance to offer pro-competitive justifications for their behavior. But despite all of these caveats, I think we can conclude that although it is far from proven at this point, the EC’s case should be taken seriously.

The future of high school yearbooks

The Dallas Morning News recently ran a piece about how kids these days aren’t interested in buying physical, printed yearbooks. (Hat tip to my high school’s journalism teacher, who linked to it from our journalism alumni Facebook group.) Why spend $60 on a dead-trees yearbook when you can get everything you need on Facebook? My 20th high school reunion is coming up this fall, and I was the “head” photographer for my high school’s yearbook and newspaper, so this is a topic near and dear to my heart.

Let’s break down everything that a yearbook actually is and then think about how these features can and cannot be replicated in the digital world. A yearbook has:

  • higher-than-normal photographic quality (yearbook photographers hopefully own better camera equipment and know how to use their gear properly)
  • editors who do all kinds of useful things (sending photographers to events they want covered, selecting the best pictures for publication, captioning them, and indexing the people in them)
  • a physical artifact that people can pass around to their friends to mark up and personalize, and which will still be around years later

If you get rid of the physical yearbook, you’ve got all kinds of issues. Permanence is the big one. There’s nothing that my high school can do to delete my yearbook after it’s been published. By contrast, if high schools host their yearbooks on school-owned equipment, those sites can and will fail over time. (Yes, I know you could run a crawler and make a copy, but I wouldn’t trust a typical high school’s IT department to build a site that will be around decades later.) To pick one example, my high school’s web site, when it first went online, had a nice alumni registry. Within a few years, it unceremoniously went away.

Okay, what about Facebook? At this point, almost a third of my graduating class is on Facebook, and I’m sure the numbers are much higher for more recent classes. Some of my classmates are digging up old pictures, posting them, and tagging each other. With social networking as part of the yearbook process from the start, you can get some serious traction in replacing physical yearbooks. Yearbook editors and photography staff can still cover events, select good pictures, caption them, and index them. The social networking aspect covers some of the personalization and markup that we got by writing in each others’ yearbooks. That’s fun, but please somebody convince me that Facebook will be here ten or twenty years from now. Any business that doesn’t make money will eventually go out of business, and Facebook is no exception.

Aside from the permanence issue, is anything else lost by going to a Web 2.0, social-networking, non-printed yearbook? Censorship-happy high schools (and we all know what a problem that can be) will never allow a social network site that they control to host students’ genuine expressions of distaste for all the things that rebellious youth like to complain about. Never mind that the school has a responsibility to maintain some measure of student privacy. Consequently, no high school would endorse the use of a social network that it couldn’t control and censor. I’m sure several of the people who wrote in my yearbook could have gotten in trouble if what they wrote had been brought before the school administration, yet those comments are the best part of my yearbook. Nothing takes you back quite as much as off-color commentary.

One significant lever that high school yearbooks have, which commercial publications like newspapers generally lack, is that they’re non-profit. If the yearbook financially breaks even, they’re doing a good job. (And, in the digital universe, the costs are perhaps lower. I personally shot hundreds of rolls of black&white film, processed them, and printed them, and we had many more photographers on our staff. My high school paid for all the film, paper, and photo-chemistry that we used. Now they just need computers, although those aren’t exactly cheap, either.) So what if they don’t print so many physical yearbooks? Sure, the yearbook staff can do a short, vanity press run, so they can enter competitions and maybe win something, but otherwise they can put out a PDF or pickle the bowdlerized social network’s contents down to a DVD-ROM and call it a day. That hopefully creates enough permanence. What about uncensored commentary? That’s probably going to have to happen outside of the yearbook context. Any high school student can sign up for a webmail account and keep all their email for years to come. (Unlike Facebook, the webmail companies seem to be making money.) Similarly, the ubiquity of digital point-and-shoot cameras ensures that students will have uncensored, personal, off-color memories.

[Sidebar: There’s a reality show on TV called “High School Reunion.” Last year, they reunited some people from my school’s class of 1987. I was in the class of 1989. Prior to the show airing, I was contacted by one of the producers, who wanted to use some of my photographs in the show. She sent me a waiver that basically had me indemnifying them for their use of my work; of course, they weren’t offering to pay me anything. Really? No thanks. One of the interesting questions was whether my photos were even “my property” such that I could give them permission to use them. There were no contracts of any kind when I signed up to work on the yearbook. You could argue that the school retains an interest in the pictures, never mind the original subjects, from whom we never got model releases. Our final contract said, in effect, that I represented that I took the pictures and had no problem with the show using them, but I made no claims as to ownership, and they indemnified me against any issues that might arise.

Question for the legal minds here: I have three binders full of negatives from my high school years. I could well invest a week of my time, borrow a good scanner, and post the whole collection online, either on my own web site or on Facebook. Should I? Am I opening myself to legal liability?]

Sizing Up "Code" with 20/20 Hindsight

Code and Other Laws of Cyberspace, Larry Lessig’s seminal work on Internet regulation, turns ten years old this year. To mark the occasion, the online magazine Cato Unbound (full disclosure: I’m a Cato adjunct scholar) invited Lessig and three other prominent Internet scholars to weigh in on Code‘s legacy: what it got right, where it went wrong, and what implications it has for the future of Internet regulation.

The final chapter of Code was titled “What Declan Doesn’t Get,” a jab at libertarians like CNet’s Declan McCullagh who believed that government regulation of the Internet was likely to do more harm than good. It’s fitting, then, that Declan got to kick things off with an essay titled (what else?) “What Larry Didn’t Get.” There were responses from Jonathan Zittrain (largely praising Code) and my co-blogger Adam Thierer (mostly criticizing it), and then Lessig got the last word. I think each contributor will be posting a follow-up essay in the coming days.

My ideological sympathies are with Declan and Adam, but rather than pile on to their ideological critiques, I want to focus on some of the specific technical predictions Lessig made in Code. People tend to forget that in addition to describing some key theoretical insights about the nature of Internet regulation, Lessig also made some pretty specific predictions about how cyberspace would evolve in the early years of the 21st Century. I think that enough time has elapsed that we can now take a careful look at those predictions and see how they’ve panned out.

Lessig’s key empirical claim was that as the Internet became more oriented around commerce, its architecture would be transformed in ways that undermined free speech and privacy. He thought that e-commerce would require the use of increasingly sophisticated public-key infrastructure that would allow any two parties on the net to easily and transparently exchange credentials. And this, in turn, would make anonymous browsing much harder, undermining privacy and making the Internet easier to regulate.

This didn’t happen, although for a couple of years after the publication of Code, it looked like a real possibility. At the time, Microsoft was pushing a single sign-on service called Passport that could have been the foundation of the kind of client authentication facility Lessig feared. But then Passport flopped. Consumers weren’t enthusiastic about entrusting their identities to Microsoft, and businesses found that lighter-weight authentication processes were sufficient for most transactions. By 2005 companies like eBay started dropping Passport from their sites. The service has been rebranded Windows Live ID and is still limping along, but no one seriously expects it to become the kind of comprehensive identity-management system Lessig feared.

Lessig concedes that he was “wrong about the particulars of those technologies,” but he points to the emergence of a new generation of surveillance technologies—IP geolocation, deep packet inspection, and cookies—as evidence that his broader thesis was correct. I could quibble about whether any of these are really new technologies. Lessig discusses cookies in Code, and the other two are straightforward extensions of technologies that existed a decade ago. But the more fundamental problem is that these examples don’t really support Lessig’s original thesis. Remember that Lessig’s prediction was that changes to Internet architecture—such as the introduction of robust client authentication to web browsers—would transform the previously anarchic network into one that’s more easily regulated. But that doesn’t describe these technologies at all. Cookies, DPI, and geolocation are all technologies that work with vanilla TCP/IP, using browser technologies that were widely deployed in 1999. They made cyberspace more susceptible to regulation without any change to the Internet’s underlying architecture.

Indeed, it’s hard to think of any policy or architectural change that could have forestalled the rise of these technologies. The web would be extremely inconvenient if we didn’t have something like cookies. The engineering constraints on backbone routers make roughly geographical IP assignment almost unavoidable, and if IP addresses are tied to geography it’s only a matter of time before someone builds a database of the mapping. Finally, any unencrypted networking protocol is susceptible to deep packet inspection. Short of mandating that all traffic be encrypted, no conceivable regulatory intervention could have prevented the development of DPI tools.
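On the cookie point, it’s worth recalling why something like cookies was nearly inevitable: HTTP itself is stateless, so without some token the browser echoes back, a server cannot tell two requests from the same user apart. A minimal sketch in Python (the header values are hypothetical):

    # Why cookies exist: HTTP is stateless, so the server hands the browser
    # an opaque token, and the browser echoes it back on every later request.
    # Header values here are hypothetical.

    # First response from the server:
    response_headers = {"Set-Cookie": "session=abc123; Path=/; HttpOnly"}

    # Every subsequent request from the browser:
    request_headers = {"Cookie": "session=abc123"}

    def session_id(headers: dict) -> str | None:
        # Extract the session token the way a server-side framework would,
        # linking this request to a shopping cart, a login, and so on.
        for part in headers.get("Cookie", "").split(";"):
            name, _, value = part.strip().partition("=")
            if name == "session":
                return value
        return None

    print(session_id(request_headers))  # -> abc123

Nothing in that exchange required any change to TCP/IP or to 1999-era browsers; shopping carts and logins alone were enough to guarantee that some such mechanism would be deployed, whatever policymakers preferred.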

Of course, now that these technologies exist, we can have a debate about whether to regulate their use. But Lessig was making a much stronger claim in 1999: that the Internet’s architecture (and, therefore, its susceptibility to regulation) circa 2009 would be dramatically different depending on the choices policymakers made in 1999. I think we can now say that this wasn’t right. Or, at least, the technologies he points to now aren’t good examples of that thesis.

It seems to me that the Internet is rather less malleable than Lessig imagined a decade ago. We would have gotten more or less the Internet we got regardless of what Congress or the FCC did over the last decade. And therefore, Lessig’s urgent call to action—his argument that we must act in 1999 to ensure that we have the kind of Internet we want in 2009—was misguided. In general, it works pretty well to wait until new technologies emerge and then debate whether to regulate them after the fact, rather than trying to regulate preemptively to shape the kinds of technologies that are developed.

As I wrote a few months back, I think Jonathan Zittrain’s The Future of the Internet and How to Stop It makes the same kind of mistake Lessig made a decade ago: overestimating regulators’ ability to shape the evolution of new technologies and underestimating the robustness of open platforms. The evolution of technology is mostly shaped by engineering and economic constraints. Government policies can sometimes force new technologies underground, but regulators rarely have the kind of fine-grained control they would need to promote “generative” technologies over sterile ones, any more than they could have stopped the emergence of cookies or DPI if they’d made different policy choices a decade ago.