

iPhone Apps: Apple Picks a Little, Talks a Little

Last week Apple, in an incident destined for the textbooks, rejected an iPhone app called Eucalyptus, which lets you download and read classic public-domain books from Project Gutenberg. The rejection meant that nobody could download or use the app (without jailbreaking their phone). Apple’s rationale? Some of the books, in Apple’s view, were inappropriate.

Apple’s behavior put me in mind of the Pick-a-Little Ladies from the classic musical The Music Man. These women, named for their signature song “Pick a Little, Talk a Little,” condemn Marian the Librarian for having inappropriate books in her library:

Maud: Professor, her kind of woman doesn’t belong on any committee. Of course, I shouldn’t tell you this but she advocates dirty books.

Harold: Dirty books?!

Alma: Chaucer!

Ethel: Rabelais!

Eulalie: Balzac!

This is pretty much the scene we saw last week, with the Eucalyptus app in the role of Marian — providing works by Chaucer, Rabelais, and Balzac — and Apple in the role of the Pick-a-Little Ladies. Visualize Steve Jobs, in his black turtleneck and jeans, transported back to 1912 Iowa and singing along with these frumpy busybodies.

Later in The Music Man, the Pick-a-Little Ladies decide that Marian is all right after all, and they praise her for offering great literature. (“The Professor told us to read those books, and we simply adored them all!”) In the same way, Apple, after the outcry over its muzzling of Eucalyptus, reversed course and un-rejected Eucalyptus. Now we can all get Chaucer! Rabelais! Balzac! on our iPhones.

But there is one important difference between Apple and the Pick-a-Little Ladies. Apple had the power to veto Eucalyptus, but the Ladies couldn’t stop Marian from offering dirty books. The Ladies were powerless because Old Miser Madison had cleverly bequeathed the library building to the town but the books to Marian. In today’s terms, Madison had jailbroken the library.

All of this highlights the downside of Apple’s controlling strategy. It’s one thing to block apps that are fraudulent or malicious, but Apple has gone beyond this to set itself up as the arbiter of good taste in iPhone apps. If you were Apple, would you rather be the Pick-a-Little Ladies, presuming to sit in judgment over the town, or Old Miser Madison, letting people make their own choices?


NJ Voting-machine Trial: Defense Witnesses

I’ve previously summarized my own testimony and other plaintiffs’ witnesses’ testimony in the New Jersey voting machines trial, Gusciora v. Corzine.

The defendant is the State of New Jersey (Governor and Secretary of State). The defense case comprised the following witnesses:

Defense witness James Clayton, the Ocean County voting machine warehouse supervisor, is a well-intentioned official who tries to have good procedures to secure the Ocean County voting machines. Still, it became apparent in his testimony that there are security gaps regarding transport of the machines, keys to the machines, and security at polling places before and after election day.

Richard Woodbridge is a patent attorney who has chaired the NJ Voting Machine Examination Committee for more than 20 years. It’s not clear why the defendants called him as a witness, because they conducted only a 15-minute direct examination in which he didn’t say much. On cross-examination he confirmed that his committee does not conduct an independent analysis of software and does not consult with any computer security experts.

Robert Giles, Director of Elections of the State of New Jersey, testified about experimenting with different forms of seals and locks that New Jersey might apply to its AVC Advantage voting machines. On cross-examination, it became clear that there is no rhyme or reason in how the State is choosing seals and other security measures, and that the State is not getting expert advice on these matters. He also admitted that there is no statewide control, or even supervision, of the procedures that counties use to safeguard the voting machines, the results cartridges, keys, and so on. He confirmed that several counties use the cartridges as the official tally, in preference to paper printouts witnessed and signed (at the close of the polls) by election workers.

Edwin Smith testified as an expert witness for the State defendants. Mr. Smith is vice-president and part owner of Sequoia Voting Systems. He stands to gain financially depending on the verdict in this trial: NJ represents 20% of Sequoia’s market, and his bonuses depend on sales. Mr. Smith testified to rebut my testimony about fake Z80 processors. (Wayne Wolf, who testified for plaintiffs about fake Z80s, testified after Mr. Smith, as a rebuttal witness.) Even though Mr. Smith repeatedly referred to replacement of Z80s as “science fiction”, he then offered lengthy testimony about methods to try to detect fake Z80s. This lent credence to the idea that fraudulent CPUs are not only a possibility but a real threat.

Mr. Smith also confirmed that it is a security risk to connect WinEds computers (that prepare electronic ballot definitions and tabulate results) to the Internet, and that those counties in NJ that do so are making a mistake.

Paul Terwilliger testified as a witness for the defense. Mr. Terwilliger is a longtime employee and/or contractor for Sequoia who has had primary responsibility for the development of the AVC Advantage for the last 15 years. Mr. Terwilliger admitted that in 2003 the WIPO found that he’d acted in bad faith by cybersquatting on the Diebold.com domain name at the request of Sequoia. Mr. Terwilliger testified that it is indeed possible to program an FPGA to make a “fake Z80” that cheats in elections. But, he said, there are some methods for detecting FPGAs installed on AVC Advantage voting machines in place of the legitimate Z80. (Some of these methods are impractical, others are ineffective, others are speculative; see Wayne Wolf’s report.) This testimony had the effect of underscoring the seriousness of the fake-Z80 threat.

Originally the defendants were going to rely on Professor Michael Shamos of Carnegie Mellon University as their only expert witness. But the Court never recognized him as an expert witness. The Court ruled that he could not testify about the security and accuracy of the AVC Advantage, because he had not offered an opinion about security and accuracy in his expert report or his deposition.

The Court did permit him to testify in general terms. He said that in real life, we have no proof that a “hacked election” has ever occurred, and that if such a hack had occurred, it would somehow have come to light. He offered no studies to support this claim.

Professor Shamos attempted to cast doubt in the Court’s mind about the need for software independence, and to disparage precinct-count optical-scan voting (PCOS). But he offered no concrete examples and no studies regarding PCOS.

On many issues, Professor Shamos agreed with the plaintiffs’ expert: it’s straightforward to replace a ROM chip, plastic-strap seals provide only a veneer of protection, a machine with a replaced ROM can cheat, and pre-election logic-and-accuracy testing would be ineffective in detecting the fraud. He did not dispute many of the bugs and user-interface design flaws that we found, and he recommended that those should be fixed.

Professor Shamos admitted that he is alone among computer scientists in his support of paperless DREs. He tried to claim that other computer scientists such as Ted Selker, Douglas W. Jones, and Joseph Lorenzo Hall also supported paperless DREs by saying they supported parallel testing, implying that those scientists would consider paperless DREs to be secure enough with parallel testing, but during cross-examination he backed off a bit from this claim. (In fact, as I testified in my rebuttal testimony, Drs. Jones and Hall both consider PCOS to have substantially stronger security, and to be substantially better overall, than DREs with parallel testing.)

Parallel testing is Professor Shamos’s proposed method to detect fraudulent software in electronic voting machines. In order to catch software that cheats only on election day, Professor Shamos proposes to cordon off a machine and cast a known list of test votes on it all day. He said that no state has ever implemented a satisfactory parallel testing protocol, however.
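
To make the idea concrete, here is a minimal sketch of the comparison at the heart of parallel testing. The data structures and numbers are invented for illustration and do not reflect any state’s actual protocol:

    # Sketch of a parallel-testing check: a machine is pulled out of service on
    # election day, a scripted list of test votes is cast on it all day, and the
    # tally it reports is compared against the script at the close of the test.
    from collections import Counter

    def expected_tally(test_script):
        """Tally the votes deliberately cast on the cordoned-off machine."""
        return Counter(test_script)

    def parallel_test_passed(test_script, machine_report):
        """True only if the machine's reported tally exactly matches the script."""
        return expected_tally(test_script) == Counter(machine_report)

    # Hypothetical example: one vote shifted from Alice to Bob is detected,
    # but only if the fraudulent firmware also cheats during the test session.
    script = ["Alice", "Bob", "Alice", "Alice", "Bob"]
    report = {"Alice": 2, "Bob": 3}
    print(parallel_test_passed(script, report))   # False: discrepancy detected

The scheme’s weak point, which the testimony highlights, is the assumption that cheating firmware cannot tell a test session apart from a real election.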

Summary of the defendant’s case

One of the plaintiffs’ most important claims–which they demonstrated on video to the Court–is that one can replace the firmware of the AVC Advantage voting machine with fraudulent firmware that changes votes before the polls close. No defense witness contradicted this. To the extent that the defense put up a case, it hinged on proposed methods for detecting such fraudulent firmware, or on proposed methods for slowing down the attack by putting tamper-evident seals in the way. On both of these issues, defense witnesses contradicted each other, and plaintiffs presented rebuttal witnesses.


NJ Voting-machine trial: Plaintiffs' witnesses

Both sides in the NJ voting-machines lawsuit, Gusciora v. Corzine, have finished presenting their witnesses. Briefs (in which each side presents proposed conclusions) are due June 15 (plaintiffs) and July 15 (defendants), and then the Court will eventually issue a decision.

In summary, the plaintiffs argue that New Jersey’s voting machines (Sequoia AVC Advantage) can’t be trusted to count the votes, because they’re so easily hacked to make them cheat. Thus, using them is unconstitutional (under the NJ state constitution), and the machines must be abandoned in favor of a method that provides software independence, for example precinct-count optical-scan voting.

The plaintiffs’ first witness was Stephanie Harris, who testified for half an hour about her experience voting on an AVC Advantage, where the pollworker asked her to go back and recast her ballot three or four times because the pollworker wasn’t sure that it had registered. Ms. Harris testified that to this day she’s not sure whether her vote registered 0 times, or 1, or 2, or 3, or 4.

I testified second, as I’ve described. I testified about many things, but the most important is that you can easily replace the firmware of an AVC Advantage voting machine to make it cheat in elections (but not cheat when it’s being tested outside of elections).
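
To illustrate why testing outside of elections misses this kind of fraud, here is a purely hypothetical sketch of the decision logic such firmware could use. It is not the AVC Advantage’s actual firmware (which runs on a Z80) and not code from my report; the dates, thresholds, and names are invented:

    # Hypothetical illustration only: tallying logic that misbehaves solely when
    # the session looks like a real election, so any pre-election test sees
    # honest behavior.
    import datetime

    KNOWN_ELECTION_DAYS = {datetime.date(2009, 11, 3)}   # invented example

    def looks_like_real_election(today, hours_polls_open, ballots_cast):
        # Cheat only on a plausible election day, during a full-length session,
        # with realistic turnout: conditions a brief accuracy test rarely meets.
        return (today in KNOWN_ELECTION_DAYS
                and hours_polls_open >= 10
                and ballots_cast >= 100)

    def report_tally(honest_tally, today, hours_polls_open, ballots_cast,
                     favored="Candidate A", victim="Candidate B", shift=0.05):
        if not looks_like_real_election(today, hours_polls_open, ballots_cast):
            return dict(honest_tally)               # behave honestly under test
        stolen = int(honest_tally[victim] * shift)  # quietly move a few percent
        cheated = dict(honest_tally)
        cheated[victim] -= stolen
        cheated[favored] += stolen
        return cheated                              # totals still sum correctly

Because the shifted totals still add up to the number of ballots cast, nothing on the printed results tape looks anomalous.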

The third witness was Ed Felten, who testified for about an hour that on several different occasions he found unattended voting machines in Princeton, on weekends before elections, and he took pictures. (Of course, as the Court was well aware by this time in the trial, a hacker could take advantage of an unattended voting machine to install vote-stealing firmware.) Ed wrote about this on Freedom-to-Tinker here, here, and here; he brought all those pictures with him to show the Court.

Next were Elisa Gentile, Hudson County voting machine warehouse supervisor, and Daryl Mahoney, Bergen County voting machine warehouse supervisor. Mr. Mahoney also serves on the NJ Voting Machine Examination committee (which recommends certification of voting machines for use in NJ). These witnesses were originally proposed by the defense, but in their depositions before trial, they said things so helpful to the plaintiffs that the plaintiffs called them instead! They testified about lax security with regard to transport and storage of voting machines, lax handling of keys to the voting machines, and no security at polling places where the machines are delivered several days before the election. They didn’t seem to have a clue about information security and how it affects the integrity of elections conducted using computers.

Next the plaintiffs called the County Clerk of Union County, Joanne Rajoppi, who had the sophistication to notice a discrepancy in the results reported by an AVC Advantage voting machine, the integrity to alert the newspapers and the public, and the courage to testify about all the things that have been going wrong with AVC Advantage voting machines in her county. Ms. Rajoppi testified about (among other things):

  • Soon after the February 5, 2008 Super Tuesday presidential primary, she noticed inconsistencies in AVC Advantage results-reports printouts (and cartridge data): the number of votes in some primaries was higher than the number of voters. (See Section 56 of my report, or Ed Felten’s analysis on Freedom-to-Tinker.)
  • She brought this to the attention of State election officials, but the State officials made no move at all to investigate the problem. She arranged for Professor Felten of Princeton University to examine the Union County voting machines, but she stopped when she was threatened with a lawsuit by Edwin Smith, vice president of Sequoia Voting Systems.
  • In a different election, the Sequoia AVC voting system refused to accept a candidate’s name containing the letter ñ (an n with a tilde). Sequoia technicians produced a hand-edited ballot definition file; she was uneasy about turning control of the ballot definition file over to Sequoia.
  • Results Cartridges get locked in the machines sometimes (when pollworkers forget to bring them back from the polling places for tabulation). (During this time they are vulnerable to vote-changing manipulation; see Section 40 of my report.)
  • Union County considers the vote data in the cartridges to be the official election results, not the vote data printed out at the close of the polls (and then signed by witnesses). (This is unwise for several reasons; see Sections 40 and 57 of my report.)

The defendant (the State of New Jersey) presented several witnesses. I’ll summarize them in my next post. After the defense witnesses, the plaintiffs called rebuttal witnesses.

Plaintiffs’ rebuttal witness Roger Johnston is an expert on physical security at the U.S. government’s Argonne National Laboratory (testifying as a pro bono expert on his own behalf, not representing the views of the U.S. government). Dr. Johnston testified that supposedly tamper-evident seals and tape can be defeated; that it does no good to have seals without a rigorous protocol for inspecting them (which NJ does not have); that such a protocol (and the training it requires) would be very expensive to implement and execute; that AVC Advantage’s design makes it impractical to really secure the machines using seals; and that in general New Jersey’s “security culture” and its proposed methods for securing these voting machines are incoherent and dysfunctional. He demonstrated for the Court one defeat of each seal, and testified about other defeats of these kinds of seals.

The last plaintiffs’ witness was Wayne Wolf, professor of Electrical Engineering at Georgia Tech. Professor Wolf testified (and wrote in his expert report) that it’s straightforward to build a fake computer processor chip and install it to replace the Z80 computer chip in the AVC Advantage voting machine. (See also Section 12 of my report.) This fake chip could (from time to time) ignore the instructions in the AVC Advantage ROM memory about how to add up votes, and instead transfer votes from one candidate to another. It can cheat just like the ROM-replacement hack that I testified about, but it can’t be detected by examining the ROM chips. Professor Wolf also testified about the difficulty (or impossibility) of detecting fake Z80 chips by some of the methods proposed by defense witnesses.


European Antitrust Fines Against Intel: Possibly Justified

Last week the European Commission competition authorities charged Intel with anticompetitive behavior in the market for microprocessor chips, and levied a €1.06 billion ($1.45 billion) fine on the company. Some commentators attacked the ruling as ridiculous on its face. I disagree. Let me explain why the European action, though not conclusively justified at this point, is at least plausible.

The starting point of any competition analysis is to recall the purpose of competition law: not to protect rival firms (such as AMD in this case), but to protect competition for the benefit of consumers. The key is to understand what is fair competition and what is not. If a firm dominates a market, and even drives other firms out, but does so by producing better products at better prices, it deserves applause. If a dominant firm takes steps that are aimed more at undermining competition than at serving customers, then it may be crossing the line into anticompetitive behavior.

To do even a superficial analysis in a single blog post, we’re going to have to make some assumptions. First, for the sake of this post let’s accept as true the EC’s claims about Intel’s specific actions. Second, let’s set aside the details of European law and instead ask whether Intel’s actions were fair and justified. Third, let’s assume that there is a single market for processor chips, in the sense that any processor chip can be used in any system. A serious analysis would have to consider carefully all of these factors, but these assumptions will help us get started.

With all that in mind, does the EC have a plausible case against Intel?

First we have to ask whether Intel has monopoly power. Economists define monopoly power as the ability to raise prices above the competitive level without losing money as a result. We know that Intel has high market share, but that by itself does not imply monopoly power. Presumably the EC will argue that there is a significant barrier to entry which keeps new firms out of the microprocessor market, and that this barrier to entry plus Intel’s high market share adds up to monopoly power. This is at least plausible, and there isn’t space here to dissect that argument in detail, so let’s accept it for the sake of our analysis.

Now: having monopoly power, did Intel abuse that power by acting anticompetitively?

The EC accused Intel of two anticompetitive strategies. First, the EC says that Intel gave PC makers discounts if they agreed to ship Intel chips in 100% of their systems, or 80% of their systems. Is this anticompetitive? It’s hard to say. Volume discounts are common in many industries, but this is not a typical volume discount. The price goes down when the customer buys more Intel chips — that’s a typical volume discount — but the price of Intel chips also goes up when the customer buys more competing chips — which is unusual and might have anticompetitive effects. Whether Intel has a competitive justification for this remains to be seen.
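
A toy calculation, with made-up numbers rather than figures from the EC’s decision, shows how a market-share rebate differs from an ordinary volume discount: buying even a modest number of rival chips can raise the effective price of every Intel chip the customer already buys.

    # Toy illustration of a market-share rebate; all numbers are hypothetical.
    def intel_bill(intel_units, rival_units, list_price=100.0,
                   rebate=0.10, share_threshold=0.80):
        """Total paid to Intel; the rebate applies only while Intel's share of
        the buyer's total chip purchases stays at or above the threshold."""
        share = intel_units / (intel_units + rival_units)
        unit_price = list_price * (1 - rebate) if share >= share_threshold else list_price
        return intel_units * unit_price

    all_intel = intel_bill(1000, 0)    # 90,000: rebate applies
    mixed = intel_bill(1000, 300)      # 100,000: Intel's share drops below 80%, rebate lost
    print(mixed - all_intel)           # 10,000: effective penalty for buying rival chips

In this stylized example the customer pays 10,000 more for the same 1,000 Intel chips simply because it also bought from a rival; it is that penalty on the marginal rival purchase, rather than the discount itself, that can look exclusionary.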

Second, and more troubling, the EC says that “Intel awarded computer manufacturers payments – unrelated to any particular purchases from Intel – on condition that these computer manufacturers postponed or cancelled the launch of specific AMD-based products and/or put restrictions on the distribution of specific AMD-based products.” This one seems hard for Intel to justify. A firm with monopoly power, spending money to block competitor’s distribution channels, is a classic anticompetitive strategy.

None of this establishes conclusively that Intel broke the law, or that the EC’s fine is justified. We made a lot of assumptions along the way, and we would have to reconsider each of them carefully, before we could conclude that the EC’s argument is correct. We would also need to give Intel a chance to offer pro-competitive justifications for their behavior. But despite all of these caveats, I think we can conclude that although it is far from proven at this point, the EC’s case should be taken seriously.


The future of high school yearbooks

The Dallas Morning News recently ran a piece about how kids these days aren’t interested in buying physical, printed yearbooks. (Hat tip to my high school’s journalism teacher, who linked to it from our journalism alumni Facebook group.) Why spend $60 on a dead-trees yearbook when you can get everything you need on Facebook? My 20th high school reunion is coming up this fall, and I was the “head” photographer for my high school’s yearbook and newspaper, so this is a topic near and dear to my heart.

Let’s break down everything that a yearbook actually is and then think about how these features can and cannot be replicated in the digital world. A yearbook has:

  • higher-than-normal photographic quality (yearbook photographers hopefully own better camera equipment and know how to use their gear properly)
  • editors who do all kinds of useful things (sending photographers to events they want covered, selecting the best pictures for publication, captioning them, and indexing the people in them)
  • a physical artifact that people can pass around to their friends to mark up and personalize, and which will still be around years later

If you get rid of the physical yearbook, you’ve got all kinds of issues. Permanence is the big one. There’s nothing that my high school can do to delete my yearbook after it’s been published. By contrast, if high schools host their yearbooks on school-owned equipment, those systems can and will fail over time. (Yes, I know you could run a crawler and make a copy, but I wouldn’t trust a typical high school’s IT department to build a site that will be around decades later.) To pick one example, my high school’s web site, when it first went online, had a nice alumni registry. Within a few years, it unceremoniously went away without warning.

Okay, what about Facebook? At this point, almost a third of my graduating class is on Facebook, and I’m sure the numbers are much higher for more recent classes. Some of my classmates are digging up old pictures, posting them, and tagging each other. With social networking as part of the yearbook process from the start, you can get some serious traction in replacing physical yearbooks. Yearbook editors and photography staff can still cover events, select good pictures, caption them, and index them. The social networking aspect covers some of the personalization and markup that we got by writing in each others’ yearbooks. That’s fun, but please somebody convince me that Facebook will be here ten or twenty years from now. Any business that doesn’t make money will eventually go out of business, and Facebook is no exception.

Aside from the permanence issue, is anything else lost by going to a Web 2.0 social networking non-printed yearbook? Censorship-happy high schools (and we all know what a problem that can be) will never allow a social network site that they control to have students’ genuine expressions of their distaste for all the things that rebellious youth like to complain about. Never mind that the school has a responsibility to maintain some measure of student privacy. Consequently, no high school would endorse the use of a social network that they couldn’t control and censor. I’m sure several of the people who wrote in my yearbook could have gotten in trouble if the things they wrote there were to have been raised before the school administration, yet those comments are the best part of my yearbook. Nothing takes you back quite as much as off-color commentary.

One significant lever that high school yearbooks have, which commercial publications like newspapers generally lack, is that they’re non-profit. If the yearbook financially breaks even, they’re doing a good job. (And, in the digital universe, the costs are perhaps lower. I personally shot hundreds of rolls of black&white film, processed them, and printed them, and we had many more photographers on our staff. My high school paid for all the film, paper, and photo-chemistry that we used. Now they just need computers, although those aren’t exactly cheap, either.) So what if they don’t print so many physical yearbooks? Sure, the yearbook staff can do a short, vanity press run, so they can enter competitions and maybe win something, but otherwise they can put out a PDF or pickle the bowdlerized social network’s contents down to a DVD-ROM and call it a day. That hopefully creates enough permanence. What about uncensored commentary? That’s probably going to have to happen outside of the yearbook context. Any high school student can sign up for a webmail account and keep all their email for years to come. (Unlike Facebook, the webmail companies seem to be making money.) Similarly, the ubiquity of digital point-and-shoot cameras ensures that students will have uncensored, personal, off-color memories.

[Sidebar: There's a reality show on TV called "High School Reunion." Last year, they reunited some people from my school's class of 1987. I was in the class of 1989. Prior to the show airing, I was contacted by one of the producers, wanting to use some of my photographs in the show. She sent me a waiver that basically had me indemnifying them for their use of my work; of course, they weren't offering to pay me anything. Really? No thanks. One of the interesting questions was whether my photos were even "my property" to which I could even give them permission to use. There were no contracts of any kind when I signed up to work on the yearbook. You could argue that the school retains an interest in the pictures, never mind the original subjects from whom we never got model releases. Our final contract said, in effect, that I represented that I took the pictures and had no problem with them using them, but I made no claims as to ownership, and they indemnified me against any issues that might arise.

Question for the legal minds here: I have three binders full of negatives from my high school years. I could well invest a week of my time, borrow a good scanner, and get the whole collection online and post it online, either on my own web site or on Facebook. Should I? Am I opening myself to legal liability?]


Sizing Up "Code" with 20/20 Hindsight

Code and Other Laws of Cyberspace, Larry Lessig’s seminal work on Internet regulation, turns ten years old this year. To mark the occasion, the online magazine Cato Unbound (full disclosure: I’m a Cato adjunct scholar) invited Lessig and three other prominent Internet scholars to weigh in on Code’s legacy: what it got right, where it went wrong, and what implications it has for the future of Internet regulation.

The final chapter of Code was titled “What Declan Doesn’t Get,” a jab at libertarians like CNet’s Declan McCullagh who believed that government regulation of the Internet was likely to do more harm than good. It’s fitting, then, that Declan got to kick things off with an essay titled (what else?) “What Larry Didn’t Get.” There were responses from Jonathan Zittrain (largely praising Code) and my co-blogger Adam Thierer (mostly criticizing it), and then Lessig got the last word. I think each contributor will be posting a follow-up essay in the coming days.

My ideological sympathies are with Declan and Adam, but rather than pile on to their ideological critiques, I want to focus on some of the specific technical predictions Lessig made in Code. People tend to forget that in addition to describing some key theoretical insights about the nature of Internet regulation, Lessig also made some pretty specific predictions about how cyberspace would evolve in the early years of the 21st Century. I think that enough time has elapsed that we can now take a careful look at those predictions and see how they’ve panned out.

Lessig’s key empirical claim was that as the Internet became more oriented around commerce, its architecture would be transformed in ways that undermined free speech and privacy. He thought that e-commerce would require the use of increasingly sophisticated public-key infrastructure that would allow any two parties on the net to easily and transparently exchange credentials. And this, in turn, would make anonymous browsing much harder, undermining privacy and making the Internet easier to regulate.

This didn’t happen, although for a couple of years after the publication of Code, it looked like a real possibility. At the time, Microsoft was pushing a single sign-on service called Passport that could have been the foundation of the kind of client authentication facility Lessig feared. But then Passport flopped. Consumers weren’t enthusiastic about entrusting their identities to Microsoft, and businesses found that lighter-weight authentication processes were sufficient for most transactions. By 2005 companies like eBay started dropping Passport from their sites. The service has been rebranded Windows Live ID and is still limping along, but no one seriously expects it to become the kind of comprehensive identity-management system Lessig feared.

Lessig concedes that he was “wrong about the particulars of those technologies,” but he points to the emergence of a new generation of surveillance technologies—IP geolocation, deep packet inspection, and cookies—as evidence that his broader thesis was correct. I could quibble about whether any of these are really new technologies. Lessig discusses cookies in Code, and the other two are straightforward extensions of technologies that existed a decade ago. But the more fundamental problem is that these examples don’t really support Lessig’s original thesis. Remember that Lessig’s prediction was that changes to Internet architecture—such as the introduction of robust client authentication to web browsers—would transform the previously anarchic network into one that’s more easily regulated. But that doesn’t describe these technologies at all. Cookies, DPI, and geo-location are all technologies that work with vanilla TCP/IP, using browser technologies that were widely deployed in 1999. Technological changes made cyberspace more susceptible to regulation without any changes to the Internet’s architecture.

Indeed, it’s hard to think of any policy or architectural change that could have forestalled the rise of these technologies. The web would be extremely inconvenient if we didn’t have something like cookies. The engineering constraints on backbone routers make roughly geographical IP assignment almost unavoidable, and if IP addresses are tied to geography it’s only a matter of time before someone builds a database of the mapping. Finally, any unencrypted networking protocol is susceptible to deep packet inspection. Short of mandating that all traffic be encrypted, no conceivable regulatory intervention could have prevented the development of DPI tools.
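
The geolocation point can be made in a few lines: once address blocks are handed out roughly by region, a “geolocation service” is little more than a lookup table. The prefixes and regions in this sketch are invented (they use reserved documentation address ranges):

    # Minimal sketch of IP geolocation as a prefix-to-region lookup table.
    import ipaddress

    GEO_DB = {
        ipaddress.ip_network("192.0.2.0/24"):    "Example Region A",
        ipaddress.ip_network("198.51.100.0/24"): "Example Region B",
    }

    def locate(addr):
        ip = ipaddress.ip_address(addr)
        for prefix, region in GEO_DB.items():
            if ip in prefix:
                return region
        return "unknown"

    print(locate("198.51.100.42"))   # -> "Example Region B"

Nothing about this requires any change to TCP/IP; it only requires someone to compile the mapping.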

Of course, now that these technologies exist, we can have a debate about whether to regulate their use. But Lessig was making a much stronger claim in 1999: that the Internet’s architecture (and, therefore, its susceptibility to regulation) circa 2009 would be dramatically different depending on the choices policymakers made in 1999. I think we can now say that this wasn’t right. Or, at least, the technologies he points to now aren’t good examples of that thesis.

It seems to me that the Internet is rather less malleable than Lessig imagined a decade ago. We would have gotten more or less the Internet we got regardless of what Congress or the FCC did over the last decade. And therefore, Lessig’s urgent call to action—his argument that we must act in 1999 to ensure that we have the kind of Internet we want in 2009—was misguided. In general, it works pretty well to wait until new technologies emerge and then debate whether to regulate them after the fact, rather than trying to regulate preemptively to shape the kinds of technologies that are developed.

As I wrote a few months back, I think Jonathan Zittrain’s The Future of the Internet and How to Stop It makes the same kind of mistake Lessig made a decade ago: overestimating regulators’ ability to shape the evolution of new technologies and underestimating the robustness of open platforms. The evolution of technology is mostly shaped by engineering and economic constraints. Government policies can sometimes force new technologies underground, but regulators rarely have the kind of fine-grained control they would need to promote “generative” technologies over sterile ones, any more than they could have stopped the emergence of cookies or DPI if they’d made different policy choices a decade ago.


A Modest Proposal: Three-Strikes for Print

Yesterday the French parliament adopted a proposal to create a “three-strikes” system that would kick people off the Internet if they are accused of copyright infringement three times.

This is such a good idea that it should be applied to other media as well. Here is my modest proposal to extend three-strikes to the medium of print, that is, to words on paper.

My proposed system is simplicity itself. The government sets up a registry of accused infringers. Anybody can send a complaint to the registry, asserting that someone is infringing their copyright in the print medium. If the government registry receives three complaints about a person, that person is banned for a year from using print.

As in the Internet case, the ban applies to both reading and writing, and to all uses of print, including informal ones. In short, a banned person may not write or read anything for a year.

A few naysayers may argue that print bans might be hard to enforce, and that banning communication based on mere accusations of wrongdoing raises some minor issues of due process and free speech. But if those issues don’t trouble us in the Internet setting, why should they trouble us here?

Yes, if banned from using print, some students will be unable to do their school work, some adults will face minor inconvenience in their daily lives, and a few troublemakers will not be allowed to participate in — or even listen to — political debate. Maybe they’ll think more carefully the next time, before allowing themselves to be accused of copyright infringement.

In short, a three-strikes system is just as good an idea for print as it is for the Internet. Which country will be the first to adopt it?

Once we have adopted three-strikes for print, we can move on to other media. Next on the list: three-strikes systems for sound waves, and light waves. These media are too important to leave unprotected.



Recovery Act Spending: Getting to the Bottom Line

Under most circumstances, government spending is slow and deliberate—a key fact that helps reduce the chances of waste and fraud. But the recently passed Recovery Act is a special case: spending the money quickly is understood to be essential to the success of the Act. We all know that shoppers in a hurry tend to get less value for their money. But, ironically, the overall macroeconomic impact of the stimulus (and hence the average stimulative effect per dollar spent) may be maximized by quick spending, even if the speed premium does increase the total amount of waste and abuse.

This situation creates a paradox for transparency and oversight efforts. On the one hand, the quicker pace of spending makes it all the more important to provide for public scrutiny, and to provide information in ways that will rapidly enable as many people as possible to take advantage of the stimulus opportunities available to them. On the other, the same rush that makes transparency important also reduces the time available for those within government to design and build an infrastructure for stimulus transparency.

One of the troubling tradeoffs that has been made thus far involves information about stimulus funds that flow from the federal government to states and then from states to localities. This pattern is rarer than you might think, since much of the Recovery Act spending flows more directly from federal agencies to end recipients. But for funds that do follow a path from federal to state to local officials, recent guidance issued April 3 by the Office of Management and Budget (OMB) makes clear that the federal reporting infrastructure being created for Recovery.gov will not collect information about what the localities ultimately do with the funds.

OMB says that it does have the legal authority to require detailed reporting on “all levels of subawards,” reaching end recipients (Acme Concrete or whomever gets a contract or grant from the municipality at the end of the governmental chain). But in the context of its sprint to get at least some system into place as soon as possible (with the debut date for the Recovery.gov system already pushed back to October), OMB has left this deep-level reporting out of its immediate plans. The office says that it “plans to expand the reporting model in the future to also obtain this information, once the system capabilities and processes have been established.”

On Monday, ten congressmen sent a letter to OMB urging it to collect this detailed information “as early as possible.” One reason for OMB to formulate detailed operational plans in this area, as I argued in recent testimony before the House Committee on Oversight and Government Reform, is that clarity from the top will help states make competent choices about what, if anything, they should do to support or supplement the federal reporting. As the members of Congress write:

While it is positive that OMB goes on to reserve the right in the guidance to expand this reporting model in the future, it would seem exercising this right and requiring this level of reporting as early as possible would help entities prepare for the disclosures before projects begin and provide clarification for states as they begin investing in new infrastructure to track ARRA funds.

In the end, everyone agrees that this detailed information about subawards is important to have—OMB “plans to collect” it and the signatories to yesterday’s letter want collection to start “as soon as possible.” But how soon is that? We don’t really know. The details of hard choices facing OMB as it races to implement the Recovery.gov reporting system are themselves not public, and making them public might (or might not) itself slow down the development of the site. If no system were permitted to launch without fully detailed reporting of subawards, we might wait longer for the web site’s launch. How much longer? OMB might not itself be sure, since software development times are notoriously difficult to forecast, and OMB has never before been asked to build a system of this kind. OMB asserts that it’s moving as fast as it can to collect as much information as possible, and without slowing it down to ask for explanations, we can’t really check that assertion.

Transparency often reduces the degree to which citizens must trust public officials. But in this case, ironically, it seems most reasonable to operate on the optimistic but realistic assumption that the people working on Recovery Act transparency are doing their jobs well, and to hope for good results.


Breathalyzer Source Code Secrecy Endangers Minnesota Drunk Driving Convictions

The Minnesota Supreme Court ruled recently that defendants accused of drunk driving in the state are entitled to have their experts inspect the source code for the software in the Intoxilyzer breath-testing machines used by police to gauge the defendants’ blood alcohol levels. The defendants argued, successfully, that they were entitled to examine and challenge the evidence against them, including the design and functioning of devices used to generate that evidence.

The ruling puts many of the state’s drunk driving prosecutions on thin ice, because CMI, the Intoxilyzer’s maker, is withholding the source code and the state apparently has no way to force CMI to provide the code.

Eric Rescorla argues, reasonably, that breath testers have many potential failure modes unrelated to software, and that source code analysis can be labor-intensive and might not turn up any clear problems. Both arguments are valid, as far as they go.

I’m not a lawyer, so I won’t try to guess whether the court’s ruling was correct as a matter of law. But the ruling does seem right as a matter of policy. If we are troubled by criminal convictions relying on secret evidence, then we should also be troubled by convictions relying on evidence generated by a secret process. To the extent that the Intoxilyzer functions as a secret process, the state should not be relying on it in criminal prosecutions.

(Though I haven’t thought carefully about the question, I might potentially draw a different policy conclusion in a civil case, where the standard of proof is preponderance of evidence, rather than guilt beyond a reasonable doubt.)

The problem is illustrated nicely by a contradiction in the arguments that CMI and the state are making. On the one hand, they argue that the machine’s source code contains valuable trade secrets — I’ll call them the “secret sauce” — and that CMI’s business would be substantially harmed if its competitors learned about the secret sauce. On the other hand, they argue that there is no need to examine the source code because it operates straightforwardly, just reading values from some sensors and doing simple calculations to derive a blood alcohol estimate.

It’s hard to see how both arguments can be correct. If the software contains secret sauce, then by definition it has aspects that are neither obvious nor straightforward, and those aspects are important for the software’s operation. In other words, the secret sauce — whatever it is — must be relevant to the defendants’ claims.
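
To see what is at stake, consider what “simple calculations” would look like. The sketch below is purely hypothetical and is in no way CMI’s code; the calibration constant and the adjustment step are invented. It shows both how little room the straightforward version leaves for trade secrets and how an undisclosed calibration or averaging step could decide a case near the legal limit:

    # Hypothetical breath-tester arithmetic: NOT the Intoxilyzer's actual code.
    # All constants are invented for illustration.

    def bac_from_sensor(sensor_reading, calibration_factor=0.00021):
        """Convert one raw sensor value into an estimated blood alcohol level."""
        return sensor_reading * calibration_factor

    def reported_bac(readings, adjustment=0.002):
        """Average several samples, apply an undisclosed adjustment (invented
        here), and round to the reported precision."""
        average = sum(bac_from_sensor(r) for r in readings) / len(readings)
        return round(average + adjustment, 3)

    samples = [370, 372, 368]
    print(reported_bac(samples))                                   # 0.08 with the adjustment
    print(round(sum(bac_from_sensor(r) for r in samples) / 3, 3))  # 0.078 without it

Whether anything like this hidden step exists in the real device is exactly the kind of question a source code review would answer.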

As in electronic voting, where we have seen similar secrecy arguments, one can’t help suspecting that the real “secret” is that the software quality is not what it should be. A previous study of source code from New Jersey breath testers did appear to find some embarrassing errors.

Let’s hope that breath tester companies can do better than e-voting companies. A rigorous, independent evaluation of the breath tester source code would either determine that the code is sound, or it would uncover problems that could then be fixed, to restore confidence in the machines. Either way, the police in Minnesota would end up with a reliable tool for giving drunk drivers the punishment they deserve.


Sunlight on NASED ITA Reports

Short version: we now have gobs of voting system ITA reports, publicly available and hosted by the NSF ACCURATE e-voting center. As I explain below, ITAs were the Independent Testing Authority laboratories that tested voting systems for many years.

Long version: Before the Election Assistance Commission (EAC) took over the testing and certification of voting systems under the Help America Vote Act (HAVA), this critical function was performed by volunteers. The National Association of State Election Directors (NASED) recognized a need for voting system testing and partnered with the Federal Election Commission (FEC) to establish a qualification program that would test systems as having met or exceeded the requirements of the 1990 and 2002 Voting System Standards.*

However, as I’ve lamented many, many times over the years, the input, output and intermediate work product of the NASED testing regime were completely secret, due to proprietary concerns on behalf of the manufacturers. Once a system completed testing, members of the public could see that an entry was made in a publicly-available spreadsheet listing the tested components and a NASED qualification number for the system. But the public was permitted no other insight into the NASED qualification regime.

Researchers were convinced, from what evidence was available, that the quality of the testing was highly inadequate, and that neither the testing laboratories (called Independent Testing Authorities, or ITAs) nor the NASED technical committee had the expertise to perform adequate testing or to competently review the ultimate test reports submitted by the laboratories. Naturally, when reports of problems started to crop up, like the various Hursti vulnerabilities with Diebold memory cards, the NASED system scrambled to figure out what went wrong.

I now have more moderate views with respect to the NASED regime: sure, it was pretty bad and a lot of serious vulnerabilities slipped through the cracks, but I’m not yet convinced that just having the right people or a different process in place would have resulted in fewer problems in the field. To have fixed the NASED system would have required improvements on all fronts: the technology, the testing paradigms, the people involved, and the testing and certification process.

The EAC has since taken over testing and certification. Their process is notable in its much higher level of openness and accountability; the test plans are published (previously claimed as proprietary by the testing labs), the test reports are published (previously claimed as proprietary by the vendors) and the process is specified in detail with a program manual, a laboratory manual, notices of clarification, etc.

This is all great and it helps to increase the transparency of the EAC certification program. But, what about the past? What about the testing that NASED did? Well, we don’t know much about it for a number of reasons, chief among them that we never saw any of the materials mentioned above that are now available in the new EAC system.

Through a fortunate FOIA request made of the EAC on behalf of election sleuth Susan Greenhalgh, we now have available a slew of ITA reports from one of the ITAs, Ciber.

The reports are available at the following location (hosted by our NSF ACCURATE e-voting center):

http://accurate-voting.org/docs/ita-reports/

These reports cover the Software ITA testing performed by the ITA Ciber for the following voting systems:

  • Automark AIMS 1.0.9
  • Diebold GEMS 1.18.19
  • Diebold GEMS 1.18.22
  • Diebold GEMS 1.18.24
  • Diebold AccuVote-TSx Model D
  • Diebold AccuVote-TSx Model D w/ AccuView Printer
  • Diebold Assure 1.0
  • Diebold Assure 1.1
  • Diebold Election Media Processor 4.6.2
  • Diebold Optical Scan Accumulator Adapter
  • Hart System 4.0
  • Hart System 4.1
  • Hart System 6.0
  • Hart System 6.2
  • Hart System 6.2.1

I’ll be looking at these at my leisure over the coming weeks and pointing out interesting features of these reports and the associated correspondence included in the FOIA production.

*The distinction between certification and qualification, although vague, appears to be that under the NASED system, states did the ultimate certification of a voting system for fitness in future elections.