Archives for December 2006

Paper Trail Standard Advances

On Tuesday, the Technical Guidelines Development Committee (TGDC), the group drafting the next-generation Federal voting-machine standards, voted unanimously to have the standards require that new voting machines be software-independent, which in practice requires them to have some kind of paper trail.

(Officially, TGDC is drafting “guidelines”, but the states generally require compliance with the guidelines, so they are de facto standards. For brevity, I’ll call them standards.)

The first attempt to pass such a requirement failed on Monday, on a 6-6 vote; but a modified version passed unanimously on Tuesday. The most interesting modification was an exception for existing machines: new machines will have to be software-independent but already existing machines won’t. There’s no scientific or security rationale for treating new and old machines differently, so this is clearly a political compromise designed to lower the cost of compliance by sacrificing some security.

If you believe, as almost all computer scientists do, that paper trails are necessary today for security, you’ll be happy to see the requirement for new machines, but disappointed that existing paperless voting machines will be allowed to persist.

Whether you see the glass as half full or half empty depends on whether you see the quest for paper trails as mainly legal or mainly political, that is, whether you look to courts or legislatures for progress.

In court, the exception for existing machines will carry real weight, assuming it’s written clearly into the standard. It will be hard to get rid of the old machines by filing lawsuits; at the least, the new standards won’t help plaintiffs who try. If anything, the new standards may be read as ratifying the decision to stick with old, insecure machines.

In legislatures, on the other hand, the standard will be an official ratification of the fact that paper trails are preferable. The latest, greatest technology will use paper trails, and paperless designs will look old-fashioned. The exception for old machines will look like a money-saving compromise, and few legislators will want to be seen as risking democracy to save money.

As for me, I see legislatures more than courts, and politics more than lawyering, as driving the trend toward paper trails. Thirty-five states either have a paper trail statewide or require one to be adopted by 2008. The glass is already 70% full, and the new standards will help fill it the rest of the way.

Spam is Back

A quiet trend broke into the open today, when the New York Times ran a story by Brad Stone on the recent increase in email spam. The story claims that the volume of spam has doubled in recent months, which seems about right. Many spam filters have been overloaded, sending system administrators scrambling to buy more filtering capacity.

Six months ago, the conventional wisdom was that we had gotten the upper hand on spammers by using more advanced filters that relied on textual analysis, and by identifying and blocking the sources of spam. One smart venture capitalist I know declared spam to be a solved problem.
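As a rough illustration of what “textual analysis” meant in practice, here is a minimal sketch of a naive-Bayes-style token scorer, the family of techniques behind many of those filters. Everything in it is invented for illustration – the token probabilities, the default weight, the example messages – since real filters train on large corpora of labeled mail.

    import math
    import re

    # Toy per-token spam probabilities; a real filter learns these from
    # a large corpus of labeled mail. The values here are invented.
    SPAMMINESS = {"viagra": 0.99, "stock": 0.90, "meeting": 0.10, "lunch": 0.05}
    DEFAULT = 0.4  # unseen tokens lean slightly toward ham

    def spam_score(message):
        """Combine per-token evidence, naive-Bayes style, into one score."""
        tokens = re.findall(r"[a-z']+", message.lower())
        probs = [SPAMMINESS.get(t, DEFAULT) for t in tokens]
        if not probs:
            return 0.5  # nothing to analyze (e.g., an image-only message)
        log_spam = sum(math.log(p) for p in probs)
        log_ham = sum(math.log(1.0 - p) for p in probs)
        return 1.0 / (1.0 + math.exp(log_ham - log_spam))

    print(spam_score("hot stock tip"))        # high: spammy tokens dominate
    print(spam_score("lunch meeting today"))  # low: hammy tokens dominate

The weak spot is visible in the code: a message whose payload is an image full of random noise gives a scorer like this almost no tokens to work with.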

But now the spammers have adopted new tactics: sending spam from botnets (armies of compromised desktop computers), sending images rather than text, adding randomly varying noise to the messages to make them harder to analyze, and providing fewer URLs in messages. The effect of these changes is to neutralize the latest, greatest antispam tools; and so the spammers are pulling back ahead, for now.

In the long view, not much has changed. The arms race will continue, with each side deploying new tricks in response to the other side’s moves, unless one side is forced out by economics, which looks unlikely.

To win, the good guys must make the cost of sending a spam message exceed the expected payoff from that message. A spammer’s per-message cost and payoff are both very small, and probably getting smaller. The per-message payoff is probably decreasing as spammers are forced to adopt new payoff strategies (e.g., switching from selling bogus “medical” products to penny-stock manipulation). But their cost to send a message is also dropping as they start to use other people’s computers (without paying) and those computers get more and more capable. Right now the cost is dropping faster, so spam is increasing.
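To put toy numbers on that break-even condition – every figure below is invented for illustration, since real per-message costs and payoffs are tiny and hard to measure:

    # Toy break-even model of spam economics. All numbers are invented.
    cost_per_message = 0.00001   # dollars: near zero when using a botnet
    response_rate = 0.000002     # fraction of recipients who pay off
    payoff_per_response = 10.0   # dollars earned per successful response

    expected_payoff = response_rate * payoff_per_response
    profit_per_message = expected_payoff - cost_per_message

    print(f"expected payoff per message: ${expected_payoff:.6f}")
    print(f"profit per message:          ${profit_per_message:.6f}")

As long as the profit per message stays positive, spam keeps flowing; the defenders’ only winning move is to push the cost line above the payoff line.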

From the good guys’ perspective, the cost of spam filtering is increasing. Organizations are buying new spam-filtering services and deploying more computers to run them. The switch to image-based spam will force filters to use image analysis, which chews up a lot more computing power than the current textual analysis. And the increased volume of spam will make things even worse. Just as the good guys are trying to raise the spammers’ costs, the spammers’ tactics are raising the good guys’ costs.

Spam is a growing problem in other communication media too. Blog comment spam is rampant – this blog gets about eight hundred spam comments a day. At the moment our technology is managing them nicely (thanks to Akismet), but that could change. If the blog spammers get as clever as the email spammers, we’ll be in big trouble.

For Once, BCS Controversy Not the Computers' Fault

It’s that time of year again. You know, the time when sports pundits bad-mouth the Bowl Championship Series (BCS) for picking the wrong teams to play in college football’s championship game. The system is supposed to pick the two best teams. This year it picked Ohio State, clearly the best team, and Florida, a controversial choice given that Michigan arguably had better results.

Something like this happens every year. What makes this year different is that for once it’s not being blamed on computers.

The BCS uses a numerical formula that combines rankings from several sources, including human polls and computer ratings. In past years, the polls and computers differed slightly. The problem generally was that the computers missed the important nuances that human voters see. Computers didn’t know that games at the beginning of the year count much less, or that last year’s ranking is supposed to influence this year’s, or that games count more if they’re nationally televised, or that there’s a special bonus for Notre Dame or a retiring coach. And so the computers and humans sometimes disagreed.
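The combination itself is simple arithmetic. Here is a sketch of a BCS-style composite; the equal weights and every number below are illustrative only, since the real formula and its inputs changed from year to year:

    # Illustrative BCS-style composite: an equal-weight average of two
    # human polls and the mean of several computer rankings. Each input
    # is a team's share of the maximum possible points, in [0, 1]. All
    # numbers are made up to mirror the 2006 situation, not actual data.

    def composite(harris, coaches, computers):
        computer_avg = sum(computers) / len(computers)
        return (harris + coaches + computer_avg) / 3.0

    florida = composite(0.92, 0.93, [0.90, 0.91, 0.90])
    michigan = composite(0.91, 0.91, [0.90, 0.91, 0.90])  # computers tied
    print(f"Florida  {florida:.4f}")
    print(f"Michigan {michigan:.4f}")

When the computer inputs are identical, the composite turns entirely on the human polls.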

Human pundits, unsurprisingly, sided with the humans. The computer pundits all sided with the computers, but without an effective talk-radio presence they were shouted down.

This year the computers cleverly ducked responsibility by rating Florida and Michigan exactly even, thereby forcing humans to take the heat for picking one or the other. The humans picked Florida. Problem was, the humans had previously rated Michigan above Florida but somehow flipped the two at the end, on the basis of not much new evidence (Florida performing as expected against a good opponent). The bottom line was simple: an Ohio State-Florida game would be cooler than an Ohio State-Michigan one – yet another factor the computers didn’t know about.

Since this year’s controversy is the humans’ fault, will the computers be given more weight next year? Don’t count on it.

NIST Recommends Not Certifying Paperless Voting Machines

In an important development in e-voting policy, NIST has issued a report recommending that the next-generation federal voting-machine standards be written to prevent (re-)certification of today’s paperless e-voting systems. (NIST is the National Institute of Standards and Technology, a government agency previously called the National Bureau of Standards, and a leading source of independent technology expertise in the U.S. government.) The report is a recommendation to another government body, the Technical Guidelines Development Committee (TGDC), which is drafting the 2007 federal voting-machine standards. The new report is notable for its direct tone and unequivocal recommendation against unverifiable paperless voting systems, and for being a recommendation of NIST itself and not just of the report’s individual authors.

[UPDATE (Dec. 2): NIST has now modified the document’s text, for example by removing the “NIST recommends…” language in some places and adding a preface saying it is only a discussion draft.]

The key concept in the report is software independence.

A voting system is software-independent if a previously undetected change or error in its software cannot cause an undetectable change or error in an election outcome. In other words, it can be positively determined whether the voting system’s (typically, electronic) CVRs [cast-vote records] are accurate as cast by the voter or in error.

This gets to the heart of the problem with paperless voting: we can’t be sure the software in the machines on election day will work as expected. It’s difficult to tell for sure which software is present, and even if we do know which software is there we cannot be sure it will behave correctly. Today’s paperless e-voting systems (known as DREs) are not software-independent.

NIST does not know how to write testable requirements to make DREs secure, and NIST’s recommendation to the STS [a subcommittee of the TGDC] is that the DRE in practical terms cannot be made secure. Consequently, NIST and the STS recommend that [the 2007 federal voting standard] should require voting systems to be [software independent].

In other words, NIST recommends that the 2007 standard should be written to exclude DREs.

Though the software-independence requirement and the condemnation of DREs as unsecurable will rightly get most of the attention, the report makes three other good recommendations. First, attention should be paid to improving the usability and accessibility of voting systems that use paper. Second, the 2007 standard should include high-level discussion of new approaches to software independence, such as fancy cryptographic methods. Third, more research is needed to develop new kinds of voting technologies, with special attention paid to improving usability.

Years from now, when we look back on the recent DRE fad with what-were-we-thinking hindsight, we’ll see this NIST report as a turning point.