Archives for 2004

Rubin and Rescorla on E-Voting

There are two interesting new posts on e-voting over on ATAC.

In one post, Avi Rubin suggests a “hacking challenge” for e-voting technology: let experts tweak an existing e-voting system to rig it for one candidate, and then inject the tweaked system quietly into the certification pipeline and see if it passes. (All of this would be done with official approval and oversight, of course.)

In the other post (also at Educated Guesswork, with better comments), Eric Rescorla responds to Clive Thompson’s New York Times Magazine piece calling for open e-voting software. Thompson invoked the many-eyeballs phenomenon, saying that open software gets the benefit of inspection by many people, so that opening e-voting software would help to find any security flaws in it.

Eric counters by making two points. First, opening software just creates the opportunity to audit, but it doesn’t actually motivate skilled people to spend a lot of their scarce time doing a disciplined audit. Second, bugs can lurk in software for a long time, even in code that experts look at routinely. So, Eric argues, instituting a formal audit process that has real teeth will do more good than opening the code.

While I agree with Eric that open source software isn’t automatically more secure than closed source, I suspect that voting software may be the exceptional case where smart people will volunteer their time, or philanthropists will volunteer their money, to see that a serious audit actually happens. It’s true, in principle, that the same audit can happen if the software stays closed. But I think it’s much less likely to happen in practice with closed software – in a closed-source world, too many people have to approve the auditors or the audit procedures, and not all of those people will want to see a truly fair and comprehensive audit.

Eric also notes, correctly, that the main purpose of auditing is not to find all of the security flaws (a hopeless task) but to figure out how trustworthy the software is. To me, the main benefit of opening the code is that the trustworthiness of the code can become a matter of public debate; and the debate will be better if its participants can refer directly to the evidence.

Google Hires Ph.D.'s; Times Surprised

Yesterday’s New York Times ran a story by Randall Stross, marveling at the number of Ph.D.’s working at Google. Indeed, the story marveled about Google wanting to hire Ph.D.’s at all. Many other companies shun Ph.D.’s.

Deciding whether to hire bachelor's-level employees or Ph.D.'s really boils down to whether you want employees who are good at doing homework on short deadlines or employees who are good at figuring things out in an unstructured environment. (Like all generalizations, this is true only on average. There are plenty of outliers.) Google is a bit unusual in opting for the latter.

What the article doesn’t say is that Google does not hire just anybody with a Ph.D. diploma. They’re pretty careful about which Ph.D.’s they hire. Google can afford to be choosy since so many people seem to want to work there. Google benefits from a virtuous cycle that sometimes develops at a company, where the company has an unusual concentration of really smart employees, so lots of people want to work there, so the company can be very picky about whom it hires, thus allowing itself to hire more very smart people.

The article also hints at Google’s success in integrating research with production. The usual model in the industry is to hire a small number of eggheads and send them off to some distant building to Think Deep Thoughts, so as not to disturb the mass of employees who make products. By contrast, Google generally uses the very same people to do research and create products. They do this by letting every employee spend 20% of their time doing anything they like that might be useful to the company. Doing this ensures that the research is done by people who understand the problems that come up in the company’s day-to-day business.

Sustaining this model requires at least three things. First, you have to have employees who will use the unstructured research time productively; this is where the Ph.D.’s and other very smart people come in. Second, you need to maintain a work environment that is attractive to these people, because they’ll have no trouble finding work elsewhere if they want to leave. Third, management has to have the discipline to avoid constantly canceling the 20% research time in order to meet the deadline du jour.

Google does all of this well. They probably benefit also from the nature of their product, which generates revenue every time it is used (rather than only when customers decide to pay for an upgrade), and which can be improved incrementally. Revenue doesn’t depend on cranking out each year a version that can be sold as all-new, so the company can focus simply on making its products work well.

Can Google maintain all of this after it has gone public? My guess is that it can, as long as it is viewed as the technology leader in a lucrative area. If Google ever loses its aura, though, watch out – when the green eyeshades come out, many of those smart employees will leave for greener pastures, probably for a company that bills itself as “the new Google.”

Designed for Spying

A Mark Glassman story at the New York Times discusses the didtheyreadit email-tracking software that I wrote about previously.

The story quotes the head of didtheyreadit as saying that the purpose of the software is to tell whether an email reached its intended recipient. “I won’t deny that it has a potentially stealth purpose,” he adds. He implies pretty strongly that the stealthiness is just a side-effect and not in fact the main goal of the product.

The fact is that spying is built into the didtheyreadit product, by design. For example, it would have been easier for them to report to the sender only whether a message had ever been read: “Yes, it’s been read” or “No, it hasn’t been read yet”, and nothing more. Instead, they went to the extra trouble of reporting all kinds of additional information to the sender.

It does seem to be a side-effect of their web-bug-based design that didtheyreadit could gather much more information about where and when a message was read. But nothing forces them to actually collect and store this extra information, and nothing forces them to report it to anybody. They made a design choice, to store and pass on as much private information as they could.
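To make the design choice concrete, here is a minimal sketch of how a web-bug (“tracking pixel”) scheme works. All names, hostnames, and fields here are hypothetical illustrations of the general technique, not didtheyreadit’s actual code; the point is that the server operator chooses what to record.

```python
# Hypothetical sketch of a web-bug email tracker. The sender embeds an
# invisible 1x1 image with a per-message URL; when the recipient's mail
# client fetches it, the tracker's server learns the message was opened.

def make_tracked_email(body_html: str, message_id: str,
                       tracker_host: str = "tracker.example.com") -> str:
    """Append an invisible tracking pixel unique to this message."""
    pixel = (f'<img src="https://{tracker_host}/pixel/{message_id}.gif" '
             'width="1" height="1" alt="">')
    return body_html + pixel

def log_open_event(message_id: str, request_headers: dict,
                   client_ip: str, timestamp: str) -> dict:
    """What the tracker's server *can* record when the pixel is fetched.
    Storing the IP, time, and User-Agent (hence rough location and mail
    client) is a design choice; the server could instead keep only a
    boolean "opened" flag."""
    return {
        "message_id": message_id,
        "opened": True,
        "ip": client_ip,        # reveals approximate location
        "time": timestamp,      # reveals when the message was read
        "client": request_headers.get("User-Agent", "unknown"),
    }

html = make_tracked_email("<p>Hello!</p>", "msg-42")
event = log_open_event("msg-42", {"User-Agent": "SomeMailClient/1.0"},
                       "203.0.113.7", "2004-06-01T12:00:00Z")
```

Nothing in the pixel mechanism itself requires `log_open_event` to keep anything beyond the "opened" flag; the extra fields are collected because someone decided to collect them.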

Even the basic stealthiness of the product was a deliberate design choice. They are already adding an image to email messages. Why not make the image some kind of “delivery assured by didtheyreadit” icon? That way the message recipient would know what was happening; and the icon could be used for viral marketing – click it and you’re taken to the didtheyreadit site for a sales pitch. Why did they pass up this valuable marketing opportunity? They made a design choice to hide their product from email recipients.

Sometimes engineering imperatives force us to accept some bad features in order to get good ones. But this is not one of those cases. didtheyreadit is designed as a spying tool, and the vendor ought to admit it.

Wireless Unleashed

WirelessUnleashed is a new group blog, dedicated to wireless policy, from Kevin Werbach, Andrew Odlyzko, David Isenberg, and Clay Shirky. Based on the author list alone, it’s worth our attention.

E-Voting Testing Labs Not Independent

E-voting vendors often argue that their systems must be secure, because they have been tested by “independent” labs. Elise Ackerman’s story in Sunday’s San Jose Mercury News explains the depressing truth about how the testing process works.

There are only three labs, and they are overseen by a private body that is supported financially by the vendors. There is no government oversight. The labs have refused to release test results to state election officials, saying the results are proprietary and will be given only to the vendor whose product was tested:

Dan Reeder, a spokesman for Wyle, which functioned as the nation’s sole testing lab from 1994 to 1997, said the company’s policy is to provide information to the manufacturers who are its customers.

It’s worth noting, too, that the labs do not test the security of the e-voting systems; they only test the systems’ compliance with standards.

SysTest Labs President Brian Phillips said the security risks identified by the outside scientists were not covered by standards published by the Federal Election Commission. “So long as a system does not violate the requirements of the standards, it is OK,” Phillips said.

A few states do their own testing, or hire their own independent labs. It seems to me that state election officials should be able to get together and establish a truly independent testing procedure that has some teeth.