
Archives for January 2004

Senate File Pilfering "Extensive"

Charlie Savage reports in today’s Boston Globe:

Republican staff members of the US Senate Judiciary Committee infiltrated opposition computer files for a year, monitoring secret strategy memos and periodically passing on copies to the media, Senate officials told The Globe.

From the spring of 2002 until at least April 2003, members of the GOP committee staff exploited a computer glitch that allowed them to access restricted Democratic communications without a password. Trolling through hundreds of memos, they were able to read talking points and accounts of private meetings discussing which judicial nominees Democrats would fight – and with what tactics.

We already knew there were unauthorized accesses; the news here is that they were much more extensive than had previously been revealed, and that the results of the snooping were leaked to the media on several occasions.

Committee Chairman Orrin Hatch (a Republican) has strongly condemned the accesses, saying that he is “mortified that this improper, unethical and simply unacceptable breach of confidential files may have occurred on my watch.”

The accesses were possible because of a technician’s error, according to the Globe story:

A technician hired by the new judiciary chairman, Patrick Leahy, Democrat of Vermont, apparently made a mistake [in 2001] that allowed anyone to access newly created accounts on a Judiciary Committee server shared by both parties – even though the accounts were supposed to restrict access only to those with the right password.
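The Globe doesn’t say exactly what the mistake was, or even what platform the shared server ran. But the description suggests that new users’ files ended up readable by everyone on the machine. Purely as an illustration, here is a minimal sketch, assuming a Unix-style file server (which may well not match the committee’s actual setup) and a made-up share path, of the kind of audit that would flag that sort of over-sharing:

```python
import os
import stat

def find_overshared(root):
    """Walk a shared directory tree and list files that any local user can read.

    Only an illustration of the kind of misconfiguration described above;
    the committee server's actual platform and settings are not known.
    """
    exposed = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mode = os.stat(path).st_mode
            except OSError:
                continue  # unreadable or vanished; skip it
            if mode & stat.S_IROTH:  # the "other users may read" bit is set
                exposed.append(path)
    return exposed

if __name__ == "__main__":
    for path in find_overshared("/srv/judiciary-share"):  # hypothetical path
        print("world-readable:", path)
```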

An investigation is ongoing. It sounds like the investigators have a pretty good idea who the culprits are. Based on Sen. Hatch’s statement, it’s pretty clear that people will be fired. Criminal charges seem likely as well.

UPDATE (Friday, January 23): Today’s New York Times runs a surprisingly flat story by Neil A. Lewis. The story seems to buy the accused staffer’s lame rationalization of the accesses, and it treats the investigation, rather than the improper acts being investigated, as the main news. The headline even refers, euphemistically, to files that “went astray”. How much of this is sour grapes at being beaten to this story by the Globe?

Report Critical of Internet Voting

Four respected computer scientists, members of a government-commissioned study panel, have published a report critical of SERVE, a proposed system to let overseas military people vote in elections via a website. (Links: the report itself; John Schwartz story at N.Y. Times; Dan Keating story at Washington Post.) The report’s authors are David Jefferson, Avi Rubin, Barbara Simons, and David Wagner. The problem is not in the design of the voting technology itself, but in the simple fact that it is built on ordinary PCs and the Internet, leaving it open to all of the standard security attacks that ordinary systems face:

The real barrier to success is not a lack of vision, skill, resources, or dedication; it is the fact that, given the current Internet and PC security technology, and the goal of a secure, all-electronic remote voting system, the [program] has taken on an essentially impossible task. There really is no good way to build such a voting system without a radical change in overall architecture of the Internet and the PC, or some unforeseen security breakthrough.

SERVE advocates have two responses. The first is simple stonewalling (for example, saying “We have addressed all of those problems”, which is just false). I’ll ignore the stonewalling. The second response, which does have some force, says that SERVE is worth pursuing as an experiment. An experiment would have some value in understanding user-interface issues relating to e-voting; and the security risk would be acceptable as long as the experiment was small.

The authors of the report disagree, because they worry that the “experiment” would not be an experiment at all but just the first phase of deployment of a manifestly insecure system. If an experiment is done, and no fraud occurs – or at least no fraud is detected – this might be taken as showing that the system is secure, which it clearly is not.

This reminds me of an analogy used by the physicist Richard Feynman to criticize NASA’s safety culture after the Challenger space shuttle accident. (Feynman served on the Challenger commission, and famously demonstrated the brittleness of the rubber O-ring material by dunking it in his glass of ice water during a hearing.) Feynman likened NASA to a man playing Russian Roulette. The man spins the cylinder, puts the gun to his head, and pulls the trigger. Click; he survives. “Aha!” the man says, “This must be safe.”

UPDATE (Saturday, January 24): The Washington Post site has a chat with Avi Rubin, one of the report’s authors.

UPDATE (Thursday, February 6): The DoD has decided not to use SERVE in the November 2004 elections.

Bio Analogies in Computer Security

Every so often, somebody gets the idea that computers should detect viruses in the same way that the human immune system detects bio-viruses. When we face the problem of defending against unexpected computer viruses, it seems natural to emulate the body’s defenses against unexpected bio-viruses by creating a “digital immune system.”

It’s an enticing idea – our immune systems do defend us well against the bio-viruses they see. But if we dig a bit deeper, the analogy doesn’t seem so solid.

The human immune system is designed to stave off viruses that arose by natural evolution. Confronted by an engineered bio-weapon, our immune systems don’t do nearly so well. And computer viruses really are more like bio-weapons than like evolved viruses. Computer viruses, like bio-weapons, are designed by people who understand how the defensive systems work, and are engineered to evade the defenses.

As far as I can tell, a “digital immune system” is just a complicated machine learning algorithm that tries to learn how to tell virus code apart from nonvirus code. To succeed, it must outperform the other machine learning methods that are available. Maybe a biologically inspired learning algorithm will turn out to be the best, but that seems unlikely. In any case, such an algorithm must be justified by performance, and not merely by analogy.
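To make that concrete, here is a minimal sketch of what such a system boils down to: an ordinary classifier (here a byte-trigram Naive Bayes model, with made-up sample directories) trained to separate files labeled “virus” from files labeled “clean”. Nothing about it is biological, and whether it beats any other classifier is an empirical question.

```python
from pathlib import Path

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

def load_samples(folder):
    """Read files as raw bytes, decoded as latin-1 so every byte maps to one character."""
    return [p.read_bytes().decode("latin-1") for p in Path(folder).glob("*") if p.is_file()]

# Hypothetical training data: labeled examples of infected and clean code.
virus_samples = load_samples("samples/virus")   # label 1
clean_samples = load_samples("samples/clean")   # label 0

# Byte-trigram counts fed to a Naive Bayes classifier -- plain machine learning,
# with nothing "immune-system-like" about it.
model = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(3, 3)),
    MultinomialNB(),
)
model.fit(virus_samples + clean_samples,
          [1] * len(virus_samples) + [0] * len(clean_samples))

# Score a new, unknown file.
suspect = Path("incoming/attachment.exe").read_bytes().decode("latin-1")
print("probability of being a virus:", model.predict_proba([suspect])[0][1])
```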

Searching for Currency-Detection Software

Richard M. Smith observes that several products known to detect images of currency refer users to http://www.rulesforuse.org, a site that explains various countries’ laws about use of currency images. It seems a good bet that any software containing that URL has some kind of currency detection feature.

So you can look for currency-detecting software on your own computer. Just search the contents of your computer for the character string “http://www.rulesforuse.org”, and see if you find that string in any software such as an application or a printer driver.
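If you would rather script the search than rely on a desktop search tool, here is a minimal sketch. The spool directory it scans is only an example of where printer drivers tend to live on Windows; point it at whatever directories you want to check.

```python
import os

MARKER = b"http://www.rulesforuse.org"

def scan(root):
    """Yield paths of files that contain the rulesforuse.org URL."""
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    if MARKER in f.read():
                        yield path
            except OSError:
                continue  # locked, unreadable, or vanished; skip it

if __name__ == "__main__":
    # Example: the directory where printer drivers usually live on Windows.
    for hit in scan(r"C:\Windows\System32\spool\drivers"):
        print(hit)
```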

Richard reports finding the string in drivers for the following printers: HP 130, HP 230, HP 7150, HP 7345, HP 7350, and HP 7550.

Go ahead, try it yourself. If you find anything, post a comment here with the details.

Photoshop and Currency

Several things have been missed in the recent flare-up over Adobe Photoshop’s refusal to import images of currency. (For background, see Ted Bridis’s AP story.)

There’s a hidden gem in the Slashdot discussion, pointing to a comment by Markus Kuhn of Cambridge University. Markus established that some color copiers look for a special pattern of five circles (usually yellow or orange in color), and refuse to make high-res copies of documents containing them. Sure enough, the circles are common on paper money. (On the new U.S. $20 bills, they’re the zeroes in the little yellow “20”s that pepper the background on the back side of the bill.) Markus called the special five-dot pattern the “constellation EURion” because he first spotted it on Euro notes.
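For the curious, here is a toy sketch of how a detector might recognize a five-circle constellation once the circle centers have been found. The template coordinates below are invented for illustration and are not the real EURion geometry; the trick is simply to compare pairwise distances, normalized so the match doesn’t depend on where the pattern sits, how it is rotated, or how big it is.

```python
from itertools import combinations
from math import dist  # Python 3.8+

def signature(points):
    """Sorted pairwise distances, scaled so the largest is 1 (size-invariant)."""
    d = sorted(dist(a, b) for a, b in combinations(points, 2))
    return [x / d[-1] for x in d]

def matches(points, template, tolerance=0.05):
    """True if five circle centers have (nearly) the same shape as the template.

    Comparing normalized pairwise distances ignores translation, rotation,
    and uniform scaling of the pattern.
    """
    sig, ref = signature(points), signature(template)
    return all(abs(a - b) <= tolerance for a, b in zip(sig, ref))

# Invented template -- NOT the real EURion coordinates.
TEMPLATE = [(0.0, 0.0), (1.0, 0.2), (1.8, 1.1), (0.9, 2.0), (-0.3, 1.3)]

# A scaled and shifted copy of the same pattern should still match.
detected = [(x * 3 + 10, y * 3 + 4) for x, y in TEMPLATE]
print(matches(detected, TEMPLATE))  # True
```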

But reported experiments by others show that Photoshop is looking for something other than EURion. For example, Jon Sullivan says that Photoshop refuses to load this image, which nobody would mistake for currency.

There’s been lots of talk, too, about artists’ legitimate desire to use currency images, and lots of criticism of Adobe for stopping them from doing so. But check out the U.S. government’s legal limitations on representations of currency, which are much more restrictive than I expected. Representations of U.S. currency must be one-sided, and must differ substantially in size from real bills, and all copies (including computer files) must be destroyed after their final use. Photographs or other likenesses of other U.S. securities, or non-U.S. currency, must satisfy all of the preceding rules, and must be in black and white. (Other countries’ rules are available too.)

Finally, the European Central Bank (ECB) is considering recommending legislation to the EU that would require currency-recognition technology to be built into digital imaging products. Predictably, the ECB’s proposal is wildly overbroad, applying to “any equipment, software, or other product[s]” that are “capable of capturing images or transferring images into, or out of, computer systems, or of manipulating or producing digital images for the purposes of counterfeiting”. As usual, the “capable of” construction captures just about every general-purpose communication technology in existence – the Internet, for example, is clearly “capable of … transferring images into, or out of, computer systems”. Note to self: it’s way past time to write that piece about the difficulties of regulating general-purpose technologies.

[Thanks to Seth Schoen for pointers to some of this information.]