
Safire: US Blew Up Soviet Pipeline with Software Trojan Horse

William Safire tells an amazing story in his column in today’s New York Times. He says that in the early 1980s, the U.S. government hid malicious code in pipeline-control software that the Soviet Union then stole and used to run a huge trans-Siberian gas pipeline. The malicious code manipulated the pipeline’s valves and other controls in a way that caused a massive explosion, ruining the pipeline.

After that, Safire reports, “all the software [the Soviet Union] had stolen for years was suddenly suspect, which stopped or delayed the work of thousands of worried Russian technicians and scientists.”

I should emphasize that there is, as yet, no corroboration for this story, and that it appears in an editorial-page column rather than on the news pages of the Times (where it would presumably be subject to more stringent fact-checking, especially in light of the Times’ recent experience).

From a purely technical standpoint, this sort of thing is definitely possible. Any time you rely on somebody else to write your software, especially software that controls dangerous equipment, you’re trusting that person not to insert malicious code. Whether it’s true or not, Safire’s story is instructive.
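
To make the risk concrete, here is a deliberately simplified sketch of how little code such a trojan horse needs. Everything in it is hypothetical – nothing is publicly known about the actual pipeline code – but it shows why a hidden trigger buried in ordinary-looking control logic is so hard to spot in review:

    # Hypothetical sketch only -- not based on any real pipeline software.
    # A control loop whose setpoint function hides a time-based trigger.
    import time

    MAX_SAFE_PRESSURE = 80.0      # arbitrary units for this toy example
    TRIGGER_TIME = 2_000_000_000  # an activation moment chosen far in the future

    def target_pressure(now: float) -> float:
        """Return the pressure setpoint for the valve controller."""
        if now > TRIGGER_TIME:
            # The malicious branch: after the trigger time, quietly
            # command a setpoint far beyond the safe operating range.
            return MAX_SAFE_PRESSURE * 4
        return MAX_SAFE_PRESSURE * 0.7  # normal operation

    def control_loop(read_pressure, set_valve):
        # read_pressure() and set_valve() stand in for hardware I/O.
        while True:
            setpoint = target_pressure(time.time())
            set_valve(open_fraction=1.0 if read_pressure() < setpoint else 0.2)
            time.sleep(1.0)

The trigger is two lines inside a function that looks, and for years behaves, entirely normal – which is exactly the reviewer’s problem.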

Was the Senate File Pilfering Criminal?

Some people have argued that the Senate file pilfering could not have violated the law, because the files were reportedly on a shared network drive that was not password-protected. (See, for instance, Jack Shafer’s Slate article.) Assuming those facts, were the accesses unlawful?

Here’s the relevant wording from the Computer Fraud and Abuse Act (18 U.S.C. 1030):

Whoever … intentionally accesses a computer without authorization or exceeds authorized access, and thereby obtains … information from any department or agency of the United States … shall be punished as provided in subsection (c) …

[T]he term “exceeds authorized access” means to access a computer with authorization and to use such access to obtain or alter information in the computer that the accesser is not entitled so to obtain or alter

To my non-lawyer’s eye, this looks like a judgment call. It seems not to matter that the files were on a shared server or that the staffers may have been entitled to access other files on that server.

The key issue is whether the staffers were “entitled to” access the particular files in question. And this issue, to me at least, doesn’t look clear-cut. The fact that it was easy to access the files isn’t dispositive – “entitled to access” is not the same as “able to access”. (An “able to access” exception would render the provision vacuous – a violation would require someone to access information that they are unable to access.)

The lack of password protection cuts in favor of an entitlement to access, if failure to protect the files is taken to indicate a decision not to protect them, or at least an indifference to whether they were protected. But if the perpetrators knew that the failure to use password protection was a mistake, that would cut against entitlement. The rules and practices of the Senate seem relevant too, but I don’t know much about them.

The bottom line is that unsupported claims that the accesses were obviously lawful, or obviously unlawful, should be taken with a large grain of salt. I’d love to hear the opinion of a lawyer experienced with the CFAA.

(Disclaimer: This post is only about whether the accesses were lawful. Even if lawful, they appear unethical.)

Senate File Pilfering “Extensive”

Charlie Savage reports in today’s Boston Globe:

Republican staff members of the US Senate Judiciary Committee infiltrated opposition computer files for a year, monitoring secret strategy memos and periodically passing on copies to the media, Senate officials told The Globe.

From the spring of 2002 until at least April 2003, members of the GOP committee staff exploited a computer glitch that allowed them to access restricted Democratic communications without a password. Trolling through hundreds of memos, they were able to read talking points and accounts of private meetings discussing which judicial nominees Democrats would fight – and with what tactics.

We already knew there were unauthorized accesses; the news here is that they were much more extensive than had previously been revealed, and that the results of the snooping were leaked to the media on several occasions.

Committee Chairman Orrin Hatch (a Republican) has strongly condemned the accesses, saying that he is “mortified that this improper, unethical and simply unacceptable breach of confidential files may have occurred on my watch.”

The accesses were possible because of a technician’s error, according to the Globe story:

A technician hired by the new judiciary chairman, Patrick Leahy, Democrat of Vermont, apparently made a mistake [in 2001] that allowed anyone to access newly created accounts on a Judiciary Committee server shared by both parties – even though the accounts were supposed to restrict access only to those with the right password.

An investigation is ongoing. It sounds like the investigators have a pretty good idea who the culprits are. Based on Sen. Hatch’s statement, it’s pretty clear that people will be fired. Criminal charges seem likely as well.

UPDATE (Friday, January 23): Today’s New York Times runs a surprisingly flat story by Neil A. Lewis. The story seems to buy the accused staffer’s lame rationalization of the accesses, and it treats the investigation, rather than the improper acts being investigated, as the main news. The headline even refers, euphemistically, to files that “went astray”. How much of this is sour grapes at being beaten to this story by the Globe?

Bio Analogies in Computer Security

Every so often, somebody gets the idea that computers should detect viruses in the same way that the human immune system detects bio-viruses. Faced with the problem of how to defend against unexpected computer viruses, it seems natural to emulate the body’s defenses against unexpected bio-viruses, by creating a “digital immune system.”

It’s an enticing idea – our immune systems do defend us well against the bio-viruses they see. But if we dig a bit deeper, the analogy doesn’t seem so solid.

The human immune system is designed to stave off viruses that arose by natural evolution. Confronted by an engineered bio-weapon, our immune systems don’t do nearly so well. And computer viruses really are more like bio-weapons than like evolved viruses. Computer viruses, like bio-weapons, are designed by people who understand how the defensive systems work, and are engineered to evade the defenses.

As far as I can tell, a “digital immune system” is just a complicated machine learning algorithm that tries to learn how to tell virus code apart from nonvirus code. To succeed, it must outperform the other machine learning methods that are available. Maybe a biologically inspired learning algorithm will turn out to be the best, but that seems unlikely. In any case, such an algorithm must be justified by performance, and not merely by analogy.
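
To see the point, strip away the biological language. Here is a toy sketch – my own illustration, not any vendor’s actual product – of the kind of classifier that sits at the core of such a system. The features (byte trigrams) and the model (naive Bayes) are invented for the example; any “digital immune system” has to beat baselines of roughly this shape on the same task:

    # Toy virus/non-virus classifier: the unglamorous core of any
    # "digital immune system". Purely illustrative.
    from collections import Counter
    import math

    def trigrams(data: bytes) -> Counter:
        """Count overlapping 3-byte sequences in a file."""
        return Counter(data[i:i+3] for i in range(len(data) - 2))

    class NaiveBayes:
        def __init__(self):
            self.counts = {"virus": Counter(), "clean": Counter()}
            self.totals = {"virus": 0, "clean": 0}

        def train(self, data: bytes, label: str) -> None:
            grams = trigrams(data)
            self.counts[label].update(grams)
            self.totals[label] += sum(grams.values())

        def score(self, data: bytes) -> float:
            """Log-likelihood ratio; positive means 'looks like a virus'."""
            llr = 0.0
            for gram, n in trigrams(data).items():
                p_virus = (self.counts["virus"][gram] + 1) / (self.totals["virus"] + 1)
                p_clean = (self.counts["clean"][gram] + 1) / (self.totals["clean"] + 1)
                llr += n * math.log(p_virus / p_clean)
            return llr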

Insecurity Features

An “insecurity feature” is a product feature that looks like it provides security, but really doesn’t. Insecurity features can make you less secure, because they trick you into entrusting something valuable to a product that can’t protect it.

A classic example is the “Password to Modify” feature of Microsoft Word, as revealed recently on BugTraq by Thorsten Delbrouck-Konetzko. This feature allows a document’s author to establish a password that must be entered before the document can be modified. That would be a pretty useful feature – if Word actually provided it. But as Mr. Delbrouck-Konetzko revealed, it is easy for anybody to modify such a file without knowing the password. In other words, Password to Modify is an insecurity feature.

The flaw that caused this is pretty easy to understand. Word implemented the Password to Modify feature by storing the hash of the password at a special place in the Word document file. The problem was that there was nothing to connect the stored password-hash with the rest of the file, so there was nothing to stop somebody from moving a hashed password from one Word file to another. So suppose Alice created a file and put the password “A” on it. Bob could create his own file with password “B” and then copy his password into Alice’s file; then Bob could modify Alice’s file (since it contained his password, which he knew). For extra style points, when Bob was done he could copy Alice’s password back into the modified file.
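
A toy model makes the splice attack concrete. The file format below is invented for illustration – a fixed-size password hash simply prepended to the document body, with nothing binding the two together – but it captures the essential mistake:

    # Toy model of the "Password to Modify" flaw. The format here is
    # invented: a SHA-256 hash of the password, then the document body,
    # with nothing tying the hash to the body.
    import hashlib

    HASH_LEN = 32  # SHA-256 output size

    def make_file(password: str, body: bytes) -> bytes:
        return hashlib.sha256(password.encode()).digest() + body

    alice_file = make_file("A", b"Alice's document")
    bob_file = make_file("B", b"Bob's scratch file")

    # Bob's attack: graft his own password hash onto Alice's file.
    tampered = bob_file[:HASH_LEN] + alice_file[HASH_LEN:]
    assert tampered[:HASH_LEN] == hashlib.sha256(b"B").digest()
    # The check would now accept password "B" for Alice's document.

    # For extra style points, Bob restores Alice's hash afterward.
    final = alice_file[:HASH_LEN] + b"Bob's modified version"

In this toy model, the obvious repair is to make the check depend on the document contents as well as the password – say, a MAC over the body keyed by the password – so that a hash lifted from one file is useless on another. (Even then the protection is only advisory: a program that simply ignores the field can edit the file anyway.)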

Microsoft responded to this report by issuing a bulletin helpfully explaining that the feature was never really meant to provide security. The bulletin contains such statements as this:

Not all features that are found on the Security tab are designed to help make your documents and files more secure.

Unfortunately, Word’s user interface doesn’t do much of anything to help users distinguish insecurity features from real security features. For example, here is the relevant dialog box from my copy of Word 2000:

[Image: dialog box from Word 2000 showing the password-to-open and password-to-modify boxes, with the relevant area outlined in red.]

I’ve outlined the relevant area in red. The box on the left lets you establish a password to open the file; that’s a real security feature. The box on the right lets you establish a password to modify the file; that’s an insecurity feature. Nothing in the user interface tells you that the two features provide very different levels of protection.

There’s another lesson here, in the fact that such an obvious problem exists in a popular Microsoft product, despite Microsoft’s recent focus on security, and despite all of the genuine security experts who work there. This flaw reflects a bad decision made by some non-expert programmer or manager a long time ago, a decision that has persisted for so long, one assumes, through sheer inattention and inertia. And it’s not only Microsoft who failed to notice this for so long. Any good cryptographer, on hearing a description of what the Password to Modify feature supposedly did, should have been very suspicious. The problem was there to see for a long time; but apparently nobody looked.