
Warning Fatigue

One of the many problems facing security engineers is warning fatigue – the tendency of users who have seen too many security warnings to start ignoring the warnings altogether. Good designers think carefully about every warning they display, knowing that each added warning will dilute the warnings that were already there.

Warning fatigue is a significant security problem today. Users are so conditioned to warning boxes that they click them away, unread, as if instinctively swatting a fly.

Which brings us to H.R. 2752, the “Author, Consumer, and Computer Owner Protection and Security (ACCOPS) Act of 2003”, introduced in the House of Representatives in July, and discussed by Declan McCullagh in his latest column. The bill would require a security warning, and user consent, before allowing the download of any “software that, when installed on the user’s computer, enables third parties to store data on that computer, or use that computer to search other computers’ contents over the Internet.”

Most users already know that downloading software is potentially risky. Most users are already accustomed to swatting away warning boxes telling them so. One more warning is unlikely to deter the would-be KaZaa downloader.

This is especially true given that the same warning would have to be placed on many other types of programs that meet the bill’s criteria, including operating systems and web browsers. The ACCOPS warning will be just another of those dialog boxes that nobody reads.

A Virus Made Me Do It

According to press reports, an Alabama accountant has been acquitted on charges of tax evasion, after he argued that a computer virus had caused him to underreport his income three years in a row. He could not say which virus it was. Nor could he explain why it had affected his own return but none of the clients’ returns he had prepared on the same computer.

If the reports are accurate, the man’s claims sound bogus. I suppose the jury felt they had a reasonable doubt about whether his story was true.

It’s hard to see how juries can reach just outcomes in cases like this. Virus infestations are common, and it’s often hard to tell after the fact what happened. We’ll probably see more computer-virus defenses, and some of them will lead to unjust verdicts.

This is yet another price we have to pay for the persistent insecurity of our computer systems.

[Thanks to Brian Kernighan for pointing out this story.]

Why So Many Worms?

Many people have remarked on the recent flurry of worms and viruses going around on the Internet. Is this a trend, or just a random blip? A simple model predicts that worm/virus damage should increase in proportion to the square of the number of people on the Net.

First, it seems likely that the amount of damage done by each worm will be proportional to the number of people on the Net. This is based on three seemingly reasonable assumptions.

(1) Each worm will exploit a security flaw that exists (on average) on a fixed fraction of the machines on the Net.
(2) Each worm will infect a fixed fraction (nearly 100%, probably) of the susceptible machines.
(3) Each infected machine will suffer (or inflict on others) a fixed amount of damage.

Second, it seems likely that the rate of worm creation will also be proportional to the number of people on the Net. This is based on two more seemingly reasonable assumptions.

(4) A fixed (albeit very small) fraction of the people on the Net will have the knowledge and inclination to be active authors of worms.
(5) Would-be worm authors will find an ample supply of security flaws for their worms to exploit.

It follows from these five assumptions that the amount of worm damage per unit time will increase as the square of the number of people on the Net: damage per worm grows in proportion to the population (assumptions 1-3), the rate of worm creation grows in proportion to it as well (assumptions 4 and 5), and total damage per unit time is the product of the two. As the online population continues to increase, worm damage will increase even faster; even per capita worm damage will grow as the Net gets larger.
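
To make the arithmetic concrete, here is a minimal sketch of the model in Python. Every constant in it (the flaw fraction, the author fraction, and so on) is an illustrative placeholder, not an estimate.

    # Back-of-the-envelope model: total worm damage per unit time as a
    # function of n, the number of people on the Net.

    def damage_per_unit_time(n,
                             flaw_fraction=0.1,       # assumption 1 (placeholder)
                             infection_fraction=1.0,  # assumption 2 (placeholder)
                             damage_per_machine=1.0,  # assumption 3 (placeholder)
                             author_fraction=1e-6):   # assumption 4 (placeholder)
        damage_per_worm = flaw_fraction * infection_fraction * damage_per_machine * n
        worms_per_unit_time = author_fraction * n  # assumption 5: flaws never run out
        return damage_per_worm * worms_per_unit_time

    for n in (1_000_000, 2_000_000, 4_000_000):
        print(n, damage_per_unit_time(n))
    # Doubling n quadruples the damage: growth is quadratic in n.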

Assuming that the online population will keep growing, the only way out of this problem is to falsify one of the five assumptions. And each of the five assumptions seems pretty well entrenched.

We can try to address Assumption 1 by applying security patches promptly, but this carries costs of its own, and in any case it only works for flaws that have been discovered by (or reported to) the software vendor.

We can try to address Assumption 2 by building defenses that can quarantine a worm before it spreads too far. But aggressive worms spread very quickly, infecting all of the susceptible machines in the world in as little as ten minutes. We’re far from devising any safe and effective defense that can operate so quickly.
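
To see why a quarantine would have to act within minutes, here is a rough logistic-growth simulation. The susceptible-machine count and per-minute infection rate are invented for illustration; they are not measurements of any real worm.

    # Rough illustration of how fast a scanning worm saturates the
    # population of susceptible machines. All numbers are invented.

    susceptible = 360_000  # machines with the exploitable flaw (assumed)
    infected = 1.0
    beta = 2.0             # new infections per infected machine per minute (assumed)

    minute = 0
    while infected < 0.99 * susceptible:
        # A probe succeeds only against the still-uninfected fraction.
        infected = min(susceptible,
                       infected + infected * beta * (1 - infected / susceptible))
        minute += 1

    print(f"99% of susceptible machines infected in about {minute} minutes")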

Assumption 3 seems impossible to invalidate, since a successful worm has, by assumption, seized control of at least one significant part of the victim’s computer.

Assumption 4 seems to be human nature. Perhaps we could deter worm authors more effectively than we do, but deterrence will only go so far, especially given that we’ve had very little success at catching (non-rookie) worm authors, and that worms can originate anywhere in the world.

So we’re left with Assumption 5. Can we reduce the number of security flaws in popular software? Given the size and complexity of popular programs, and the current state of the art in secure software development, I doubt we can invalidate Assumption 5.

It sure looks like we’re in for an infestation of worms.

Why Aren’t Virus Attacks Worse?

Dan Simon notes a scary NYT op-ed, “Terrorism and the Biology Lab,” by Henry C. Kelly. Kelly argues convincingly that ordinary molecular biology students will soon be able to make evil bio-weapons. Simon points out the analogy to computer viruses, which are easily made and easily released. If serious bio-weapons become as common as computer viruses, we are indeed in deep trouble.

Eric Rescorla responds by noting that the computer viruses we actually see do relatively little damage, at least compared to what they might have done. Really malicious viruses, that is, ones engineered to do maximum damage, are rare. What we see instead are viruses designed to get attention and to show that the author could have done damage. The most likely explanation is that the authors of well-known viruses have written them as a sort of (twisted) intellectual exercise rather than out of spite. [By the way, don’t miss the comments on Eric’s post.]

This reminds me of a series of conversations I had a few years ago with a hotshot molecular-biology professor, about the national-security implications of bio-attacks versus cyber-attacks. I started out convinced that the cyber-attack threat, while real, was overstated, but bio-attacks terrified me. He had the converse view: bio-attacks were possible but overhyped, while cyber-attacks were the real nightmare scenario. Each of us tried to reassure the other that really large-scale malicious attacks of the type we knew best (cyber- for me, bio- for him) were harder to carry out, and less likely, than commonly believed.

It seems to me that both of us, having spent many days in the lab, understood how hard it really is to make a novel, sophisticated technology work as planned. Since nightmare attacks are, by definition, novel and sophisticated and thus not fully testable in advance, the odds are pretty high that something would go “wrong” for the attacker. With a better understanding of how software can go wrong, I fully appreciated the cyber-attacker’s problem; and with a better understanding of how bio-experiments can go wrong, my colleague fully appreciated the bio-attacker’s problem. If there is any reassurance here, it is in the likelihood that any would-be attacker will miss some detail and his attack will fizzle.

Palladium as P2P Enabler

A new paper by Stuart Schechter, Rachel Greenstadt, and Mike Smith, of Harvard, points out what should have been obvious all along: that “trusted computing” systems like Microsoft’s Palladium (since renamed the “Next-Generation Secure Computing Base”), if they work, can be used to make peer-to-peer file sharing systems essentially impervious to technical countermeasures.

The reason is that Palladium-like systems allow any software program absolute control over which hardware/software configurations it will interoperate with. So a P2P system can refuse to interoperate with any “unauthorized” version of itself. This would keep copyright owners (or anyone else) from spoofing file contents. Although the paper doesn’t point this out directly, a clever Palladium-enabled P2P system would make it much harder for anyone to trace the true source of a copyrighted file.
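
Here is a hypothetical sketch, in Python, of the gatekeeping step such a system would enable. The function names and digests are inventions for illustration; they do not correspond to any real Palladium interface, which would supply a hardware-signed measurement rather than the self-reported one simulated here.

    import hashlib

    # Digests of the client builds the network agrees to talk to
    # (the entry below is a placeholder, not a real build).
    AUTHORIZED_BUILDS = {
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
    }

    def measure(client_binary: bytes) -> str:
        # In a real trusted-computing system the hardware measures and
        # signs the running code, so a peer cannot lie about this value.
        return hashlib.sha256(client_binary).hexdigest()

    def accept_peer(attested_digest: str) -> bool:
        # Refuse to interoperate with any unauthorized version of the
        # client; spoofing agents running modified builds never get in.
        return attested_digest in AUTHORIZED_BUILDS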

The moral of this story is simple. Computer security is, ultimately, a battle for control of a computer system; and in that battle, both sides will use the available tools. The same tools that make robust networks also make robust P2P networks. The same tools that prevent infiltration by viruses also prevent infiltration by spoofing agents.