November 21, 2024

Why So Many Worms?

Many people have remarked on the recent flurry of worms and viruses going around on the Internet. Is this a trend, or just a random blip? A simple model predicts that worm/virus damage should increase in proportion to the square of the number of people on the Net.

First, it seems likely that the amount of damage done by each worm will be proportional to the number of people on the Net. This is based on three seemingly reasonable assumptions.

(1) Each worm will exploit a security flaw that exists (on average) on a fixed fraction of the machines on the Net.
(2) Each worm will infect a fixed fraction (nearly 100%, probably) of the susceptible machines.
(3) Each infected machine will suffer (or inflict on others) a fixed amount of damage.

Second, it seems likely that the rate of worm creation will also be proportional to the number of people on the Net. This is based on two more seemingly reasonable assumptions.

(4) A fixed (albeit very small) fraction of the people on the Net will have the knowledge and inclination to be active authors of worms.
(5) Would-be worm authors will find an ample supply of security flaws for their worms to exploit.

It follows from these five assumptions that the amount of worm damage per unit time will increase as the square of the number of people on the Net. As the online population continues to increase, worm damage will increase even faster. Per capita worm damage will grow as the Net gets larger.
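For concreteness, the argument can be reduced to a back-of-the-envelope calculation. Every constant below is an invented placeholder for one of the fixed fractions in the assumptions; only the scaling behavior matters.

```python
# Back-of-the-envelope model of the argument above. Every constant
# is an invented placeholder; only the n-squared scaling matters.

def damage_rate(n_users,
                flaw_fraction=0.2,       # (1) fraction of machines with the flaw
                infect_fraction=0.95,    # (2) fraction of susceptible machines infected
                damage_per_machine=1.0,  # (3) damage units per infected machine
                author_fraction=1e-6):   # (4) fraction of users who write worms
    """Worm damage per unit time as a function of Net population.

    Assumption (5) -- an ample supply of flaws -- is implicit: the
    number of worms is limited only by the number of authors.
    """
    damage_per_worm = n_users * flaw_fraction * infect_fraction * damage_per_machine
    worms_per_period = n_users * author_fraction
    return worms_per_period * damage_per_worm

# Doubling the population quadruples the damage rate, since both the
# number of worms and the damage per worm scale linearly with n_users:
ratio = damage_rate(2_000_000) / damage_rate(1_000_000)
```

Note that the particular constants cancel out of the ratio, which is why the quadratic conclusion doesn't depend on their values.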

Assuming that the online population will keep growing, the only way out of this problem is to falsify one of the five assumptions. And each of the five assumptions seems pretty well entrenched.

We can try to address Assumption 1 by applying security patches promptly, but this carries costs of its own, and in any case it only works for flaws that have been discovered by (or reported to) the software vendor.

We can try to address Assumption 2 by building defenses that can quarantine a worm before it spreads too far. But aggressive worms spread very quickly, infecting all of the susceptible machines in the world in as little as ten minutes. We’re far from devising any safe and effective defense that can operate so quickly.

Assumption 3 seems impossible to prevent, since a successful worm is assumed to have seized control of at least one significant part of the victim’s computer.

Assumption 4 seems to be human nature. Perhaps we could deter worm authors more effectively than we do, but deterrence will only go so far, especially given that we’ve had very little success so far at catching (non-rookie) worm authors, and that worms can originate anywhere in the world.

So we’re left with Assumption 5. Can we reduce the number of security flaws in popular software? Given the size and complexity of popular programs, and the current state of the art in secure software development, I doubt we can invalidate Assumption 5.

It sure looks like we’re in for an infestation of worms.

Comments

  1. …I use an old Macintosh to read e-mail…even if someone destroyed it with some sort of tailored virus…why would I care?

  2. Steve: I agree that that is a much more difficult problem. On the other hand, if you look at e-mail worms, they seem to be relatively backloaded, so it’s easy to believe that a set of static filters with a good distribution system would make a big difference on that front.

  3. EKR: (re: spam-like techniques) You’re using static filters and they work great. I use them too. It’s systems that try to predict that a new type of worm or virus has occurred based on newly observed behavior that get into trouble.

    Some spam engines use evidence-based techniques to predict whether new email is spam. These techniques (especially the stuff used by Yahoo!) seem to work OK for most users, but they don’t seem to work well at stopping zero-hour virus/worm proliferation. I personally think there is promise here, though.
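As a rough illustration of the evidence-based scoring this comment describes, a toy Bayesian-style filter might look like the sketch below. The per-token probabilities are invented, not taken from any real engine.

```python
# Toy sketch of evidence-based message scoring, in the spirit of
# Bayesian spam filters. The per-token probabilities below are
# invented for illustration; a real engine learns them from a corpus.
import math

# token -> (P(token | spam), P(token | legitimate)), all made up:
TOKEN_PROBS = {
    "viagra":  (0.90, 0.01),
    "invoice": (0.40, 0.20),
    "meeting": (0.05, 0.30),
}

def spam_score(tokens, prior_spam=0.5):
    """Combine per-token evidence into P(spam | tokens) via Bayes' rule."""
    log_odds = math.log(prior_spam / (1.0 - prior_spam))
    for token in tokens:
        if token in TOKEN_PROBS:
            p_spam, p_ham = TOKEN_PROBS[token]
            log_odds += math.log(p_spam / p_ham)
    return 1.0 / (1.0 + math.exp(-log_odds))
```

With these invented numbers, a message containing “viagra” scores about 0.99 while one containing only “meeting” scores about 0.14. The zero-hour problem is visible in the structure: a brand-new worm’s tokens aren’t in the table at all, so the score stays at the prior.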

  4. “many people have tried to do spam-like techniques on virus/worms. However, to get any degree of effectiveness at blocking the virus/worm code, the false positive rate has been way too high to be useable.”

    I’m not sure what you mean by “spam-like techniques”. I’ve been using procmail and postfix filters to block out SoBig for the past 2-3 weeks. Works great. Moreover, there are lots of virus-filtering firewalls that do a pretty good job of preventing infection.

  5. Hackers write for the platforms that offer the highest return, i.e., the ones that let them infect the most machines. This means that the more popular the platform, the more likely it is to encounter infection. The simplest way to avoid getting viruses and worms yourself is to use capable but less popular platforms. In order, I would suggest: 1) don’t use Outlook; 2) don’t use Windows; 3) don’t use Intel hardware. If you do all three, you are very unlikely to become infected.

  6. WR: many people have tried to do spam-like techniques on virus/worms. However, to get any degree of effectiveness at blocking the virus/worm code, the false positive rate has been way too high to be useable.

  7. I’m surprised anti-spam techniques haven’t been applied yet to this issue. Most Anti-virus software currently requires a known signature and will not do anything about unknown viruses no matter how obvious.

    If I receive a Word document containing a macro with various features – IP/web addresses, file copying, e-mail access, self-duplication code, etc. – it is extremely likely to be dangerous code and should be quarantined.

    If a dozen different emails arrive (server level or local) that all have the same executable attachment containing questionable code, but from different people, it should be quarantined.

    That isn’t going to get everything, but it would sure catch a lot of this stuff.
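The second heuristic in this comment could be sketched roughly as follows. The threshold of three distinct senders is an arbitrary illustrative choice, not a recommendation.

```python
# Sketch of the heuristic above: quarantine an executable attachment
# once byte-identical copies arrive from enough distinct senders.
# The threshold of 3 senders is an arbitrary illustrative choice.
import hashlib
from collections import defaultdict

THRESHOLD = 3
_senders_seen = defaultdict(set)   # attachment digest -> set of senders

def should_quarantine(sender, attachment_bytes):
    """Return True once THRESHOLD distinct senders have sent this payload."""
    digest = hashlib.sha256(attachment_bytes).hexdigest()
    _senders_seen[digest].add(sender)
    return len(_senders_seen[digest]) >= THRESHOLD

# The same worm body arriving from three different people trips the filter:
worm = b"MZ\x90\x00 ... identical worm payload ..."
assert not should_quarantine("alice@example.com", worm)
assert not should_quarantine("bob@example.com", worm)
assert should_quarantine("carol@example.com", worm)
```

Hashing the attachment bytes means the rule only catches byte-identical copies; a polymorphic worm that varies its body would evade it, which is one reason this catches “a lot of this stuff” but not everything.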

  8. SARS is nothing by comparison …

    More and more worms and viruses are appearing on the Internet. A trend, or a temporary phenomenon?…

  9. I don’t mean to turn this into an argument, but I have never, not even once, received a piece of Visual Basic in my mail that was not a virus. I have received .exe files, but only as a part of performing actual work; none of my friends or family has ever sent me a .exe, and if the ability to receive and run these files was turned off by default, nobody I know would suffer any inconvenience. If this feature simply required user intervention to enable it, the viruses would not propagate, because most people would stay with the default settings. Based on my experience, it is not at all true that “most people” would be inconvenienced by Java-style sandboxing — for my personal sample, ALL people derive NO utility at all from executable email (.EXE, .VBS, .PIF), and hence would not even be aware of any restrictions, no matter how draconian.

    Or, failing that, here is a different tack on allowing executable content while thwarting viruses. Have the OS limit creation of outbound SMTP connections to trusted programs. This can be the mailer itself (the user agent) or something that acts as a proxy. Any mailing of executable content requires a quick authentication. You do not even need to type a password — just dismiss an alert box, to indicate that a human was in the loop.

  10. To EKR: if the Usenix symposium papers represent the best efforts in the field then we’re in serious trouble.

    To David Chase: You have good reason to believe your assumptions are correct. The primary problem with sandboxing is that the functionality restrictions are too great for most people, and Microsoft has been working on safer ways to extend sandboxing such that the openings they create are watched more carefully. Java and .Net runtime implementations are early versions of this, as are Microsoft’s SAFER APIs in W2K and XP.

    The Colusa acquisition probably helped Microsoft get an injection of new ideas in that area, since they largely stalled on their own.

  11. (In response to EKR’s comment on sandboxes). A properly designed sandbox would have restrictions similar to those placed on untrusted Java running in a browser. The code cannot open any files on the client machine, nor can it open general-purpose network connections. In general, a Java applet cannot read files, cannot modify files, and cannot connect to any machine other than the one from which the HTML was retrieved. Any windows created by the applet must be decorated in a way that prevents them from spoofing a window that might legitimately request the user’s password. This is a solved problem in browsers, barring the rare bug in the sandbox implementation. The mailer is most definitely NOT accessible from within the sandbox.

    A binary sandbox could use the technology developed by Lucco and Wahbe (at Colusa Software, acquired by Microsoft). Similar restrictions would apply to file and network operations.

    And, as I noted previously, the tossed-in-the-trash sandbox (no operations allowed, at all) has not cramped my use of email in the least, and it provides simple, 100% reliable protection against email viruses like SoBig.

  12. Open source (i.e., well-reviewed code) and heterogeneous networks are a good defense: too many computing environments standardize on brand names rather than on functionality and interoperable, standardized protocols.

  13. It seems that many (including myself) have chided M$ for being largely responsible for the recent problems… however, Prof. Felten, do you think this could be a tactic to push Trusted Computing? (That is, there is a need that we can only fulfill with the proper machine architecture)

    It would seem that M$ could benefit from stepping away from the cathedral method of software development… at least in part. For example, they could have a couple of “striker” coders (used in the sense of the soccer position “striker”, where a specific player doesn’t have a fixed position but plays wherever the ball is) who have a more holistic view of the M$ codebase (for Windows or a given app).

  14. There’s actually quite a bit of work on assumption 5. In the latest USENIX Security Symposium, for instance, there were 6 papers on hardening alone.

    Unfortunately, the sandboxing approach doesn’t work well. Look at SoBig, for instance, which infects your mailer and gets it to … send out mail. Now, sandboxing may stop SoBig as it is now, but in principle it can’t protect against that kind of worm.

  15. I think we can do a lot to reduce the number of security flaws in software.

    First, get serious about sandboxing all the different kinds of executable content (Java, JavaScript, VB, Wintel executables). In practice, the trash can is a good sandbox for all script-carrying email. I do not know one single person who has ever derived any utility from email that contained Java, JavaScript, Visual Basic, or a Wintel executable. Perhaps someone, somewhere, makes use of this technology, but the default should be that it is turned off. This would essentially kill off all the email viruses.

    Second, we kill the buffer-overflow bugs by (re)writing all the popular software in a safe language. I don’t care which one, there are several to choose from (Java, Lisp, Smalltalk, Modula-3, C#). This stops Nimda, Code Red, and Blaster dead in their tracks. Failing that, vendors (in particular, Microsoft) can do a better job of shipping systems without all ports open and active by default.

    Third, for those cases where scripts do need to be run, the sandbox needs to be better, and the security model needs to be better. The typical case for running “trustworthy” code from over the net is that an alert box pops up asking me to grant permission for the software to modify my system in some unspecified and unlimited way. That’s not acceptable — it makes an informed decision impossible, and it trains people to say “yes” when they really should not. Scripts could perhaps run in a copy-on-write virtual world laid on top of the real one, with changes copied back only after they were approved by something checking for virus-like behavior. This would also have a lighter touch than many of the current “virus checkers”, which seem to interfere with all sorts of operations, and consequently make the system slow and crashy.

    None of this is rocket science. We know how to build sandboxes. It’s easy enough to rip out the code from mailers that runs Visual Basic and .exe attachments. This “feature” has little or no utility; we won’t miss it.
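The “trash can sandbox” for executable attachments described in this comment can be sketched with Python’s standard email library. The blocked-extension list here is just the ones the commenters name; a real deployment would be more thorough.

```python
# Sketch of the "trash can is a good sandbox" idea: strip attachments
# with the executable extensions named above before the mail is read.
# Uses only the standard library; the extension list is illustrative.
from email.message import EmailMessage

BLOCKED_EXTENSIONS = (".exe", ".vbs", ".pif")

def strip_executables(msg):
    """Replace any blocked-extension attachment with a plain-text notice."""
    for part in msg.walk():
        filename = part.get_filename()
        if filename and filename.lower().endswith(BLOCKED_EXTENSIONS):
            # set_content clears the old Content-* headers (including the
            # filename) and turns the part into harmless text/plain.
            part.set_content("[executable attachment removed]")

# Demo: a message with a .exe attachment, scrubbed before delivery.
msg = EmailMessage()
msg["Subject"] = "photos"
msg.set_content("see attachment")
msg.add_attachment(b"MZ...", maintype="application",
                   subtype="octet-stream", filename="photos.exe")
strip_executables(msg)
```

Filtering on filename extension is deliberately crude; it matches the comment’s point that the feature has so little legitimate use that a blunt default-off rule costs almost nothing.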

  16. As the recent round of Sobig shows:

    6) Many, if not most, computer users treat their computers like a toaster rather than like a car that needs regular maintenance and care in use.

    Did Sobig actually exploit any security holes, other than the one between the keyboard and the monitor? A proper security model can help mitigate this, but it’s a long, long way off from being on most desks.

    When the first couple Sobig emails hit my inbox, before the news hit, I actually thought someone was deliberately targeting me. Either way, there was no way in hell I was going to even open the email, much less run the attachment. Sigh.

  17. Interesting. Did you consider that changing the restrictions on the public I/O streams could have a significant impact on Assumption 5?

    As an example, what if every Windows desktop system had a default policy of dropping all packets unless they belonged to a locally originated session, AND translating all inbound and outbound email to plain TEXT?

    As a more way-out example, Intel processors actually had the ability to mark data segments as not allowing code execution. What if we only stuck data into such segments?

    I guess the point is, isn’t ANYBODY actually working on advances in security technology that would combat assumption 5? If not, that’s a startup that should be funded.

  18. Read This

    This is a good look at how bad a worm attack could be if they pulled out all of the stops. It makes you think long and hard.

  19. I’d say the weak point is (2) – not so much that everyone should move to a non-Windows OS, but that, say, half of users should. That 50% mark (not to discount other OSes) is what creates maximum heterogeneity in the wild. Such a state would make it easier to contain viruses by reducing the fixed percentage of vulnerable targets, or at least raise the bar for virus writers by forcing them to integrate multi-vendor exploits into their worms.

    “Hackers loooooooove standards!”, to mangle a phrase from the Ramen worm. Ramen was a Linux worm, ironically enough, that exploited a common vulnerability in a stock Red Hat distribution at the time.

    Ideally, the more different OSes people run, and the more competitive the balance between them, the more difficult it is to find a universal vulnerability.

    Unfortunately, users benefit from standards too – standard protocols are the reason we can all talk to each other so easily. We can, however, use standard protocols without standard software (and without its standard vulnerabilities).

  20. I’m tempted to say Assumption 5 has the implication “use Linux” 🙂

    I know the counter-argument is that Linux has flaws too. However, the counter-counter argument is that it doesn’t have as many of them.

    Note there are some deep implications to Assumption 3. Biologically, human epidemics have sometimes been addressed by measures which go against human nature, in a very authoritarian way – quarantines and forced treatment.

    We may see calls to do something similar here – remote installation of patches – which of course brings its own, err, can of worms.