Lately, computer security researchers have been pointing out the risks of software monoculture. The idea is that if everybody uses the same software product, then a single virtual pathogen can wipe out the entire population, like Dutch Elm Disease mowing down a row of identical trees. A more diverse population would better resist infection. While this basic observation is accurate, the economics of monoculture vulnerability are subtle. Let’s unpack them a bit.
First, we need to review why monoculture is a problem. The more common a product is, the more it will suffer from infection by malware (computer viruses and worms), for two reasons. First, common products make attractive targets, so the bad guys are more likely to attack them. Second, infections of common products spread rapidly, because an attempt to propagate to a new host is likely to succeed if a high fraction of hosts are running the targeted product. Because of these twin factors, common products are much more prone to malware problems than are rare products. Let’s call this increased security risk the “monoculture penalty” associated with the popular product.
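The two factors compound each other, which a minimal sketch can make concrete (my own illustration, not from the post; the linear scaling of attacker attention and of propagation success are both assumptions): if a product's market share is s, and attacker attention and propagation success each scale roughly with s, then the expected penalty grows like s², not linearly.

```python
# Toy sketch (assumptions, not measurements): a product with market share
# `share` draws attacker attention in proportion to `share`, and each
# propagation attempt succeeds with probability `share`, so the expected
# "monoculture penalty" scales roughly with the square of market share.
def monoculture_penalty(share: float) -> float:
    attacker_attention = share   # popular targets draw more attacks
    spread_success = share       # random probes hit compatible hosts
    return attacker_attention * spread_success

# Under these assumptions, a product with 90% share faces far more than
# nine times the penalty of a product with 10% share:
for s in (0.10, 0.50, 0.90):
    print(f"share {s:.2f} -> relative penalty {monoculture_penalty(s):.2f}")
```

The quadratic shape is the point: doubling a product's popularity more than doubles its expected malware burden, which is why the penalty bites hardest at near-monoculture shares.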
The monoculture penalty affects the incentives of consumers, making otherwise unpopular products more attractive due to their smaller penalty. If this effect is strong enough, it will prevent monoculture as consumers protect themselves by shunning popular products. Often, however, this effect will be outweighed by consumers’ desire for compatibility, which has the opposite effect of making popular products more valuable. It might be that monoculture is efficient because its compatibility benefits outweigh its security costs. And it might be that the market will make the right decision about whether to adopt a monoculture.
Or maybe not. At least three factors confound this analysis. First, monoculture is often another word for monopoly, and monopolists behave differently, and often less efficiently, than firms in competitive markets.
Second, if you decide to adopt a popular product, you incur a monoculture penalty. Of course, you take that into account in deciding whether to do so. But in adopting the popular product, you also increase the monoculture penalties paid by other people – and you have no incentive to avoid this harm to others. This externality will make you too eager to adopt the popular product, and there is no practical way for the other affected people to pay you to protect their interests.
Third, it may be possible to have the advantages of compatibility, without the risks of monoculture, thereby allowing users to work together while suffering a lower monoculture penalty. Precisely how to do this is a matter of ongoing research.
This looks like a juicy problem for some economist to tackle, perhaps with help from a techie or two. A model accounting for the incentives of consumers, producers, and malware authors might tell us something interesting.
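To suggest the flavor of such a model, here is one hypothetical toy sketch (entirely my own; the compatibility benefit b·s, the quadratic penalty c·s², and the simple adjustment dynamic are all assumptions, not anything from the post): each period, the popular product's share s drifts up when its compatibility benefit exceeds its monoculture penalty, and down otherwise.

```python
# Hypothetical toy model (assumptions: linear compatibility benefit b*s,
# quadratic monoculture penalty c*s^2, simple adjustment dynamic).
def equilibrium_share(b: float, c: float, s0: float = 0.6,
                      steps: int = 5000) -> float:
    """Iterate adoption of the popular product until its share settles."""
    s = s0
    for _ in range(steps):
        net_payoff = b * s - c * s * s   # benefit minus expected penalty
        s += 0.01 * net_payoff           # adopters respond to net payoff
        s = min(1.0, max(0.0, s))        # share stays in [0, 1]
    return s

# When compatibility dominates (b/c >= 1), the market tips into
# monoculture; a steep enough penalty leaves an interior split near b/c.
print(equilibrium_share(b=1.0, c=0.5))   # tips to full monoculture
print(equilibrium_share(b=1.0, c=2.0))   # settles at an interior share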
Monoculture Debate: Geer vs. Charney
Yesterday the USENIX Conference featured a debate between Dan Geer and Scott Charney about whether operating-system monoculture is a threat to computer security. (Dan Geer is a prominent security expert who co-wrote last year’s CCIA report on the monoc…
A couple of the posts seem to narrowly define a monoculture as a software vendor’s monopoly. The earlier post that discussed the positive effects of standards (compatibility, etc.) is dead on. If you define a monoculture simply as the widespread use of proprietary software that approaches or constitutes a monopoly, then you have defined the issue with a policy bias, and any analysis will suffer from the narrowness of the definition. Industry standards and the widespread use of a particular open source software product must be lumped together with other monocultures, because the same negative and positive factors are equally at play (compatibility, viral effects, etc.).
One interesting difference with open source software might be its capacity to cope with security issues either before or after their exploitation. Perhaps this is the road to having the positives and avoiding or mitigating some of the monoculture negatives.
To Dan’s comment: Ed seems to be addressing the cost of actual attacks and not the potential for attacks.
The potential for attacks could be disastrous, but if attackers aren’t incentivized to attack the system, do we care as much?
I think Ross Anderson’s paper on perverse economic incentives (http://www.ftp.cl.cam.ac.uk/ftp/users/rja14/econ.pdf) provides a good basis for discussion. His paper argues:
1) Finding security bugs is an attacker’s market today.
2) Even a hugely dedicated team of individuals will have trouble stopping serious bugs from being found by rogue attackers.
3) Business interests conflict with maximizing security protection, making the job of even a dedicated team very difficult.
Point 3 seems to be at the heart of the monoculture argument. The business practices that made Microsoft dominant also contribute to its problems with security defense, so it seems natural that its products would suffer these losses. The conclusion is that it can be safer not to use Microsoft products if your primary fear is being caught up in an attack against the Microsoft-product-using population. However, this calculation ignores the risk of being attacked through your new choice.
Arguably, the potential to be attacked through the non-monoculture choice is equal (if not greater), because it is an attacker’s market. However, the hope of the switchers is that attackers lack incentives to target them as individuals.
This is my justification for my use of non-Microsoft products on my higher-value systems. And, in practice, it works very well.
Dan, that’s a very good point, that our goal should be to minimize the damage to the network’s resources as a whole, and diversity won’t achieve that all by itself.
However, the analogy to security protocols doesn’t work, because designing a secure OS is vastly harder than designing a secure protocol or cryptosystem. Something like SSL or DES can be fully specified in a few pages, while an OS would take a whole book if not a bookshelf. And since the difficulty of evaluation increases enormously as the size and complexity of the system grow, there is no way we can get an appreciable degree of confidence that a particular OS is secure.
I don’t see why a monoculture is inherently less secure than diversity. In nature, both attackers and defenders are shaped by random variation and selection, and hence both use diversity to increase the number of novel attacks and defenses, and to raise the probability that at least one variety will survive. In the world of computers, however, the goal is to achieve maximal overall security and uptime, not to maximize the likelihood that some machine, somewhere, will still be running. Moreover, applied design, implementation, and evaluation resources can make a particular system much, much safer than a randomly generated one, and to the extent that diversity dilutes the use of those resources, it may inhibit the emergence of highly secure, reliable systems.
In the field of security protocols, it has long been accepted that designing a very small number of standard security protocols, studying the heck out of them, and then trying to get everybody to use them, is vastly superior to having everyone “roll their own”. It’s true that an undetected hole in a widely-used protocol could be disastrous, but it’s also assumed to be pretty unlikely, once the protocol has been sufficiently vetted. On the other hand, the existence of lots of security protocols virtually guarantees that many security holes will slip under the radar, because the community of intelligent, experienced, interested analysts simply can’t cover the entire range of protocols in use.
…the universality of a widely-accepted standard…
What does that have to do with a monoculture? Standards are not a benefit of a monoculture. Monocultures (monopolies) are in fact the opposite of standards and oppose the creation of standards. PPP, SMTP, HTTP, and JPEG are all widely-accepted standards that help to prevent monocultures.
Cypherpunk,
I was trying to include the positive externalities. That’s what I was driving at with this passage:
unfortunately, the monoculture penalty is not paid until it’s too late (that is, harder to change your decision) whereas the anti-monoculture penalty (Linux, Mac OS X) is paid up front, every damn day!
In my opinion, monoculture by itself is much less of a problem in computer security than (1) Microsoft’s priorities and (2) sloppy coding (e.g. buffer overflows). It is apparent that Microsoft figures that it will make more money by making Windows as easy to use as possible than it would by making Windows more secure but harder to use. That means, for example, that executable email attachments or scripts in web pages can be run with little or no user intervention, and have full access to the user’s files and in some cases to operating system files as well. With a system designed like this, it’s no surprise that viruses can do tremendous damage.