Lately, computer security researchers have been pointing out the risks of software monoculture. The idea is that if everybody uses the same software product, then a single virtual pathogen can wipe out the entire population, like Dutch elm disease mowing down a row of identical trees. A more diverse population would better resist infection. While this basic observation is accurate, the economics of monoculture vulnerability are subtle. Let’s unpack them a bit.
First, we need to review why monoculture is a problem. The more common a product is, the more it will suffer from infection by malware (computer viruses and worms), for two reasons. First, common products make attractive targets, so the bad guys are more likely to attack them. Second, infections of common products spread rapidly, because an attempt to propagate to a new host is likely to succeed if a high fraction of hosts are running the targeted product. Because of these twin factors, common products are much more prone to malware problems than are rare products. Let’s call this increased security risk the “monoculture penalty” associated with the popular product.
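The second factor, faster spread, can be seen in a toy epidemic simulation. The sketch below is a hypothetical illustration, not a real malware model: each infected host probes one randomly chosen host per time step, and the infection takes hold only if the target runs the targeted product. All parameters (host count, step count, market shares) are made up for illustration.

```python
import random

def simulate_spread(market_share, n_hosts=10_000, n_steps=15, seed=1):
    """Toy SI-style model: infection succeeds only against hosts running
    the targeted product. Returns the final infected fraction."""
    rng = random.Random(seed)
    # True = this host runs the targeted (popular) product
    runs_product = [rng.random() < market_share for _ in range(n_hosts)]
    if True not in runs_product:
        return 0.0
    infected = {runs_product.index(True)}  # one initial victim
    for _ in range(n_steps):
        newly_infected = set()
        for _host in infected:
            target = rng.randrange(n_hosts)
            if runs_product[target]:  # probe succeeds only on the product
                newly_infected.add(target)
        infected |= newly_infected
    return len(infected) / n_hosts

# A product with 80% market share is overrun far faster than one with 20%,
# because each propagation attempt succeeds with probability ~market_share.
low = simulate_spread(0.2)
high = simulate_spread(0.8)
```

In this toy model the infected population grows by roughly a factor of (1 + market_share) per step, so after the same number of steps the popular product's installed base is saturated while the rare product's infection is still tiny. That compounding difference is the monoculture penalty in miniature.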
The monoculture penalty affects the incentives of consumers, making otherwise unpopular products more attractive due to their smaller penalty. If this effect is strong enough, it will prevent monoculture as consumers protect themselves by shunning popular products. Often, however, this effect will be outweighed by consumers’ desire for compatibility, which has the opposite effect of making popular products more valuable. It might be that monoculture is efficient because its compatibility benefits outweigh its security costs. And it might be that the market will make the right decision about whether to adopt a monoculture.
Or maybe not. At least three factors confound this analysis. First, monoculture is often another word for monopoly, and monopolists behave differently from, and often less efficiently than, firms in competitive markets.
Second, if you decide to adopt a popular product, you incur a monoculture penalty. Of course, you take that into account in deciding whether to do so. But in adopting the popular product, you also increase the monoculture penalties paid by other people – and you have no incentive to avoid this harm to others. This externality will make you too eager to adopt the popular product, and there is no practical way for the other affected people to pay you to protect their interests.
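The externality can be made concrete with a toy payoff model. Everything here is an assumption for illustration: suppose that with a fraction s of users on the popular product, each adopter enjoys a compatibility benefit proportional to s but pays a monoculture penalty proportional to s squared (penalties plausibly grow superlinearly, since popular products draw both more attacks and faster spread). The functional forms and parameter values are invented, not derived from any real market.

```python
def private_net_benefit(k, n, compat=1.0, penalty=2.0):
    """Hypothetical payoff to one adopter when k of n users have adopted:
    compatibility benefit compat*s minus monoculture penalty penalty*s**2,
    where s = k/n. All functional forms are made-up for illustration."""
    s = k / n
    return compat * s - penalty * s ** 2

def marginal_social_impact(k, n, compat=1.0, penalty=2.0):
    """Change in TOTAL adopter welfare when the adopter count goes
    from k to k+1 (includes the penalty increase on existing adopters)."""
    return ((k + 1) * private_net_benefit(k + 1, n, compat, penalty)
            - k * private_net_benefit(k, n, compat, penalty))

# With 45 of 100 users already on the product, the 46th adopter still
# gains personally, yet total welfare falls: the newcomer ignores the
# higher penalty now paid by the 45 incumbents.
gain_to_adopter = private_net_benefit(46, 100)
change_in_total = marginal_social_impact(45, 100)
```

In this toy model the 46th user's private gain is positive while the social impact of that adoption is negative, which is exactly the wedge described above: individuals adopt past the point that is best for the group.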
Third, it may be possible to have the advantages of compatibility without the risks of monoculture, thereby allowing users to work together while suffering a lower monoculture penalty. Precisely how to do this is a matter of ongoing research.
This looks like a juicy problem for some economist to tackle, perhaps with help from a techie or two. A model accounting for the incentives of consumers, producers, and malware authors might tell us something interesting.