December 12, 2024

An Inexhaustible Supply of Bugs

Eric Rescorla recently released an interesting paper analyzing data on the discovery of security bugs in popular products. I have some minor quibbles with the paper’s main argument (and I may write more about that later), but the data analysis alone makes the paper worth reading. Briefly, what Eric did was to take data about reported security vulnerabilities and fit it to a standard model of software reliability. This allowed him to estimate the number of security bugs in popular software products and the rate at which those bugs will be found in the future.
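To make the method concrete, here is a minimal sketch of this kind of fit, using the Goel-Okumoto exponential model, a standard choice in the reliability literature (the paper’s exact model may differ; check it before relying on this). The discovery counts below are invented for illustration, not taken from the paper.

    # Minimal sketch: fitting a reliability model to vulnerability data.
    # Goel-Okumoto says the expected cumulative bugs found by time t is
    #   m(t) = N * (1 - exp(-lam * t)),
    # where N is the total pool of bugs and lam the per-bug discovery rate.
    # The counts below are invented, not the paper's data.
    import numpy as np
    from scipy.optimize import curve_fit

    def goel_okumoto(t, N, lam):
        return N * (1.0 - np.exp(-lam * t))

    t = np.arange(1, 9)                                  # years since release
    found = np.array([16, 27, 36, 42, 47, 50, 53, 55])   # cumulative bugs found

    (N, lam), _ = curve_fit(goel_okumoto, t, found, p0=(100.0, 0.1))
    print(f"estimated pool: {N:.0f} bugs; half-life: {np.log(2) / lam:.1f} years")

The diagnostic is the curve’s shape: if the observed discovery rate bends over, the fit pins down a finite pool N; if it stays flat, the fit pushes N toward very large values, which is the “no depletion” situation discussed below.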

When a product version is shipped, it contains a certain number of security bugs. Over time, some of these bugs are found and fixed. One hopes that the supply of bugs is depleted over time, so that it gets harder (for both the good guys and the bad guys) to find new bugs.

The first conclusion from Eric’s analysis is that there are many, many security bugs. This confirms the expectations of many security experts. My own rule of thumb is that typical release-quality industrial code has about one serious security bug per 3,000 lines of code. At that density, a product with tens of millions of lines of code will naturally have thousands of security bugs; thirty million lines works out to roughly ten thousand.

The second conclusion is a bit more surprising: there is little if any depletion of the bug supply. Finding and fixing bugs seems to have a small effect, or no effect at all, on the rate at which new bugs are discovered. It seems that the supply of security bugs is practically inexhaustible.

If true, this conclusion has profound implications for how we think about software security. It implies that once a version of a software product is shipped, there is nothing anybody can do to improve its security. Sure, we can (and should) apply software patches, but patching is just a treadmill and not a road to better security. No matter how many bugs we fix, the bad guys will find it just as easy to uncover new ones.

Comments

  1. So it sounds like the bottom line is that we need to learn to make secure systems from insecure components. Is that possible?

  2. A few responses here:

    Alex:
    It’s true that constant flow doesn’t imply an infinite supply. It depends on your model for how bugs are found. However, most standard reliability models assume that the rate at which bugs are found is correlated with the density of bugs in the code. As the density goes down, so should the rate of discovery. Thus, a constant rate seems to imply that the pool is effectively quite large. (The toy simulation after this comment illustrates the point.)
    Also, I don’t agree with the assertion that patching bugs in a section of code means that the rest of that code is error-free. It’s not that uncommon to have multiple bugs in the same code section found at different times. For one example, see the recent bugs found in the Linux mremap() code.

    Cypherpunk:
    I would state what we can say a little differently, namely: “The evidence is consistent with there being no depletion”. However, I’d make two observations about this. (1) This is precisely the kind of negative statement one usually makes when things don’t work. E.g., “the data is consistent with this drug not having any effect.” The typical response to that is to try to do more sensitive studies, which is indeed what I recommend here. (2) From a policy perspective, I think we need to consider whether we should be engaging in a practice for which we don’t have any evidence of effectiveness.

    Actually, I do briefly mention the secret fix case (footnote at p. 15). I don’t have an ideological bias against it; it’s just a little harder to model because we don’t know the rate of conversion of such patches to exploits. There seem to be widely varying opinions about how difficult that kind of reverse engineering is.

    Karl:
    I agree that once a bug is known about and a patch is available, you should probably apply that patch.
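    To illustrate the depletion point in my reply to Alex, here is a toy simulation (my own, not from Eric’s paper; all numbers invented). It fixes the observed first-year discovery rate and varies the pool size:

        # Toy depletion model: each year, a fixed auditing effort finds each
        # remaining bug independently with probability p. p is calibrated so
        # that year one yields about `initial_rate` finds whatever the pool
        # size. All numbers are invented for illustration.
        import numpy as np

        rng = np.random.default_rng(0)

        def discoveries_per_year(pool_size, initial_rate=10.0, years=10):
            p = initial_rate / pool_size   # per-bug chance of discovery per year
            remaining = pool_size
            per_year = []
            for _ in range(years):
                found = rng.binomial(remaining, p)
                per_year.append(found)
                remaining -= found
            return per_year

        for pool in (50, 500, 5000):
            print(f"pool {pool:5d}: {discoveries_per_year(pool)}")

    The small pool’s discovery rate visibly decays within a few years; the large pool’s stays flat for a decade. A constant observed rate is what a very large pool looks like from the outside.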

  3. Alex Stapleton states, “Unless of course there is another bug in that section of code, or the patch itself introduces a new bug. We shall assume, however, that this is a rare enough scenario for its effects to (on average) be negligible….”

    I think this is a very bad assumption. I have worked at a number of commercial companies, and internal studies there found that for every ten changes, between one and three new bugs were introduced. And there were twenty or more changes per day. Commercial software development is to bugs as a cesspool is to bacteria: a great breeding ground with new food added around the clock. (The sketch after this comment works through the arithmetic.)

    In addition, due to the amount of “cut’n’paste” coding, finding one bug in a section of code actually increases the chances that there is another bug in the same section. This may be counterintuitive, but it is true.

    Finally, although researchers may say that finding a bug should trigger a code inspection of the entire section and a search for similar code elsewhere in the codebase, the reality of commercial development is that this happens infrequently. Instead, the emphasis falls on buying or building automated tools to find “similar” code. But this just creates a dependence on such tools, which dumbs down developers and increases the rate of new problems.
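    Taking the figures above at face value, a back-of-the-envelope model shows how much patching actually drains the pool when every fix is itself a change that may introduce new bugs. The starting pool size and fix rate are invented; the injection rates are the one-to-three-per-ten-changes estimate above.

        # Back-of-the-envelope: each fix removes one bug but, being a code
        # change, introduces new bugs at the stated rate. Pool size and fix
        # rate are invented; injection rates are the comment's rough figures.
        def pool_after(years, initial_pool=1000, fixes_per_year=100,
                       bugs_per_change=0.2):
            pool = initial_pool
            for _ in range(years):
                pool -= fixes_per_year                    # bugs removed by fixes
                pool += fixes_per_year * bugs_per_change  # bugs introduced by fixes
            return pool

        for rate in (0.1, 0.2, 0.3):   # 1, 2, 3 new bugs per 10 changes
            print(f"injection rate {rate}: pool after 10 years = "
                  f"{pool_after(10, bugs_per_change=rate):.0f}")

    At the high-end rate, nearly a third of the fixing effort is cancelled out, and that is before counting the brand-new feature code that keeps feeding the pool.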

  4. Even if there are a lot of undiscovered vulnerabilities out there, it still makes sense to install at least the patches for the known ones, since those are far more likely to actually be exploited.

    Every time you patch a known security hole, you improve the situation, even if only a little. So it is obviously not correct to say that “there is nothing anybody can do to improve its security”.

  5. Cypherpunk says

    While Eric Rescorla made a valiant attempt to pull meaning out of his figures, ultimately I’m afraid this is a case of GIGO. As Seth’s quote explains, his data can be consistent with anything from a zero depletion rate to a half-life of 3.5 years. (And actually, close study of figure 17 looks to me more like a 2.5-year half-life.) When you then read all the caveats about problems with the data, inaccuracies, noise, and, worst of all, the inability to even estimate the bug-finding effort (which surely controls the rate of discovery of bugs), the resulting figures can’t be trusted. The only valid conclusion is that we can’t tell from this data how quickly bugs are depleted.

    Also, the obvious best stratagem for dealing with vulnerabilities isn’t even considered: fix the bug, distribute the fix with general information about severity and implications, but don’t provide enough information to reveal the vulnerability. Now, this would obviously work better for closed source than open, so he probably has an ideological bias against any such strategy. But in my software career, this has been the solution adopted by most companies I’ve worked with. Fix the bug, roll it into the next release, do an immediate re-release if it’s important enough, and hope that nobody outside the company ever finds out what we fixed.

  6. Ed writes: “The second conclusion is a bit more surprising: there is little if any depletion of the bug supply. Finding and fixing bugs seems to have a small effect, or no effect at all, on the rate at which new bugs are discovered. It seems that the supply of security bugs is practically inexhaustible.”

    –> My quibble with this conclusion is that, from experience, I can tell you that software companies have a perverse incentive not to fix security bugs in any timely manner. Simultaneously, they have incentives to claim they do everything possible. Even when a company does patch, the patch is typically the smallest change that will deflect the immediate vulnerability. Rather than patching only today’s vulnerability, the fix could reflect a more global recognition that an entire module has architectural flaws.

    So the supply would seem inexhaustible if the software producer not only did as little as possible to deplete the supply of security bugs, but also continuously added to it through new code modules.

    As a causal explanation of the data, this theory seems much stronger.

  7. Hmm … more precisely:

    “6.7 Are we depleting the pool of vulnerabilities? We are now in a position to come back to our basic question from Section 5: to what extent does vulnerability finding deplete the pool of vulnerabilities? The data from Sections 6.5 and 6.4 provides only very weak support for a depletion effect. Even under conditions of extreme bias, the highest depletion estimate we can obtain from Section 6.5.1 is that the half-life for vulnerabilities is approximately 3.5 years. However, no depletion whatsoever cannot be ruled out given this data. In that case, the probability of rediscovery p_r would be vanishingly small. The conclusion that there is fairly little depletion accords with anecdotal evidence. It’s quite common to discover vulnerabilities that have been in programs for years, despite extensive audits of those programs. For instance, OpenSSH has recently had a number of vulnerabilities [21] that were in the original SSH source and survived audits by the OpenSSH team.”
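    For a sense of scale, those half-life estimates convert to annual depletion rates as follows (3.5 years is the paper’s upper bound quoted above; 2.5 years is Cypherpunk’s reading of figure 17 in comment 5):

        # Converting a vulnerability-pool half-life into an annual depletion
        # rate: lam = ln(2) / half_life; the fraction of the remaining pool
        # found each year is 1 - exp(-lam).
        import math

        for half_life in (3.5, 2.5):
            lam = math.log(2) / half_life
            print(f"half-life {half_life} yr -> lam = {lam:.2f}/yr, "
                  f"~{100 * (1 - math.exp(-lam)):.0f}% of remaining bugs found per year")

    Even at the most optimistic of these estimates, roughly three quarters of the pool survives each year, and the data cannot rule out no depletion at all.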

  8. Alex Stapleton says

    A constant supply of bugs does not indicate that the supply is inexhaustible. It indicates that people are constantly supplying enough effort to find new bugs. Patching previous bugs should in fact make it easier (not harder) to find new bugs, as you now know that in theory there are no bugs in that section of code. Unless of course there is another bug in that section of code, or the patch itself introduces a new bug. We shall assume, however, that this is a rare enough scenario for its effects to (on average) be negligible, and that the ease of finding any single bug is either reduced or remains constant with the number of bugs found.

    What we should eventually see is that the supply of bugs for any individual application suddenly stops almost entirely. I say almost entirely because there may be a few very obscure bugs left over.

    Merely because a tap can supply water at a (more or less) constant rate does not mean that there is an infinite supply of water. It means that there is water in the pipe. And when there isn’t any water in the pipe, you won’t get any water out of the tap.