November 24, 2004

Software That Lasts 200 Years

Dan Bricklin has a provocative new essay arguing that at least some software should be built to last for a long time, perhaps as long as 200 years. He writes:

We need to start thinking about software in a way more like how we think about building bridges, dams, and sewers. What we build must last for generations without total rebuilding. This requires new thinking and new ways of organizing development. This is especially important for governments of all sizes as well as for established, ongoing businesses and institutions.

It’s definitely worth thinking about how to do this, but after some thought I am skeptical that this kind of long-term investment really makes sense given the present rate of improvement in software.

Whenever we trade off present spending against future spending, we have to be careful that costs in the future are properly discounted, to account for the time value of money and for the greater efficiency of future engineers. What should the discount rate be for software investments? That’s arguable, but the correct rate is reasonably large.

Some costs deflate according to Moore’s Law, or about 60% per year (compounded). Others deflate according to the rate of improvement in programmer productivity, which I will estimate (via an utterly unsupported wild guess) as 10% annually. Some deflate as standard business expenses would; I’ll estimate that rate at 5% annually. According to those rates, over a 200-year period, Moore’s Law expenses will deflate, astronomically, by a factor of about 10 to the 40th power; programming time will deflate by a factor of about 200,000,000; and ordinary expenses will deflate by a factor of about 17,000. So an investment of $1 now is only worthwhile if it saves year-2204 expenses of $17,000 (for ordinary expenses), $200 million (for programming expenses), or a bazillion dollars of Moore’s-Law-sensitive expenses.
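
For the curious, here is a small Python sketch of that arithmetic, assuming simple annual compounding at the rates guessed above:

```python
# Deflation factors over a 200-year horizon, compounding annually at the
# (admittedly guessed) rates above: 60% for Moore's-Law-sensitive costs,
# 10% for programmer productivity, 5% for ordinary business expenses.

HORIZON_YEARS = 200

rates = {
    "Moore's Law expenses": 0.60,
    "programming time": 0.10,
    "ordinary expenses": 0.05,
}

for name, rate in rates.items():
    factor = (1 + rate) ** HORIZON_YEARS
    print(f"{name}: deflates by a factor of about {factor:.3g}")

# Roughly: 6.7e+40 for Moore's Law expenses, 1.9e+08 for programming time,
# and 1.7e+04 (about 17,000) for ordinary expenses.
```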

Given those numbers, how much should we be willing to invest now, to provide benefits to our 200-years-from-now descendants? Present investment is only worthwhile if it creates enormous savings in that distant future – and it’s hard to believe that we know much of anything about what will be wanted, technologically, that far in the future. Remember, it was only sixty years ago that Thomas Watson of IBM famously estimated that the total world market would demand only five computers.

There is one area where it certainly makes sense to invest now to provide future benefits, and that is in ensuring that records of major events (birth and death records, and similar social archives) are recoverable in the future. The easy part of doing this is ensuring that the data are archived in an easily decoded format, so that they can be reconstructed later, if necessary. (Remember, programmer effort in the far future is cheap.)
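
As a rough illustration of that easy part (a minimal sketch only; the file names and fields here are hypothetical, not any real archive’s format), the records could be written as plain UTF-8 text with the layout documented right next to the data:

```python
# Illustrative only: store vital records as plain UTF-8 CSV with a header row,
# and describe the layout in an accompanying plain-text note, so the data can
# be decoded in the future without any of today's software.
import csv

records = [
    {"record_type": "birth", "name": "Jane Doe", "date": "2004-07-15", "place": "Trenton, NJ"},
    {"record_type": "death", "name": "John Roe", "date": "1998-03-02", "place": "Princeton, NJ"},
]

with open("vital_records.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["record_type", "name", "date", "place"])
    writer.writeheader()
    writer.writerows(records)

with open("vital_records_README.txt", "w", encoding="utf-8") as f:
    f.write("vital_records.csv: one record per line, comma-separated, UTF-8.\n")
    f.write("Fields: record_type (birth or death), name, date (YYYY-MM-DD), place.\n")
```

The point is simply that the bytes and the instructions for decoding them travel together, in a form a future reader can puzzle out without today’s software.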

The hard part of preserving these records is making sure that the data are stored on a medium that will still be readable (and will not have been misplaced) two hundred years from now. Many of today’s storage media have a life much shorter than that. I understand that there is a method involving patterns of pigment on thin cellulose-based sheets that is quite promising for this purpose.

Comments

  1. Dan talks at length about “societal infrastructure software”, which he defines as “the software that keeps our societal records, controls and monitors our physical infrastructure (from traffic lights to generating plants), and directly provides necessary non-physical aspects of society such as connectivity”, and then goes on about how this sort of software should be designed to last 200 years, rather than needing to be upgraded or replaced every several years.

    From a historical perspective, this type of infrastructure was the target of regular upgrades and replacements long before computers became viable in the last 20 to 40 years. Record storage systems, for example, have gone through numerous “upgrades” over the last 200 years… various filing systems, paper document formats, microfilm, and so on. Systems of monitoring and control of key machinery have changed over the years, from direct human observation to mechanical to pneumatic (compressed air) to analog electronic to simple digital and various levels of computers. Communication systems (or “connectivity”) have also dramatically improved over the last 200 years, through a long sequence of major changes (telegraph, telephone, radio, television/video, etc.) and many minor improvements within each of these major eras.

    Why, then, should software last for 200 years, when most other “societal infrastructure” doesn’t have such a long service life and undergoes regular upgrades and updates?

    As nearly as I can tell, Dan’s complaint is that bridges, power generators, dams, traffic lights, and sewers last for a very long time, yet software life cycles are often 2-3 years and sometimes even shorter, at least for commodity PC software. Dan neglects two types of software.

    First, quite a lot of software does last for many years. There are many important systems that still run on DOS, VAX, and old Unix. The “Y2K bug” actually brought to public attention the fact that many important systems were running software that had lasted 10, 15, 20, and sometimes even 30+ years. Much of it required only minor revision… not dissimilar to bridges that need occasional rivets replaced, dams needing inspection and upkeep, traffic lights needing reprogramming for different volumes of cars, and regular maintenance on sewers. Unlike these physical-world counterparts, much of this important infrastructure software had received little or no maintenance over many years.

    Second, much of the important “infrastructure software” does not run on PC-style hardware or 32-bit processors. It runs on microcontrollers embedded inside products. Monitoring and control devices for machinery are a good example. The code in most of these devices is not upgradable, or if a flash-upgrade capability exists, it is rarely used after the device is installed. These fixed-function products, which often have very complex code inside, are designed to last for many years.

    But 200 years is unrealistic. Perhaps some software written today, for PC-style machines or embedded microcontrollers, may still be in use 200 years from now. But the probability is low, not because of fatal flaws in software development methodology, nor because of declining costs associated with programming, but because almost all of the infrastructure that supports our society is regularly upgraded and replaced over the years, whether it is based on software or not.

    Important infrastructure software, such as the PLCs that run all the traffic lights, is much more reliable and lasts a lot longer than Dan believes.

  2. These future-value calculations seem questionable for at least three reasons: first, the deflators are dicey; second, the argument misstates the timing of future expenses; third, it’s not at all clear how much incremental cost we’re talking about (versus redirecting current expenditures).

    Even if Moore’s Law continues (at roughly 50% every 18 months, which is closer to 30% annually than 60%), it’s still trumped by Amdahl’s Law, which says that the slow stuff comes to dominate the equation. (That is an argument for not building 200-year hardware today, although the National Airspace System might be a counterargument there.)
    Programmer productivity, meanwhile, seems a chimerical thing to pin a 10%-a-year improvement figure on, given that the majority of large IT projects still fail. And “ordinary business expenses” don’t seem to be deflating much beyond the rate of measured productivity increases (historically about 2% a year, higher lately in the US, albeit with some measurement questions). The rate arithmetic is sketched briefly after these comments.

    But more important than those numbers is the fact that we’re not really talking about spending a dollar now, followed by spending (or not spending) some number of FY2205 dollars. We’re talking about spending money in a more or less steady stream of upgrades every 5-20 years — and spending a lot more of that money if systems today aren’t designed to be portable and upgradable. Think of how many projects today are still constrained by design decisions made 30 or 40 years ago, and multiply by the increased pervasiveness of software and hardware infrastructure.

    Meanwhile, instead of being a counterargument, the suggestion that some records might best be kept on (acid-free) paper is a perfect example of the kind of decision that might be made by people building infrastructure for the long term. It trades some losses of data structuring and accessibility in the short term for potential big gains in the long term (when Moore’s law and improvements in programmer productivity have given us artificially intelligent robotic file clerks to sort through it as easily as a CPU searches its disks today). Looking at all storage media as temporary is part of designing for the long term…
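
A quick, purely illustrative Python sketch of the rate arithmetic raised in the second comment, assuming simple compounding (this is just arithmetic, not anything from Dan’s essay or the comments):

```python
# Rate arithmetic from the second comment (illustrative only).

def annualized(rate_per_18_months: float) -> float:
    """Convert a growth rate quoted per 18 months to an annual rate."""
    return (1 + rate_per_18_months) ** (12 / 18) - 1

# "50% every 18 months" annualizes to roughly 31%;
# doubling every 18 months annualizes to roughly 59% (the post's ~60%).
print(f"{annualized(0.50):.0%} per year")   # ~31%
print(f"{annualized(1.00):.0%} per year")   # ~59%

# 200-year deflation factor at 2% a year (measured productivity growth)
# versus the 5% used in the post for ordinary expenses.
print(f"{1.02 ** 200:,.0f}")   # about 52
print(f"{1.05 ** 200:,.0f}")   # about 17,000
```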