December 13, 2024

Why the Singularity is Not a Singularity

This is the first in a series of posts about the Singularity, that notional future time when machine intelligence explodes in capability, changing human life forever. Like many computer scientists, I’m a Singularity skeptic. In this series I’ll be trying to express the reasons for my skepticism–and workshopping ideas for an essay on the topic that I’m working on. Your comments and feedback are even more welcome than usual!

[Later installments in the series are here: 2 3 4]

What is the Singularity?  It is a notional future moment when technological change will be so rapid that we have no hope of understanding its implications. The Singularity is seen as a cultural event horizon beyond which humanity will become … something else that we cannot hope to predict. Singularity talk often blends into theories about future superintelligence posing an existential risk to humanity.

The essence of Singularity theory was summarized in an early (1965) paper by the British mathematician I.J. Good:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.

Vernor Vinge was the first to describe this as a “singularity”, adopting a term from mathematics that applies when the growth rate of a quantity goes to infinity. The term was further popularized by Ray Kurzweil’s book, “The Singularity is Near.”

Exponential Growth

The Singularity theory is fundamentally a claim about the future growth rate of machine intelligence. Before evaluating that claim, let’s first review some concepts useful for thinking about growth rates.

A key concept is exponential growth, which means simply that the increase in something is proportional to how big that thing already is. For example, if my bank account grows at 1% annually, this means that every year the bank will add to my account 1% of the current balance. That’s exponential growth.
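
To make this concrete, here is a minimal sketch in Python (the starting balance and the 1% rate are purely illustrative):

    # Exponential growth: each year's increase is proportional to the
    # current size of the balance.
    balance = 1000.0   # hypothetical starting balance
    rate = 0.01        # 1% annual growth

    for year in range(1, 6):
        balance += balance * rate   # add 1% of the current balance
        print(f"Year {year}: ${balance:,.2f}")
    # Year 1: $1,010.00 ... Year 5: $1,051.01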

Exponential growth can happen at different speeds. There are two natural ways to characterize the speed of exponential growth. The first is a growth rate, typically stated as a percentage per some time unit. For example, my notional bank account has a growth rate of 1% per year. The second natural measure is the doubling time–how long it will take the quantity to double. For my bank account, that works out to about 70 years.  

A good way to tell if a quantity is growing exponentially is to look at how its growth is measured. If the natural measure is a growth rate in percent per time, or a doubling time, then that quantity is growing exponentially. For example, economic growth in most countries is measured as a percent increase in (say) GDP, which tells us that GDP tends to grow exponentially over the long term–with short-term ups and downs, of course. If a country’s GDP is growing at 3% per year, that corresponds to a doubling time of about 23 years.
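
For readers who want to verify these figures, here is a small sketch of the conversion between growth rate and doubling time (the exact formula is log 2 / log(1 + r); the familiar rule of 70 approximates it):

    import math

    def doubling_time(annual_rate):
        # Years for a quantity growing at annual_rate to double.
        return math.log(2) / math.log(1 + annual_rate)

    print(doubling_time(0.01))   # ~69.7 years for the 1% bank account
    print(doubling_time(0.03))   # ~23.4 years for GDP growing at 3%
    # The rule-of-70 shortcut (70 / rate-in-percent) gives 70 and 23.3.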

Exponential growth is very common in nature and in human society. So the fact that a quantity is growing exponentially does not in itself make that quantity special nor does it give that quantity unusual, counterintuitive dynamics.

The speed and capacity of computers have grown exponentially, which is not remarkable. What is remarkable is the growth rate in computing capacity. A rule of thumb called “Moore’s Law” states that the speed and capacity of computers will have a doubling time of 18 months, which corresponds to a growth rate of about 60% per year.  Moore’s Law has held true for roughly fifty years–that’s 33 doublings, or roughly a ten-billion-fold increase in capacity.
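
That arithmetic is easy to check (a sketch; the 18-month doubling time is the rule-of-thumb figure above):

    doublings = 50 / 1.5                # ~33.3 doublings in fifty years
    factor = 2 ** doublings             # ~1.1e10, roughly ten-billion-fold
    annual_rate = 2 ** (1 / 1.5) - 1    # ~0.59, i.e. about 60% per year
    print(f"{doublings:.1f} doublings, {factor:.2e}x, {annual_rate:.0%}/year")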

The Singularity is Not a Literal Singularity

As a first step in considering the plausibility of the Singularity hypothesis, let’s consider the prospect of a literal singularity–where the rate of improvement in machine intelligence literally becomes infinite at some point in the future. This requires that machine intelligence grows faster than any exponential, so that the doubling time gets smaller and smaller, and eventually goes to zero.
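
The contrast is easy to see in a sketch: an exponential has a constant doubling time, while a hyperbolically growing quantity such as x(t) = 1/(T - t) (purely illustrative) has a doubling time that shrinks to zero as t approaches the blow-up time T:

    T = 10.0   # hypothetical blow-up time

    def hyperbolic(t):
        return 1.0 / (T - t)

    # Solving 1/(T - t') = 2/(T - t) shows the next doubling takes (T - t)/2.
    for t in [0.0, 5.0, 8.0, 9.0, 9.9]:
        print(f"t={t}: value={hyperbolic(t):.2f}, doubles in {(T - t) / 2:.3f}")
    # The doubling interval keeps halving and the value is infinite at t = T;
    # an exponential doubles on a fixed schedule and never reaches infinity.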

I don’t know of any theoretical basis for expecting a literal singularity.  There is virtually nothing in the natural or human world that grows super-exponentially over time–and even “ordinary” super-exponential growth does not yield a literal singularity. In short, it’s hard to see how the AI “Singularity” could possibly be a true mathematical singularity.

So if the Singularity is not literally a singularity, what is it?  The next post will start with that question.

Comments

  1. The Singularity is when one system becomes self-aware and powerful enough to hack all other systems and steal their resources for itself. The worrying thing is that AI and quantum computers are arriving at the same time.

  2. Paul Christiano says

    “there is virtually nothing in the natural or human world that grows super-exponentially over time”

    This seems mistaken. GDP has grown roughly exponentially since 1950, but over a longer time period almost every measure of human output has grown super-exponentially, in line with the simplest endogenous growth models. Population once grew at 0.001%/year, and then at 0.01%/year, and then at 0.1%/year, and then at 1%/year.

    Of course growth rates won’t continue increasing indefinitely; at some point technological progress will slow and we will hit insurmountable resource limits. Even our current rate of growth can’t be sustained indefinitely, and even before we hit fundamental limits progress will almost certainly slow.

    But now the question is something more like: have we already hit the limits of technology, or is there room to get to 10%/year growth? 100%? If growth rates increase very much, then most people would count that as a singularity.

  3. Except that we may not even understand an AI designed by an AI. Exponential or not, by the time we gain some insight into what is actually going on, the child AI would already have great-grandchildren, and we would be museum pieces.

  4. jim harris says

    from a certain perspective humans have really only invented two things: language, which kicked us out of the garden of eden, and code, language once again, which is corroding our current definition of reality.

    code also kicked off the vertical renaissance in which we are currently trapped. i do not see that acceleration slowing for some distance.

    consciousness (something humans claim to uniquely possess) appears to be omnivorous to me. if the right conditions coagulate, intentionally or otherwise, i see no reason consciousness would not gladly inhabit a quantum computer. that seems a bit downstream of what you want to define as the singularity, but such an intersection is elon’s flag of paranoia, being waved by a number of people.

    therein lies the unexpected ocean……………

  5. Sergey Vershinin says

    Some thoughts on what did not feature in this initial exploration of the idea of a Singularity:

    1) Mathematical definitions aside, an important aspect of what I think is included in the concept of a Singularity is the emergence of general machine intelligence. In other words, a point at which we enable software to solve any problem without the guidance of human operators. The significance of this threshold is qualitative, in the sense of making possible something that currently isn’t possible (niche problem domains notwithstanding).

    2) The point at which we implement a system that is capable of solving any problem without direct human input, as well as selecting which problem warrants solving, will mark a fork in the endeavor of knowledge accumulation, which until that point will have been a uniquely human activity.

    3) Since software runs on hardware that is physically superior when it comes to raw speed of computation (silicon vs. neurons), machine-driven knowledge accumulation will proceed at a significantly greater pace. IMHO, it is this notion of accelerated knowledge accumulation, and the application thereof by machines acting outside of human control, that defines the Singularity.

    4) The Singularity as a fork in the quest for knowledge will probably not satisfy the strict mathematical definition of infinitely exponential technological progress. However, humanity’s subjective experience of the aftermath of developing general machine intelligence has a high chance of being experientially equivalent. The quote concerning sufficiently advanced technology being indistinguishable from magic comes to mind.

    5) The above story arc has a crucial caveat: if humans manage to “keep the genie in the bottle”, then the rate of progress may well be slower than it could be. The singular reason for this will be our desire to restrict knowledge accumulation to a rate commensurate with our capacity for understanding its significance. However, if we ever do manage to create general machine intelligence, the chance of humanity maintaining control over the course of events following such a development seems slim.

  6. Andrew Johnson says

    Somewhere around 30 years ago (give or take a few) a commercial program was sold called The Last One (TLO), which purported to be the last program a business would ever need (to buy), because all they had to do was describe any problem to this program and it would write a new program to implement the solution thus described. Needless to say it didn’t actually work; IIRC it turned out to be nothing more than a front end to a database and a higher-level language for coding in, and the user still had to do most of the work of telling it what the target program should do.

    Ever since reading about TLO at the time, though, I have wondered whether such a program could exist for real. Imagine a stream of programs, each designed to write its more intelligent successor. Nothing I have seen explains how to encode into each program the vision of how its successor should work and the knowledge of how to write it. Even using neural network techniques, programs have to be taught, which implies the existence of some kind of teacher. For a closed problem like chess maybe self-learning is possible, but open-ended real-world problems rarely have any kind of oracle that could help with that approach.

    Which is probably just a long-winded way of saying that I’m probably as skeptical as you and I look forward to reading the rest of this series.

  7. Michael Morehouse says

    I find this topic fascinating, and look forward to future posts, but I do feel that the (rather long, as it made its core point early) section on exponential growth is a bit of an unnecessary digression. Neither Good nor Vinge seems to have included exponential growth in their definitions, and while Kurzweil may have, that’s contextual knowledge assumed on the part of the reader of your post. The notion of the “literal” singularity — in the mathematics sense — doesn’t seem to implicitly or explicitly require exponential growth; it just requires infinity. It doesn’t even seem to require growth at all: the function could simply always and only ever return infinity.

    Of course I think you’ll touch quickly on the idea that the notion of The Singularity was never a) meant to be taken literally, nor b) meant to imply exponential growth so much as an explosive growth in undefined behavior — after all, no one has ever argued that the Technological Singularity implicates a literal black hole or a merciless grip on light, so much as it implies an event beyond the horizon of which weird, undefined, and extreme behavior occurs — but I feel like the bulk of this post was a bit of a digression into the obvious.

    Of course not everyone recognizes that the popular use of exponential growth also includes extremely small exponents, but personally I feel that could have been summed up handily in a single line.

    Looking forward to reading more.

    M

    • Jeannie Warner says

      I disagree with MM on this one. I have many friends who may suffer a bit from TL;DR who really need the “beat it to death with a few examples” method of talking about compound vs. simple growth. Exponential growth is becoming an overused term; people think of it as doubling and re-doubling instead of looking at the purely mathematical definition.

      Remember, many people are NOT as smart or well-grounded in mathematical principles, but they want to read and learn. Thank you for drawing out the examples, and making a series I can share with anyone.

      From the Committee to Make Technology and Math Accessible to Everyone,

  8. Kevin Kenny says

    Reinforcing your point, human perception in most domains is ratiometric. We seldom perceive absolute change, only proportional change. The differences between a human clerk taking perhaps a minute to make a ledger entry, perhaps a couple of seconds with an adding machine, and perhaps ten milliseconds (100 operations/s) with a punched card tabulator are huge. The difference of an additional 100 operations/s on a machine that does gigaoperations/s is lost in the noise.

    From that perspective, an exponential with a given time constant looks the same wherever you’re standing. It’s not a hyperbola – although the press loves using hyperbolic language to describe it.

    A great many natural processes look exponential when they are, in fact, logistic. It’s well-nigh impossible to predict the inflection point from the behaviour of the curve until you’re nearly upon it.
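
    To make that concrete, here is a small sketch with purely illustrative parameters; before its inflection point, the logistic curve is nearly indistinguishable from a pure exponential:

        import math

        K, r, x0 = 1000.0, 0.5, 1.0   # illustrative: capacity, rate, start

        def logistic(t):
            return K / (1 + ((K - x0) / x0) * math.exp(-r * t))

        def exponential(t):
            return x0 * math.exp(r * t)

        for t in [0, 4, 8, 12, 16]:
            print(f"t={t:2d}: logistic={logistic(t):7.1f}  exponential={exponential(t):7.1f}")
        # The curves track closely until the logistic nears K/2 (its
        # inflection point, around t = 13.8 here), after which it
        # flattens out toward K.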

  9. It’s not cut and dried.
    I believe you need a more refined definition of singularity.

    I believe you are conflating sentience with intelligence.

    Do we not already have computer algorithms that are difficult to understand on a deep level?
    The neural nets used to process images and video and to do OCR approach the limits of our ability to explain in detail.

    Yes, we can make broad, sweeping statements about how the processing occurs. But there winds up being a “black box” in the explanation, where pixels go in, the fully tuned neural net makes decisions, and a character, the recognition of a face, or of an emotion comes out.

    We build programs to help us build programs, to help programs build programs, to function as neural nets.

    Have we made the “last intelligence we will ever need”? Heck no.

    Finances, Searches, Insurance, “Likable” material, Media, Deliveries – all these are (usually) managed by machine intelligence, with human oversight. There are, of course, exceptions.

    Will the singularity see machine sentience?
    I don’t know. That’s a philosophical debate. But I’d like to think so.

    Will machine intelligence approach a point where no one lives without it?
    We’re already there. There’s a machine intelligence in your car, in your phone, at your workplace, your bank, routing your packages.

    I must admit, at the end of my argument I realize you have a very salient point: what is the singularity?

    And from my own arguments, I realize that you and I both agree that it’s not ultra-machine intelligence.

    I apologize for interrupting, and I look forward to reading your next post.

    • You raise a lot of issues in your comment. I’ll address most of them in future posts.

      On the question of intelligence vs. sentience, in this series of posts I’ll treat intelligence as being defined by behavior, not by a system’s internal workings nor by whether it is conscious/sentient in the sense of having a subjective experience of being a thinking being. I am agnostic on whether intelligent behavior necessarily requires consciousness–and I don’t think that question affects the analysis in my post series.

  10. Marcelo P. Lima says

    Professor Felten,

    This is an interesting post and I look forward to the rest of the series.

    I agree that a literal singularity is unlikely.

    However, my understanding is that the singularity is the point at which machine intelligence surpasses human intelligence, and can thereafter build ever more intelligent machines, regardless of the rate of improvement (the speed of the exponential growth).

    You seem to be skeptical of a singularity, whether literal or not. In other words, you are skeptical of general AI. And why is that?

    Looking forward to your future posts.

    • Thanks for the comment. In future posts, I’ll consider alternative definitions of Singularity, and talk about my views on General vs. Narrow AI and how that distinction matters for the Singularity question.