December 15, 2024

Multiple Intelligences, and Superintelligence

Superintelligent machines have long been a trope in science fiction. Recent advances in AI have made them a topic for nonfiction debate, and even planning. And that makes sense. Although the Singularity is not imminent–you can go ahead and buy that economy-size container of yogurt–it seems to me almost certain that machine intelligence will surpass ours eventually, and quite possibly within our lifetimes.

Arguments to the contrary don’t seem convincing. Kevin Kelly’s recent essay in Backchannel is a good example. His subtitle, “The AI Cargo Cult: The Myth of a Superhuman AI,” implies that AI of superhuman intelligence will not occur. His argument centers on five “myths”:

  1. Artificial intelligence is already getting smarter than us, at an exponential rate.
  2. We’ll make AIs into a general purpose intelligence, like our own.
  3. We can make human intelligence in silicon.
  4. Intelligence can be expanded without limit.
  5. Once we have exploding superintelligence it can solve most of our problems.

He rebuts these “myths” with five “heresies”:

  1. Intelligence is not a single dimension, so “smarter than humans” is a meaningless concept.
  2. Humans do not have general purpose minds, and neither will AIs.
  3. Emulation of human thinking in other media will be constrained by cost.
  4. Dimensions of intelligence are not infinite.
  5. Intelligences are only one factor in progress.

This is all fine, but notice that even if all five “myths” are false, and all five “heresies” are true, superintelligence could still exist.  For example, superintelligence need not be “like our own” or “human” or “without limit”–it only needs to outperform us.

The most interesting item on Kelly’s lists is heresy #1, that intelligence is not a single dimension, so “smarter than humans” is a meaningless concept. This is really two claims, so let’s consider them one at a time.

First, is intelligence a single dimension, or are there different aspects or skills involved in intelligence?  This is an old debate in human psychology, on which I don’t have an informed opinion. But whatever the nature and mechanisms of human intelligence might be, we shouldn’t assume that machine intelligence will be the same.

So far, AI practice has mostly treated intelligence as multi-dimensional, building distinct solutions to different cognitive challenges. Perhaps this is fundamental, and machine intelligence will always be a bundle of different capabilities. Or perhaps there will be a future unification of some sort, to create a single capability that can outperform people on all or nearly all cognitive tasks. At this point it seems like an open question whether machine intelligence is inherently multi-dimensional.

The second part of Kelly’s claim is that, assuming intelligence is multi-dimensional, “smarter than humans” is a meaningless concept. This, to put it bluntly, is not correct.

To see why, consider that playing center field in baseball requires multi-dimensional skills: running, throwing, distinguishing balls from strikes, hitting accurately, hitting with power, and so on. Yet every single major league center fielder is vastly better than I am at playing center field, because they dominate me by far in every one of the component skills.

Like playing center field, intelligence may be multi-dimensional, and yet one entity can be more intelligent than another by being superior in every dimension.
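
To make the dominance point concrete, here is a minimal sketch in Python (the skill names and scores are invented for illustration): if one player is strictly better in every component skill, then any overall score that increases with each component, for example a weighted sum with positive weights, must rank that player higher.

    # Sketch of the "dominance" argument, with made-up skill names and scores.
    pro = {"running": 9, "throwing": 9, "pitch_recognition": 8, "contact": 9, "power": 8}
    me  = {"running": 4, "throwing": 3, "pitch_recognition": 2, "contact": 2, "power": 1}

    def dominates(a, b):
        # True if a is strictly better than b in every dimension.
        return all(a[k] > b[k] for k in a)

    def overall(skills, weights):
        # One example of an aggregate that increases with each component: a weighted sum.
        return sum(weights[k] * skills[k] for k in skills)

    weights = {k: 1.0 for k in pro}  # any positive weights preserve the ordering

    assert dominates(pro, me)
    assert overall(pro, weights) > overall(me, weights)

Nothing here depends on the particular weights: any positive weighting preserves the ordering, which is why “better in every dimension” is meaningful even without reducing intelligence to a single scale.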

What this suggests about the future of machine intelligence is that we may live for quite a while in a state where machines are better than us at some aspects of intelligence and we are better than them at others. Indeed, that is the case now, and has been for years.

If machine intelligence remains multi-dimensional, then machines will surpass our intelligence not at a single point in time, but gradually, and in more and more dimensions of intelligence.

Comments

  1. Ed,

    Even though this is an old post now, I only happened to see it yesterday. Coincidentally, this topic has been on my mind of late, so it was an interesting read and I really wanted to respond; I hope you see this. On the five “heresies” you mention from Kevin Kelly, I have to note my own line of thinking.

    1. Intelligence is not single dimensional (but “dimensions” isn’t even a good word to use here)
    2. “purpose” is rather irrelevant; it is not what will separate humans from machines, but rather motive will be key.
    3. I have no opinion here
    4. I would say dimensions (again not a good word) of intelligence ARE INDEED infinite
    5. Define progress.

    Of course I would agree that calling something “smarter” is a meaningless concept, specifically because intelligence is infinite. Someone may be smarter than me when it comes to computers (I am a very low paid computer specialist), no doubt in my mind. And yet I am light years ahead of most of the population when it comes to being tech savvy. But throw baseball at me (no pun intended) and I couldn’t even tell the difference between the center-field and outfield positions, and as bad as you think you are at baseball, I would be willing to bet I am worse.

    So, in that regard I would agree with you that machines might become superior to humans in some ways and continually get better in more and more ways over time. But there are some TYPES (I wouldn’t use dimensions; “types” might be a better word) of intelligence that humans have that machines will never have, no matter how hard we try.

    Coincidentally, one of the reasons this topic is on my mind is a dream I had a little over a week ago. I had an amazing dream where I was watching a live comedy sketch which included a little boy who would say and do some pretty odd things but kept saying something to the effect of “I am trapped by what I see.” At the beginning of the dream I had no idea he was a robot; toward the end I learned that he was a robot and felt “trapped” by his sensory input (visual [sight], audio [hearing], tactile [touch], and even olfactory [smell]), in that his actions and reactions would be dictated by the input and he could not make his own decisions, despite his creators trying to get him to learn how to use the input as a springboard to motivate his actions, not dictate them.

    How does this relate to Artificial Intelligence as compared to Human Intelligence? Let me take the example of two car accidents (one true, the other made up to make a point).

    In May 2016, a self-driving car drove right into a semi; it was found that the car’s camera and analysis (the AI driving the car) were not able to distinguish between the white semi and the brightly lit sky. In essence the AI failed to “see” the semi, so it did not engage the car’s brake system.

    Now compare that to an accident where a young man (say in his 20s) is on a long drive from California to Philadelphia to meet up with a woman he first met in Philly two years prior… in this example the man fails to “see” or “comprehend” the semi because his mind is wandering, thinking about this young woman he is going to see again.

    One thing is certain: a self-driving car will never be distracted by thinking of the red Corvette it “saw” years earlier. A human mind, however, may be. Why? “Emotional motivation” would be a good term; no matter how complex AI gets, it will never be intelligent enough to recognize or even comprehend emotional motivation, and hence it will also never learn something called emotional intelligence…. “Emotional intelligence (EI) is the capability of individuals to recognize their own and other people’s emotions, discern between different feelings and label them appropriately, use emotional information to guide thinking and behavior, and manage and/or adjust emotions to adapt to environments or achieve one’s goal(s).”

    Emotional intelligence is one very key difference, and it is just one of the infinitely many types of intelligence we could label. We may be able to make machines that approximate many types of intelligence, but there will always be some that we can’t make a machine even match, let alone surpass.

    Coincidentally, just a couple of days ago [still well after my dream] I saw an old Star Trek episode titled “What Are Little Girls Made Of?” which delved into this very thing with androids… I had forgotten the episode even existed and hadn’t seen it for years. When it originally aired in 1966 this stuff was pure fantasy (hardly even science fiction)… today it is nearly reality. And yet the ending is pretty much the same as my own thoughts on this issue: it ended with the android trying to prove how human he was, and he kept stumbling because everything he could think of only showed how much of a machine he was.

    As a man of faith I would separate machine intelligence from human intelligence by using the word “soul,” but when not speaking to a religious audience I might rather use the phrase “human equation.” Either way, there will always be something that machines will never have, which is why they will never be completely “superior” to human intelligence.

  2. Thank you for another clear and compelling think piece. Being biased against most pronouncements of Kelly’s, I’m largely in agreement. Nevertheless, I want to challenge, mildly, your views on heresy #1. You write, “whatever the nature and mechanisms of human intelligence might be, we shouldn’t assume that machine intelligence will be the same.” If the nature and mechanisms of what we refer to in humans as “intelligence” are not the same as those in machines to which we also ascribe intelligence, then aren’t we working with two distinct notions of intelligence? This is, to my mind, the source of skepticism about AI. To proclaim its superiority, its proponents need to alter the rules of the game.

    The center fielder analogy is helpful, but not entirely satisfying (due in good part to my ignorance). In a way, it begs the question. We pretty much know the component skills of center fielding. Do we have as good a grip on intelligence? We can measure any professional ball player’s skills against your own with a high degree of objectivity. Same for intelligence?

    Put another way, if we wanted to get you in shape to compete with the best professional center fielders, we would focus our attention on improving your fitness, capabilities, judgment, etc. We would train you, in the expectation that improving you would result in your being better equipped to do the job of center fielder. (Yet even if you never played a game of baseball, you would in some respects be objectively in better shape than you were before.) AI researchers “build[] distinct solutions to different cognitive challenges,” but aren’t those “cognitive challenges” actually problems in the world, independent of machine intelligence, that we want computers to solve? (Can AI researchers surmount “cognitive challenges” for machine intelligence without having in mind some goal–purpose, utility–independent of AI per se?)