February 19, 2018

Why the Singularity is Not a Singularity

This is the first in a series of posts about the Singularity, that notional future time when machine intelligence explodes in capability, changing human life forever. Like many computer scientists, I’m a Singularity skeptic. In this series I’ll be laying out the reasons for my skepticism–and workshopping ideas for an essay on the topic that I’m working on. Your comments and feedback are even more welcome than usual!

[Later installments in the series are here: 2 3 4]

What is the Singularity?  It is a notional future moment when technological change will be so rapid that we have no hope of understanding its implications. The Singularity is seen as a cultural event horizon beyond which humanity will become … something else that we cannot hope to predict. Singularity talk often blends into theories about future superintelligence posing an existential risk to humanity.

The essence of Singularity theory was summarized in an early (1965) paper by the British mathematician I.J. Good:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.

Vernor Vinge was the first to describe this as a “singularity”, adopting a term from mathematics that applies when the growth rate of a quantity goes to infinity. The term was further popularized by Ray Kurzweil’s book, “The Singularity Is Near.”

Exponential Growth

The Singularity theory is fundamentally a claim about the future growth rate of machine intelligence. Before evaluating that claim, let’s first review some concepts useful for thinking about growth rates.

A key concept is exponential growth, which means simply that the increase in something is proportional to how big that thing already is. For example, if my bank account grows at 1% annually, this means that every year the bank will add to my account 1% of the current balance. That’s exponential growth.

Exponential growth can happen at different speeds. There are two natural ways to characterize the speed of exponential growth. The first is a growth rate, typically stated as a percentage per some time unit. For example, my notional bank account has a growth rate of 1% per year. The second natural measure is the doubling time–how long it will take the quantity to double. For my bank account, that works out to about 70 years.  

A good way to tell if a quantity is growing exponentially is to look at how its growth is measured. If the natural measure is a growth rate in percent per time, or a doubling time, then that quantity is growing exponentially. For example, economic growth in most countries is measured as a percent increase in (say) GDP, which tells us that GDP tends to grow exponentially over the long term–with short-term ups and downs, of course. If a country’s GDP is growing at 3% per year, that corresponds to a doubling time of about 23 years.
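
The growth rate and the doubling time are two views of the same formula, and the figures above are easy to check. A quick sketch in Python:

```python
import math

def doubling_time(annual_rate):
    """Years needed for a quantity to double at the given annual growth rate."""
    return math.log(2) / math.log(1 + annual_rate)

print(doubling_time(0.01))  # ~69.7: 1% growth doubles in about 70 years
print(doubling_time(0.03))  # ~23.4: 3% growth doubles in about 23 years
```

The familiar “rule of 70”–doubling time is roughly 70 divided by the percentage growth rate–is an approximation of the same formula, since ln 2 ≈ 0.693 and ln(1 + r) ≈ r when r is small.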

Exponential growth is very common in nature and in human society. So the fact that a quantity is growing exponentially does not in itself make that quantity special, nor does it give that quantity unusual, counterintuitive dynamics.

The speed and capacity of computers have grown exponentially, which in itself is not remarkable. What is remarkable is how fast that growth has been. A rule of thumb called “Moore’s Law” says that the speed and capacity of computers have a doubling time of 18 months, which corresponds to a growth rate of about 60% per year.  Moore’s Law has held true for roughly fifty years–that’s about 33 doublings, or roughly a ten-billion-fold increase in capacity.
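
The same arithmetic checks these figures:

```python
# A doubling time of 1.5 years implies an annual growth factor of 2**(1/1.5).
print(2 ** (1 / 1.5) - 1)  # ~0.587, i.e., roughly 60% per year

# Fifty years at that pace is about 33 doublings.
print(2 ** (50 / 1.5))     # ~1.1e10, roughly a ten-billion-fold increase
```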

The Singularity is Not a Literal Singularity

As a first step in considering the plausibility of the Singularity hypothesis, let’s consider the prospect of a literal singularity–where the rate of improvement in machine intelligence literally becomes infinite at some point in the future. This requires that machine intelligence grow faster than any exponential, so that the doubling time gets smaller and smaller, and eventually goes to zero.

I don’t know of any theoretical basis for expecting a literal singularity.  There is virtually nothing in the natural or human world that grows super-exponentially over time–and even “ordinary” super-exponential growth does not yield a literal singularity. In short, it’s hard to see how the AI “Singularity” could possibly be a true mathematical singularity.
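
To make this concrete, consider a quantity whose doubling time shrinks toward zero, such as f(t) = e^(t²). Solving f(t + d) = 2f(t) for the doubling time d gives:

```latex
d(t) \;=\; \sqrt{t^2 + \ln 2} \,-\, t \;\approx\; \frac{\ln 2}{2t} \;\to\; 0 \quad \text{as } t \to \infty
```

Yet f(t) is finite at every finite time t. A literal singularity requires something qualitatively stronger, such as hyperbolic growth g(t) = 1/(T - t), which really does become infinite at the finite time T.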

So if the Singularity is not literally a singularity, what is it?  The next post will start with that question.

AI and Policy Event in DC, December 8

Princeton’s Center for Information Technology Policy (CITP) recently launched an initiative on Artificial Intelligence, Machine Learning, and Public Policy.  On Friday, December 8, 2017, we’ll be in Washington DC talking about AI and policy.

The event is at the National Press Club, from 12:15 to 2:15pm on Friday, December 8.  Lunch will be provided for those who register in advance.

The agenda includes:

  • Ed Felten, with a background briefing on AI and the AI policy landscape,
  • Arvind Narayanan on AI and fairness,
  • Olga Russakovsky on diversifying the AI workforce,
  • Chloe Bakalar on AI and ethics, and
  • Nick Feamster on AI and freedom of expression.

For those who can stay longer, we’ll have a roundtable discussion with the speakers, starting at 2:30.

On Encryption, Archiving, and Accountability

“As Elites Switch to Texting, Watchdogs Fear Loss of Accountability,” says a headline in today’s New York Times. The story describes a rising concern among rule enforcers and compliance officers:

Secure messaging apps like WhatsApp, Signal and Confide are making inroads among lawmakers, corporate executives and other prominent communicators. Spooked by surveillance and wary of being exposed by hackers, they are switching from phone calls and emails to apps that allow them to send encrypted and self-destructing texts. These apps have obvious benefits, but their use is causing problems in heavily regulated industries, where careful record-keeping is standard procedure.

Among those “industries” is the government, where laws often require that officials’ work-related communications be retained, archived, and available to the public under the Freedom of Information Act. The move to secure messaging apps frustrates these goals.

The switch to more secure messaging is happening for good reason: old-school messages are increasingly vulnerable to compromise. The DNC and the Clinton campaign are among the many organizations that have paid a price for underestimating these risks.

The tradeoffs here are real. But this is not just a case of choosing between insecure-and-compliant and secure-and-noncompliant. The new secure apps have three properties that differ from old-school email: they encrypt messages end-to-end, from the sender to the receiver; they sometimes delete messages shortly after they are transmitted and read; and they are set up and controlled by the end user rather than the employer.

If the concern is lack of archiving, then the last property–user control of the account, rather than employer control–is the main problem. And of course that has been a persistent problem even with email. Public officials using their personal email accounts for public business is typically not allowed (and when it happens by accident, messages are supposed to be forwarded to official accounts so they will be archived), but unreported use of personal accounts has been all too common.

Much of the reporting on this issue (but not the Times article) makes the mistake of conflating the personal-account problem with the fact that these apps use encryption. There is nothing about end-to-end encryption of data in transit that is inconsistent with archiving. The app could record messages and then upload them to an archive–with this upload also protected by end-to-end encryption as a best practice.

The second property of these apps–deleting messages shortly after use–has more complicated security implications. Again, the message becoming unavailable to the user shortly after use need not conflict with archiving. The message could be uploaded securely to an archive before deleting it from the endpoint device.
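
As a sketch of how this could work–a hypothetical design, not a description of any existing app, with placeholder upload and storage functions–the client encrypts each message to the archive’s public key, uploads it, and only then deletes the local copy. Using the PyNaCl library:

```python
# Hypothetical archive-then-delete flow. upload_to_archive is a placeholder,
# not a real API. PyNaCl's SealedBox ensures that only the archive's
# (carefully guarded) private key can decrypt what gets uploaded.
from nacl.public import PrivateKey, SealedBox

archive_key = PrivateKey.generate()  # in practice, generated and held by the archive

def upload_to_archive(msg_id: str, ciphertext: bytes) -> None:
    ...  # placeholder: transmit to the archive server

def archive_then_delete(msg_id: str, message: bytes, local_store: dict) -> None:
    sealed = SealedBox(archive_key.public_key).encrypt(message)
    upload_to_archive(msg_id, sealed)  # archive first...
    del local_store[msg_id]            # ...then delete from the endpoint
```

The ordering matters: deleting only after a successful upload means that an upload failure leaves a message on the device a little longer, rather than leaving a gap in the archive.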

You might ask why the user should lose access to a message when that message is still stored in an archive. But this makes some sense as a security precaution. Most compromises of communications happen through the user’s access, for example because an attacker can get the user’s login credentials by phishing. Taking away the user’s access, while retaining access in a more carefully guarded archive, is a reasonable security precaution for sensitive messages.

But of course the archive still poses a security risk. Although an archive ought to be more carefully protected than a user account would be, the archive is also a big, high-value target for attackers. The decision to create an archive should not be taken lightly, but it may be justified if the need for accountability is strong enough and the communications are not overly sensitive.

The upshot of all of this is that the most modern approaches to secure communication are not entirely incompatible with the kind of accountability needed for government and some other users.  Accountable versions of these types of services could be created. These would be less secure than the current versions, but more secure than old-school communications. The barriers to creating them are institutional, not technical.