Redefining The Singularity

[Image: techno-human.jpg]

The technological singularity has quickly become one of the most controversial concepts in futurism. It refers to a theoretical future period when superintelligence emerges through technological means. During a recent conference on the future of artificial intelligence (A.I.), futurist Anders Sandberg proposed that definitions of this concept share three major commonalities.

The term was popularized by computer scientist Vernor Vinge in 1993. He recently expounded on the creation of the concept and the reasoning behind it:

the spectacular feature of A.I. was not making something as smart as a human, but creating minds that were more intelligent than humans. That would be a different type of technological advance. That would change the thing that is the top creative element in technological progress, and since it would be beyond human intelligence, there is a certain unknowability about what would happen beyond that point. Therefore, I came up with the metaphor of the singularity as it is used with black holes in general relativity, reflecting the fact that there is not much information you can imagine beyond the point in time when super-human intelligence comes into place.

Several theorists have hypothesized about how the singularity will happen, when it will happen, and how it will change human nature. In 2007, artificial intelligence expert Ben Goertzel published a paper in Artificial Intelligence outlining the main scenarios futurists had proposed thus far. They included everything from a Sysop scenario, in which a highly powerful benevolent A.I. effectively becomes a “system operator,” to a Skynet scenario, in which an A.I. is created, improves itself, and malevolently enslaves or annihilates humanity. I am most closely aligned with the Kurzweilian scenario. I believe that humanity will create advanced A.I. that can create better, more advanced A.I. However, I also believe we will intimately merge with technology. By the end of this process humanity will be essentially post-biological in nature. I suspect that the transition will not be abrupt or particularly chaotic; it will happen gradually over the span of decades (in some ways it has already started happening).

Either way, I am writing this post because I would like to start an important discussion about the term “singularity.” Although I have referred to myself as a “singularitarian” and count myself as a defender of the Kurzweilian scenario, I find the term singularity problematic. As Vinge stated, the term singularity is used to suggest unknowability beyond a certain technological event horizon. However, I posit that this “technological event horizon” is not an actual future reality. I believe that there will come a time when humans are no longer the “top creative element of technological progress,” but a “singularity” will not happen. What I mean is that if we keep using the term “singularity,” it may start to metaphorically resemble the carrot-and-stick idiom:

[Image: carrot-and-stick-2.jpg]

If humans start artificially enhancing their own intelligence in the 2030s and developing relationships with advanced A.I., then the decades currently predicted to play host to the singularity (e.g., the 2040s and 2050s) will start to become clearer to us than they currently are (i.e., they will not constitute a technological singularity).

Vernor Vinge has admitted as much, stating that:

If you became one of the supersmart creatures, things would not be any more unintelligible to you than the current world is to un-enhanced humans.

Furthermore, we cannot remain intellectually comfortable with the term singularity if we are starting to make predictions about a post-singularity world. Several futurists, including Ray Kurzweil, have already started proposing probable post-singularity developments. But making these predictions contradicts the metaphorical validity of the term. If the singularity metaphor were apt, we would find ourselves facing a literal information black hole. But I do not think that is what we find ourselves facing.

As a futurist, I feel that we need a new term to better describe what we mean when we say technological singularity. I do not yet know what term would fit best. The term “infinitely self-generating technology,” a feature of the singularity identified by Mike Rugnetta of PBS Idea Channel, has a nice ring to it. However, I can already think of a host of reasons why that term is problematic too.

What do you think? Is the technological singularity a useful concept?

Discuss this on Hubski or let me know what you think on Twitter!

 