Can Futurists Predict the Year of the Singularity?

The end of the world as we know it is near. And that’s a good thing, according to many of the futurists who are predicting the imminent arrival of what’s been called the technological singularity.

The technological singularity is the idea that technological progress, particularly in artificial intelligence, will reach a tipping point at which machines become exponentially smarter than humans. It has been a hot topic of late.

Well-known futurist and Google engineer Ray Kurzweil (also co-founder and chancellor of Singularity University) reiterated his bold prediction at Austin’s South by Southwest (SXSW) festival this month that machines will match human intelligence by 2029; he has previously said the singularity itself will occur by 2045. That’s two years before SoftBank CEO Masayoshi Son’s prediction of 2047, made at the Mobile World Congress (MWC) earlier this year.

Author of the seminal book on the topic, The Singularity Is Near, Kurzweil said during the SXSW festival that “what’s actually happening is [machines] are powering all of us. …They’re making us smarter. They may not yet be inside our bodies, but by the 2030s, we will connect our neocortex, the part of our brain where we do our thinking, to the cloud.”

That merger of man and machine—sometimes referred to as transhumanism—is the same concept that Tesla and SpaceX CEO Elon Musk talks about when discussing development of a neural lace. For Musk, however, an interface between the human brain and computers is vital to keep our species from becoming obsolete when the singularity hits.

Musk is also the driving force behind OpenAI, a billion-dollar nonprofit dedicated to ensuring the development of artificial general intelligence (AGI) is beneficial to humanity. AGI is another term for human-level machine intelligence. What most people refer to as AI today is weak or narrow artificial intelligence—a machine capable of “thinking” within a very narrow range of concepts or tasks.

Futurist Ben Goertzel, who among his many roles is chief scientist at financial prediction firm Aidyia Holdings and robotics company Hanson Robotics (and advisor to Singularity University), believes AGI is possible well within Kurzweil’s timeframe. The singularity is harder to predict, he says on his personal website, estimating the date anywhere between 2020 and 2100.

“Note that we might achieve human-level AGI, radical health-span extension and other cool stuff well before a singularity—especially if we choose to throttle AGI development rate for a while in order to increase the odds of a beneficial singularity,” he writes.

Meanwhile, billionaire Son of SoftBank, a multinational telecommunications and Internet firm based in Japan, predicts superintelligent robots will surpass humans in both number and brain power by 2047.

He is putting a lot of money toward making it happen. The investment arm of SoftBank, for instance, recently poured $100 million into CloudMinds, a startup working on cloud-connected robots that effectively transplant a machine’s “brain” to the cloud. Son is also creating the world’s biggest tech venture capital fund, to the tune of $100 billion.

“I truly believe it’s coming, that’s why I’m in a hurry—to aggregate the cash, to invest,” he was quoted as saying at the MWC.

History of prediction

Kurzweil, Son, Goertzel and others are just the latest generation of futurists who have observed that humanity is accelerating toward a new paradigm of existence, largely due to technological innovation.

There are hints that philosophers as early as the 19th century, during the upheavals of the Industrial Revolution, recognized that the human race was on a fast track to a different sort of reality. It wasn’t until the 1950s, however, that the modern-day understanding of the singularity first took form.

Mathematician John von Neumann had noted that “the ever-accelerating progress of technology … gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.”

In the 1960s, following his work with Alan Turing to decrypt Nazi communications, British mathematician I.J. Good invoked the singularity without naming it as such.

He wrote, “Let an ultra-intelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultra-intelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind.”

Science fiction writer and retired mathematics and computer science professor Vernor Vinge is usually credited with coining the term “technological singularity.” His 1993 essay, The Coming Technological Singularity: How to Survive in the Post-Human Era, predicted the moment of technological transcendence would come within 30 years.

Vinge explains in his essay why he thinks the term “singularity”—in physics, a point such as the center of a black hole where the usual models of space-time break down—is apt: “It is a point where our models must be discarded and a new reality rules. As we move closer and closer to this point, it will loom vaster and vaster over human affairs till the notion becomes commonplace. Yet when it finally happens it may still be a great surprise and a greater unknown.”

Prediction an inexact science

But is predicting the singularity even possible?

A paper by Stuart Armstrong et al. suggests such predictions are best guesses at most. A database compiled by the Machine Intelligence Research Institute (MIRI), a nonprofit dedicated to ensuring advanced AI benefits society, catalogs 257 AI predictions made between 1950 and 2012 in the scientific literature. Of these, 95 gave timelines for AI development.

“The AI predictions in the database seem little better than random guesses,” the authors write. For example, the researchers found that “there is no evidence that expert predictions differ from those of non-experts.” They also observed a strong pattern that showed most AI prognostications fell within a certain “sweet spot”—15 to 25 years from the moment of prediction.

Others have cast doubt that the singularity is achievable in the time frames put forth by Kurzweil and Son.

Paul Allen, co-founder of Microsoft and founder of the Allen Institute for Artificial Intelligence, among other ventures, has written that such a technological leap forward is still far in the future.

“[I]f the singularity is to arrive by 2045, it will take unforeseeable and fundamentally unpredictable breakthroughs, and not because the Law of Accelerating Returns made it the inevitable result of a specific exponential rate of progress,” he writes, referring to Kurzweil’s concept that past rates of technological progress can be used to predict future rates.
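
As a rough illustration of that kind of extrapolation (and not Kurzweil’s actual model), the short Python sketch below assumes a made-up “capability index” that sits at 1.0 in 2017 and doubles every two years; both the starting value and the doubling period are invented purely for illustration.

# Minimal sketch of exponential extrapolation in the spirit of the
# Law of Accelerating Returns. The metric, baseline value, and
# doubling period are hypothetical, chosen only to show the arithmetic.

def extrapolate(baseline, doubling_years, start_year, target_year):
    """Project a metric forward assuming it doubles every `doubling_years`."""
    elapsed = target_year - start_year
    return baseline * 2 ** (elapsed / doubling_years)

# Example: a hypothetical capability index of 1.0 in 2017, doubling every 2 years.
for year in (2029, 2045, 2047):
    print(year, extrapolate(1.0, 2.0, 2017, year))

Allen’s point is that no such smooth curve guarantees the breakthroughs themselves; the extrapolation only says where we would land if the assumed rate actually held.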

Extinction or transcendence?

Futurist Nikola Danaylov, who manages the Singularity Weblog, says a better question to ask is whether achieving the singularity would be a good thing or a bad thing.

“Is that going to help us grow extinct like the dinosaurs or is it going to help us spread through the universe like Carl Sagan dreamed of?” he tells Singularity Hub. “Right now, it’s very unclear to me personally.”

Danaylov argues that the singularity orthodoxy of today largely ignores the societal upheavals already under way. The idea that “technology will save us” will not lift people out of poverty or extend human life if technological breakthroughs only benefit those with money, he says.

“I’m not convinced [the singularity is] going to happen in the way we think it’s going to happen,” he says. “I’m sure we’re missing the major implications, the major considerations.

“We have tremendous potential to make it a good thing,” he adds.

Image Credit: Shutterstock

Peter Rejcek
https://www.peterrejcek.com/
Formerly the world’s only full-time journalist covering research in Antarctica, Peter became a freelance writer and digital nomad in 2015. Peter’s focus for the last decade has been on science journalism, but his interests and expertise include travel, outdoors, cycling, and Epicureanism (food and beer). Follow him at @poliepete.