From Here to Human-Level Artificial General Intelligence in Four (Not All That) Simple Steps

In the 15 years since I first introduced the term “artificial general intelligence” (AGI), the AI field has advanced tremendously. We now have self-driving cars, automated face recognition and image captioning, machine translation and expert AI game-players, and so much more.

However, these achievements remain essentially in the domain of “narrow AI”—AI that carries out tasks based on specifically-supplied data or rules, or carefully-created training situations. AIs that can generalize to unanticipated domains and confront the world as autonomous agents are still part of the road ahead.

The question remains: what do we need to do to get from today’s narrow AI tools, which have become mainstream in business and society, to the AGI envisioned by futurists and science fiction authors?

The Diverse Proto-AGI Landscape

While there is a tremendous diversity of perspectives and no shortage of technical and conceptual ideas on the path to AGI, there is nothing resembling consensus among experts on the matter.

For example, Google DeepMind co-founder Demis Hassabis has long favored closely brain-inspired approaches to AGI, and continues to publish papers in this direction. On the other hand, the OpenCog AGI-oriented project that I co-founded in 2008 is grounded in a less brain-oriented approach—it involves neural networks, but also heavily leverages symbolic-logic representations, probabilistic inference, and evolutionary program learning.

The bottom line is, just as we have many different workable approaches to manned flight—airplanes, helicopters, blimps, rockets, etc.—there may be many viable paths to AGI, some of which are more biologically inspired than others. And, somewhat like the Wright brothers, today’s AGI pioneers are proceeding largely via experiment and intuition, in part because we don’t yet know enough useful theoretical laws of general intelligence to proceed with AGI engineering in a mainly theory-guided way; the theory of AGI is evolving organically alongside the practice.

Four (Not Actually So) Simple Steps From Here to AGI

In a talk I gave recently at Josef Urban’s AI4REASON lab in Prague (where my son Zar is doing his PhD, by the way) I outlined “Four Simple Steps to Human-Level AGI.” The title was intended as dry humor, as actually none of the steps are simple at all. But I do believe they are achievable within our lifetime, maybe even in the next 5-10 years. Better yet, each of the four steps is currently being worked on by multiple teams of brilliant people around the world, including but by no means limited to my own teams at SingularityNET, Hanson Robotics, and OpenCog.

The good news is, I don’t believe we need radically better hardware, nor radically different algorithms, nor new kinds of sensors or actuators. We just need to use our computers and algorithms in a slightly more judicious way by doing the following.

1) Make cognitive synergy practical

We have a lot of powerful AI algorithms today, but we don’t use them together in sufficiently sophisticated ways, so we lose much of the synergetic intelligence that could come from using them together. By contrast, the different components in the human brain are tuned to work together with exquisite feedback and interplay. We need to make systems that enable richer and more thorough coordination of different AI agents at various levels into one complex, adaptive AI network.

For instance, within the OpenCog architecture, we seek to realize this by making different learning and reasoning algorithms work together on the Atomspace Hypergraph, which allows for the creation of hybrid networks consisting of symbolic and subsymbolic segments. Our probabilistic logic engine, which handles facts and beliefs, our evolutionary program learning engine, which handles how-to knowledge, our deep neural nets for handling perception—all of these cooperate in updating the same set of hypergraph nodes and links.
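As a toy illustration of this shared-knowledge-store idea (the class and the evidence-blending rule below are my own simplification, not the actual OpenCog API), several independent processes can read and update the same set of weighted links:

```python
# Toy sketch: multiple AI processes updating one shared hypergraph store.
# NOT the real OpenCog AtomSpace API; names and blending rule are illustrative.

class Atomspace:
    """Minimal shared knowledge store: nodes plus weighted links."""
    def __init__(self):
        self.nodes = set()
        self.links = {}  # (source, target) -> strength in [0, 1]

    def add_link(self, a, b, strength):
        self.nodes.update([a, b])
        # Blend new evidence with any existing estimate.
        old = self.links.get((a, b), strength)
        self.links[(a, b)] = 0.5 * (old + strength)

space = Atomspace()

# A "perception" process asserts what it sees in an image.
space.add_link("image_42", "cat", 0.9)
# A "logic" process contributes an abstract, rule-derived fact.
space.add_link("cat", "animal", 1.0)
# A second look at the image revises the estimate; evidence is blended.
space.add_link("image_42", "cat", 0.7)

print(space.links[("image_42", "cat")])  # blended toward 0.8
```

The point of the sketch is only that heterogeneous processes share and revise one body of knowledge, rather than each keeping a private model.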

On a different level, in the SingularityNET blockchain-based AI network, we work toward cognitive synergy by allowing different AI agents using different internal algorithms to make requests of each other and share information and results. The idea is that the network of AI agents, using a customized token for exchanging value, can become an overall cognitive economy of minds with an emergent-level intelligence going beyond the intelligence of the individual agents. This is a modern blockchain-based realization of AI pioneer Marvin Minsky’s idea of intelligence as a “society of mind.”
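A highly simplified sketch of such a token-mediated "economy of minds" (all agent names and the pricing scheme below are hypothetical, not the actual SingularityNET protocol) looks like this: agents pay each other tokens to delegate subtasks, so composite behavior emerges from exchange rather than central design.

```python
# Toy sketch of AI agents buying services from each other with tokens.
# Hypothetical names and mechanics; not the SingularityNET platform API.

class Agent:
    def __init__(self, name, service, price):
        self.name, self.service, self.price = name, service, price
        self.balance = 10  # starting tokens

class Marketplace:
    def __init__(self):
        self.agents = {}

    def register(self, agent):
        self.agents[agent.name] = agent

    def request(self, caller_name, provider_name, data):
        caller, provider = self.agents[caller_name], self.agents[provider_name]
        if caller.balance < provider.price:
            raise ValueError("insufficient tokens")
        caller.balance -= provider.price    # token transfer pays for the call
        provider.balance += provider.price
        return provider.service(data)

market = Marketplace()
market.register(Agent("vision", lambda img: "cat", price=2))
# The captioner delegates image recognition to the vision agent, paying for it.
market.register(Agent(
    "captioner",
    lambda img: f"a photo of a {market.request('captioner', 'vision', img)}",
    price=3))
market.register(Agent("user", None, price=0))

caption = market.request("user", "captioner", "img.png")
print(caption)  # a photo of a cat
```

No single agent knows the whole pipeline; the captioning behavior emerges from one agent purchasing another's service.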

2) Bridge symbolic and subsymbolic AI

I believe AGI will most effectively be achieved via bridging of the algorithms used for low-level intelligence, such as perception and movement (e.g., deep neural networks), with the algorithms used for high-level abstract reasoning (such as logic engines).

Deep neural networks have had amazing successes lately in processing multiple sorts of data, including images, video, audio, and to a lesser extent, text. However, it is becoming increasingly clear that these particular neural net architectures are not quite right for handling abstract knowledge. Cognitive scientist and AI entrepreneur Gary Marcus has written articulately on this; SingularityNET AI researcher Alexey Potapov has recently reported on his experiments probing the limits of the generalization ability of current deep neural net frameworks.

My own intuition is that the shortest path to AGI will be to use deep neural nets for what they’re best at and to hybridize them with more abstract AI methods like logic systems, in order to handle more advanced aspects of human-like cognition.
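A minimal sketch of this hybridization pattern (illustrative only; the perception stage below is a stand-in for a real neural net, and the rules are toy examples) feeds confident neural outputs into a symbolic forward-chaining step:

```python
# Toy neuro-symbolic pipeline: "neural" percepts feed a symbolic rule engine.

def fake_perception(image):
    # Stand-in for a deep net: returns (label, confidence) pairs.
    return [("cat", 0.92), ("sofa", 0.85)]

# Simple IF-premise-THEN-conclusion rules for abstract inference.
RULES = [
    ("cat", "mammal"),
    ("mammal", "animal"),
]

def infer(percepts, rules, threshold=0.5):
    """Forward-chain symbolic rules over sufficiently confident percepts."""
    derived = {label for label, conf in percepts if conf >= threshold}
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

labels = infer(fake_perception("photo.jpg"), RULES)
print(sorted(labels))  # ['animal', 'cat', 'mammal', 'sofa']
```

The neural side supplies grounded, uncertain percepts; the symbolic side derives abstractions ("animal") that no pixel-level pattern directly encodes.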

3) Whole-organism architecture

Humans are bodies as much as minds, and so achieving human-like AGI will require embedding AI systems in physical systems capable of interacting with the everyday human world in nuanced ways.

The “whole organism architecture” (WHOA!!!) is a nice phrase introduced by my collaborator in robotics and mayhem, David Hanson. Currently, we are working with his beautiful robotic creation Sophia, whose software development I have led as a platform for experimenting with OpenCog and SingularityNET AI.

General intelligence does not require a human-like body, nor any specific body. However, if we want to create an AGI that manifests human-like cognition in particular and that can understand and relate to humans, then this AGI will need to have a sense of the peculiar mix of cognition, emotion, socialization, perception, and movement that characterizes human reality. By far the best way for an AGI to get such a sense is for it to have the ability to occupy a body that at least vaguely resembles the human body.

The need for whole organism architecture ties in with the importance of experiential learning for AGI. In the mind of a human baby, all sorts of data are mixed up in a complex way, and the goals and objectives need to be figured out along with the categories, structures, and dynamics in the world. Even the distinction between self and other and the notion of a persistent object have to be learned. Ultimately, an AGI will need to do this sort of foundational learning for itself as well.

While it is not necessarily wrong to supply one’s AGI system with data from texts and databases, one still needs to build a system that interacts with, perceives, and explores the world autonomously and builds its own model of itself and the world. The semantics of everything it learns is then grounded in its own observations. If it learns about something abstract, like language or math, it has to be able to ground the semantics of that in its own life, as well as in the abstraction.

Experiential learning does not require robotics. But whole-organism robotics does provide an extremely natural venue for moving beyond today’s training-by-example AIs to experiential learning.

4) Scalable meta-learning

AGI needs not just learning but also learning how to learn. An AGI will need to apply its reasoning and learning algorithms recursively to itself so as to automatically improve its functionality.

Ultimately, the ability to apply learning to improve learning should allow AGIs to progress far beyond human capability. At the moment, meta-learning remains a difficult but critical research pursuit. At SingularityNET, for instance, we are just now beginning to apply OpenCog’s AI to recognize patterns in its own effectiveness over time, so as to improve its own performance.
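As a toy sketch of this idea (illustrative only; not OpenCog code, and the method names and success rates are invented), a system can record how often each of its learning methods succeeds and shift future effort toward the ones that work, which is a crude form of learning about its own learning:

```python
# Toy meta-learning loop: track each method's observed success rate and
# greedily allocate trials to the best performer. Rates are hypothetical.

import random

# Each method's hidden true success probability; the meta-learner only
# discovers these through observed outcomes, never reads them directly.
true_rate = {"neural": 0.8, "evolutionary": 0.4}

stats = {m: {"wins": 0, "trials": 0} for m in true_rate}

def pick_method():
    # Try every method at least once, then exploit the best observed rate.
    untried = [m for m, s in stats.items() if s["trials"] == 0]
    if untried:
        return untried[0]
    return max(stats, key=lambda m: stats[m]["wins"] / stats[m]["trials"])

random.seed(42)
for _ in range(200):
    m = pick_method()
    stats[m]["trials"] += 1
    if random.random() < true_rate[m]:
        stats[m]["wins"] += 1

# The better method ends up receiving most of the trials.
print({m: s["trials"] for m, s in stats.items()})
```

A real meta-learning system would of course adapt the learning algorithms themselves, not just the allocation among them, but the feedback loop (observe own performance, adjust own behavior) is the same in kind.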

Toward Beneficial General Intelligence

If my perspective on AGI is correct, then once each of these four aspects is advanced beyond the current state, we’re going to be there—AGI at the human level and beyond.

I find this prospect tremendously exciting, and just a little scary. I am also aware that some observers, including big names like Stephen Hawking and Elon Musk, have expressed the reverse sentiment: more fear than excitement. I think nearly everyone who is serious about AGI development has put a lot of thought into the mitigation of the relevant risks.

One conclusion I have come to via my work on AI and robotics is: if we want our AGIs to absorb and understand human culture and values, the best approach will be to embed these AGIs in shared social and emotional contexts with people. I feel we are doing the right thing in our work with Sophia at Hanson Robotics; in recent experiments, we used Sophia as a meditation guide.

I have also been passionate in the last few years about working to ensure AI develops in a way that is egalitarian and participatory across the world economy, rather than in a manner driven mainly by the bottom lines of large corporations or the military needs of governments.  Put simply: I would rather have a benevolent, loving AI become superintelligent than a killer military robot, an advertising engine, or an AI hedge fund. This has been part of my motivation in launching the SingularityNET project—to use the power of AI and blockchain together to provide an open marketplace in which anyone on the planet can provide or utilize the world’s most powerful AI, for any purpose. If an AGI emerges from a participatory “economy of minds” of this nature, it is more likely to have an ethical and inclusive mindset coming out of the gate.

We are venturing into unknown territory here, not only intellectually and technologically, but socially and philosophically as well. Let us do our best to carry out this next stage of our collective voyage in a manner that is wise and cooperative as well as clever and fascinating.

Image Credit: Yurchanka Siarhei

Ben Goertzel
Dr. Ben Goertzel is the CEO of the decentralized AI network SingularityNET, a blockchain-based AI platform company, and the chief scientist at Hanson Robotics. Dr. Goertzel also serves as Chairman of the Artificial General Intelligence Society and the OpenCog Foundation. Dr. Goertzel is one of the world’s foremost experts in Artificial General Intelligence, a subfield of AI oriented toward creating thinking machines with general cognitive capability at the human level and beyond. He also has decades of expertise applying AI to practical problems in areas ranging from natural language processing and data mining to robotics, video gaming, national security, and bioinformatics. He has published 20 scientific books and 140+ scientific research papers, and is the main architect and designer of the OpenCog system and associated design for human-level general intelligence.