Thinking Like a Human: What It Means to Give AI a Theory of Mind

Last month, a team of self-taught AI gamers lost spectacularly against human professionals in a highly anticipated galactic melee. Taking place as part of the International Dota 2 Championships in Vancouver, Canada, the game showed that in broader strategic thinking and collaboration, humans remain on top.

The AI was a series of algorithms developed by the Elon Musk-backed non-profit OpenAI. Collectively dubbed the OpenAI Five, the algorithms use reinforcement learning to teach themselves how to play the game—and collaborate with each other—from scratch.

The fast-paced, multi-player Dota 2 video game is considered much harder for computers than chess or Go. Complexity is only part of it: the key here is for a group of AI algorithms to develop a type of “common sense,” a kind of intuition about what others are planning to do, and to respond in kind toward a common goal.

“The next big thing for AI is collaboration,” said Dr. Jun Wang at University College London. Yet today, even state-of-the-art deep learning algorithms flail in the type of strategic reasoning needed to understand someone else’s incentives and goals—be it another AI or human.

What AI needs, said Wang, is a type of deep communication skill that stems from a critical human cognitive ability: theory of mind.

Theory of Mind as a Simulation

By the age of four, children usually begin to grasp one of the fundamental principles of society: that their minds are not like other minds. Other people may have different beliefs, desires, emotions, and intentions.

And the critical part: by picturing themselves in other people’s shoes, they may begin to predict other people’s actions. In a way, their brains begin running vast simulations of themselves, other people, and their environment.

By allowing us to roughly grasp other people’s minds, theory of mind is essential for human cognition and social interactions. It’s behind our ability to communicate effectively and collaborate toward common goals. It’s even what lets us grasp false beliefs: ideas that people hold even though they deviate from the objective truth.

When theory of mind breaks down, as it sometimes does in autism, essential “human” skills such as storytelling and imagination also deteriorate.

To Dr. Alan Winfield, a professor of robot ethics at the University of the West of England, theory of mind is the secret sauce that will eventually let AI “understand” the needs of people, things, and other robots.

“The idea of putting a simulation inside a robot… is a really neat way of allowing it to actually predict the future,” he said.

Unlike machine learning, in which multi-layered neural nets extract patterns and “learn” from large datasets, Winfield is promoting something entirely different. Rather than relying on learning, the AI would be pre-programmed with an internal model of itself and the world that allows it to answer simple “what-if” questions.

For example, when navigating down a narrow corridor with an oncoming robot, the AI could simulate turning left, turning right, or continuing on its path and determine which action will most likely avoid collision. This internal model essentially acts like a “consequence engine,” said Winfield, a sort of “common sense” that helps guide its actions by predicting those of others around it.
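To make the idea concrete, here is a minimal sketch of what such a consequence engine might look like for that corridor scenario. Everything in it is an illustrative assumption rather than Winfield’s actual implementation: the candidate maneuvers, the distances, and the naive constant-velocity model of the oncoming robot are all made up for this example.

```python
# A minimal sketch of the "consequence engine" idea described above, not
# Winfield's actual code. Maneuvers, distances, and the constant-velocity
# model of the oncoming robot are illustrative assumptions.

import math

# Candidate maneuvers: (lateral shift in meters, forward speed in meters/second).
CANDIDATE_ACTIONS = {
    "keep_going": (0.0, 1.0),
    "veer_left":  (-0.8, 1.0),
    "veer_right": (0.8, 1.0),
    "stop":       (0.0, 0.0),
}

# Preference order: go straight if that looks safe, otherwise swerve, otherwise stop.
PREFERRED_ORDER = ["keep_going", "veer_left", "veer_right", "stop"]


def predict_other(other_pos, other_vel, t):
    """Naive internal model: assume the oncoming robot keeps its current velocity."""
    return (other_pos[0] + other_vel[0] * t, other_pos[1] + other_vel[1] * t)


def simulate_self(my_pos, action, t):
    """Roll my own position forward in time under a candidate maneuver."""
    shift, speed = CANDIDATE_ACTIONS[action]
    return (my_pos[0] + shift, my_pos[1] + speed * t)


def choose_action(my_pos, other_pos, other_vel, safe_dist=0.5):
    """Pick the most preferred maneuver whose predicted closest approach stays safe."""
    for action in PREFERRED_ORDER:
        closest = min(
            math.dist(simulate_self(my_pos, action, t), predict_other(other_pos, other_vel, t))
            for t in (0.5, 1.0, 2.0, 3.0)  # a few imagined future moments
        )
        if closest > safe_dist:
            return action
    return "stop"  # nothing looks safe, so freeze


# Example: two robots heading straight toward each other down the same corridor.
print(choose_action(my_pos=(0.0, 0.0), other_pos=(0.0, 6.0), other_vel=(0.0, -1.0)))
# -> "veer_left": going straight is predicted to collide, so the robot swerves.
```

The structure mirrors the description above: the robot imagines each action, predicts where the other robot will be, and only commits to a maneuver whose predicted closest approach keeps a safe margin.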

In a paper published earlier this year, Winfield showed a prototype robot that could in fact achieve this goal. By anticipating the behavior of others around it, the robot successfully navigated a corridor without collisions. The approach isn’t particularly efficient, though: the “mindful” robot took over 50 percent longer to complete its journey than it did without the simulation.

But to Winfield, the study is a proof-of-concept that his internal simulation works: it’s “a powerful and interesting starting point in the development of artificial theory of mind,” he concluded.

Eventually Winfield hopes to endow AI with a sort of storytelling ability. The internal model that the AI has of itself and others lets it simulate different scenarios and, crucially, tell a story of what its intentions and goals were at the time.

This is drastically different from deep learning algorithms, which normally cannot explain how they reach their conclusions. The “black box” nature of deep learning is a major stumbling block to building trust in these systems; the problem is especially acute for care-giving robots in hospitals or for the elderly.

An AI armed with theory of mind could simulate the mind of its human companions to tease out their needs. It could then determine appropriate responses—and justify those actions to the human—before acting on them. Less uncertainty results in more trust.

Theory of Mind in a Neural Network

DeepMind took a different approach: rather than a pre-programmed consequence engine, its team developed a series of neural networks that display a sort of theory of mind.

The AI, “ToMnet,” can observe and learn from the actions of other neural networks. ToMnet is a collective of three neural nets: the first learns the tendencies of other AIs based on a “rap sheet” of their past actions. The second forms a general concept of their current state of mind, meaning their beliefs and intentions at a particular moment. The output of both networks then feeds into the third, which predicts the observed AI’s actions given the situation. Similar to other deep learning systems, ToMnet gets better with experience.
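Based purely on that description, a schematic sketch of the three-network layout might look something like the following. The layer sizes, feature encodings, and variable names here are assumptions made for illustration, not DeepMind’s published architecture or code.

```python
# A schematic sketch of the three-network layout described above, based only on
# the description in this article. Dimensions and names are assumptions.

import torch
import torch.nn as nn


class CharacterNet(nn.Module):
    """Summarizes an agent's past behavior (its "rap sheet") into an embedding."""
    def __init__(self, step_dim, embed_dim=8):
        super().__init__()
        self.rnn = nn.GRU(step_dim, embed_dim, batch_first=True)

    def forward(self, past_trajectories):            # (batch, steps, step_dim)
        _, h = self.rnn(past_trajectories)
        return h.squeeze(0)                           # (batch, embed_dim)


class MentalStateNet(nn.Module):
    """Summarizes the current episode so far into a 'state of mind' embedding."""
    def __init__(self, step_dim, embed_dim=8):
        super().__init__()
        self.rnn = nn.GRU(step_dim, embed_dim, batch_first=True)

    def forward(self, current_episode):               # (batch, steps, step_dim)
        _, h = self.rnn(current_episode)
        return h.squeeze(0)


class PredictionNet(nn.Module):
    """Predicts the observed agent's next action from both embeddings plus the current world state."""
    def __init__(self, state_dim, embed_dim=8, num_actions=5):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(state_dim + 2 * embed_dim, 64),
            nn.ReLU(),
            nn.Linear(64, num_actions),
        )

    def forward(self, world_state, character, mental_state):
        return self.head(torch.cat([world_state, character, mental_state], dim=-1))


# Toy forward pass with made-up dimensions: 6 features per observed step,
# a 10-feature world state, and 5 possible actions (e.g. 4 moves plus stay).
char_net, mental_net = CharacterNet(step_dim=6), MentalStateNet(step_dim=6)
pred_net = PredictionNet(state_dim=10)

past = torch.randn(1, 20, 6)        # 20 steps of the observed agent's past behavior
current = torch.randn(1, 5, 6)      # 5 steps of the current episode
state = torch.randn(1, 10)          # what the world looks like right now

logits = pred_net(state, char_net(past), mental_net(current))
print(logits.softmax(dim=-1))       # predicted distribution over the agent's next actions
```

The point of the split is visible in the code: one network captures who the agent tends to be, another captures what it seems to want right now, and the third turns both into a concrete prediction.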

In one experiment, ToMnet “watched” three AI agents maneuver around a room collecting colored boxes. The AIs came in three flavors: one was blind, in that it couldn’t perceive the shape and layout of the room. Another was amnesiac; these guys had trouble remembering their last steps. The third could both see and remember.

After training, ToMnet began to predict the flavor of an AI by watching its actions—the blind tend to move along walls, for example. It could also correctly predict the AI’s future behavior, and—most importantly—understand when an AI held a false belief.

For example, in another test the team programmed one AI to be near-sighted and changed the layout of the room. Better-sighted agents rapidly adapted to the new layout, but the near-sighted guys stuck to their original paths, falsely believing that they were still navigating the old environment. ToMnet teased out this quirk, accurately predicting the outcome by (in essence) putting itself in the near-sighted AI’s digital shoes.
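As a toy illustration of why modeling beliefs matters here, the snippet below sets up a hypothetical near-sighted agent on a grid. The sight radius, goal positions, and helper functions are all made up for this sketch, in the spirit of (not copied from) DeepMind’s experiment; the point is only that a prediction based on the agent’s stale belief differs from one based on the true layout.

```python
# A toy false-belief setup: a near-sighted agent keeps heading toward where it
# remembers the goal being, because the change happened outside its sight.
# All names and numbers are assumptions for illustration.

def step_toward(pos, target):
    """Greedy one-square move toward a target on a grid."""
    x, y = pos
    tx, ty = target
    if x != tx:
        return (x + (1 if tx > x else -1), y)
    if y != ty:
        return (x, y + (1 if ty > y else -1))
    return pos


def agent_belief(pos, true_goal, remembered_goal, sight=2):
    """A near-sighted agent only updates its belief if the goal is within sight."""
    if abs(pos[0] - true_goal[0]) + abs(pos[1] - true_goal[1]) <= sight:
        return true_goal          # it can see the change
    return remembered_goal        # otherwise it keeps its stale (false) belief


# Where the agent last saw the goal, and where the goal actually is now.
remembered_goal, true_goal = (8, 0), (0, 8)
pos = (4, 0)

believed = agent_belief(pos, true_goal, remembered_goal)
print("ToM-style prediction (uses the agent's belief):", step_toward(pos, believed))
print("Naive prediction (uses the true layout):       ", step_toward(pos, true_goal))
```

An observer that reasons from the agent’s belief correctly predicts it will keep marching toward the old goal; an observer that only looks at the true layout gets the prediction wrong.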

To Dr. Alison Gopnik, a developmental psychologist at UC Berkeley who was not involved in the study, the results show that neural nets have a striking ability to learn skills on their own by observing others. But it’s still far too early to say that these AIs have developed an artificial theory of mind.

ToMnet’s “understanding” is deeply entwined with its training context—the room, the box-picking AI and so on—explained Dr. Josh Tenenbaum at MIT, who did not participate in the study. That constraint makes ToMnet, unlike children, far less capable of predicting behaviors in radically new environments. It would also struggle to model the actions of a vastly different AI or a human.

But both Winfield’s and DeepMind’s efforts show that computers are beginning to “understand” each other, even if that understanding is still rudimentary.

And as they continue to better grasp each other’s minds, they are moving closer to dissecting ours—messy and complicated as we may be.

Image Credit: Immersion Imagery / Shutterstock.com

Shelly Fan
Shelly Xuelai Fan is a neuroscientist-turned-science writer. She completed her PhD in neuroscience at the University of British Columbia, where she developed novel treatments for neurodegeneration. While studying biological brains, she became fascinated with AI and all things biotech. Following graduation, she moved to UCSF to study blood-based factors that rejuvenate aged brains. She is the co-founder of Vantastic Media, a media venture that explores science stories through text and video, and runs the award-winning blog NeuroFantastic.com. Her first book, "Will AI Replace Us?" (Thames & Hudson) was published in 2019.