Why Google DeepMind Is Putting AI on the Psychologist’s Couch

Artificial intelligence can now carry out many of the same cognitive tasks humans can, but we still don’t really understand how AIs think. Google DeepMind plans to turn long-standing tests of human cognitive skills on machine minds to learn how they work.

A long-standing problem in AI research is that deep neural networks are “black boxes”: you can’t tell how these algorithms work just by looking at their code. They teach themselves by training on data, and there’s no simple flow diagram a human can follow. Instead, the way these networks reach decisions is encoded in the weights of thousands of simulated neurons.

But they’re not the only inscrutable thinking machines. Simply poking around in the human brains they are modeled on yields few clues as to how people reason either, and so over the years, cognitive psychologists have developed tests designed to probe our mental faculties.

Now DeepMind has built a virtual 3D laboratory called Psychlab that lets machines take these tests too, and the company has open-sourced it so any AI researcher can put their algorithms through their paces.

Psychlab is built on the company’s DeepMind Lab platform, designed for testing virtual agents in 3D environments. It recreates the set-up a human taking part in an experiment would see by providing the subject with a first-person view of a virtual computer monitor that displays a variety of classic cognitive tests.

These include measures of the ability to search for objects in a scene, detect change, remember a growing list of items, track moving objects, or recall stimulus-response pairings. Typically a human would use a mouse to respond to on-screen tasks, but virtual agents use the direction of their gaze.
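Since Psychlab ships as a set of levels for the open-source DeepMind Lab platform, driving one of its tasks from code looks much like driving any other DeepMind Lab level. The sketch below is illustrative rather than definitive: the level name and action values are assumptions based on how DeepMind Lab’s Python API generally works, not details taken from this article.

    import numpy as np
    import deepmind_lab

    # Load a Psychlab task as an ordinary DeepMind Lab level (the exact
    # level name here is an assumption for illustration).
    env = deepmind_lab.Lab(
        'contributed/psychlab/visual_search',
        ['RGB_INTERLEAVED'],                      # first-person pixel view
        config={'width': '640', 'height': '480'})

    env.reset()

    # DeepMind Lab agents act through a vector of discrete controls; the
    # first two entries rotate the viewpoint, which is how a Psychlab
    # subject "points" its gaze at the virtual monitor instead of a mouse.
    look_right = np.zeros((7,), dtype=np.intc)
    look_right[0] = 20  # rotate gaze 20 pixels to the right this frame

    reward = env.step(look_right, num_steps=1)
    frame = env.observations()['RGB_INTERLEAVED']  # what the agent now sees

A learning agent would simply replace the fixed look_right action with the output of its policy network at every step.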

By mimicking the environment a human would see, the researchers say, humans and AIs can effectively take the same tests. That should not only make it easier to draw direct comparisons between the two, but also allow results to be connected with the existing academic literature in cognitive psychology.

Being able to draw on the accumulated wisdom of 150 years of psychology research could be hugely useful in understanding how the latest AI systems work, the researchers say in a paper published on the arXiv to coincide with the release of the tool.

In recent years there’s been an increasing focus on deep reinforcement learning systems that can carry out complicated tasks in simulated 3D environments. The complexity of these environments and the variety of strategies these systems can employ to solve problems make it hard to tease out what combination of cognitive abilities underlies their performance.

But by subjecting a state-of-the-art deep reinforcement learning agent called UNREAL to a variety of tests in Psychlab, the DeepMind researchers were able to uncover details about how its perceptual system worked, and even use this to improve its performance.

It turns out UNREAL has considerably worse acuity, or keenness of vision, than humans, which means it learns faster when presented with larger objects. Key to human acuity is a dense cluster of photoreceptors at the center of the retina called the fovea, which gives us particularly sharp vision at the center of our visual field.

By adding a simple model of the fovea to UNREAL, the researchers were able to improve the agent’s performance not just on the Psychlab experiments, but also on other standard DeepMind Lab tasks.
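The article doesn’t describe how the fovea was modeled, but the underlying idea, sampling the input image densely at the center of gaze and sparsely in the periphery, can be sketched in a few lines of NumPy. The warp function and parameters below are illustrative assumptions, not DeepMind’s actual implementation.

    import numpy as np

    def foveate(image, out_size=64, gain=3.0):
        """Resample an image so pixel density is highest at the center.

        Output coordinates are warped so samples near the center land on
        tightly packed input pixels while peripheral samples are widely
        spaced, crudely mimicking the retina's dense central fovea.
        """
        h, w = image.shape[:2]
        lin = np.linspace(-1.0, 1.0, out_size)         # uniform output grid
        warped = np.tan(lin * np.arctan(gain)) / gain  # compress center, stretch edges
        ys = np.clip(((warped + 1) / 2 * (h - 1)).round().astype(int), 0, h - 1)
        xs = np.clip(((warped + 1) / 2 * (w - 1)).round().astype(int), 0, w - 1)
        return image[np.ix_(ys, xs)]                   # gather sampled rows/columns

Feeding an agent foveate(frame) instead of a uniformly downsampled frame gives it sharp central vision at the same input size, which is the trade-off the fovea makes in biological eyes.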

DeepMind isn’t the only group applying cognitive tests to AI. University of Michigan researchers have been subjecting reinforcement learning agents to maze navigation tasks that have long been used to test memory and learning in rats.

The mazes were built in the 3D world of Minecraft, and the agents were set increasingly complex tasks and given different rewards to find out which cognitive skills were important for successfully navigating the experiment. The researchers found that being able to retrieve memories based on the context in which they were stored was key to solving their tests.

As AI continues to improve and develop higher-order cognitive skills such as reasoning, emotional intelligence, and planning, more sophisticated psychological tests could become a crucial way for us to understand how their mental processes differ from ours, as they almost certainly will.

That could be an important tool to ensure everyone gets along in a future where humans and AI have to coexist.

Image Credit: frankie’s / Shutterstock.com

Edd Gent
I am a freelance science and technology writer based in Bangalore, India. My main areas of interest are engineering, computing and biology, with a particular focus on the intersections between the three.