
An AI for Image Recognition Spontaneously Gained a ‘Number Sense’


Many of us struggle with mathematical concepts, yet we’re all equipped with an innate “number sense,” or numerosity. Thanks to a strange group of “number neurons” buried in the visual cortex, human newborns, monkeys, crows, and other animals have the curious superpower to glance at a scene and intuitively gauge how much stuff is in it—long before knowing how to count.

Now AI has learned to do the same on its own.

In a study published last week in Science Advances, a team led by Dr. Andreas Nieder at the University of Tübingen in Germany found that a biologically inspired deep neural network spontaneously developed a number sense, despite being trained only to recognize objects in images.

Similar to the human brain, the artificial visual cortex gradually developed units with a specific sense of abstract quantity. Without the team ever explicitly teaching the AI what a number is, the network could discriminate between large and small amounts within an image, with each of its number units precisely “tuned” to a particular number. When challenged on cognitive tests for numerosity—derived from those for humans and monkeys—the AI even made mistakes reminiscent of biological brains.

It seems that “the spontaneous emergence of the number sense is based on mechanisms inherent to the visual system,” the authors concluded.

The study is the latest example of using machine intelligence to probe human intelligence. “We can now have hypotheses about how things happen in the brain and can go back and forth from artificial networks to real networks,” said Nieder.

The Number Mystery

Although we often associate numbers with counting and math, most of us can eyeball two boxes of donuts and intuitively gauge which box has more—especially when the numbers are relatively small.

By four months old, human infants can already discriminate two items from three. With age and experience, our sense of numerosity grows stronger, although even adults still struggle with larger numbers—imagine trying to distinguish a crowd of 100,000 from 125,000, for example.

Yet despite numerosity’s critical role in processing numerical information, scientists have long scratched their heads over how this abstract sense emerges from number neurons. The classic cognitive psychology approach is to recruit toddlers and see how they respond to different quantities in colorful pictures; the neuroscience solution is to directly measure the electrical chattering of number neurons in animals.

Nieder and his team took a drastically different approach. Cognitive psychology is increasingly informing the engineering of smarter AI; so why not turn the tables and use AI to study cognition?

The Artificial Visual Network

Deep learning networks come in vastly different architectures. Here, the team went for a hierarchical convolutional neural network (HCNN), a design with deep biological roots. These models, popular and successful in computer vision applications, were originally inspired by the discovery of feature-detecting cells in the early visual cortex back in the 1950s.

Similar to how we process vision, an HCNN has multiple layers that extract increasingly abstract information from an image. The data is then pooled and fed to a classification layer, which spits out the final conclusion: that’s a miniature schnauzer, as opposed to a standard or giant schnauzer.
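
In code, the idea looks something like the minimal sketch below, written in PyTorch. It is illustrative only: the layer count and sizes are our own assumptions, not the network the team actually trained.

```python
import torch
import torch.nn as nn

# A minimal sketch of a hierarchical convolutional network (HCNN):
# stacked conv/pool stages extract increasingly abstract features,
# and a final classification layer names the object.
class TinyHCNN(nn.Module):
    def __init__(self, num_classes=1000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),                 # early layer: edges, blobs
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                 # mid layer: textures, parts
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),         # late layer: whole objects
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):
        h = self.features(x).flatten(1)      # pooled, abstract features
        return self.classifier(h)            # e.g. "miniature schnauzer"
```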

Further resembling the human visual system, the HCNN naturally “evolves” neurons tailored to particular aspects of an image when trained to maximize the network’s performance on a given task. That is, some artificial neurons will fire when the network “sees” a line, a face, or some weirdly dreamlike shape that can’t be easily classified.

The team first trained their HCNN using ImageNet, a dataset of around 1.2 million natural photos classified into 1,000 categories. As with previous object recognition algorithms, the AI learned to accurately classify the main star of each image: for example, a wolf spider, a box turtle, or a necklace.

Then came the fun part: inspired by a monkey study of numerosity, the team generated images with different numbers of white dots, ranging from 1 to 30, on black backgrounds. They then showed the HCNN, its classification layer discarded, 336 of these dotted images and analyzed the responses of units hidden within its layers.
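
Here is a rough sketch of how such dot stimuli can be generated; the image size, dot radius, and non-overlap rule are illustrative assumptions rather than the paper’s exact protocol.

```python
import numpy as np

def dot_image(n_dots, size=224, radius=5, rng=None):
    """White dots on a black background, placed without overlapping."""
    if rng is None:
        rng = np.random.default_rng()
    img = np.zeros((size, size), dtype=np.float32)
    yy, xx = np.ogrid[:size, :size]
    centers = []
    while len(centers) < n_dots:
        y, x = rng.integers(radius, size - radius, size=2)
        # reject dots that would overlap an existing one
        if all((y - cy) ** 2 + (x - cx) ** 2 > (2 * radius) ** 2
               for cy, cx in centers):
            centers.append((y, x))
            img[(yy - y) ** 2 + (xx - x) ** 2 <= radius ** 2] = 1.0
    return img

# one example image per numerosity from 1 to 30
stimuli = {n: dot_image(n) for n in range(1, 31)}
```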

A Little Extra

Without explicit training, the network developed units sensitive to numbers, which amounted to roughly 10 percent of all computational units.

Their behavior was “virtually identical to those of real neurons,” the team said. Each unit automatically learned to “fire” only to a particular number, becoming increasingly silent as the image deviated from that target. For example, an artificial neuron tuned to “four” would spike in activity when it “sees” four dots, but barely peep at an image with ten dots.
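
To make “tuned” concrete: given each unit’s average response at every numerosity, its preferred number is simply where that response peaks. The sketch below assumes the activations have already been collected, and it omits the statistical selection criteria the researchers would apply on top of this.

```python
import numpy as np

def preferred_numbers(activations, numerosities):
    """activations: (n_numerosities, n_units) mean responses.
    Returns each unit's preferred number, i.e. where it fires hardest."""
    return numerosities[np.argmax(activations, axis=0)]

def tuning_curve(activations, unit):
    """One unit's response profile, normalized to its peak: it is highest
    at the preferred number and falls off as numerosity deviates from it."""
    resp = activations[:, unit]
    return resp / resp.max()
```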

What’s more, far more units preferred smaller numbers, between zero and five, than relatively large ones such as 20 or more, a result that served as another sanity check for the team. Scientists have previously witnessed this skewed distribution in real neurons, suggesting that the AI captures fundamental properties of biological numerosity.

“This could be an explanation that the wiring of our brain, of our visual system at the very least, can give rise to representing the number of objects in a scene spontaneously,” said Nieder. “This was very exciting for us to see.”

The artificial number neurons weren’t just for show. When challenged with a matching task similar to those used in human and monkey trials, their individual responses correlated with the network’s success. In each trial, the team presented two dot-pattern images to the AI. The units’ responses were then recorded and fed to a second network, which judged whether the two images contained the same number of dots. After training, the success rate was 81 percent, an accuracy similar to that of humans challenged with analogous tests.
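
A minimal sketch of that second, judging network might look like this; the layer sizes and pairing scheme are assumptions on our part, not details from the paper.

```python
import torch
import torch.nn as nn

# A sketch of the matching stage: a small classifier takes the hidden-layer
# responses to two dot images and judges whether they show the same number
# of dots. Layer sizes here are illustrative, not the paper's.
class SameDifferentNet(nn.Module):
    def __init__(self, feat_dim):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(2 * feat_dim, 64), nn.ReLU(),
            nn.Linear(64, 1),            # logit for "same number"
        )

    def forward(self, feats_a, feats_b):
        return self.head(torch.cat([feats_a, feats_b], dim=-1))

# Trained with binary cross-entropy on pairs of recorded responses; the
# study reports roughly 81 percent accuracy on this same/different task.
```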

Even the network’s failures eerily resembled those of humans. When challenged with two numbers close to each other, for example, the network made more errors, similar to how we struggle to tell 29 and 30 apart but have no trouble distinguishing 15 from 30. And when given two pairs of numbers the same distance apart, 1 and 5 versus 26 and 30, for example, the AI struggled more with the larger pair, just as we do.
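
One compact way to see both error patterns, a Weber’s-law framing we add here for illustration rather than the paper’s own analysis, is that difficulty tracks the ratio of the two numbers, not their absolute difference:

```python
# Both error patterns follow from comparing ratios rather than differences
# (a Weber's-law framing; illustrative, not the paper's analysis).
def ratio(a, b):
    return min(a, b) / max(a, b)   # closer to 1.0 = harder to tell apart

print(ratio(29, 30))   # 0.97 -> hard, for the network and for us
print(ratio(15, 30))   # 0.50 -> easy
print(ratio(1, 5))     # 0.20 -> easy
print(ratio(26, 30))   # 0.87 -> hard, despite the same distance as 1 vs 5
```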

“The workings of the visual system seem to be sufficient to arrive at a visual sense of number… The basic number sense may not depend on the development of a certain specialized domain but seem to capitalize on already existing cortical networks,” the authors concluded.

AI for the Mind

The study highlights how AI can help understand complex emergent properties in the brain.

“What’s cool about this study is they’re measuring things that are probed by vision but are usually not thought of as visual things purely, like numerosity,” said cognitive scientist Dr. James DiCarlo at MIT, who did not participate in the study.

Contrary to previous beliefs, the authors found that the deep learning net displayed far higher levels of abstraction and an ability to generalize its learning, both necessary steps toward smarter AI that can transfer what it learns from one task to another. The precisely tuned units found in this study could potentially be exploited to make AI more generalizable.

Going forward, the team wants to add reinforcement learning, which mimics higher cognition in humans, to see if it further improves performance on the numerosity matching task. They also want to explore the mechanisms behind counting, which deals with numbers over time rather than space. How children develop this key cognitive skill is still unknown; intelligent machines may hold the answer.

Image Credit: FabrikaSimf / Shutterstock.com
