US Bets $100 Million on Machines That Think More Like Humans

“One small step for man, one giant leap for mankind.”

When Neil Armstrong stepped onto the dusty surface of the moon on July 20, 1969, it was a victory for NASA and a victory for science.

Backed by billions and led by NASA, the Apollo project is hardly the only government-organized science initiative to change the world. Two decades earlier, the Manhattan Project produced the first atomic bomb and restructured modern warfare; three decades later, the Human Genome Project published the blueprint to our DNA, heralding the dawn of population genomics and personalized medicine.

A macro-level map of the brain's white matter. Image credit: The Human Connectome Project

In recent years, big science has become increasingly focused on the brain. And now the government is pushing forward what is perhaps the most high-risk, high-reward project of our time: brain-like artificial intelligence.

And they’re betting $100 million that they’ll succeed within the next decade.

Led by IARPA and part of the larger BRAIN Initiative, the MICrONS (Machine Intelligence from Cortical Networks) project seeks to revolutionize machine learning by reverse engineering the algorithms of the mammalian cortex.

It’s a hefty goal, but one that may sound familiar. The controversial European Union-led Human Brain Project (HBP), a nearly $2 billion investment, also sits at the crossroads between brain mapping, computational neuroscience and machine learning.

But MICrONS is fundamentally different, both technically and logistically.

Rather than building a simulation of the human brain, which the HBP set out to do, MICrONS is working in reverse. By mapping out the intricate connections that neurons form during visual learning and observing how they change with time, the project hopes to distill sensory computation into mathematical “neural codes” that can be fed into machines, giving them the power to identify, discriminate, and generalize visual stimulation.

The end goal: smarter machines that can process images and video at human-level proficiency. The bonus: better understanding of the human brain and new technological tools to take us further.

It’s a solid strategy. And the project’s got an all-star team to see it through. Here’s what to expect from this “Apollo Project of the Brain.”

The Billion-Dollar Problem

Much of today’s AI is inspired by the human brain.

Take deep reinforcement learning, a strategy based on artificial neural networks that’s transformed AI in the last few years. This class of algorithm powers much of today’s technology: self-driving cars, Go-playing computers, automated voice and facial recognition — just to name a few.

Nature has also inspired new computing hardware, such as IBM’s neuromorphic SyNAPSE chip, which mimics the brain’s computing architecture and promises lightning-fast computing with minimal energy consumption.

As sophisticated as these technologies are, however, today’s algorithms are embarrassingly brittle and fail at generalization.

When trained on the features of a person’s face, for example, machines often fail to recognize that face if it’s partially obscured, shown at a different angle or under different lighting conditions.

In stark contrast, humans have a knack for identifying faces. What’s more, we subconsciously and rapidly build a model of what constitutes a human face, and can easily tell whether a new face is human or not — unlike some photo-tagging systems.
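
To make that brittleness concrete, here is a minimal sketch (not from the article, and using a stand-in classifier rather than a real face-recognition model) of how one might probe whether a model's prediction survives the kinds of perturbations described above, such as occlusion and lighting changes:

```python
# A minimal robustness probe: perturb an input (occlusion, lighting) and check
# whether a classifier's prediction survives. The "classifier" here is a
# placeholder; in practice you would plug in a trained face-recognition model.
import numpy as np

def occlude(image, size=8):
    """Zero out a square patch, crudely simulating a partially hidden face."""
    out = image.copy()
    h, w = out.shape
    y, x = np.random.randint(0, h - size), np.random.randint(0, w - size)
    out[y:y + size, x:x + size] = 0.0
    return out

def relight(image, gain=1.5, bias=0.1):
    """Simulate different lighting by rescaling and shifting pixel intensities."""
    return np.clip(image * gain + bias, 0.0, 1.0)

def dummy_classifier(image):
    """Placeholder model: 'recognizes' a face if mean intensity crosses a threshold."""
    return int(image.mean() > 0.5)

if __name__ == "__main__":
    np.random.seed(0)
    face = np.random.uniform(0.4, 0.8, size=(32, 32))  # stand-in for a face image
    original = dummy_classifier(face)
    for name, variant in [("occluded", occlude(face)), ("relit", relight(face))]:
        changed = dummy_classifier(variant)
        print(f"{name}: prediction {'unchanged' if changed == original else 'flipped'}")
```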

Even a rough idea of how the brain works has given us powerful AI systems. MICrONS takes the logical next step: instead of guessing, let’s figure out how the brain actually works, find out what AI’s currently missing, and add it in.

See the World in a Grain of Sand

To understand how a computer works, you first need to take it apart, see the components, trace the wiring.

Then you power it up, and watch how those components functionally interact.

The same logic holds for a chunk of brain tissue.

MICrONS plans to dissect one cubic millimeter of mouse cortex at nanoscale resolution. And it’s recruited Drs. David Cox and Jeff Lichtman, both neurobiologists from Harvard University, to head the task.

Last July, Lichtman published the first complete three-dimensional reconstruction of a crumb-sized cube of mouse cortex. The effort covered just 1,500 cubic microns, roughly 600,000 times smaller than MICrONS’ goal.

It’s an incredibly difficult multi-step procedure. First, the team uses a diamond blade to slice the cortex into thousands of pieces. Then the pieces are rolled onto a strip of special plastic tape at a rate of 1,000 sections a day. These ultrathin sections are then imaged with a scanning electron microscope, which can capture synapses in such fine detail that tiny vesicles containing neurotransmitters in the synapses are visible.

Mapping the tissue to this level of detail is like “creating a road map of the U.S. by measuring every inch,” MICrONS project manager Jacob Vogelstein told Scientific American.

Lichtman’s original reconstruction took over six long years.
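
A back-of-envelope calculation gives a feel for why. Assuming a section thickness of about 30 nanometers (a figure typical of this kind of serial electron microscopy; the article does not state the exact value) and the 1,000-sections-a-day rate mentioned above:

```python
# Rough scale of sectioning a full cubic millimeter, under an assumed ~30 nm
# section thickness (an assumption; not stated in the article).
SECTION_THICKNESS_NM = 30
CUBE_SIDE_MM = 1.0
SECTIONS_PER_DAY = 1000            # rate quoted in the article

cube_side_nm = CUBE_SIDE_MM * 1e6  # 1 mm = 1,000,000 nm
num_sections = cube_side_nm / SECTION_THICKNESS_NM
days_to_cut = num_sections / SECTIONS_PER_DAY

print(f"Sections needed: {num_sections:,.0f}")       # ~33,333
print(f"Days of cutting alone: {days_to_cut:,.0f}")  # ~33 days
```

And that is just the slicing; imaging and reconstructing every synapse in those sections is where the years go.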

That said, the team is optimistic. According to Dr. Christof Koch, president of the Allen Institute for Brain Science, various technologies involved in the process will speed up tremendously, thanks to new tools developed under the BRAIN Initiative.

Lichtman and Cox hope to make tremendous headway in the next five years.

Function Follows Form

Simply mapping the brain’s static roadmap isn’t enough.

Not all neurons and their connections are required to learn a new skill or piece of information. What’s more, the learning process physically changes how neurons are wired to each other. It’s a dynamic system, one in constant flux.

To really understand how neurons are functionally connected, we need to see them in action.

“We’re hoping to observe the activity of 100,000 neurons simultaneously while a mouse is learning a new visual task,” explained Cox. It’s like wiretapping the brain: the scientists will watch neural computations happen in real time as the animal learns.

To achieve this formidable task, Cox plans to use two-photon microscopy, which relies on fluorescent proteins that glow only in the presence of calcium. When a neuron fires, calcium rushes into the cell and activates those proteins, and their light can be observed with a laser-scanning microscope. This gives scientists a direct view of neural network activation.

The technique’s been around for a while. But so far, it’s only been used to visualize tiny portions of neural networks. If Cox successfully adapts it for wide-scale imaging, it may well be revolutionary for functional brain mapping and connectomics.
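
As a rough illustration of the kind of signal this produces (a toy model, not the project's actual analysis pipeline), each spike can be treated as driving a jump in fluorescence that decays slowly, which is roughly what the microscope reads out:

```python
# Toy model of a calcium-indicator signal: each spike causes a jump in
# fluorescence that decays exponentially, approximating the dF/F trace a
# two-photon microscope records from a single neuron.
import numpy as np

def fluorescence_trace(spikes, decay_tau=20, amplitude=1.0):
    """Convolve a binary spike train with an exponential calcium kernel."""
    kernel = amplitude * np.exp(-np.arange(10 * decay_tau) / decay_tau)
    return np.convolve(spikes, kernel)[:len(spikes)]

if __name__ == "__main__":
    np.random.seed(1)
    spikes = (np.random.rand(500) < 0.02).astype(float)  # sparse firing
    trace = fluorescence_trace(spikes)
    print(f"{int(spikes.sum())} spikes -> peak dF/F approx {trace.max():.2f}")
```

Inferring the underlying spikes from such traces, across 100,000 neurons at once, is part of what makes the imaging plan so ambitious.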

From Brain to Machine

Meanwhile, MICrONS project head Dr. Tai Sing Lee at Carnegie Mellon University is taking a faster — if untraveled — route to map the mouse connectome.

According to Scientific American, Lee plans to tag synapses with unique barcodes — a short chain of random nucleotides, the molecules that make up our DNA. By chemically linking these barcodes together across synapses, he hopes to quickly reconstruct neural circuits.

If it works, the process will be much faster than nanoscale microscopy and may give us a rough draft of the cortex (one cubic millimeter of it) within the decade.
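
The logic, in spirit (this is an illustrative sketch, not Lee's actual protocol), is that if every neuron carries a unique random barcode and each synapse yields a chemically joined pair of barcodes, then sequencing those pairs is enough to recover the wiring diagram:

```python
# Illustrative sketch of barcode-based circuit reconstruction: give each
# neuron a unique random nucleotide barcode, record the joined barcode pairs
# found at synapses, then rebuild the connectivity graph by lookup.
import random
from collections import defaultdict

random.seed(42)
BASES = "ACGT"

def make_barcode(length=20):
    return "".join(random.choice(BASES) for _ in range(length))

# Assign barcodes to a toy population of neurons.
neurons = {f"neuron_{i}": make_barcode() for i in range(5)}
barcode_to_neuron = {bc: name for name, bc in neurons.items()}

# Pretend these barcode pairs were chemically linked across synapses
# and recovered by sequencing.
synapse_reads = [
    (neurons["neuron_0"], neurons["neuron_1"]),
    (neurons["neuron_0"], neurons["neuron_3"]),
    (neurons["neuron_2"], neurons["neuron_4"]),
]

# Reconstruct the circuit from the sequenced pairs alone.
circuit = defaultdict(list)
for pre_bc, post_bc in synapse_reads:
    circuit[barcode_to_neuron[pre_bc]].append(barcode_to_neuron[post_bc])

for pre, posts in circuit.items():
    print(pre, "->", ", ".join(posts))
```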

A computer scientist and machine learning expert, Lee will likely bring his formidable skills to bear during the next phase of the project: making sense of all the data and extracting information useful for developing new algorithms for AI.

Going from neurobiological data to theories to computational models will be the really tough part. But according to Cox, there is one guiding principle that’s a good place to start: Bayesian inference.

During learning, the cortex actively integrates past experiences with present learning, building a constantly shifting representation of the world that allows us to predict incoming data and possible outcomes.

It’s likely that whatever algorithms the teams distill are Bayesian in nature. If they succeed, the next step is to thoroughly test their reverse-engineered models.

Vogelstein acknowledges that many current algorithms already rely on Bayesian principles. The crucial difference between what we have now and what we may get from mapping the brain is implementation.

There are millions of choices that a programmer makes to translate Bayesian theory into executable code, says Vogelstein. Some will be good, others not so much. Instead of guessing those parameters and features in software as we have been doing, it makes sense to extract those settings from the brain and narrow down optimal implementations to a smaller set that we can test.
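
To see what those choices look like, here is a minimal sketch (with invented numbers, purely for illustration) of a Bayesian update for a toy visual judgement. The rule itself never changes; what changes the answer is an implementation choice such as the prior:

```python
# Minimal Bayesian update for a toy visual judgement ("is this a face?"),
# showing how one implementation choice -- the prior -- changes the answer
# even though Bayes' rule stays the same. Numbers are invented.
def posterior_face(prior_face, p_evidence_given_face, p_evidence_given_not):
    """P(face | evidence) via Bayes' rule."""
    evidence = (p_evidence_given_face * prior_face
                + p_evidence_given_not * (1.0 - prior_face))
    return p_evidence_given_face * prior_face / evidence

likelihood_face, likelihood_not = 0.8, 0.3   # how well the pixels fit each hypothesis

for prior in (0.5, 0.1):                     # two different "programmer choices"
    p = posterior_face(prior, likelihood_face, likelihood_not)
    print(f"prior={prior:.1f} -> P(face|evidence)={p:.2f}")
```

Reading such settings off the cortex, rather than guessing them, is the payoff MICrONS is after.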

Using this data-driven, ground-up approach to brain simulation, MICrONS hopes to succeed where the HBP stumbled.

“We think it’s a critical challenge,” says Vogelstein. If MICrONS succeeds, it may “achieve a quantum leap in machine learning.”

For example, we may finally understand how the brain learns and generalizes with only one example. Cracking one-shot learning would circumvent the need for massive training data sets. This sets up the algorithms for functioning in real-world scenarios, which often can’t produce sufficient training data or give the AI enough time to learn.
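
A bare-bones illustration of what one-shot classification means (a sketch over made-up feature vectors, not a claim about how the cortex does it): store a single example per category and assign new inputs to whichever exemplar they sit closest to.

```python
# Bare-bones one-shot classification: one stored example per category, and
# new inputs are assigned to the nearest exemplar in feature space. The
# feature vectors here are made up for illustration.
import numpy as np

exemplars = {                      # one example per class
    "cat": np.array([0.9, 0.1, 0.2]),
    "dog": np.array([0.2, 0.8, 0.3]),
}

def classify(features):
    """Return the label of the nearest stored exemplar."""
    return min(exemplars, key=lambda label: np.linalg.norm(features - exemplars[label]))

print(classify(np.array([0.85, 0.15, 0.25])))  # -> cat
print(classify(np.array([0.30, 0.70, 0.40])))  # -> dog
```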

Finally achieving human-like vision would also allow machines to parse complex scenes, such as those captured by surveillance cameras.

Think of the implications for terrorism and cybersecurity. AI systems powered by brain-like algorithms will likely have “transformative impact for the intelligence community as well as the world more broadly,” says Vogelstein.

Lichtman is even more optimistic.

“What comes out of it — no matter what — it’s not a failure,” he says, “The brain [is] … really complicated, and no one’s ever really seen it before, so let’s take a look. What’s the risk in that?”


Image credit: Shutterstock.com

Shelly Fan

Shelly Xuelai Fan is a neuroscientist at the University of California, San Francisco, where she studies ways to make old brains young again. In addition to research, she's also an avid science writer with an insatiable obsession with biotech, AI and all things neuro. She spends her spare time kayaking, bike camping and getting lost in the woods.

Discussion — 4 Responses

  • DSM March 13, 2016 on 3:43 pm

    If they map the network and fail to replicate the behaviour of an actual brain, it will not be a failure, because then they will realise that they need to learn how to better emulate the behaviour of the neurons as they perform in concert (rather than in isolation). Having done that, they will get the result they were after.

  • rtryon March 13, 2016 on 3:47 pm

    In 83 years I have not discovered any reason to believe that my every decision and instant action followed a logic pattern in any consistent manner. The last strawberry on the table might keep me from responding yes when, after eating it, I say no! Not that the logic was driven by the strawberry; just the extra moment of re-thinking the problem changed the answer! How can a robotic CPU, maybe with a robotic body that eats only electricity, end up doing the same type of thing? It’s not always a habit to re-think or delay for a strawberry. It’s just a spurious interrupt of the kind that causes timing to influence the result!
    In short, the CPU may know all of this, but it can’t predict when the moment and inputs drive one in one direction or the other. Of course, a lightning bolt might do the same thing if timed just so!

    • DSM rtryon March 13, 2016 on 4:56 pm

      Logic is a useful idea, invented by people who were blind to the more subtle interactions that underlie the operation of minds, not to mention the universe itself, and therefore the relationship between minds and the material world.

      You may find this story interesting (see URL below): the human won because he forced the machine to deal with the sort of conceptual jump across the “game territory” that a human mind is very capable of, and a logical machine is very weak at. These quantum leaps are an essential characteristic of the nature of human minds. They can be emulated, if AI researchers are prepared to admit that they exist. 🙂

      http://www.abc.net.au/news/2016-03-13/human-go-champ-lee-se-dol-scores-victory-over-google%27s-alphago/7243434

      I wonder if humour is essentially the pleasure we experience when external stimuli cause our brains to make such leaps and perceive the connections between previously unconnected concepts. Laughter is a side effect of the reward mechanism that reinforces one-shot learning, i.e. sudden insight. Current AI designs cannot ever know the joy of “Ah ha!” or “eureka!”

  • Vishram Naik March 20, 2016 on 9:48 am

    AI exceeding us will be a paradigm-shift step in the evolution of intelligent life. How will Darwin’s and allied theories be applied to AI? It will not emerge from the organic struggle that organic life has faced over the billions of years into which it has diversified. The last 400 years of science and technology are a minuscule period vis-à-vis the time it has taken the brain to reach this stage, and reflect the contribution of a relative handful of people who have fundamentally altered our world and our knowledge of nature, and laid the foundation of our inevitable move out into the cosmos. The more we know about how the brain works, the easier it will be to map it into the world of AI, and that could be expedited quite independently if AI begins to make its own connections like an evolutionary brain.