Machines Teaching Each Other Could Be the Biggest Exponential Trend in AI

“The easier it is to communicate, the faster change happens.” – James Burke, Science Historian

During an October 2015 press conference announcing the Autopilot feature of the Tesla Model S, which allowed the car to drive semi-autonomously, Tesla CEO Elon Musk said each driver would become an “expert trainer” for every Model S. Each car could improve its own autonomous features by learning from its driver, but more significantly, whatever one Tesla learned from its own driver could then be shared with every other Tesla vehicle.

As Fred Lambert of Electrek reported shortly after, Model S owners noticed how quickly the car’s driverless features were improving. In one example, Teslas had been taking incorrect early exits along highways, forcing their owners to steer the cars manually back onto the correct route. After just a few weeks, owners noted the cars were no longer taking premature exits.

“I find it remarkable that it is improving this rapidly,” said one Tesla owner.

Intelligent systems, like those powered by the latest round of machine learning software, aren’t just getting smarter: they’re getting smarter faster. Understanding the rate at which these systems develop can be a particularly challenging part of navigating technological change.

Ray Kurzweil has written extensively on the gap between what he calls the “intuitive linear” view of technological change and the “exponential” rate of change actually taking place. Almost two decades after he wrote his influential essay “The Law of Accelerating Returns,” a theory of evolutionary change concerned with the speed at which systems improve over time, connected devices are now sharing knowledge directly with one another, escalating the speed at which they improve.

[Learn more about thinking exponentially and the Law of Accelerating Returns.]

“I think that this is perhaps the biggest exponential trend in AI,” said Hod Lipson, professor of mechanical engineering and data science at Columbia University, in a recent interview.

“All of the exponential technology trends have different ‘exponents,’” Lipson added. “But this one is potentially the biggest.”

According to Lipson, what we might call “machine teaching”—when devices communicate gained knowledge to one another—is a radical step up in the speed at which these systems improve.

“Sometimes it is cooperative, for example when one machine learns from another like a hive mind. But sometimes it is adversarial, like in an arms race between two systems playing chess against each other,” he said.

Lipson believes this way of developing AI is a big deal, in part, because it can bypass the need for training data.

“Data is the fuel of machine learning, but even for machines, some data is hard to get—it may be risky, slow, rare, or expensive. In those cases, machines can share experiences or create synthetic experiences for each other to augment or replace data. It turns out that this is not a minor effect, it actually is self-amplifying, and therefore exponential.”
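Lipson is describing an idea rather than a particular implementation, but a toy sketch can make the self-amplifying effect concrete. In the Python below (every name and number is invented for illustration), one hundred agents each pull a slot-machine arm only once, yet because every observation lands in a shared pool, each agent can estimate the best arm as though it had a hundred tries of its own:

```python
import random

# Toy sketch of shared machine experience; all values are invented.
ARM_PROBS = [0.2, 0.5, 0.8]  # hidden payout rates of three slot-machine arms

class Agent:
    def __init__(self, shared_log):
        self.shared_log = shared_log  # experience pool shared by all agents

    def try_random_arm(self):
        arm = random.randrange(len(ARM_PROBS))
        reward = 1 if random.random() < ARM_PROBS[arm] else 0
        self.shared_log.append((arm, reward))  # publish experience to the pool

    def best_arm(self):
        # Estimate each arm's payout from EVERYONE's experience, not just ours.
        wins = [0] * len(ARM_PROBS)
        pulls = [0] * len(ARM_PROBS)
        for arm, reward in self.shared_log:
            wins[arm] += reward
            pulls[arm] += 1
        return max(range(len(ARM_PROBS)),
                   key=lambda a: wins[a] / pulls[a] if pulls[a] else 0.0)

shared_log = []
agents = [Agent(shared_log) for _ in range(100)]
for agent in agents:
    agent.try_random_arm()  # one pull each, one hundred shared data points

print("Consensus best arm:", agents[0].best_arm())  # almost always arm 2
```

Each new agent makes every existing agent’s estimate a little better, which is the self-amplifying loop Lipson is pointing at.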

Lipson sees the recent breakthrough from Google’s DeepMind, a project called AlphaGo Zero, as a stunning example of an AI learning without training data. Many are familiar with AlphaGo, the machine learning AI that became the world’s best Go player after studying a massive training data set of millions of human Go moves. AlphaGo Zero, however, was able to beat even that Go-playing AI simply by learning the rules of the game and playing against itself; no training data was necessary. Then, just to show off, DeepMind’s generalized successor, AlphaZero, beat one of the world’s best chess programs after starting from scratch and training for only eight hours.
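The real system pairs deep neural networks with Monte Carlo tree search, but the core trick of learning with no training data shows up even at toy scale. Here is a minimal sketch (emphatically not DeepMind’s algorithm) in which an agent learns the take-away game Nim purely by playing against itself:

```python
import random

# Self-play sketch in the spirit of AlphaGo Zero, vastly simplified.
ACTIONS = (1, 2, 3)  # a move removes 1-3 stones; taking the last stone wins
Q = {}               # Q[(stones_left, action)] -> learned value of that move

def choose(stones, eps=0.1):
    legal = [a for a in ACTIONS if a <= stones]
    if random.random() < eps:  # explore occasionally
        return random.choice(legal)
    return max(legal, key=lambda a: Q.get((stones, a), 0.0))

def self_play_game(start=15, lr=0.5):
    stones, moves = start, []
    while stones > 0:
        action = choose(stones)
        moves.append((stones, action))
        stones -= action
    # Walk backward through the game: the side that moved last won (+1),
    # the other side lost (-1), alternating up the history.
    for i, key in enumerate(reversed(moves)):
        reward = 1.0 if i % 2 == 0 else -1.0
        Q[key] = Q.get(key, 0.0) + lr * (reward - Q.get(key, 0.0))

for _ in range(50_000):
    self_play_game()

# Perfect Nim play leaves the opponent a multiple of 4 stones; the learned
# policy tends to rediscover that rule with no human examples at all.
for stones in (5, 6, 7):
    best = max((a for a in ACTIONS if a <= stones),
               key=lambda a: Q.get((stones, a), 0.0))
    print(f"with {stones} stones left, the agent takes {best}")
```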

Now imagine thousands or more AlphaGo Zeroes instantaneously sharing their gained knowledge.

This isn’t just about games, though. We’re already seeing how this kind of knowledge sharing will have a major impact on the speed at which businesses can improve the performance of their devices.

One example is GE’s new industrial digital twin technology: a software simulation of a physical machine that models what is happening with the equipment. Think of it as a machine with its own self-image, one it can also share with technicians.

A steam turbine with a digital twin, for instance, can measure steam temperatures, rotor speeds, cold starts, and other data to predict breakdowns and warn technicians in time to prevent expensive repairs. Each digital twin makes these predictions by studying its own turbine’s performance, but it also draws on the models every other steam turbine’s twin has developed.
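GE has not, to my knowledge, published its twins as code, so the following Python is purely illustrative (class names, sensors, and thresholds are all invented). It shows the sharing mechanism in miniature: each twin mirrors one turbine while reading from, and contributing to, a fleet-wide failure model:

```python
from dataclasses import dataclass, field

@dataclass
class FleetModel:
    """Failure statistics pooled from every turbine in the fleet."""
    temps_before_failure: list = field(default_factory=list)

    def risky(self, rotor_temp_c: float) -> bool:
        if not self.temps_before_failure:
            return False  # no fleet history yet
        # Flag any temperature at or above the coolest reading that has
        # ever preceded a breakdown anywhere in the fleet.
        return rotor_temp_c >= min(self.temps_before_failure)

@dataclass
class DigitalTwin:
    turbine_id: str
    fleet: FleetModel
    history: list = field(default_factory=list)

    def ingest(self, rotor_temp_c: float) -> None:
        self.history.append(rotor_temp_c)
        if self.fleet.risky(rotor_temp_c):
            print(f"{self.turbine_id}: warn technicians ({rotor_temp_c} C)")

    def report_failure(self) -> None:
        # Share what preceded this breakdown with the whole fleet.
        if self.history:
            self.fleet.temps_before_failure.append(self.history[-1])

fleet = FleetModel()
t1 = DigitalTwin("turbine-1", fleet)
t2 = DigitalTwin("turbine-2", fleet)
t1.ingest(540.0)
t1.report_failure()  # turbine-1 breaks down after running at 540 C
t2.ingest(545.0)     # turbine-2 now gets a warning it could never have
                     # generated from its own short history alone
```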

As machines begin to learn from their environments in new and powerful ways, their development accelerates when they communicate what they learn to one another. The collective intelligence of every GE turbine, spread across the planet, can sharpen each individual machine’s predictive ability. And where it may take one driverless car significant time to learn to navigate a particular city, one hundred driverless cars navigating that same city together, all sharing what they learn, can improve their algorithms in far less time.
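A common pattern for this kind of fleet learning, sketched below in the spirit of federated averaging rather than as a description of any carmaker’s actual system, is for each car to train its own copy of a model locally and for the fleet to merge the copies by averaging them. All numbers here are invented:

```python
import numpy as np

TRUE_MODEL = np.array([1.0, -2.0, 0.5, 3.0])  # the "right" weights, assumed

def local_update(weights, rng, lr=0.2):
    """One car's on-board step: a noisy gradient toward the true model,
    standing in for training on that car's own limited driving data."""
    noisy_gradient = (weights - TRUE_MODEL) + rng.normal(scale=2.0, size=4)
    return weights - lr * noisy_gradient

def fleet_merge(all_weights):
    """Average the fleet's local models into one shared update."""
    return np.mean(all_weights, axis=0)

rng = np.random.default_rng(0)
shared = np.zeros(4)  # the model every car starts from
for round_num in range(5):
    local = [local_update(shared, rng) for _ in range(100)]  # 100 cars
    shared = fleet_merge(local)  # merged knowledge flows back to everyone
    error = np.linalg.norm(shared - TRUE_MODEL)
    print(f"round {round_num}: distance from true model = {error:.3f}")
```

Averaging one hundred noisy local updates shrinks the noise by a factor of ten, which is precisely why a fleet can learn a city faster than any single car.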

As other AI-powered devices begin to leverage this kind of shared knowledge, we could see an even faster pace of development. So if you think things are moving quickly today, remember: we’re only just getting started.

Image Credit: igor kisselev / Shutterstock.com

Aaron Frank
Aaron Frank is a researcher, writer, and consultant who has spent over a decade in Silicon Valley, where he most recently served as Principal Faculty at Singularity University. Over the past ten years he has built, deployed, researched, and written about technologies relating to augmented and virtual reality and virtual environments. As a writer, his articles have appeared in Vice, Wired UK, Forbes, and VentureBeat. He routinely advises companies, startups, and government organizations with clients including Ernst & Young, Sony, Honeywell, and many others. He is based in San Francisco, California.