It can read lips and create new food recipes. It can win at chess, Jeopardy and the game Go. Every major technology company appears to be integrating it into how it organizes and operates its business. And just about every new app in existence seems to claim its software uses some sort of machine learning to make life even better.

Artificial intelligence is splashed across headlines like never before. The AI revolution is here, and the most obvious question to ask as 2016 draws to an end is: what’s next?

We recently asked James Hendler this question. Hendler is director of the Rensselaer Institute for Data Exploration and Applications and one of the developers of the semantic web. He recently co-wrote, with Alice M. Mulvehill, the book Social Machines: The Coming Collision of Artificial Intelligence, Social Networking, and Humanity.

The book is less about predictions and more about setting expectations about what AI can and can’t do. The problem, as Hendler sees it, is that many people view AI with Terminator trepidation or as a utopian dream, while completely taking humanity out of the equation.

“People want to paint this technology in black and white,” he explains. “It needs humans in the loop, and humans are better at dealing with the grays.”

To borrow a slightly used political slogan: we—humans and AI—are stronger together. That’s Hendler’s starting premise when discussing the future of artificial intelligence.

Packaging AI for mass programming

“I think the thing that excites me short-term is how much of AI technology [is being] made accessible at a much simpler level for programmers to use. It’s no longer a specialist thing,” Hendler says.

A class he currently teaches on AI and cognitive computing illustrates this point. In a matter of weeks, undergraduates are completing projects like building a chatbot able to answer questions about the Harry Potter universe. A few years ago, such a feat would have been fodder for a PhD thesis.

What’s changed?

It’s no longer necessary to build deep learning, computer vision or natural language components from scratch. Developers can simply download an open source package and integrate it into their systems with some tweaking. It’s a bit like building a site with WordPress, though Hendler prefers an analogy from the nascent days of the internet: in the early 1990s, anyone with a basic understanding of HTML could build a website, thanks to a sort of pre-packaged code that could be installed on a machine.
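To make the "putting the pieces together" idea concrete, here is a toy sketch of the kind of chatbot Hendler's students build. The two-entry knowledge base and keyword-matching logic are invented for illustration; an actual student project would swap in a downloaded open source NLP package rather than hand-rolled matching.

```python
import re

# Toy retrieval-style chatbot: score each canned answer by how many of its
# keywords appear in the question, and return the best match. The knowledge
# base below is made up for this example.
KB = {
    ("founded", "hogwarts"): "Hogwarts was founded by four great wizards.",
    ("patronus", "harry"): "Harry's Patronus takes the form of a stag.",
}

def answer(question):
    words = set(re.findall(r"[a-z]+", question.lower()))
    best, best_score = None, 0
    for keywords, reply in KB.items():
        score = len(words & set(keywords))
        if score > best_score:
            best, best_score = reply, score
    return best if best else "I don't know."
```

The glue-code pattern is the point: the "intelligence" lives in a reusable component, and the application developer mostly routes questions in and answers out.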

“AI has been packaged in a usable way,” Hendler says. “[It’s] more like putting the pieces together and finding what works than doing the basic research into what those components are, at least for the more applied side of the technology space.”

Opening the doors to innovation

In the short term, Hendler says, that opens up the game to players of all sizes.

“We’re going to see a huge amount of innovation in small companies using existing techniques for deep learning, vision and language tasks,” he says. “The heavyweights—Microsoft, Google, Facebook—will continue to invest heavily in the technology, but in new directions.”

Meanwhile, academia and government will continue to play roles in the evolution of AI-related technologies. Hendler uses the example of autonomous vehicles, first developed by universities like Stanford to win the DARPA Grand Challenge. Google then further matured the technology. Now it seems every car company on the planet is working to put robotic cars on the road.

While there is still a need to develop new AI technologies to solve problems, Hendler says the near-term focus will be on the sorts of business cases that can be made with existing tools.

“I think that kind of innovation is where you see entrepreneurs and startups starting to focus now. I think we’re going to see a tremendous amount of that,” Hendler says.

Solving developed and developing world problems

And what might the casual technology user see from AI in 2017 and beyond? In this case, more may mean less, as technology slips seamlessly into the background.

“It’s not going to be as obvious as you buy something and the whole world is different,” Hendler says.

Take Siri, Apple’s ubiquitous virtual assistant. Siri’s competence at performing increasingly complex tasks is constantly improving, but it still often defaults to a web search for the answer. One day not too far in the future, one could imagine asking Siri or one of its counterparts, “Show me a photo of my kids from lunch today,” and having the machine quickly and correctly pull up the results.

In fact, some of the startups Hendler mentions are already on the cusp of such achievements. A company called Snips uses an AI technique called context awareness to build a sort of memory, almost an alter ego, on a user’s mobile device by sorting through data like contacts, emails, calendars and photos. Over time it learns what is important in the user’s life, serving as a single portal to all the apps and information stored on the device.
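The "learns what is important" idea can be illustrated with a toy sketch: rank a user's contacts by how often they appear across on-device data sources. Everything here, including the sample events, is invented for illustration; a real system like Snips would use far richer signals and actual machine learning rather than simple counting.

```python
from collections import Counter

# Made-up on-device events: (data source, contact name) pairs drawn from
# emails, the calendar and photo tags.
events = [
    ("email", "alice"), ("email", "bob"), ("calendar", "alice"),
    ("photo", "alice"), ("email", "carol"),
]

def important_contacts(events, top_n=2):
    """Return the contacts seen most often across all data sources."""
    counts = Counter(name for _source, name in events)
    return [name for name, _count in counts.most_common(top_n)]
```

Even this crude frequency count captures the core design choice: the assistant builds its model passively from data already on the device, so the user never has to configure anything.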

“It’s about using this artificial intelligence to make technology disappear in a way that you can just go about your day and not care about it anymore,” said Rand Hindi, CEO and founder of Snips, during a 2015 TEDx talk.

Of course, making technology disappear is a developed-world problem. Hendler is optimistic that AI will also appear in the near future in projects to improve conditions in developing countries. In particular, he and others are working with IBM on an effort to bring literacy to one billion people over the next five years.

“You’re talking about being able to significantly change the lives of huge numbers of people, especially in countries where literacy rates are currently low,” he says. “That’s where those people will see technology suddenly come into their lives in a way it never has before.”

Education is key

Upheavals and massive disruptions—both good and bad—are ahead in a world increasingly powered by artificial intelligence and related technologies.

On one side of the argument are people like the 1.8 million truck drivers who could feasibly be put out of work by self-driving vehicles in less than a generation. On the other side are the potential savings in industries like medicine, where AI is already being employed at scale, with IBM’s Watson the poster child for such high-tech services. Consider that health care accounts for 17.5 percent of US GDP, according to the Centers for Medicare and Medicaid Services.

Hendler says government needs to be involved to help manage these changes without setting up roadblocks to innovation. Education will be key to the AI revolution, he maintains, so people will understand where computers excel and where they struggle.

“That’s where we need people to be smarter, and for technical people to help policy makers to understand those differences and where they lie,” he says. “It’s understanding those differences that will be so important.”

Banner Image Credit: Rob Bulmahn/Flickr

Formerly the world’s only full-time journalist covering research in Antarctica, Peter became a freelance writer and digital nomad in 2015. Peter’s focus for the last decade has been on science journalism, but his interests and expertise include travel, outdoors, cycling, and Epicureanism (food and beer). Follow him at @poliepete.
