The AI Conversation Has Exploded This Decade With Big Advances

Discussion of artificial intelligence has skyrocketed since the end of the last decade, according to a new analysis looking at public perception of the technology.

The paper by Stanford computer science PhD candidate Ethan Fast and Eric Horvitz, a technical fellow and managing director at Microsoft Research, looked at more than three million articles published in the New York Times between January 1986 and May 2016. The study is under review as a conference paper for the Thirty-First AAAI Conference on Artificial Intelligence.

According to the authors, no other collection of text aimed at a general audience extends so far into the past, making it a good proxy for public opinion. They found that from 1986 to 2009, AI was mentioned in roughly 5 to 10 of every 10,000 articles, but between 2009 and the present day that figure shot up to roughly 25.
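To make the normalization concrete, here is a minimal sketch of how such a per-10,000-articles rate could be tallied, assuming the corpus is available as (year, mentions_ai) pairs; the function name and data format are illustrative, not taken from the paper.

```python
from collections import Counter

def ai_rate_per_10k(articles):
    """articles: iterable of (year, mentions_ai) pairs, where mentions_ai is a
    boolean flag for whether the article discusses AI (assumed format).
    Returns {year: AI articles per 10,000 articles published that year}."""
    totals, hits = Counter(), Counter()
    for year, mentions_ai in articles:
        totals[year] += 1
        if mentions_ai:
            hits[year] += 1
    return {year: 10_000 * hits[year] / totals[year] for year in sorted(totals)}
```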

While they make it clear their work does not establish causality, this large uptick coincides with the deep learning revolution that has underpinned many of the major advances in AI in the last decade. This upheaval was the result of a renaissance in the use of neural networks for machine learning during the 2000s.

This technology underpins some of AI’s most noteworthy recent accomplishments. Google’s research in this area is behind Android’s speech recognition capabilities and the AlphaGo program that beat a world champion at the mind-bendingly difficult game Go. It powers Microsoft’s real-time Skype Translate service, and Facebook uses it both to automatically recognize faces in photos and to analyze the meaning behind users’ posts.

To carry out their analysis, the authors first searched the database of New York Times stories for the words “artificial intelligence,” “AI,” or “robot.” They ended up with 8,000 paragraphs mentioning the technology, which were then manually annotated by human workers on Amazon’s crowdsourcing platform, Mechanical Turk.
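As a rough illustration of this filtering step, the sketch below pulls out paragraphs that mention one of the search terms, assuming the corpus is available as (article_id, paragraph_text) pairs; the regex and data format are assumptions for illustration, not the authors’ actual query.

```python
import re

# Word-boundary matches for the three search terms; capitalization variants
# are listed explicitly so that "AI" is only matched as the uppercase acronym.
AI_TERMS = re.compile(r"\b(?:[Aa]rtificial [Ii]ntelligence|AI|[Rr]obots?)\b")

def find_ai_paragraphs(paragraphs):
    """paragraphs: iterable of (article_id, paragraph_text) pairs (assumed format).
    Yields only the paragraphs mentioning one of the search terms."""
    for article_id, text in paragraphs:
        if AI_TERMS.search(text):
            yield article_id, text
```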

These annotations covered relevance to the topic, levels of pessimism or optimism about AI, and which specific hopes and concerns were discussed — such as fears of losing control of AI or hopes for its impact on healthcare. The annotations were then used to train a machine learning classifier that could quantitatively analyze the impressions of AI covered in the articles.
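The article does not spell out the model used, but a simple version of such a classifier can be sketched with scikit-learn: a TF-IDF representation feeding a logistic regression, trained on the crowdsourced labels for one annotation (say, whether a paragraph expresses a given concern). The pipeline choice and function name are assumptions, not necessarily what the authors built.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

def train_concern_classifier(texts, labels):
    """texts: annotated paragraphs; labels: 1 if a paragraph was marked as
    expressing the concern of interest, else 0. Returns a fitted pipeline."""
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), min_df=2),
        LogisticRegression(max_iter=1000),
    )
    # Rough check of how well the labels generalize before the final fit.
    scores = cross_val_score(model, texts, labels, cv=5, scoring="f1")
    print(f"5-fold F1: {scores.mean():.2f}")
    model.fit(texts, labels)
    return model
```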

As well as recording a significant increase in interest in the field, the authors discovered that recent advances appear to have shifted the conversation around AI. The analysis found a substantial increase in concerns about the possibility of losing control of AI technology, about ethical questions surrounding AI, and about its negative impact on human employment. The authors say these trends suggest a growing belief that we could soon be capable of building dangerous AI systems.

Discussion of the technological singularity — the idea that creating an artificial superintelligence could trigger runaway technological growth with unimaginable consequences for life on Earth — has also increased significantly. Tellingly, coverage that views this development negatively has risen to nearly double the level of coverage that views it in positive terms.

But at the same time, hopes for the technology’s potential for healthcare and education are on the rise. And concerns about a lack of progress have steadily diminished since the start of the analysis, which roughly coincided with the second so-called “AI Winter” in 1987, when optimism, funding and the market for AI technology collapsed. The analysis also found a fall in overall coverage that coincided with the start of this period.

AI researchers regularly complain about the media’s tendency toward doom-mongering when it comes to their field. But interestingly, the analysis found that coverage of the technology has been consistently more positive than negative, with around two to three times as many positive mentions in total over the 30-year period.

To cross-validate their results, the authors carried out a separate analysis of posts on the popular online community Reddit. They took a machine learning classifier trained on their annotated New York Times data to detect concerns about losing control of AI and applied it to all Reddit posts mentioning AI between 2010 and 2015. The results followed a similar pattern to those of the newspaper analysis, which the authors suggest provides tentative evidence that attitudes among Reddit users shifted in line with the New York Times coverage.
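Continuing the illustrative sketch above, applying such a classifier to another corpus and tracking the yearly rate might look like the following, assuming the Reddit posts are available as (year, text) pairs; the helper name and data format are hypothetical, not drawn from the paper.

```python
from collections import Counter

def concern_rate_by_year(model, posts):
    """model: a fitted text classifier with a predict() method (e.g. the
    pipeline sketched earlier). posts: iterable of (year, text) pairs of
    AI-related Reddit posts (assumed format).
    Returns {year: fraction of posts flagged as expressing the concern}."""
    posts = list(posts)
    predictions = model.predict([text for _, text in posts])
    totals, flagged = Counter(), Counter()
    for (year, _), pred in zip(posts, predictions):
        totals[year] += 1
        flagged[year] += int(pred)
    return {year: flagged[year] / totals[year] for year in sorted(totals)}
```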

These findings may not seem particularly surprising to some, but the authors say accurately tracking public perception of AI is vitally important to the field. Concerns about the technology have already led to calls for draconian regulation that could stifle research. High expectations could also lead to the same kind of hype bubble that preceded previous AI winters.

Studying sentiment towards AI is difficult because people’s understanding of the topic can diverge significantly. The authors say their approach provides a framework for capturing both public engagement and sentiment towards this topic, and can continue to be applied to new articles to keep a running tab on public perception of AI.


Image Credit: Shutterstock

Edd Gent (http://www.eddgent.com/)
Edd is a freelance science and technology writer based in Bangalore, India. His main areas of interest are engineering, computing, and biology, with a particular focus on the intersections between the three.