Sophia’s uncanny-valley face, made of Hanson Robotics’ patented Frubber, is rapidly becoming an iconic image in the field of artificial intelligence. She has been interviewed on shows like 60 Minutes, granted Saudi citizenship, and has even appeared before the United Nations. Every media appearance sparks comments about how artificial intelligence is going to completely transform the world. This is pretty good PR for a chatbot in a robot suit.
But Sophia’s celebrity also rides on the hype around artificial intelligence, and, more importantly, on people’s uncertainty about what constitutes artificial intelligence, what can feasibly be done with it, and how close various milestones may be.
There are various definitions of artificial intelligence.
For example, there’s the cultural idea (from films like Ex Machina) of a machine that has human-level artificial general intelligence. But human-level intelligence or performance is also seen as an important benchmark for those who develop software that aims to mimic narrow aspects of human intelligence, such as medical diagnostics.
The latter software might be referred to as narrow AI, or weak AI. Weak it may be, but it can still disrupt society and the world of work substantially.
Then there’s the philosophical idea, championed by Ray Kurzweil, Nick Bostrom, and others, of a recursively self-improving superintelligent AI that eventually outstrips human intelligence in the same way we outrank bacteria. Such a scenario would clearly change the world in ways that are difficult to imagine and harder to quantify; weighty tomes are devoted to studying how to navigate the perils, pitfalls, and possibilities of this future. The books by Bostrom and Max Tegmark epitomize this type of thinking.
This, more often than not, is the scenario Stephen Hawking and various Silicon Valley luminaries have in mind when they warn of AI as an existential risk.
Those working on superintelligence as a hypothetical future may lament for humanity when people take Sophia seriously. Yet without hype surrounding the achievements of narrow AI in industry, and the immense advances in computational power and algorithmic complexity driven by these achievements, they may not get funding to research AI safety.
Some of those who work on algorithms at the front line find the whole superintelligence debate premature, casting fear and uncertainty over work that has the potential to benefit humanity. Others go further and call it a dangerous distraction from the very real problems narrow AI and automation will pose, although few of these critics actually work in the field. But even as the companies employing these researchers attempt to draw this distinction, surely some of their VC funding and share price rests on the idea that, if superintelligent AI is possible and as world-changing as everyone believes it will be, Google might get there first. That dream may well drive talented people to join them.
Yet the ambiguity is stark. Someone working on, say, MIT Intelligence Quest or Google Brain might be attempting to reach AGI by studying human psychology and learning, or animal neuroscience, perhaps attempting to simulate the simple brain of a nematode worm. Another researcher, whom we might consider “narrow” in focus, trains a neural network to diagnose cancer with higher accuracy than any human.
Where should something like Sophia, a chatbot that flatters to deceive as a general intelligence, sit? Its creator says: “As a hard-core transhumanist I see these as somewhat peripheral transitional questions, which will seem interesting only during a relatively short period of time before AGIs become massively superhuman in intelligence and capability. I am more interested in the use of Sophia as a platform for general intelligence R&D.” This illustrates a further source of confusion: people working in the field disagree about the end goal of their work, how close an AGI might be, and even what artificial intelligence is.
Stanford’s Jerry Kaplan is one of those who lay some of the blame at the feet of AI researchers themselves. “Public discourse about AI has become untethered from reality in part because the field doesn’t have a coherent theory. Without such a theory, people can’t gauge progress in the field, and characterizing advances becomes anyone’s guess.” He would prefer a less mysticism-loaded term like “anthropic computing.” Defining intelligence is difficult enough, but efforts like Stanford’s AI Index go some way towards establishing a framework for tracking progress across different fields.
The ambiguity and confusion surrounding AI are part of a broader trend. A combination of marketing hype and the truly impressive pace of technology can cause us to overestimate our own technological capabilities or achievements. In artificial intelligence, which requires highly valued expertise and expensive hardware, the future remains unevenly distributed. Energy offers a sobering parallel: for all the hype over renewables in the last 30 years, fossil fuels have declined from providing 88 percent of our energy to 85 percent.
We can underestimate the vulnerabilities. How many people have seen videos of Sophia or Atlas or heard hype about AlphaGo? Okay, now how many know that some neural networks can be fooled by adversarial examples that could be printed out as stickers? Overestimating what technology can do can leave you dangerously dependent on it, or blind to the risks you’re running.
At the same time, there is a very real risk that technological capacities and impacts are underestimated, or missed entirely. Take the recent controversy over social media engineering in the US election: no one can agree on the impact that automated “bots” have had. Refer to these algorithms as “artificial intelligence,” and people will think you’re a conspiracy theorist. Yet they can still have a societal impact.
Those who work on superintelligence argue that development could accelerate rapidly, that we could be at the knee of an exponential curve. Given that the problem they seek to solve (“What should an artificial superintelligence optimize?”) is dangerously close to “What should the mind of God look like?”, they might need all the time they can get.
We urgently need to move away from an artificial dichotomy between techno-hype and techno-fear; oscillating from one to the other is no way to ensure safe advances in technology. We need to communicate with those at the forefront of AI research in an honest, nuanced way and listen to their opinions and arguments, preferably without using a picture of the Terminator in the article.
Those who work with AI and robotics should take care not to mislead the public, and policymakers need access to the best information possible. Luckily, groups like OpenAI are helping with this.
Algorithms are already reshaping our society; regardless of where you think artificial intelligence is going, a confused response to its promises and perils is no good thing.