Artificial intelligence has been all over the headlines for nearly a decade, as systems have made quick progress on long-standing AI challenges like image recognition, natural language processing, and games. Tech companies have woven machine learning algorithms into search and recommendation engines and facial recognition systems, and OpenAI's GPT-3 and DeepMind's AlphaFold promise even more practical applications, from writing to coding to scientific discovery.
Indeed, we're in the midst of an AI spring, with investment in the technology burgeoning and an overriding sense of optimism and possibility about what it can accomplish, and when.
This time may feel different than previous AI springs due to the aforementioned practical applications and the proliferation of narrow AI into technologies many of us use every day—like our smartphones, TVs, cars, and vacuum cleaners, to name just a few. But it’s also possible that we’re riding a wave of short-term progress in AI that will soon become part of the ebb and flow in advancement, funding, and sentiment that has characterized the field since its founding in 1956.
AI has fallen short of many predictions made over the last few decades; 2020, for example, was heralded by many as the year self-driving cars would start filling up roads, seamlessly ferrying passengers around as they sat back and enjoyed the ride. But the problem has been more difficult than anticipated, and instead of hordes of robot taxis, the most advanced projects remain in trials. Meanwhile, some in the field believe the dominant form of AI—a kind of machine learning based on neural networks—may soon run out of steam absent a series of crucial breakthroughs.
In a paper titled "Why AI Is Harder Than We Think," published last week on the arXiv preprint server, Melanie Mitchell, a computer science professor at Portland State University who is currently at the Santa Fe Institute, argues that AI is stuck in an ebb-and-flow cycle largely because we don't yet truly understand the nature and complexity of human intelligence. Mitchell breaks this overarching point down into four common misconceptions about AI, and discusses what they mean for the future of the field.
1. Progress in narrow intelligence is progress towards general intelligence
Impressive new achievements by AI are often accompanied by the assumption that they bring us closer to human-level machine intelligence. But as Mitchell points out, not only are narrow and general intelligence as different as climbing a tree and landing on the moon; even narrow intelligence still relies heavily on an abundance of task-specific data and human-facilitated training.
Take GPT-3, which some cited as having surpassed "narrow" intelligence: the model was trained to predict text, but turned out to be able to translate between languages, write code, and do simple arithmetic, among other tasks. Yet although GPT-3's capabilities proved more extensive than its creators may have anticipated, all of its skills still lie within the domain it was trained on: language, whether natural or programming.
Becoming adept at a non-language-related skill with no training would signal general intelligence, but this wasn't the case with GPT-3, nor has it been the case with any other recently developed AI. These systems remain narrow in nature, and while their achievements are significant in themselves, they shouldn't be conflated with steps toward the thorough understanding of the world required for general intelligence.
2. What’s easy for humans should be easy for machines
Is AI smarter than a four-year-old? In most senses the answer is no, and that's because skills and tasks we perceive as "easy" are in fact far more complex than we give them credit for, a phenomenon known as Moravec's paradox.
Four-year-olds are pretty good at figuring out cause and effect relationships based on their interactions with the world around them. If, for example, they touch a pot on the stove and burn a finger, they’ll understand that the burn was caused by the pot being hot, not by it being round or silver. To humans this is basic common sense, but algorithms have a hard time making causal inferences, especially without a large dataset or in a different context than the one they were trained in.
The perceptions and choices that take place at a subconscious level in humans rest on a lifetime's worth of experience and learning, even at such an elementary level as "touching hot things will burn you." Because this sort of knowledge becomes reflexive, not even requiring conscious thought, we see it as "easy," but it's quite the opposite. "AI is harder than we think," Mitchell writes, "because we are largely unconscious of the complexity of our own thought processes."
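To make that training-context problem concrete, here is a minimal sketch in Python, invented for illustration rather than taken from Mitchell's paper. A toy "hot pot" classifier gets both the causal feature (temperature) and a spurious one (color); because color happens to track the label perfectly in the training data, the model leans on the shortcut and falls apart when that coincidence breaks.

```python
# Minimal illustration (invented for this article, not from Mitchell's paper):
# a model that latches onto a spurious correlation instead of the causal rule.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_pots(n, hot_pots_are_silver=True):
    """Each pot has [normalized temperature, is_silver]; label 1 means it burns you."""
    temp = rng.uniform(20, 250, n)                          # surface temperature in °C
    burns = (temp > 60).astype(int)                         # the true causal rule
    silver = burns if hot_pots_are_silver else 1 - burns    # color merely correlates
    return np.column_stack([temp / 250, silver]), burns

X_train, y_train = make_pots(1000, hot_pots_are_silver=True)
X_test, y_test = make_pots(1000, hot_pots_are_silver=False)  # the coincidence breaks

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("learned weights [temperature, color]:", model.coef_[0])
print("accuracy in the training context:", model.score(X_train, y_train))
print("accuracy when color no longer tracks heat:", model.score(X_test, y_test))
```

Nothing here is specific to logistic regression; the point is simply that latching onto whatever separates the training data most easily is not the same as grasping why hot things burn.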
3. Human language can describe machine intelligence
Humans have a tendency to anthropomorphize non-human things, from animals to inanimate objects to robots and computers. In doing so, we use the same words we'd use to describe human activities or intelligence, but these words don't quite fit the context, and they can muddle our own understanding of AI. Mitchell borrows the term "wishful mnemonics," coined by computer scientist Drew McDermott in the 1970s: words like "reading," "understanding," and "thinking" are used to describe and evaluate AI, but they don't give us an accurate picture of how these systems actually function or progress.
Even "learning" is a misnomer, Mitchell says, because if a machine truly "learned" a new skill, it would be able to apply that skill in different settings. Finding correlations in datasets and using the patterns it identifies to make predictions or meet benchmarks is useful, but it's not "learning" in the way that humans learn.
So why all the fuss over words, if they're all we have and they get the gist across? Because, Mitchell says, this imprecise language doesn't just mislead the public and the media; it can also shape the way AI researchers think about their systems and carry out their work.
4. Intelligence is all in our heads
Mitchell’s final point is that human intelligence is not contained solely in the brain, but requires a physical body.
This seems self-explanatory; we use our senses to absorb and process information, and we interact with and move through the world in our bodies. Yet the prevailing emphasis in AI research is on the brain: understanding it, replicating various aspects of its form or function, and making AI more like it.
If intelligence lived just in the brain, we’d be able to move closer to reaching human-level AI by, say, building a neural network with the same number of parameters as the brain has synaptic connections, thereby duplicating the brain’s “computing capacity.”
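For a rough sense of scale, here is a back-of-the-envelope comparison; both figures are commonly cited estimates rather than numbers from the article or Mitchell's paper.

```python
# Back-of-the-envelope only: both figures are rough, commonly cited estimates.
synapses_in_human_brain = 1e14   # roughly 100 trillion synaptic connections
gpt3_parameters = 175e9          # GPT-3's published parameter count

print(f"Brain synapses per GPT-3 parameter: {synapses_in_human_brain / gpt3_parameters:,.0f}")
```

Mitchell's point, of course, is that even closing this numerical gap wouldn't capture the parts of intelligence that live outside the brain.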
This sort of parallel may hold in cases where "intelligence" means following a set of rules to work towards a defined goal, such as winning a game of chess or modeling the way proteins fold, both of which computers can already do quite well. But other types of intelligence are far more shaped by, and subject to, emotion, bias, and individual experience.
Going back to the GPT-3 example: the algorithm produces "subjective" intelligence (its own writing) using a set of rules and parameters it learned from a huge dataset of pre-existing subjective intelligence (writing by humans). GPT-3 is hailed as being "creative," but its writing relies on associations it drew between words and phrases in human writing, and that writing is replete with biases, emotion, pre-existing knowledge, common sense, and each writer's unique experience of the world, all of it experienced through a body.
Mitchell argues that the non-rational, subjective aspects of the way humans think and operate aren’t a hindrance to our intelligence, but are in fact its bedrock and enabler. Leading artificial general intelligence expert Ben Goertzel similarly advocates for “whole-organism architecture,” writing, “Humans are bodies as much as minds, and so achieving human-like AGI will require embedding AI systems in physical systems capable of interacting with the everyday human world in nuanced ways.”
Where to From Here?
These misconceptions leave little doubt as to what AI researchers and developers shouldn't do. What's less clear is how to move forward. We must start, Mitchell says, with a better understanding of intelligence itself, which is no small or straightforward task. One good place for AI researchers to look, though, is other scientific disciplines that study intelligence.
Why are we so intent on creating an artificial version of human intelligence, anyway? It evolved over millions of years, is hugely complex and intricate, and is still rife with shortcomings of its own. Perhaps the answer is that we're not trying to build an artificial brain that's as good as ours; we're trying to build one that's better, one that will help us solve currently unsolvable problems.
Human evolution took place over the course of about six million years. Meanwhile, it's been just 65 years since AI became a field of study, and its systems are already writing human-like text, generating convincing fake faces, holding their own in debates, helping make medical diagnoses, and more. Though there's much left to learn, it seems AI is progressing pretty well in the grand scheme of things, and the next step in taking it further is deepening our understanding of our own minds.