AI is the talk of the town these days. But despite the technology’s impressive accomplishments—or perhaps because of them—not all of that talk is positive. There was a New York Times tech columnist’s piece about his unsettling interaction with ChatGPT in February; an open letter calling for a moratorium on AI research in March; “godfather of AI” Geoffrey Hinton’s dramatic resignation from Google and warning about the dangers of AI; and just this week, OpenAI CEO Sam Altman’s testimony before Congress, in which he said his “worst fear is we cause significant harm to the world” and encouraged legislation around the technology (though he also argued that generative AI should be treated differently, which would be convenient for his company).
It seems these warnings (along with all the other media circulating on the topic) have reached the American public loud and clear, and people don’t quite know what to think—but many are getting nervous. A Reuters poll conducted last week found that more than half of Americans believe AI poses a threat to humanity’s future.
The poll was conducted online between May 9 and May 15, with 4,415 adults participating, and the results were published yesterday. More than two-thirds of respondents expressed concern about possible negative impacts of AI, while 61 percent believe it could be a threat to civilization.
“It’s telling such a broad swath of Americans worry about the negative effects of AI,” said Landon Klein, director of US policy at the Future of Life Institute, the organization behind the previously mentioned open letter. “We view the current moment similar to the beginning of the nuclear era, and we have the benefit of public perception that is consistent with the need to take action.”
One nebulous aspect of the poll, and of many of the headlines about AI we see on a daily basis, is how the technology is defined. What are we referring to when we say “AI”? The term encompasses everything from recommendation algorithms that serve up content on YouTube and Netflix, to large language models like ChatGPT, to models that can design incredibly complex protein architectures, to the Siri assistant built into many iPhones.
IBM’s definition is simple: “a field which combines computer science and robust datasets to enable problem-solving.” Google, meanwhile, defines it as “a set of technologies that enable computers to perform a variety of advanced functions, including the ability to see, understand and translate spoken and written language, analyze data, make recommendations, and more.”
It could be that people’s fear and distrust of AI comes partly from a lack of understanding of it, and a stronger focus on unsettling examples than positive ones. The AI that can design complex proteins may help scientists discover stronger vaccines and other drugs, and could do so on a vastly accelerated timeline.
In fact, biotechnology and medicine are two fields for which AI holds enormous promise, be it by modeling millions of proteins, coming up with artificial enzymes, powering brain implants that help disabled people communicate, or helping diagnose conditions like Alzheimer’s.
Sebastian Thrun, a computer science professor at Stanford who founded Google X, pointed out that there’s not enough public awareness of AI’s potential for positive impact. “The concerns are very legitimate, but I think what’s missing in the dialogue in general is why are we doing this in the first place?” he said. “AI will raise people’s quality of life, and help people be more competent and more efficient.”
While 61 percent of the poll’s respondents said AI could be a risk to humanity, only 22 percent said it won’t be a risk; the remaining 17 percent weren’t sure.
However, the (sort of?) good news is that AI isn’t the biggest thing Americans are losing sleep over. The top worry at the moment is, unsurprisingly, the economy (82 percent of respondents fear a looming recession), with crime coming in second (77 percent said they support increasing police funding to fight crime).
If an AI solution came along that could, say, point out economic strategies humans haven’t yet thought of, would that make people less wary of it?
Given everything else the tech can do, this doesn’t seem like such a long shot.