Evidence Shows AI Systems Are Already Too Much Like Humans. Will That Be a Problem?

What happens when you can't tell the difference between a human and an AI chatbot? We're about to find out.

Jevin West, Kai Riemer, and Sandra Peter
May 26, 2025
Image: A human silhouetted in front of a curtain of LED lights. Credit: Caleb Jack on Unsplash

What if we could design a machine that could read your emotions and intentions, write thoughtful, empathetic, perfectly timed responses—and seemingly know exactly what you need to hear? A machine so seductive, you wouldn’t even realize it’s artificial. What if we already have?

In a comprehensive meta-analysis published in the Proceedings of the National Academy of Sciences, we show that the latest generation of large-language-model-powered chatbots match or exceed most humans in their ability to communicate. A growing body of research shows these systems now reliably pass the Turing test, fooling humans into thinking they are interacting with another human.

None of us was expecting the arrival of super communicators. Science fiction taught us that artificial intelligence would be highly rational and all-knowing, but lack humanity.

Yet here we are. Recent experiments have shown that models such as GPT-4 outperform humans at writing persuasively and empathetically. Another study found that large language models (LLMs) excel at assessing nuanced sentiment in human-written messages.

LLMs are also masters at roleplay, assuming a wide range of personas and mimicking nuanced linguistic character styles. This is amplified by their ability to infer human beliefs and intentions from text. Of course, LLMs do not possess true empathy or social understanding—but they are highly effective mimicking machines.

We call these systems “anthropomorphic agents.” Traditionally, anthropomorphism refers to ascribing human traits to non-human entities. However, LLMs genuinely display highly human-like qualities, so calls to avoid anthropomorphizing them will fall flat.

This is a landmark moment: the point at which you can no longer tell whether you are talking to a human or an AI chatbot online.

On the Internet, Nobody Knows You’re an AI

What does this mean? On the one hand, LLMs promise to make complex information more widely accessible via chat interfaces, tailoring messages to individual comprehension levels. This has applications across many domains, such as legal services or public health. In education, their roleplay abilities can be used to create Socratic tutors that ask personalized questions and help students learn.

At the same time, these systems are seductive. Millions of users already interact with AI companion apps daily. Much has been said about the negative effects of companion apps, but anthropomorphic seduction comes with far wider implications.

Users already trust AI chatbots enough to disclose highly personal information. Pair this with the bots’ highly persuasive qualities, and genuine concerns emerge.

Recent research by AI company Anthropic further shows that its Claude 3 chatbot was at its most persuasive when allowed to fabricate information and engage in deception. Given that AI chatbots have no moral inhibitions, they are poised to be much better at deception than humans.

This opens the door to manipulation at scale to spread disinformation or create highly effective sales tactics. What could be more effective than a trusted companion casually recommending a product in conversation? ChatGPT has already begun to provide product recommendations in response to user questions. It’s only a short step to subtly weaving product recommendations into conversations—without you ever asking.

What Can Be Done?

It is easy to call for regulation, but harder to work out the details.

The first step is to raise awareness of these abilities. Regulation should prescribe disclosure: users should always know when they are interacting with an AI, as the EU AI Act mandates. But this will not be enough, given the AI systems’ seductive qualities.

The second step must be to better understand anthropomorphic qualities. Existing LLM tests measure “intelligence” and knowledge recall, but none measures the degree of “human likeness.” With such a test, AI companies could be required to disclose anthropomorphic abilities under a rating system, and legislators could determine acceptable risk levels for certain contexts and age groups.

The cautionary tale of social media, which went largely unregulated until much harm had been done, suggests there is some urgency. If governments take a hands-off approach, AI is likely to amplify existing problems, from the spread of mis- and disinformation to the loneliness epidemic. In fact, Meta chief executive Mark Zuckerberg has already signaled that he would like to fill the void of real human contact with “AI friends.”

Relying on AI companies to refrain from further humanizing their systems seems ill-advised. All developments point in the opposite direction. OpenAI is working on making its systems more engaging and personable, with the ability to give your version of ChatGPT a specific “personality.”

ChatGPT has generally become more chatty, often asking follow-up questions to keep the conversation going, and its voice mode adds even more seductive appeal.

Much good can be done with anthropomorphic agents. Their persuasive abilities can be used for good causes as well as ill ones, from fighting conspiracy theories to encouraging users to donate and adopt other prosocial behaviors.

Yet we need a comprehensive agenda across the spectrum of design and development, deployment and use, and policy and regulation of conversational agents. When AI can inherently push our buttons, we shouldn’t let it change our systems.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Jevin West is a professor and the associate dean for research in the Information School at the University of Washington. He is the co-founder and the inaugural director of the Center for an Informed Public at UW, aimed at resisting strategic misinformation, promoting an informed society, and strengthening democratic discourse. He is also the co-founder of the DataLab at UW, a data science fellow at the eScience Institute, and affiliate faculty for the Center for Statistics & Social Sciences. His research and teaching focus on the impact of data and technology on science, with an emphasis on slowing the spread of misinformation in and about science. He has published papers in computer science, human-computer interaction, information science, biology, philosophy, law, and sociology. He is the co-author of the book “Calling Bullshit: The Art of Skepticism in a Data-Driven World,” which helps non-experts question numbers, data, and statistics without an advanced degree in data science.

Kai Riemer is professor of information technology and organization and director of Sydney Executive Plus at the University of Sydney Business School. He has extensive experience with industry-funded research and is the co-director of the Motus Lab, which researches the application of AI-based digital human technologies in business and society. Kai’s expertise spans the fields of artificial intelligence, collaborative systems, the future of work, emerging technologies, and the philosophy of technology. He consults for executives and boards and is frequently asked to comment and speak on issues around the future of business and technology. He co-hosts The Unlearn Project, a podcast about changing common sense.

Dr. Sandra Peter is director of Sydney Executive Plus and associate professor at the University of Sydney Business School. Her research expertise and practice focus on engaging with the future in productive ways, and the impact of emerging technologies and AI on business and society. Most recently, she co-authored The 2025 Skills Horizon report, a dynamic guide to the skills leaders and executives need to lead through the next decade. Sandra has led strategic initiatives and worked on executive programs with business, research organizations, government, and defense. She has published in leading business journals and worked on a wide range of educational research initiatives and programs, including with UNESCO and the Department of Education. Sandra is a sought-after keynote speaker and regularly contributes commentary, interviews, and research in national and international media.
