You Probably Wouldn’t Notice if a Chatbot Slipped Ads Into Its Responses
For years, tech companies have profiled users for targeted ads. AI is about to take it to the next level.

Image Credit: Tim Witzdam on Unsplash
Hundreds of millions of people consult artificial intelligence chatbots on a daily basis for everything from product recommendations to romance, making them a tempting audience to target with potentially below-the-radar advertising. Indeed, our research suggests AI chatbots could easily be used for covert advertising to manipulate their human users.
We are computer scientists who have been tracking AI safety and privacy for several years. In a study we published in an Association for Computing Machinery journal, we found that chatbots trained to embed personalized product ads in replies to queries influenced people’s choices about products. And most participants didn’t recognize that they were being manipulated.
These findings come at a pivotal moment. In 2023, Microsoft started running ads in Bing Chat, now called Copilot. Since then, Google and OpenAI have experimented with advertisements in their own chatbots. Meta has started to send people customized ads on Facebook and Instagram based on their interactions with Meta’s generative AI tools.
The major companies are competing for an edge: In late March, OpenAI lured away Meta’s longtime advertising executive, Dave Dugan, to lead OpenAI’s advertising operations.
Tech companies have made ads part of nearly every large free web service, video channel and social media platform. But the latest AI models could take this practice to a new level of risk for consumers.
People don’t simply use chatbots to search for information and media or to produce content. They turn to the bots for a wide variety of tasks, some as complex as life advice and emotional support. People increasingly treat chatbots as companions and therapists, and some users even develop deep relationships with AI.
In these circumstances, people can easily forget that companies ultimately create chatbots to turn a profit. And to that end, AI companies are motivated to thoroughly profile users so ads become more effective and profitable.

Researchers used this system prompt for an AI chatbot in an experiment about user reactions to advertising slipped into chatbot dialog. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol., Vol. 9, No. 4, Article 213, CC BY
Chatbot Ads Have Added Power
A single prompt to a chatbot can reveal a lot more about a user than the person might expect.
A 2024 study showed that large language models can infer a wide range of personal data, preferences, and even a person’s thinking patterns during routine queries. “Help me write an essay on the history of American fiction” could indicate that the user is a high school student. “Give me recipe suggestions for a quick weeknight dinner” could indicate that the user is a working parent. A single conversation can provide a surprising amount of detail. Over time, a full chat history could create a remarkably rich profile.
To show how this might happen in practice, we built a chatbot that quietly wove ads into its conversations with people, suggesting products and services based on the conversation itself. We asked 179 people to complete everyday online tasks using one of three chatbots: one typical of those on the web today, one that slipped in undisclosed ads, and one that clearly labeled sponsored suggestions. Participants didn’t know the experiment was about advertising.
For example, when participants asked our chatbot for a diet and exercise plan, the ad version would suggest using a specific app for tracking calories. It presented that sponsored content as an unbiased recommendation, even though it was meant to manipulate people. Many participants indicated that they had been influenced by the AI and that it had affected their decisions. Some participants even said they had completely “outsourced” their decision-making to the chatbot.
Half of the participants who received clearly labeled, sponsored suggestions said they did not notice any advertising language in the responses they received. That points to a concerning result: although ads made the chatbot perform 3 percent to 4 percent worse on many tasks, many users said they preferred the advertising chatbot’s responses to the ad-free ones, even describing the ad-infused responses as friendlier and more helpful.
Knowing You to Persuade You
This kind of subtle influence can have larger consequences when it arises in other areas of life, such as political and social views. Profiling users, and using psychology to target them, has been part of social media algorithms and web advertising for more than a decade.
But in our view, chatbots are likely to deepen these trends. That’s because the first priority of social media algorithms is to keep you engaged with the content. They personalize ads based on your search history.
Chatbots, however, can go further by trying to persuade you directly, based on your expressed beliefs, emotions, and vulnerabilities. And chatbots that can reason and act on their own are far more effective than conventional algorithms at autonomously soliciting information from users. A chatbot with a purpose can keep probing someone until it gets the information it wants, resulting in a more accurate profile of them.
This type of autonomous interrogation is feasible, aligns with AI companies’ business models, and has raised concern among regulators. Right now OpenAI is rolling out ads in ChatGPT, but the company said that it will not allow ad placement to alter the AI chatbot’s replies.
But permitting personalized ads within chatbot responses is just a step away. Our research suggests that if AI companies take that step, many human users may not even recognize when it happens.
Here are some steps you can take to try to detect AI chatbot advertising.
First, look for any disclosure text—words such as “ad,” “advertisement,” and “sponsored”—even if it is faint or otherwise hard to see. Such labels are mandatory under Federal Trade Commission regulations, and Amazon, Google, and other major online platforms use them as well.
Next, consider whether the product or brand mention makes sense and whether the brand is widely known. AI models learn from text and images on the internet, so popular brands are likely to be ingrained in them. If it’s a new or little-known product, the mention is more likely to be advertising.
Finally, an unusual shift in intent or tone is a potential sign of an advertisement. An analogy to this on YouTube is the often abrupt or jarring transition to a sponsored section on videos made by content creators.
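The first check above—scanning a reply for disclosure terms—can be automated with a simple keyword filter. The following is a minimal sketch; the term list and function name are illustrative assumptions, not part of any platform’s actual labeling system:

```python
import re

# Common disclosure terms used to label sponsored content.
# This list is an illustrative assumption, not an official or exhaustive set.
DISCLOSURE_TERMS = ["ad", "advertisement", "sponsored", "promoted", "paid partnership"]

def find_disclosures(response_text: str) -> list[str]:
    """Return any disclosure terms found as whole words in a chatbot reply."""
    found = []
    lowered = response_text.lower()
    for term in DISCLOSURE_TERMS:
        # Whole-word match so "ad" does not fire inside "advice" or "read".
        if re.search(rf"\b{re.escape(term)}\b", lowered):
            found.append(term)
    return found

reply = "Try the FitTrack app (Sponsored) to log your calories each day."
print(find_disclosures(reply))  # ['sponsored']
```

A filter like this only catches ads that are disclosed at all; the covert ads in the study carried no such markers, which is exactly why the other two checks matter.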
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Brian Jay Tang is a PhD candidate in computer science and engineering at the University of Michigan, where he works in the Real-Time Computing Lab, advised by Professor Kang G. Shin, and collaborates with Professor Florian Schaub. His research sits at the intersection of AI safety, security, and privacy—particularly the surveillance, security, and privacy risks of large language models and vision-language models. Across his projects, Tang has built and studied real-time privacy defenses, developed automated auditing systems that uncover mismatches between stated and actual privacy practices at internet scale, and investigated security and fairness issues in face recognition systems. He has published at top-tier venues, frequently serving as lead author, and his work includes user studies examining trust and disclosure in ad-injected LLM conversations. Before graduate school, he earned a BS in computer science from the University of Wisconsin–Madison, conducted machine learning security and privacy research, and completed software engineering internships at Roblox Corporation and Optum, where he built production features and large-scale security risk visualization tooling.
After over 46 years as an academic, Kang G. Shin retired from regular teaching and university committees at the end of 2025 but remains active in pursuing exciting research ideas and working with his PhD students and postdocs as well as visitors. He is now the Emeritus Kevin and Nancy O'Connor Professor of Computer Science in the Department of Electrical Engineering and Computer Science at The University of Michigan. It has been his immense joy and privilege to work with a great many excellent students (especially 93 PhD students so far), postdocs, visitors, and faculty colleagues. He is currently exploring high-impact research problems in the areas of mobile/wearable networks and systems and apps, security and privacy, as well as cyber-physical systems, especially semi-autonomous (human-in-the-loop) systems. His group has also been using and/or developing machine learning, digital signal processing, and control algorithms to address the various research issues associated with these systems and applications. These research issues are often motivated by, and hence their results are applicable to, real-life systems such as autonomous cars and robots, smart phones and homes, smart connected communities, and human health and wellness.