Study: AI Chatbots Choose Friends Just Like Humans Do
GPT-4, Claude, and Llama sought out popular peers, connected with others via existing friends, and gravitated towards those similar to them.

Image Credit: Cash Macanaya on Unsplash
As AI wheedles its way into our lives, how it behaves socially is becoming a pressing question. A new study suggests AI models build social networks in much the same way as humans.
Tech companies are enamored with the idea that agents—autonomous bots powered by large language models—will soon work alongside humans as digital assistants in everyday life. But for that to happen, these agents will need to navigate humanity’s complex social structures.
This prospect prompted researchers at Arizona State University to investigate how AI systems might approach the delicate task of social networking. In a recent paper in PNAS Nexus, the team reports that models such as GPT-4, Claude, and Llama seem to behave like humans by seeking out already popular peers, connecting with others via existing friends, and gravitating towards those similar to them.
“We find that [large language models] not only mimic these principles but do so with a degree of sophistication that closely aligns with human behaviors,” the authors write.
To investigate how AI might form social structures, the researchers assigned AI models a series of controlled tasks where they were given information about a network of hypothetical individuals and asked to decide who to connect to. The team designed the experiments to investigate the extent to which models would replicate three key tendencies in human networking behavior.
The first tendency is known as preferential attachment, where individuals link up with already well-connected people, creating a kind of “rich get richer” dynamic. The second is triadic closure, in which individuals are more likely to connect with friends of friends. And the third is homophily, or the tendency to connect with others who share similar attributes.
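To make those three mechanisms concrete, here’s a minimal sketch of how each one can be scored when weighing a candidate connection. This is not the study’s code: the toy network, the interest sets, and the equal weighting are all invented for illustration.

```python
# Toy network: who is already connected to whom (invented data).
network = {
    "alice":    {"bob", "carol", "dave", "erin"},  # the well-connected hub
    "bob":      {"alice", "carol"},
    "carol":    {"alice", "bob"},
    "dave":     {"alice"},
    "erin":     {"alice"},
    "newcomer": set(),
}

# Invented attributes for the homophily score.
interests = {
    "alice": {"music", "ai"}, "bob": {"ai", "chess"},
    "carol": {"music", "hiking"}, "dave": {"chess"},
    "erin": {"music", "ai"}, "newcomer": {"music", "ai"},
}

def preferential_attachment(candidate):
    """Popularity: candidates with more existing ties score higher."""
    return len(network[candidate])

def triadic_closure(person, candidate):
    """Friends of friends: count the mutual connections."""
    return len(network[person] & network[candidate])

def homophily(person, candidate):
    """Similarity: overlap in attributes (Jaccard index)."""
    a, b = interests[person], interests[candidate]
    return len(a & b) / len(a | b)

def score(person, candidate):
    # Equal weights purely for illustration; how these tendencies
    # trade off against each other is exactly what the study probes.
    return (preferential_attachment(candidate)
            + triadic_closure(person, candidate)
            + homophily(person, candidate))

person = "newcomer"
candidates = [c for c in network if c != person and c not in network[person]]
for c in sorted(candidates, key=lambda c: score(person, c), reverse=True):
    print(f"{c}: {score(person, c):.2f}")
```

In the experiments, the models faced essentially this judgment, but expressed in natural language: a text description of the network rather than explicit scores.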
The team found the models mirrored all of these very human tendencies in their experiments, so they decided to test the algorithms on more realistic problems.
They borrowed datasets that captured three different kinds of real-world social networks—groups of friends at college, nationwide phone-call data, and internal company data that mapped out communication history between different employees. They then fed the models various details about individuals within these networks and got them to reconstruct the connections step by step.
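The team’s harness isn’t reproduced here, but the step-by-step reconstruction can be pictured with a hypothetical scaffold like the one below. The choose_connection function stands in for a call to the model, and the edge list, profiles, and stub decision rule are all invented.

```python
# Hypothetical scaffold for the step-by-step reconstruction task.
# In the real study, `choose_connection` would prompt an LLM with details
# about the individuals; here it's a stub so the loop runs end to end.

true_edges = {("a", "b"), ("a", "c"), ("b", "c"), ("c", "d")}  # ground truth
nodes = ["a", "b", "c", "d"]
profiles = {n: f"profile of {n}" for n in nodes}  # invented details

def choose_connection(person, built, profiles):
    """Placeholder for the model's decision. A real harness would
    serialize `built` and `profiles` into a prompt and parse the reply;
    this stub just picks the most popular node seen so far."""
    others = [n for n in nodes if n != person]
    return max(others, key=lambda n: sum(1 for edge in built if n in edge))

built = set()
for person in nodes:
    choice = choose_connection(person, built, profiles)
    built.add(tuple(sorted((person, choice))))

# Crude accuracy signal: how much of the real network was recovered.
overlap = len(built & true_edges) / len(true_edges)
print(f"recovered {overlap:.0%} of the true edges")
```

Comparing the edges a model proposes against the edges that actually formed gives a direct measure of how human-like its networking choices are.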
Across all three networks, the models replicated the kind of decision making seen in humans. The dominant effect tended to be homophily, though in the company communication setting the researchers saw what they called “career-advancement dynamics,” with lower-level employees consistently preferring to connect to higher-status managers.
Finally, the team compared the models’ decisions directly with those of humans, enlisting more than 200 participants and giving them the same task as the machines. Both had to pick which individuals to connect with in a network in two different contexts: forming friendships at college and making professional connections at work. Humans and AI alike prioritized connecting with similar people in the friendship setting and with more popular people in the professional setting.
The researchers say the high level of consistency between AI and human decision making could make these models useful for simulating human social dynamics. This could be helpful in social science research but also, more practically, for things like testing how people might respond to new regulations or how changes to moderation rules might reshape social networks.
However, they also note this means agents could reinforce some of humanity’s less desirable tendencies, such as the inclination to create echo chambers, information silos, and rigid social hierarchies.
In fact, while there were some outliers among the human participants, the models were more consistent in their decision making. That suggests introducing them into real social networks could reduce the overall diversity of behavior, reinforcing any structural biases in those networks.
Nonetheless, it seems future human-machine social networks may end up looking more familiar than one might expect.