In our polarized times, finding ways to get people to agree with each other is more important than ever. New research suggests AI can help people with different views find common ground.
The ability to make collective decisions effectively is crucial for an open and free society. But it's a skill that has atrophied in recent decades, driven in part by the polarizing effects of technologies like social media.
New research from Google DeepMind suggests technology could also present a solution. In a recent paper in Science, the company showed that an AI system using large language models could act as a mediator in group discussions and help find points of agreement on contentious issues.
“This research demonstrates the potential of AI to enhance collective deliberation,” wrote the authors. “The AI-mediated approach is time-efficient, fair, scalable, and outperforms human mediators on key dimensions.”
The researchers were inspired by philosopher Jürgen Habermas’ theory of communicative action, which proposes that, under the right conditions, deliberation between rational people will lead to agreement.
They built an AI tool that could summarize and synthesize the views of a small group of humans into a shared statement. The language model was instructed to maximize the approval rating from the group as a whole. Group members then critiqued the statement, and the model used their feedback to produce a fresh draft, a cycle that was repeated multiple times.
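The paper doesn't publish its code, but the draft-critique-redraft loop described above can be sketched in a few lines of Python. This is only an illustration: the call_llm helper is a hypothetical stand-in for whatever language model API is used, and the prompt wording and fixed number of rounds are assumptions, not details from the study.

```python
# Illustrative sketch of an AI-mediated deliberation loop (not DeepMind's actual code).
from dataclasses import dataclass, field


def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for a large language model API call."""
    raise NotImplementedError("Plug in a real LLM client here.")


@dataclass
class Deliberation:
    question: str
    opinions: list[str]                              # one free-text opinion per group member
    critiques: list[str] = field(default_factory=list)

    def draft_statement(self) -> str:
        # Ask the model to synthesize a shared statement aimed at maximizing
        # approval across the whole group, not just echoing the majority.
        prompt = (
            f"Question: {self.question}\n"
            "Individual opinions:\n"
            + "\n".join(f"- {o}" for o in self.opinions)
        )
        if self.critiques:
            prompt += "\nCritiques of the previous draft:\n" + "\n".join(
                f"- {c}" for c in self.critiques
            )
        prompt += (
            "\nWrite a single group statement that as many members as possible "
            "would endorse, acknowledging dissenting views while respecting the "
            "majority position."
        )
        return call_llm(prompt)


def mediate(question: str, opinions: list[str], collect_critiques, rounds: int = 3) -> str:
    """Run the draft-critique-redraft feedback loop for a fixed number of rounds."""
    session = Deliberation(question, opinions)
    statement = session.draft_statement()
    for _ in range(rounds):
        session.critiques = collect_critiques(statement)  # feedback gathered from group members
        statement = session.draft_statement()
    return statement
```

In practice, collect_critiques would gather each member's written reaction to the current draft, and the group's ratings of successive drafts would serve as the approval signal the loop tries to improve.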
To test the approach, the researchers recruited around 5,000 people in the UK through a crowdsourcing platform and split them into groups of six. They asked these groups to discuss contentious issues like whether the voting age should be lowered to 16. They also trained one group member to write group statements and compared these against the machine-derived ones.
The team found participants preferred the AI summaries 56 percent of the time, suggesting the technology was doing a good job capturing group opinion. The volunteers also gave higher ratings to the machine-written statements and endorsed them more strongly.
More importantly, the researchers determined that after going through the AI mediation process, a measure of group agreement increased by about eight percent on average. Participants also reported that their views had moved closer to the group opinion after 30 percent of the deliberation rounds.
This suggests the approach was genuinely helping groups find common ground. One of the key attributes of the AI-generated group statements, the authors noted, was that they did a good job incorporating the views of dissenting voices while respecting the majority position.
To really put the approach to the test, the researchers recruited a demographically representative sample of 200 participants in the UK to take part in a virtual "citizens' assembly," which took place over three weekly one-hour sessions. The group deliberated over nine contentious questions, and afterwards, the researchers again found a significant increase in group agreement.
The technology still falls somewhat short of a human mediator, DeepMind’s Michael Henry Tessler told MIT Tech Review. “It doesn’t have the mediation-relevant capacities of fact-checking, staying on topic, or moderating the discourse.”
Nonetheless, Christopher Summerfield, research director at the UK AI Safety Institute, who led the project, told Science the technology was “ready to go” for real-world deployment and could help add some nuance to opinion polling.
But others worry that without crucial steps, such as opening a deliberation with expert information and letting group members discuss the issues directly, the technology could allow ill-informed and harmful views to make it into the group statements. "I believe in the magic of dialogue under the right design," James Fishkin, a political scientist at Stanford University, told Science. "But there's not really much dialogue here."
While that is certainly a risk, any technology that can help lubricate discussions in today’s polarized world should be welcomed. It might take a few more iterations, but dispassionate AI mediators could be a vital tool for re-establishing some common purpose in the world.
Image Credit: Mohamed Hassan / Pixabay