‘Droidspeak’: AI Agents Now Have Their Own Language Thanks to Microsoft

Getting AIs to work together could be a powerful force multiplier for the technology. Now, Microsoft researchers have invented a new language to help their models talk to each other faster and more efficiently.

AI agents are the latest buzzword in Silicon Valley. These are AI models that can carry out complex, multi-step tasks autonomously. But looking further ahead, some see a future where multiple AI agents collaborate to solve even more challenging problems.

Given that these agents are powered by large language models (LLMs), getting them to work together usually relies on agents speaking to each other in natural language, often English. But despite their expressive power, human languages might not be the best medium of communication for machines that fundamentally operate in ones and zeros.

This prompted researchers from Microsoft to develop a new method of communication that allows agents to talk to each other in the high-dimensional mathematical language underpinning LLMs. They've named the new approach Droidspeak—a reference to the beep-and-whistle language used by droids in Star Wars—and in a preprint paper posted to arXiv, the Microsoft team reports the approach let models communicate 2.78 times faster with little loss of accuracy.

Typically, when AI agents communicate using natural language, they not only share the output of the current step they’re working on, but also the entire conversation history leading up to that point. Receiving agents must process this big chunk of text to understand what the sender is talking about.

This creates considerable computational overhead, which grows rapidly if agents engage in a repeated back-and-forth. Such exchanges can quickly become the biggest contributor to communication delays, say the researchers, limiting the scalability and responsiveness of multi-agent systems.
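A toy calculation makes the scaling problem concrete. If the receiving agent must re-read the entire history at every turn, the work per turn grows with every exchange, and the total work grows quadratically with the number of turns. The sketch below is our own illustration of that arithmetic, not code from the paper:

```python
def tokens_processed_per_turn(turn_outputs):
    """Toy model: tokens the receiving agent must process at each turn
    when the full conversation history is resent as natural language."""
    costs = []
    history = 0
    for output_tokens in turn_outputs:
        history += output_tokens
        costs.append(history)  # the receiver re-reads everything so far
    return costs

# Ten exchanges of 500 tokens each: per-turn cost grows linearly,
# so total work grows quadratically with the number of turns.
costs = tokens_processed_per_turn([500] * 10)
print(costs[0], costs[-1], sum(costs))  # 500 5000 27500
```

By the tenth turn the receiver is processing ten times as much text as in the first, which is why repeated back-and-forth quickly dominates communication delays.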

To break the bottleneck, the researchers devised a way for models to directly share the data created in the computational steps preceding language generation. In principle, the receiving model would use this directly rather than processing language and then creating its own high-level mathematical representations.

However, transferring this data between models isn't simple. Different models represent language in very different ways, so the researchers focused on communication between versions of the same underlying LLM.

Even then, they had to be smart about what kind of data to share. Some data can be reused directly by the receiving model, while other data needs to be recomputed. The team devised a way of working this out automatically to squeeze the biggest computational savings from the approach.
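One way to picture that selection step: score how much each layer of the receiving model has diverged from the sender's, reuse the cached data for the most similar layers, and recompute the rest. The function below is a hypothetical greedy sketch under those assumptions; the names, scores, and budget are our own illustration, not the paper's actual method:

```python
def plan_cache_reuse(layer_divergence, budget):
    """Hypothetical sketch: given a per-layer divergence score between
    sender and receiver models (0.0 = identical), mark the most similar
    layers' cached data for direct reuse and the rest for recomputation,
    keeping total accepted divergence under an error budget."""
    plan = {}
    spent = 0.0
    # Greedily accept layers in order of similarity.
    for layer, score in sorted(layer_divergence.items(), key=lambda kv: kv[1]):
        if spent + score <= budget:
            plan[layer] = "reuse"
            spent += score
        else:
            plan[layer] = "recompute"
    return plan

# Layer 2 has drifted far from the sender, so only it is recomputed.
plan = plan_cache_reuse({0: 0.01, 1: 0.02, 2: 0.30, 3: 0.05}, budget=0.10)
print(plan)  # {0: 'reuse', 1: 'reuse', 3: 'reuse', 2: 'recompute'}
```

The point of automating a decision like this is that every reused layer is computation the receiving model skips, so the savings come from reusing as much as accuracy allows.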

Philip Feldman at the University of Maryland, Baltimore County told New Scientist that the resulting communication speed-ups could help multi-agent systems tackle bigger, more complex problems than is possible using natural language.

But the researchers say there’s still plenty of room for improvement. For a start, it would be helpful if models of different sizes and configurations could communicate. And they could squeeze out even bigger computational savings by compressing the intermediate representations before transferring them between models.

However, it seems likely this is just the first step towards a future in which the diversity of machine languages rivals that of human ones.

Image Credit: Shawn Suttle from Pixabay

Edd Gent (http://www.eddgent.com/)
I am a freelance science and technology writer based in Bangalore, India. My main areas of interest are engineering, computing and biology, with a particular focus on the intersections between the three.