The prospect of companies or other organizations run by AI looks increasingly plausible. Researchers say we need to update our laws and the way we train AI to account for this eventuality.
Recent breakthroughs in large language models are forcing us to reimagine which capabilities are purely human. There is some debate around whether these algorithms can understand and reason the way humans do, but on a growing number of cognitive tasks they perform as well as, or better than, most humans.
This is driving efforts to incorporate these new skills into all kinds of business functions, from writing marketing copy to summarizing technical documents or even assisting with customer support. And while there may be fundamental limits to how far up the corporate ladder AI can climb, the idea of AI taking on managerial or even C-suite roles is creeping into the realm of possibility.
That’s why legal experts in AI are now calling for us to adapt laws given the possibility of AI-run companies and for developers of the technology to alter how they train AI to make the algorithms law-abiding in the first place.
“A legal singularity is afoot,” Daniel Gervais, a law professor at Vanderbilt University, and John Nay, an entrepreneur and Stanford CodeX fellow, write in an article in Science. “For the first time, nonhuman entities that are not directed by humans may enter the legal system as a new ‘species’ of legal subjects.”
While non-human entities like rivers or animals have sometimes been granted the status of legal subjects, the authors write, one of the main barriers to their full participation in the law is the inability to use or understand language. With the latest batch of AI, that barrier has either been breached already or will be soon, depending on who you ask.
This opens the prospect, for the first time, of non-human entities directly interacting with the law. Indeed, the authors point out that lawyers already use AI-powered tools to help them do their jobs, and recent research has shown that LLMs can carry out a wide range of legal reasoning tasks.
And while today’s AI is still far from being able to run a company by itself, they highlight that in some jurisdictions there are no rules requiring that a corporation be overseen by humans, and the idea of an AI managing the affairs of a business is not explicitly barred by law.
If such an AI company were to arise, it’s not entirely clear how the courts would deal with it. The two most common consequences for breaches of the law are financial penalties and imprisonment, which do not translate particularly well to a piece of disembodied software.
While banning AI-controlled companies is a possibility, the authors say it would require massive international legislative coordination and could stifle innovation. Instead, they argue that the legal system should lean into the prospect and work out how best to deal with it.
One important avenue is likely to be coaxing AI to be more law-abiding. This could be accomplished by training a model to predict which actions are consistent with particular legal principles. That model could then teach other models, trained for different purposes, how to take actions in line with the law.
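As a rough, hypothetical illustration of that idea (not the authors' actual method), one model that scores actions for legal compliance could filter or rank the candidate actions proposed by another, task-oriented model. In the toy sketch below, both models are stand-in functions with made-up names and rules; a real system would use learned models in their place.

```python
def compliance_score(action: str) -> float:
    """Toy stand-in for a trained 'legal' model: returns a score in
    [0, 1] for how consistent an action is with a set of principles.
    The keyword list is a hypothetical placeholder for learned judgment."""
    violations = ["mislead", "collude", "discriminate"]
    hits = sum(word in action.lower() for word in violations)
    return max(0.0, 1.0 - 0.5 * hits)

def pick_lawful_action(candidates: list[str], threshold: float = 0.9) -> str:
    """Toy stand-in for a task model's decision step: the legal model
    vetoes low-scoring candidates, then the best survivor is chosen."""
    lawful = [a for a in candidates if compliance_score(a) >= threshold]
    if not lawful:
        raise ValueError("no candidate meets the compliance threshold")
    return max(lawful, key=compliance_score)

candidates = [
    "mislead regulators about emissions data",
    "publish an accurate quarterly report",
]
print(pick_lawful_action(candidates))  # -> publish an accurate quarterly report
```

The design choice being illustrated is the separation of concerns: the compliance model can be trained once on legal principles and then reused to constrain many different task models.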
The ambiguous nature of the law, which can be highly contextual and often must be hashed out in court, makes this challenging. But the authors call for training methods that imbue algorithms with what they call the "spirit of the law," rather than more formulaic rules about how to behave in different situations.
Regulators could make this kind of training for AIs a legal requirement, and the authorities could also develop their own AI designed to monitor the behavior of other models to ensure they’re in compliance with the law.
While the researchers acknowledge some may scoff at the idea of allowing AIs to directly control companies, they argue it’s better to bring them into the fold early so we can work through potential challenges.
“If we don’t proactively wrap AI agents in legal entities that must obey human law, then we lose considerable benefits of tracking what they do, shaping how they do it, and preventing harm,” they write.