What Does Ethical AI Look Like? Here’s What the New Global Consensus Says

Elon Musk usually isn’t one for advocating regulation and oversight.

But when it comes to AI, he doesn’t mince words. AI is humanity’s “biggest existential threat,” he once proclaimed, to some controversy. While that statement may be overblown, the fears aren’t: AI will be the next technological force that transforms the face of society—for better or worse—much as the industrial revolution once did.

The potential threats of AI are many, and most people agree that ensuring AI is ethical and benefits humanity as a whole is critical as we make this technological leap.

But what exactly does “ethical AI” mean?

Ethics is a shifting, amorphous concept that varies widely across cultures, societies, and value systems. Although there have been numerous attempts at drafting ethical AI guidelines, it remains unclear whether everyone—regardless of sector, socioeconomic status, culture, or religion—is in agreement. What’s ethical for a WEIRD (Western, educated, industrialized, rich, democratic) society may not be so for an Eastern, communist one. Privacy from facial recognition may be paramount to Western societies that view freedom as a human right, but matter far less to Chinese citizens who are accustomed to surveillance—or who even welcome it, if it means they’ll benefit from an AI-powered social credit system.

And herein lies the inherent problem with discussing ethics in AI. Like any other technology, AI knows no boundaries, and a global effort is needed to govern its growth as the technology becomes ever more powerful and intertwined with society.

So how different are our views?

In a study published last week in Nature Machine Intelligence, a team from ETH Zurich in Switzerland took a bird’s-eye view of AI ethics guidelines around the globe. It’s not a uniform sample: wealthy nations and regions, including the US, the EU, and Japan, are represented far more than poorer ones such as Africa, Central Asia, and Latin America. Yet even with this skewed sample, two conclusions emerged.

One, there’s a strong global convergence towards five ethical principles: transparency, justice and fairness, non-maleficence, responsibility, and privacy. Two, people can’t really agree on what any of those words mean when it comes to policy.

The AI Threat

Compared to the internal combustion engines that revolutionized society in the early 1900s, AI comes with far greater inherent threats. A famous one is the fear of superintelligence: computers that outperform humans in every cognitive domain and spur themselves to become exponentially faster, smarter, and more efficient.

But perhaps more relevant is humanity’s use of AI, whether the machine learning tools we already have or those that will arise in the near future. In a world already rampant with wealth disparity, one that’s shaking up the concepts of privacy, freedom, and democracy, AI is readily tainted with the preconceptions and biases of society. If deployed without oversight and caution, scientists fear, AI may amplify our own biased notions rather than acting as the great equalizer once hoped.

Musk isn’t the only one worried. At the government level, the European Commission established the High-Level Expert Group on AI, the UK House of Lords formed the Select Committee on AI, and a global AI coalition sprang up as part of the Organization for Economic Co-operation and Development. These efforts extend far beyond sheer bureaucracy: Google and SAP are among the companies that have publicly released AI guidelines and principles, and non-profit organizations such as the Association for Computing Machinery and Amnesty International have also published “soft-law” documents to guide the development of ethical AI.

“Are these various groups converging on what ethical AI should be, and the ethical principles that will determine the development of AI? If they diverge, what are their differences and can these differences be reconciled?” the team asked.

Five Emerging Principles

To build a first map of these perspectives, the team scoured 84 high-quality online resources on the topic of AI ethics, all retrieved before the end of April this year. The documents covered five languages and were issued by both private and public sectors, with private companies the most frequent publishers, followed by governmental agencies.

Using several levels of content analysis, the team eventually distilled eleven overarching ethical values, with transparency the most frequently mentioned principle. Yet no single ethical concern was common to the entire stack of documents, though some shared underlying themes emerged.
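
To make that counting step concrete, here’s a minimal, purely illustrative Python sketch of how one might tally principle mentions across a set of guideline documents. The keyword stems and sample texts are hypothetical stand-ins; the study itself relied on manual content analysis by researchers, not keyword matching.

```python
from collections import Counter

# Illustrative only: a toy keyword tally over guideline documents.
# The keyword stems and sample texts are hypothetical stand-ins;
# the study's actual coding was done manually by the researchers.
PRINCIPLE_KEYWORDS = {
    "transparency": ["transparen", "explainab", "interpretab"],
    "justice and fairness": ["fairness", "justice", "bias"],
    "non-maleficence": ["harm", "safety", "security"],
    "responsibility": ["responsib", "accountab"],
    "privacy": ["privacy", "data protection"],
}

def tally_principles(documents):
    """Count, per principle, how many documents mention at least one keyword."""
    counts = Counter()
    for doc in documents:
        text = doc.lower()
        for principle, keywords in PRINCIPLE_KEYWORDS.items():
            if any(kw in text for kw in keywords):
                counts[principle] += 1
    return counts

docs = [
    "AI systems should be transparent and their decisions interpretable.",
    "Developers bear responsibility for preventing harm and protecting privacy.",
]
for principle, n in tally_principles(docs).most_common():
    print(f"{principle}: mentioned in {n} of {len(docs)} documents")
```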

One is transparency, or the ability to understand the decisions of AI. It’s “the most prevalent principle in the current literature,” the authors said. Most guidelines touch on increasing interpretability or other acts of communication, with the main goal of reducing harm. Others highlight transparency as a way to foster trust, for legal reasons, or to bolster open dialogue and the principles of democracy.

Yet when it comes to how to pull back the AI veil, guidelines vastly differ: some point to opening the source code, while others believe it’s more impactful to communicate the evidence behind an AI system and its limitations, or to disclose who is responsible for the system and who invests in it.

Another is non-maleficence, a formal way of saying “do no harm.” The general call here is for safety and security, so that AI never causes foreseeable or unintentional harm, including discrimination, violation of privacy, or bodily injury. The principle intersects with another, justice, which monitors AI to prevent or reduce bias. Here, several documents also brought up the concept of fairness—reduced bias around race or gender, for example, and equal access to the technology—as well as reducing the harm of AI taking over jobs.

Yet as before, guidelines vastly differ on who becomes the overseer of justice. One proposed solution is technological standards: for example, training HR algorithms on datasets that span different races. Other ideas include raising public awareness, auditing, or establishing new laws. Some even propose taking an explicit stance against military applications, embodied by Google employees’ protest against the company’s Pentagon work earlier this year.
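
To give a sense of what such a technological standard might look like in code, here’s a minimal sketch of a demographic-parity audit for a hiring model. The group labels, decisions, and 80 percent threshold (the US EEOC’s informal “four-fifths rule”) are illustrative placeholders, not details drawn from the study or any specific guideline.

```python
from collections import defaultdict

# Illustrative only: a toy demographic-parity audit of the kind a
# technical fairness standard might mandate for a hiring algorithm.
# The data and the 80% threshold are placeholder assumptions.
def selection_rates(decisions):
    """Return the fraction of applicants selected within each group."""
    totals, hired = defaultdict(int), defaultdict(int)
    for group, was_hired in decisions:
        totals[group] += 1
        hired[group] += int(was_hired)
    return {g: hired[g] / totals[g] for g in totals}

decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]
rates = selection_rates(decisions)
# Flag the model if any group's selection rate falls below 80 percent
# of the highest group's rate (the "four-fifths rule").
passes = min(rates.values()) >= 0.8 * max(rates.values())
print("Selection rates:", rates, "| passes four-fifths rule:", passes)
```

In this toy example, group_b’s selection rate (one in three) falls below four-fifths of group_a’s (two in three), so the audit flags the model.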

The final two principles, responsibility and privacy, fall along similar lines. “Ethical AI sees privacy both as a value to uphold and as a right to be protected,” the authors concluded, while acknowledging that the documents vastly differed on how to get there, though new or more strictly enforced privacy laws are a popular idea. Responsibility goes hand in hand with transparency and trust, though neither term is usually defined.

In addition to understanding AI reasoning or reducing legal liability, some guidelines also underlined a responsibility to blow the whistle on potential harms and to promote diversity.

The Longest Road

The final conclusion is mixed.

First, the good. An unbiased search of regulatory guidelines found nearly equal numbers issued by the public and private sectors. In other words, regardless of motive, everyone involved in AI is considering its ethical implications. And around the globe, with the Western hemisphere most heavily represented, people seem to agree on some key guiding principles.

The bad, of course, is that many parts of the globe are underrepresented in the discussion, and the solutions proposed to meet the ethical challenges diverge significantly. Given socioeconomic disparities around the world, this is perhaps not surprising; yet the results further emphasize the need to consciously involve underrepresented populations going forward.

In the end, the authors stressed that like any other transformative global issue, ethical guidelines for AI should be as inclusive as possible without sacrificing basic values.

“While global consensus might be desirable it should not come at the cost of obliterating cultural and moral pluralism,” they said. Differences and arguments will arise; similar to the World Trade Organization’s dispute settlement body, it may help to develop ways to adjudicate disagreements over AI ethics and their implementation.

It’s hard for humanity to agree on anything. But as we summon the demon of artificial general intelligence, “for the benefit of all mankind” should take on an entirely new level of meaning.

Image Credit: Gerd Altmann / Pixabay

Shelly Fan (https://neurofantastic.com/)
Shelly Xuelai Fan is a neuroscientist-turned-science writer. She completed her PhD in neuroscience at the University of British Columbia, where she developed novel treatments for neurodegeneration. While studying biological brains, she became fascinated with AI and all things biotech. Following graduation, she moved to UCSF to study blood-based factors that rejuvenate aged brains. She is the co-founder of Vantastic Media, a media venture that explores science stories through text and video, and runs the award-winning blog NeuroFantastic.com. Her first book, "Will AI Replace Us?" (Thames & Hudson) was published in 2019.