Where Should AI Ethics Come From? Not Medicine, New Study Says

As fears about AI’s disruptive potential have grown, AI ethics has come to the fore in recent years. Concerns around privacy, transparency and the ability of algorithms to warp social and political discourse in unexpected ways have resulted in a flurry of pronouncements from companies, governments, and even supranational organizations on how to conduct ethical AI development.

The majority have focused on outlining high-level principles that should guide those building these systems. Whether by chance or by design, the principles they have coalesced around closely resemble those at the heart of medical ethics. But writing in Nature Machine Intelligence, Brent Mittelstadt from the University of Oxford points out that AI development is a very different beast to medicine, and a simple copy and paste won’t work.

The four core principles of medical ethics are respect for autonomy (patients should have control over how they are treated), beneficence (doctors should act in the best interest of patients), non-maleficence (doctors should avoid causing harm) and justice (healthcare resources should be distributed fairly).

The more than 80 AI ethics reports published are far from homogeneous, but similar themes of respect, autonomy, fairness, and prevention of harm run through most. And these seem like reasonable principles to apply to the development of AI. The problem, says Mittelstadt, is that while principles are an effective tool in the context of a discipline like medicine, they simply don’t make sense for AI.

Doctors have a clear common goal of promoting the health of the patient, and their interests are given top billing when it comes to ethical decision-making. This central point of solidarity has resulted in a shared history of norms and standards that lead to a broadly homogeneous professional culture and ethical framework. This has further been formalized in professional codes of conduct and regulatory frameworks, with strict penalties for those who fall short.

AI has no equivalent to a patient, and the goals and priorities of AI developers can be very different depending on how they are applying AI and whether they are working in the private or public sphere. AI practitioners are not expected to commit to public service in the way doctors and lawyers are, and there are few mechanisms for holding them to account either professionally or legally.

In reality, trying to create professional standards for the field similar to those found in other disciplines would make little sense, says Mittelstadt. AI developers come from varied disciplines and professional backgrounds, each with their own histories and cultures. “Reducing the field to a single vocation or type of expertise would be an oversimplification,” he writes.

AI systems are also created by large interdisciplinary teams in multiple stages of development and deployment, which makes tracking the ethical implications of an individual developer’s decisions almost impossible, hampering our ability to create standards to guide those choices.

As a result, AI ethics has focused on high-level principles, but at this level of abstraction they are too vague to actually guide action. Ideas like fairness or dignity are not universally agreed on, and therefore each practitioner is left to decide how to implement them.

“The truly difficult part of ethics—actually translating normative theories, concepts, and values into good practices AI practitioners can adopt—is kicked down the road like the proverbial can,” Mittelstadt writes.

In medicine this is achieved through painstaking work by ethics review committees, licensing schemes, and the creation of institutional policies, none of which exist in AI. That means that despite the high-minded language of AI ethics principles, the chances of them being translated into action are slim.

“Without complementary punitive mechanisms and governance bodies to step in when self-governance fails, a principled approach runs the risk of merely providing false assurances of ethical or trustworthy AI,” writes Mittelstadt.

Instead, he calls for a bottom-up approach to AI ethics that focuses on smaller sub-fields of AI, with each developing its own ethical principles and resolving challenging novel cases as they arise. He also suggests licensing developers of high-risk AI applications in the public sphere, like facial recognition or policing, introducing the threat of punitive professional consequences for unethical behavior.

Ethical responsibility should also be refocused on organizations rather than individual professionals, which would make it possible to scrutinize the legitimacy of particular applications and the businesses behind them rather than pinning all the blame on individual developers. And finally, ethics should be treated as a process built into everything developers do, rather than a problem that can be solved with a one-off technical fix.

Image Credit: Image by D1_TheOne from Pixabay

Edd Gent
http://www.eddgent.com/
I am a freelance science and technology writer based in Bangalore, India. My main areas of interest are engineering, computing and biology, with a particular focus on the intersections between the three.