AI Won’t Replace Doctors, It Will Augment Them

The future of medicine is a physician-patient-AI golden triangle, one in which machines augment clinical care and diagnostics—one with the patient at its heart.

That is the takeaway message of DeepMind researcher Dr. Alan Karthikesalingam, who presented his vision of AI-enabled healthcare Monday at Singularity University’s Exponential Medicine conference in San Diego.

You’ve probably heard of DeepMind: it’s the company that brought us the jaw-dropping Go-playing AI agent AlphaGo. It’s also the company that pioneered a powerful approach called deep reinforcement learning, which can train AI agents to solve increasingly complex problems without being explicitly told what to do.

“It’s clear that there’s been remarkable progress in the underlying research of AI,” said Karthikesalingam. “But I think we’re also at an interesting inflection point where these algorithms are having concrete, positive applications in the real world.”

And what better domain than healthcare in which to apply the fledgling technology to transform human lives?

Dr. Alan Karthikesalingam at Exponential Medicine

Caution and Collaboration

Of course, healthcare is vastly more complicated than a board game, and Karthikesalingam acknowledges that any use of AI in medicine needs to be approached with a hefty dose of humility and realism.

Perhaps more than any other field, medicine puts safety first and foremost. Since the birth of medicine, healthcare professionals have acted as the main gatekeepers, ensuring that new treatments and technologies demonstrably benefit patients. And for now, doctors remain an absolutely critical cog in the healthcare machinery.

The goal of AI is not to replace doctors, stressed Karthikesalingam. Rather, it is to augment physicians' performance, freeing them from menial tasks and offering alternative assessments or guidance that might otherwise slip their notice.

This physician-guided approach is reflected in the myriad healthcare projects DeepMind is dipping its toes into.

A collaboration with Moorfields Eye Hospital, one of the “best eye hospitals in the world,” yielded an AI that could diagnose eye disease and perform triage. The algorithm could analyze detailed scans of the eye to identify early symptoms and prioritize patient cases based on severity and urgency.

It’s the kind of work that normally requires over twenty years of experience to perform well. When trained, the algorithm had a success rate similar to that of experts, and importantly, didn’t misclassify a single urgent case.

Roughly 300 million people worldwide suffer from sight loss, yet in 80 to 90 percent of cases it can be prevented if caught early. As technologies that image the back of the eye become increasingly sophisticated, patients may soon be able to scan their own eyes with smartphones or other portable devices. Combined with AI that diagnoses eye disease, this could dramatically reduce the personal and socioeconomic burden of vision loss worldwide.

“This was an incredibly exciting result for our team. We saw here that our algorithm was able to allocate urgent cases correctly, with a test set of just over a thousand cases,” said Karthikesalingam.

Another early collaborative success for DeepMind is in the field of cancer. Eradicating tumors with radiation requires physicians to delineate the targeted organs and tissues at the millimeter level, a task that can easily take four to eight (long, boring) hours.

Working with University College London, DeepMind developed an algorithm that can perform clinically applicable segmentation of organs. In one example, the AI could tease out the delicate optic nerve (the information highway that shuttles visual data from the eyes to the brain) from medical scans, allowing doctors to treat surrounding tissues without damaging eyesight.
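
To make the idea of segmentation concrete, here is a minimal sketch of per-pixel labeling in PyTorch. It is not DeepMind's model; the architecture, layer sizes, and shapes are illustrative assumptions, meant only to show how a network can mark every pixel of a scan as organ or background.

```python
# Minimal illustrative sketch of per-pixel organ segmentation (PyTorch).
# This is NOT DeepMind's model; the architecture and shapes are
# hypothetical stand-ins for the general idea of dense prediction.
import torch
import torch.nn as nn

class TinySegmenter(nn.Module):
    """A toy fully-convolutional network: scan in, per-pixel class scores out."""
    def __init__(self, in_channels=1, num_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, num_classes, kernel_size=1),  # class scores per pixel
        )

    def forward(self, x):
        return self.net(x)  # shape: (batch, num_classes, H, W)

model = TinySegmenter()
scan = torch.randn(1, 1, 128, 128)   # a fake single-channel scan
logits = model(scan)                 # per-pixel class scores
mask = logits.argmax(dim=1)          # 0 = background, 1 = organ
print(mask.shape)                    # torch.Size([1, 128, 128])
```

In practice such a mask is what lets a clinician see exactly which voxels the algorithm considers part of a structure like the optic nerve before planning treatment.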

Interpretable and Scarce

“There’s a real potential for AI to be a useful tool for clinicians that benefits patients,” said Karthikesalingam.

But perhaps the largest challenge in the next five to ten years is bringing AI systems into the real world of healthcare. For algorithms to cross the chasm from proof-of-concept to useful medical associate, they need an important skill beyond diagnosis: the ability to explain themselves.

Doctors need to be able to scrutinize the decisions of deep learning AI, not to the point of mathematically understanding the inner workings of the neural networks, but at least to have an idea of how a decision was made.

You may have heard of the “black box” problem in artificial neural networks. Because of the way they are trained, researchers can observe the input (say, MRI images) and output decision (cancer, no cancer) without any insight into the inner workings of the algorithm.

DeepMind is building an additional layer into its diagnostic algorithms. For example, in addition to spitting out an end result, the eye disease algorithm also tells the doctor how confident (or not) it is in its own decision when looking through various parts of an eye scan.

“We find this to be particularly exciting because it means that doctors will be able to assess the algorithm’s diagnosis and reach their own conclusions,” said Karthikesalingam.
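
As a rough illustration of what reporting confidence can look like in code, the sketch below exposes a classifier's softmax probabilities alongside its prediction. This is a generic pattern, not DeepMind's actual mechanism (which is not detailed here); the class labels and toy model are hypothetical.

```python
# Illustrative sketch only: one simple way a model can report confidence
# alongside a diagnosis is to expose its softmax probabilities.
import torch
import torch.nn.functional as F

CLASSES = ["urgent referral", "routine referral", "observation only"]  # hypothetical labels

def diagnose_with_confidence(model: torch.nn.Module, scan: torch.Tensor):
    """Return the predicted class and the model's probability for it."""
    with torch.no_grad():
        logits = model(scan)                  # raw class scores
        probs = F.softmax(logits, dim=-1)[0]  # convert to probabilities
        conf, idx = probs.max(dim=-1)
    return CLASSES[idx.item()], conf.item()

# Example with a stand-in linear model over a flattened 64x64 scan:
toy_model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64 * 64, len(CLASSES)))
label, confidence = diagnose_with_confidence(toy_model, torch.randn(1, 1, 64, 64))
print(f"{label} (confidence {confidence:.0%})")
```

A doctor shown a low-confidence "urgent referral" can treat it very differently from a near-certain one, which is the point of surfacing the number at all.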

Even deep learning's other big problem, its need for millions of training examples, is rapidly becoming a non-issue. Compared to online images, medical data is relatively scarce and expensive to obtain. Nevertheless, recent advances in deep learning are drastically slashing the amount of training data needed. DeepMind's organ segmentation algorithm, for example, was trained on only 650 images, a remarkably small dataset that makes the approach far more clinically practical.
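
One generic way to stretch a small imaging dataset, offered purely as an illustration rather than a description of DeepMind's training pipeline, is data augmentation: randomly flipping, rotating, and cropping each scan so the network sees many plausible variants of the same few hundred images.

```python
# Generic illustration of data augmentation for a small imaging dataset.
# This is a common medical-imaging technique, not DeepMind's recipe.
import torchvision.transforms as T

augment = T.Compose([
    T.RandomHorizontalFlip(p=0.5),               # mirror the scan half the time
    T.RandomRotation(degrees=10),                # small random rotations
    T.RandomResizedCrop(128, scale=(0.9, 1.0)),  # slight random crops
    T.ToTensor(),
])

# Applied to a PIL image each epoch, the same underlying scan yields a
# slightly different training example, so a few hundred scans can behave
# like a much larger dataset.
```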

Towards the Future

“At DeepMind we strongly believe that AI will not replace doctors, but hopefully will make their lives easier,” said Karthikesalingam.

The moonshot for the next five years isn’t developing better AI diagnosticians. Rather, it’s bringing algorithms into the clinic in such a way that AI becomes deeply integrated into clinical practice.

Karthikesalingam pointed out that the amount of AI research that actually crosses into practice will depend not just on efficacy, but also trust, security and privacy.

For example, the community needs to generate standard medical image datasets to evaluate a variety of algorithmic diagnosticians on equal footing. Only when backed by ample, reproducible evidence can AI systems be gradually accepted into the medical community and by patients.

“In the end, what we’re doing is all about patients,” said Karthikesalingam. “I think this is perhaps the most important part of all. Patients are ultimately who we hope to benefit from all the exciting progress in AI. We’ve got to start placing them at the heart of everything we do.”

Image Credit: HQuality / Shutterstock.com

Shelly Fan (https://neurofantastic.com/)
Dr. Shelly Xuelai Fan is a neuroscientist-turned-science-writer. She's fascinated with research about the brain, AI, longevity, biotech, and especially their intersection. As a digital nomad, she enjoys exploring new cultures, local foods, and the great outdoors.