
Paging Dr. Watson: AI Jeopardy! Soon To Be Physician’s Assistant

Is there an AI doctor in the house?

Will you ever be treated by Dr. Watson? Not Sherlock Holmes’s right-hand man, but the AI Jeopardy! champion who’s poised to be a sidekick for future physicians. IBM and Nuance Healthcare have teamed up with Columbia University and the University of Maryland to build a medical Watson that’s fine-tuned to address the queries of doctors. The goal is to enhance decision-making and eventually put Watson on every medical center’s computational cloud. But is this the first sign of transitioning healthcare labor from humans to machines? Don’t hold your breath. Given the person-centered focus of treatment, we’re nowhere near having AI that can match the full capabilities of the human medical workforce. But don’t count Watson out either. As a physician’s assistant, AI could be a godsend to America’s healthcare system by facilitating accurate diagnosis.

The video above describes the natural limitations of doctors and how Watson’s powerful AI could supplement clinical cognition. Dr. Herbert Chase, a professor at Columbia University, says that diagnosis can be exceedingly complex (0:33), so pairing symptoms to a condition won’t always lead you to the right answer. Another limitation he points out is that physicians have been unable to keep up with the rapid growth of medical knowledge, which has been doubling every five to seven years (1:35). The ever-rising tide of biomedical literature is simply too much for the human brain to learn. Due to these limitations, Dr. Chase cites the high incidence of delayed diagnosis. As Watson demonstrated on Jeopardy!, the AI could be up to the task on all these fronts. In a fraction of a second, Watson can comb through terabytes of data and formulate an answer. When lives are at stake, the speed and accuracy of a medical Watson could be an invaluable addition to patient care.

Hypothetical model of a medical Watson. Nuance's medical language analysis will beef up Watson, and doctors at Columbia and the University of Maryland will see if the system is clinically relevant and user-friendly.

We won’t see a prototype for almost two years, but here’s how a medical Watson might work. As on Jeopardy!, Watson would begin by dissecting natural language. Unlike on the game show, though, the input would first pass through Nuance’s front-end speech recognition software, which is tailored specifically to medical jargon. The question would then be processed on the medical center’s computational cloud, so clinicians could pose questions remotely. With this approach, there’s no need to wait for laptops with the computing power of IBM’s Blue Gene.

As illustrated in the diagram to the right, an internist could ask, “My patient, Jane, has had digestive issues and has also lost interest in bowling, her favorite hobby. Could these be linked?” Using the Nuance software, IBM’s Blue Gene supercomputers would focus on the words “lost interest.” After a cursory search of the DSM, the computer would recognize this as a symptom of depression. Then Watson would scan hundreds of journals, looking for articles where “depression” and “digestive problems” co-occur. He would eventually come across articles like this one, suggesting that coincident depressive and digestive symptoms are associated with celiac disease, an under-diagnosed autoimmune disorder. Once Watson finds enough articles supporting this hypothesis, the answer would emerge from the cloud to be read by the physician, who would follow up with lab tests for confirmation. After adopting a gluten-free diet to prevent a relapse, Jane could be back to bowling in no time, all thanks to her physician and the Watson computer. Without Watson, the doctor could have been dancing around the diagnosis for weeks before finally getting it right.
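As a rough illustration of the co-occurrence search described above, here’s a toy sketch in Python. The article snippets, symptom terms, and scoring rule are all invented for illustration; IBM’s actual pipeline is, of course, far more sophisticated.

```python
# Toy sketch of the co-occurrence search described above. The article
# snippets and the scoring rule are invented for illustration; this is
# not IBM's actual pipeline.
from collections import Counter

ARTICLES = [
    "depression and digestive problems frequently co-occur in celiac disease",
    "celiac disease is an under-diagnosed autoimmune disorder",
    "digestive problems respond to a gluten-free diet in celiac disease",
    "depression is common in hypothyroidism",
]

def rank_hypotheses(symptoms, candidate_conditions):
    """Score each candidate condition by how many articles mention it
    alongside every reported symptom term (a crude evidence count)."""
    scores = Counter()
    for condition in candidate_conditions:
        for text in ARTICLES:
            if condition in text and all(term in text for term in symptoms):
                scores[condition] += 1
    return scores.most_common()

print(rank_hypotheses(["depression", "digestive"],
                      ["celiac disease", "hypothyroidism"]))
# [('celiac disease', 1)]
```

Even this crude version captures the key idea: a hypothesis earns support in proportion to how much of the literature ties it to all of the patient’s symptoms at once.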

This sounds like an impressive system on its own, but I think IBM and Nuance could do even better. To accommodate the exponential growth of medical knowledge, Dr. Watson must be able to seamlessly integrate new information with existing data. Furthermore, thinking like a scientist and maintaining a computationally based skepticism would optimize Watson’s accuracy. A medical Watson might adjust an article’s weight according to the number of citations or ignore outdated or unsubstantiated information. The designers could also sharpen Watson’s judgment by considering epidemiology. For example, Watson could boost its confidence score if an infection was also found in surrounding clinics. If Watson adopts even one of these features, I will be even more impressed than I was during the Jeopardy! performance.
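To make those weighting ideas concrete, here’s a hypothetical sketch. The heuristics, weights, and thresholds are entirely my own invention, not anything IBM has announced.

```python
# Hypothetical confidence adjustment along the lines suggested above.
# All weights and thresholds are invented for illustration.
def adjusted_confidence(base_confidence, citation_count, years_old,
                        local_outbreak=False):
    """Nudge a diagnostic confidence score with simple evidence heuristics:
    well-cited evidence counts more, stale evidence counts less, and a
    matching infection in nearby clinics boosts the score."""
    score = base_confidence
    score *= min(1.0 + citation_count / 100.0, 1.5)  # cap the citation bonus
    if years_old > 10:
        score *= 0.8                                 # discount outdated work
    if local_outbreak:
        score *= 1.2                                 # epidemiological prior
    return min(score, 1.0)

print(adjusted_confidence(0.5, 30, 2))  # 0.65
```

A real system would learn these weights from data rather than hard-code them, but the principle is the same: the confidence score should reflect the quality and context of the evidence, not just its quantity.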

If IBM and Nuance successfully deploy a practical Watson, how might Watson stack up against other clinical decision support systems? The major competitor will be Qualibria, a joint venture by GE and Intermountain Healthcare. This clinical software is the final product of a prototype we’ve covered before. The purpose of Qualibria is to convert a hospital’s existing health information (i.e., electronic health records, or EHRs) into an ongoing clinical trial. In a recent ComputerWorld article, the CIO of Intermountain Healthcare questioned the usefulness of Watson in the clinic. He suggested that Watson’s analytics could be incompatible with the unstructured health information found in EHRs and other hospital data sources. We’ll see if Watson proves him wrong.

Watson and Qualibria aren’t the only players in town. We saw an artificial neural network (ANN) for diagnosis, which is still in the experimental stages. The advantage of this method is that it can adapt to new problems based on trial and error, much like the brains of human doctors. Unfortunately, the system must be “trained” to optimize functioning, and ANNs have only been tested on a handful of conditions, such as endocarditis and heart murmurs. There’s also SimulConsult, a system that can be updated by registered physicians, so it’s a bit like clinical crowdsourcing. However, it’s limited to only certain kinds of disorders. Between Qualibria and the other players, Watson has his work cut out for him. See how Watson could match up to his competitors (and human doctors) in the table below.

Comparison table of decision support systems and physicians. Dr. House's accuracy takes the cake, but he lacks an empathy chip. Can't win 'em all, even for $480,000 an episode.

Even though Watson packs a computational punch, there’s no reason to suspect AI will replace doctors in the near term. And IBM agrees. In the video at the beginning of this article, the IBMers make it perfectly clear that Watson is intended to be only an assistant. However, that hasn’t stopped people from speculating. After his historic loss to Watson on Jeopardy!, Ken Jennings made a bold prediction about AI replacing human workers.

Just as factory jobs were eliminated in the 20th century by new assembly-line robots, Brad [Rutter] and I were the first knowledge-industry workers put out of work by the new generation of “thinking” machines. “Quiz show contestant” may be the first job made redundant by Watson, but I’m sure it won’t be the last.

The implications of this statement parallel issues we’ve covered before, like Martin Ford’s hypothesis that advanced AI will cause structural unemployment for even the most highly paid, cognitively demanding jobs. If machines have a better price-performance ratio than people, there’s nothing keeping the higher-ups from adopting automation. But actually building an autonomous AI physician responsible for human life? Easier said than done.

Let’s first identify hospital tasks that are within reach of state-of-the-art AI. Systems that automatically prioritize patients or robots that roam hospital hallways to collect vital signs seem attainable. Also, if Watson becomes a complex question generator (not just an answerer), machines could even perform the initial clinical interview for some patients. It would be relatively uncomplicated to generate standard questions about diet, family history, and health behaviors. With a little algorithmic ingenuity, AI workers may even pursue more in-depth lines of questioning if patients give particular answers or alert doctors or nurses when the patient requires human attention.
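The branching interview described above might look, in crude outline, something like this. The questions, triggers, and escalation rule are all invented for illustration.

```python
# Crude outline of the branching clinical interview described above.
# The questions, triggers, and escalation rule are all invented.
FOLLOW_UPS = {
    "chest pain": ["Does the pain spread to your arm or jaw?",
                   "Does it worsen with exertion?"],
    "headache": ["How many days has it lasted?"],
}

URGENT = {"chest pain"}  # symptoms that require human attention

def interview(reported_symptoms):
    """Return the standard questions plus any symptom-specific follow-ups,
    along with a flag telling staff the patient needs a human clinician."""
    questions = ["Any changes in diet?", "Any family history of illness?"]
    alert = False
    for symptom in reported_symptoms:
        questions += FOLLOW_UPS.get(symptom, [])
        if symptom in URGENT:
            alert = True
    return questions, alert
```

Even a lookup table like this handles the routine cases; the hard part, as the next paragraph argues, is everything the table doesn’t cover.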

But let’s not get ahead of ourselves, techno-optimists. This is only a fraction of the physician’s skill set, and there are certain clinical competencies far beyond any current AI. Think of situations where a patient presents symptoms undocumented in the medical literature. Physicians rely on intuition, experience, and imagination to guide them in these cases, and so far, fluid intelligence for AI is only theoretical. Sorry Watson, but “What is Toronto?” is not an acceptable response when lives are at stake. Furthermore, good physicians also have a keen emotional intelligence. Imagine a robot performing the most challenging task in any doctor’s career: informing a patient they have a terminal illness. It’s not as simple as just saying the words. The exchange must sound sincere and sensitive to the emotional needs of the patient and family. If this emotionally intelligent human-machine dialogue ever passes a Turing Test, it will likely be the zenith of AI. I think that this achievement is so far off that the most common illnesses afflicting patients will be cured by the time it comes to pass. For the foreseeable future, people will be running the healthcare show.

At the dawn of my own medical career, I’m not worried about AI in the clinic at all. In fact, I find it to be a rather exciting prospect, and I hope most doctors will similarly view AI as a partner, not a competitor. I imagine pacing the halls of the hospital, stroking my chin with one arm behind my back, while a mobile AI unit follows closely behind. I’m working on a difficult case and bouncing ideas off Dr. Watson, much like Sherlock Holmes solving a mystery. But I’m jumping the gun. My very own Watson will have to wait. Eight years of medical training, here I come.

<Image credits:  IBM (modified), Nuance Communications, University of Maryland, Columbia University, Microsoft Clip Art>

<Video credits: IBM>

<Sources:  IBM, ComputerWorld>




  • Hillary Svank says:

    Everyone always points out the “Toronto” example, but is this really just a case of humans’ limited world knowledge making us assume we know more than Watson?

    Yes, Watson got the answer wrong. But there *are* around a dozen towns and cities in the USA named “Toronto”. His answer wasn’t as silly as people think. (And I bet most humans don’t know about all those American “Torontos”)

    Plus, maybe others have had better doctor experiences than me, but my doctor isn’t even right 70% of the time, much less the 80-85% accuracy claimed by Watson’s team.

    • Jeremy Ford says:

      Misdiagnosis and delayed diagnosis are significant issues in today’s healthcare system. I recommend How Doctors Think by Dr. Jerome Groopman for further insight.


      Good point on there being Torontos in the United States. I think there are eight. However, none of them have a major airport. Also, Dr. Ferrucci, the Watson project’s PI, believed that Watson thought the city didn’t have to be in the US. Without seeing what’s happening inside the Blue Gene, no one can be sure of Watson’s thought process or the silliness of the answer. Not saying I would perform any better, but Watson wasn’t even close on this question. This was demonstrated by Watson’s 14% confidence, which at least indicates he was unsure of himself. A physician who saw a 14% confidence rating for a diagnosis probably wouldn’t take this guess very seriously.

  • Lawrence says:

    Why, with all the advances in technology, including Watson, do you need 8 years of medical training? If undergraduate work trained you in asking questions, then shouldn’t you need just a couple of years of training to cover specific medical jargon and patient empathy?

    • Jeremy Ford says:

      Hmmm . . . good question. I’m open to the idea of abridged training because of medical AI. Less debt for me. However, I doubt medical schools and residency programs would be willing to adjust their curricula and training protocols just yet. :-)

      • Lawrence says:

        It’s the same issue I believe you raised with respect to robot cars. We need to adjust society’s expectations and assignment of responsibility. If AI said the patient had a cold and it turned out to be Ebola, whom do you sue :)
        All kidding aside, technology also infiltrates education. Look at TED and the Khan Academy. Assuming you go to medical school in the next 5 years, why buy printed books? Won’t iPads and eBooks be more practical? Won’t schools start publishing textbooks on demand with CreateSpace and Kindle books?
        The question becomes what do we want from our professional people; probably accountability, creative questioning, and empathy.
        We will have fewer and fewer “trivial” jobs and so must educate people to be more than cheap labor. What will it mean “to be a doctor”?

        • bob-the-critic says:

          I’m guessing eventually it won’t mean anything. This is what all those who advocate the singularity don’t seem to get. There won’t be doctors, drivers, writers, or physicists because A.I. will be able to do all of these things so much better.

          • Wiley Peyote says:

            Well, the point is that the distinction between AI and humans will be arbitrary. Doctors and other professionals are pillars of a relatively stable cultural framework. As technology abounds and cultural paradigms break down, the “jobs” of today will be a distant memory.

      • bob-the-critic says:

        Ah, yea. Why learn anything? Why go to school at all? Since computers will soon be able to do all of the thinking for you, you shouldn’t have to. Of course, if you don’t have to know as much/think as much, you certainly don’t need to be paid as much…

        • Lawrence says:

          Why learn anything applies at any time in history. Our self justification is not our job. Let’s assume we have an AI that is an incredibly caring being; how will that be different from meeting an incredibly caring person? The elephant in the room is what do we all do when all we now do can be done by something else. Again, substitute someone for something and … In every age we’ve been confronted with this issue. Personally I’m optimistic and look forward to the next step and the step after that.

  • Joey1058 says:

    The one thing I always found amusing in the few years I worked in a hospital environment, was the pack of interns that invariably roamed the halls. Usually one or two interns tagging with an M.D. A Watson A.I. would be no different from this, I think. Each learning from the other. Ideally, there will eventually be an open source database that any medical facility can tap into, be it Watson, SimulConsult, ANN, or the others. No matter what, I would have no problem with Dr. Watson tagging along.

  • Shai-Hulud says:

    There’s another DDSS called ISABEL to compete with.

    It will be great when EMR/EHR adoption is more universal and standards more concrete. The AI would be able to crawl and mine through all that data for statistical analyses for internal audits of practices, best treatment strategies, epidemiological and public health strategies etc., in addition to updating its internal database of evidence based treatment. It’d be an expert physician/epidemiologist/auditor (yet in an advisory role) that never tires, never forgets, and (hopefully) never builds a massive ego.

  • Homer500 says:

    This will mark the dawn of a new era in healthcare. Not only will Watson and other computerized systems improve diagnosis and treatment, but they will lower costs. And why stop at only considering a patient’s symptoms? Why not have the computer also process any images that might be available — CT, MRI, ultrasound, etc.? Combining info from diagnostic imaging with a search of the medical literature will make for a very powerful tool.

    I may be more optimistic than most, but I see this trend eventually reducing the number of human physicians required, much like AI systems have already affected the payrolls of legal research teams. I could see medical care transitioning to this: Instead of a doctor seeing each patient individually, use an AI system together with a human tech (who specializes in being a liaison between patient and computer). Perhaps one doctor could supervise the care of 10 patients simultaneously in this way. And the benefit? Each tech might only require 2 years of specialized training, and be paid $50K per year instead of $180K.

    We have similar models in place currently, except they solely involve people. A dentist doesn’t clean your teeth — he has a hygienist do that. A surgeon doesn’t prep you and set you on the table — she shows up once you’re all ready for the procedure. In this way, the most highly-paid personnel are only present as much as is necessary. In fact, physician’s assistants and nurse practitioners routinely do the work of physicians in many clinics. And the benefit? You guessed it — lower costs.

    And Jeremy, you consider that the most challenging task in any doctor’s career is to inform a patient of a terminal illness (sounds a bit Lifetime-ish, but whatever). If I were receiving that news, I’d want it to come from a human. I don’t think anyone would disagree on that. But delivering that message in a compassionate and complete way would still only occupy a small part of the working day. Let the AI systems perform most of the analysis and research. The docs or other humans can have the conversations with patients. Don’t fret — physicians will probably still be paid well; it’s just that we won’t need as many of them.

    • Jeremy Ford says:

      Hey, $50k/yr sounds good to me. Hopefully they’ll lower tuition to reflect the cost-reduction from AI.

      Also, imagine combining real-time diagnostic imaging with the Da Vinci surgical robot.

      • Homer500 says:

        Yes, real-time diagnostic imaging, a Da Vinci robot with haptic sensors, and advanced AI that can correlate the images with known models of disease. I could see robotic surgery becoming huge within a decade.

  • dyinman says:

    OK, there needs to be a correction.

    The title is not “physician’s assistant”, it’s “Physician Assistant”. Non-possessive. Not enough credit is given to those medical professionals.

  • Jeremy Ford says:

    Firstly, I agree. PAs are definitely underappreciated, and along with nurse practitioners, I believe they should play a greater role in primary healthcare.

    Clarification: Watson is not poised to become a Physician Assistant (PA). Watson, in its current form, could not fill that role. However, the AI will be an assistant for physicians, a physician’s assistant (lower-case, possessive), if you will.
