Robots in Health Care Could Lead to a Doctorless Hospital


Imagine your child requires a life-saving operation. You enter the hospital and are confronted with a stark choice.

Do you take the traditional path with human medical staff, including doctors and nurses, where long-term trials have shown a 90% chance that they will save your child’s life?

Or do you choose the robotic track, in the factory-like wing of the hospital, tended to by technical specialists and an array of robots, but where similar long-term trials have shown that your child has a 95% chance of survival?

Most rational people would opt for the course of action that is more likely to save their child. But are we really ready to let machines take over from a human in delivering patient care?

Of course, machines will not always get it right. But like autopilots in aircraft, and the driverless cars that are just around the corner, medical robots do not need to be perfect; they just have to be better than humans.

So how long before robots are shown to perform better than humans at surgery and other patient care? It may be sooner, or it may be later, but it will happen one day.

But what does this mean for our hospitals? Are the new hospitals being built now ready for a robotic future? Are we planning for large-scale role changes for the humans in our future robotic factory-like hospitals?

Our future hospitals

Hospitals globally have been slow to adopt robotics and artificial intelligence into patient care, although both have been widely used and tested in other industries.

Medicine has traditionally been slow to change, as safety is at its core. Financial pressures will inevitably force industry and governments to recognize that when robots can do something better and for the same price as humans, the robot way will be the only way.

What some hospitals have done in the past 10 years is recognize the potential to be more factory-like, and hence more efficient. The term “focused factories” has been used to describe some of these new hospitals that specialize in a few key procedures and that organize the workflow in a more streamlined and industrial way.

They have even tried “lean processing” methods borrowed from the car manufacturing industry. One idea is to free up the humans in hospitals so that they can carry out more complex cases.

Some people are nervous about turning hospitals into factories. There are fears that “lean” means cutting money and hence employment. But if the motivation for going lean is to do more with the same, then it is likely that employment will change rather than reduce.

Medicine has long been segmented into many specialized fields but the doctor has been expected to travel with the patient through the full treatment pathway.

A surgeon, for example, is expected to be compassionate and good at many tasks: diagnosing, interpreting tests such as X-rays and MRIs, performing the procedure itself and providing post-operative care.

As in numerous other industries, new technology will be one of the drivers that will change this traditional method of delivery. We can see that one day, each of the stages of care through the hospital could be largely achieved by a computer, machine or robot.

Some senior doctors are already seeing a change, and they are worried about the de-humanising of medicine. But this is a change for the better.

Safety first but some AI already here

Our future robot-factory hospital example is the end game, but many of its components already exist. We are simply waiting for them to be tested enough to satisfy us all that they can be used safely.

There are programs to make diagnoses based on a series of questions, and algorithms inform many treatments used now by doctors.

Surgeons are already using robots in the operating theatre to assist with surgery. Currently, the surgeon remains in control with the machine being more of a slave than a master. As the machines improve, it will be possible for a trained technician to oversee the surgery and ultimately for the robot to be fully in charge.

Hospitals will be very different places in 20 years. Beds will be able to move autonomously, transporting patients from the emergency room to the operating theatre, via X-ray if needed.

Triage will be done with the assistance of an AI device. Many decisions on treatment will be made with the assistance of, or by, intelligent machines.

Your medical information, including medications, will be read from a chip under your skin or in your phone. No more waiting for medical records or chasing information when an unconscious patient presents to the emergency room.

Robots will be able to dispense medication safely and rehabilitation will be robotically assisted. Only our imaginations can limit how health care will be delivered.

Who is responsible when things go wrong?

The hospital of the future may not require many doctors, but the numbers employed are unlikely to change at first.

Doctors in the near future are going to need skills very different from those of doctors today. An understanding of technology will be imperative. They will need to learn programming and computer skills well before the start of medical school. Programming will become the fourth literacy, along with reading, writing (which may vanish) and arithmetic.

But who will people sue if something goes wrong? This is, sadly, one of the first questions many people ask.

Robots will be performing tasks and many of the diagnoses will be made by a machine, but at least in the near future there will be a human involved in the decision-making process.

Insurance costs and litigation will hopefully reduce as machines perform procedures more precisely and with fewer complications. But who do you sue if your medical treatment goes tragically wrong and no human has touched you? That’s a question that still needs to be answered.

So too is the question of whether people will really trust a machine to make a diagnosis, give out tablets or perform an operation.

Perhaps we have to accept that humans are far from perfect and mistakes are inevitable in health care, just as they are when we put humans behind the wheel of a car. So if driverless cars are going to reduce traffic accidents and congestion, then maybe doctorless hospitals will one day save more lives and reduce the cost of health care.


Anjali Jaiprakash, Post-Doctoral Research Fellow, Medical Robotics, Queensland University of Technology; Jonathan Roberts, Professor in Robotics, Queensland University of Technology, and Ross Crawford, Professor of Orthopaedic Research, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.

Image Credit: Shutterstock.com

Discussion — 11 Responses

  • kgh February 10, 2016 on 11:40 am

    Until we have robots with a broader set of the population represented in the design and development of AI, it will always be missing something for a larger group of the population. So adoption on a larger scale will always lag because of this gap.

    Also, caregivers like doctors, nurses, etc. will never fully be replaced, because until AI can truly have “empathetic” emotions, people and patients will still require humans in the caregiver space.

  • Homer February 10, 2016 on 2:20 pm

    “Also, caregivers like doctors, nurses, etc. will never fully be replaced, because until AI can truly have “empathetic” emotions, people and patients will still require humans in the caregiver space.”

    I disagree. What makes you think AI won’t be able to exhibit emotions or empathy? And once it does, it will be more effective as a caregiver than most humans (some of whom aren’t very empathic in any case).

    • DSM Homer February 10, 2016 on 3:57 pm

      True and true: even psychopaths can be trained to fake empathy, if they see a benefit in doing so, and surgeons actually rate low on the empathy scale anyway. It may actually make them better at some aspects of their jobs.

      As I pointed out in my other comment, empathy is not required in stages of treatment where the patient can be in a state where they are not forming new memories and therefore cannot be psychologically traumatised.

      • richardrichard DSM February 11, 2016 on 1:10 am

        It does matter, because unfortunately a psychopath also judges things based on his preferences and not those of the patient.

        A machine could be programmed to focus on what matters to the patient, and be really rational and impartial. A psychopath is not rational or impartial, they are focused on themselves, which is very different.

        So a person might think (and you often hear them say it) that a certain kind of collateral damage is no problem and it doesn’t matter whether you choose this or that solution. It would matter to the patient; and again, asking a patient all those questions is something people tire of, but machines wouldn’t.

        I would prefer to be asked too much than too little. You could still write software for people who like to “trust” and not bother with those details. I prefer good results.

        After all, medicine should become an engineering discipline.

        • DSM richardrichard February 11, 2016 on 2:19 pm

          I find you illogical because, amongst other things, “A machine could be programmed to focus on what matters to the patient” is equivalent to “even psychopaths can be trained to fake empathy, if they see a benefit in doing so”.

  • DSM February 10, 2016 on 3:50 pm

    Well that was amusing. These people are from a state in a country where the health system is highly politicised, and nurses find it very hard to get employment in the public health system if they refuse to join the union, which profits from also being their insurance broker. So many vested interests and power games are going on there it is outrageous. Good luck breaking through that wall of borderline corruption and self-interest. The nurses and doctors have contempt for each other and believe themselves superior, to the point that they verbalise and reinforce the attitude during the education stage of people’s careers. Ironically they are both wrong and should not think so highly of themselves or think themselves superior to other groups. If somebody denies this they are either naive or a blatant liar.

    So it isn’t about the technology, guys; it is a seriously messy and irrational “people problem”. However, on the tech side, I’d say that if a person could be put into an induced coma in the most relaxing and friendly “portal” environment possible, then transferred to an automated area for treatment, only to wake up in the humane area when they are ready to go home, it would be ideal, because from the patient’s perspective the experience would be entirely positive, unless they died or came out of it maimed due to an error. Speaking of error, we get back to another messy human problem: the web of self-interest that puts the interests of service providers and their insurers ahead of the rights of patients.

    Until all of that gets sorted out, and companies are allowed to offer treatment subscriptions that reward people who follow healthy lifestyles, my best advice is: don’t get sick.

    • richardrichard DSM February 11, 2016 on 12:57 am

      I don’t need a human to cajole me into making wrong decisions through “friendly” talk and “care”. I want a human so they listen to and understand me and can change the tracks of a procedure going the wrong way.
      “relaxing and friendly “portal” environment possible” shows disregard for the rational capabilities of the patient; I am totally against that. Fear is largely based on lack of knowledge, not on being a naive baby that needs comfort and manipulation.
      And that fear is very well founded when a major operation is happening. The best approach here is to give clear and precise information.
      Just because many people like to be fooled and calmed with unclear knowledge and handwaving, don’t assume that this is a good approach.

      This is an uneducated and childish approach people take to deal with their fears. It should not be encouraged!

      “and companies are allowed to offer treatment subscriptions that reward people who follow healthy lifestyles ”
      Strongly against this. This opens the door to dictating what you are allowed to do, and of course tracking your lifestyle. A very Orwellian idea. And the next thing is: you didn’t follow our guidelines, treatment rejected.

      Regarding vested interests etc. I agree, but what you suggested here would just be in the vested interest of insurance companies, and nobody else.

      Technology should make healing better, cheaper and easier. This gives the power to change a lot of things, and to heal mentally too. That in turn makes it possible to quit many bad habits and address social problems.
      Using force has been shown again and again to have serious side effects, and it always needs an increase in force over time. The healthy-lifestyle program is such a monitoring program. Policing is not the right solution to unhealthy lifestyles.
      That’s like trying to fix a machine that is broken inside by adding more and more control layers and patching things up. A sloppy, ineffective approach.

      • richardrichard richardrichard February 11, 2016 on 1:03 am

        I’d like to add that giving clear and precise information is not the same as telling people horrible things to scare them. It is about being impartial and showing what happens, not the scaring kind of “realism”.

        Realism shows the options, what you can do, and how they work (with emphasis on this), and refrains from evaluating things. Risk assessment and evaluating risks are a sign that knowledge is lacking; otherwise, clearly describing the mechanism would allow deducing clearly what happens, instead of guessing likelihoods.

        So the focus on rationality is not fear or confidence, but a clear description of what happens and the consequences, so that everyone can “simulate” it in their head.
        It does not matter how frequently things fail, but why they can fail, so that the cause can be assessed and taken care of.

        Frequency is merely a useful tool to look at what to address first or foremost, but even then it depends highly on what the real problems will be in a certain context.

        All the knowledge and thinking is essential. Statistical thinking leads to too many generalizations that are harmful for individuals and individual situations.

        • DSM richardrichard February 11, 2016 on 3:05 pm

          “Statistical thinking leads to too many generalizations that are harmful for individuals and individual situations.”

          And yet that is how the brain, and our society operates, not to mention the universe at the quantum level.

          So long as the process is sufficiently granular it is entirely appropriate; there is no other way to apply knowledge to any task as a matter of routine. If professionals were entirely intuitive in how they made decisions, we could not set and expect conformance to standards, and without that we could not have accountability. I.e. we have logical laws and procedures etc., but ultimately they are founded on experience, and therefore they are justified statistically.

          Do you think that rules and logic are absolute and perfect? Don’t be silly; human language allows you to argue anything, and even pure mathematics can allow for the logical construction of apparently contradictory statements that are individually true.

          Methinks you imagine reality to be more, “crystalline”, than it really is.

      • DSM richardrichard February 11, 2016 on 2:44 pm

        What you (as a self-focused individual) want is irrelevant; what matters is how humans behave and respond to given situations, particularly the more vulnerable: children, the neurologically impaired, and the mentally ill.

        Compassion and empathy are not about giving a person what they want; they are about perceiving their state of mind and doing what is best for them in the long term, which may not be what they believe they need now.

        It is idiotic to suggest that all patients should go through a process of education so that they can make an informed decision, when that is based on the false assumption that they will ever be capable of doing so. Not even medical professionals are supposed to self-diagnose and self-prescribe, for the very same reason: humans who are suffering cannot be more objective than a team of people/machines who are trained experts and able to operate in a detached manner.

        Furthermore, the “humane portal” suggestion does not require taking away patient autonomy to the extent that they are capable of exercising it competently. What it does ensure is that they enter the treatment area with the appropriate levels of stress hormones etc. to maximise the quality of the outcome, and that on exit their state of mind maximises the recovery process. There is nothing childish about managing the process in such a manner; it is your naive and perverted interpretation of what it would entail that is childish.

        Organ transplant recipients, the grossly obese, etc. are in fact routinely rejected or placed lower on lists because they do not follow medical advice, so your suggestion that “it is Orwellian to allow people to manage their health through genuine incentives” is so flawed that it contradicts current practice in related areas. You can’t win that sort of argument with me, as I know where all the holes and contradictions in the current system are; all you will do is help me prove how broken and hypocritical the current system is.

        I’d like to respond to the rest of your comments, but I can’t see how they relate to the points I have actually made, and at times your dialogue degrades into borderline gibberish. In fact, the lucidity gradient from the start to the finish of your commentary is remarkable.

  • richardrichard February 11, 2016 on 12:43 am

    “What some hospitals have done in the past 10 years is recognize the potential to be more factory-like, and hence more efficient.”
    That is precisely the problem with hospitals. They see humans (why do you always talk about patients as if they were another class of humans, often one treated as unable to think or judge well on its own?) as a problem to solve.

    I am all for robotic doctors and hospitals, but strongly against the factory idea. Robots should allow for *better* care and less damage, and for more *individual* care with more precision. It should *not* be about saving time and making things more efficient (financially). It should be about making things more human and producing wounds that heal better!

    Personally, I am shocked to see how doctors *plan* an operation, and I think that is in most need of fixing: there is some brief looking at X-rays and similar imaging, but a lot rests on trusting that you will see the right thing when you get there (read: have opened the body), or on relying on experience. If engineers did that with critical technology they would be, and are, rightfully shamed. Yet such a literally life-critical and fragile thing as the human body apparently becomes a mundane object for many practitioners in medicine.

    This is a very archaic procedure and needs to be replaced by more rigor, as in engineering: precise imaging, simulated operation, revising mistakes, then execution. And training and learning from mistakes. That is knowledge that cannot be accumulated by humans in this breadth. But what is really important is to also do this precise data collection, and not just rely on the statistical performance of a robot or AI.
    If the person to be operated on is outside the norm, such statistics are fairly useless. Therefore precise data collection and imaging are key.
    And that is where machines excel: they don’t get tired of this or bored, and will go through all the necessary procedures.

    Factory-like is about cutting costs and streamlining processes, humans do this already using one-size-fits-all approaches and trust-us-we-are-the-experts principle.

    Robots are the chance for individualization; we shouldn’t miss this opportunity. We should stop thinking in scarcity terms and think about efficiency as something that works well for rehabilitation and patients’ health.
    And first and foremost this will mean *increasing* costs, not cost reduction.
    Trying to sell this under cost reduction will just lead to even worse medicine than today; don’t fool yourself.

    To reduce costs we can look at reducing costs of data acquisition (for example in imaging), but not in the precise planning and execution by making everything factory like.

    Medicine needs a more scientific, engineering-style approach in practice.