Bridging the Mental Healthcare Gap With Artificial Intelligence

Artificial intelligence is learning to take on an increasing number of sophisticated tasks. Google DeepMind's AI can now imitate human speech, and just this past August, IBM's Watson successfully diagnosed a rare case of leukemia.

Rather than viewing these advances as threats to job security, we can look at them as opportunities for AI to fill critical gaps left by existing service providers, such as mental healthcare professionals.

In the US alone, nearly eight percent of the population suffers from depression (that’s about one in every 13 American adults), and yet about 45 percent of this group does not seek professional care because of the cost.

There are many barriers to getting quality mental healthcare, from searching for a provider within your insurance network to screening multiple potential therapists to find someone you feel comfortable speaking with. These barriers stop many people from getting help, even though an estimated ninety percent of suicides in the US are preventable.

But what if artificial intelligence could bring quality and affordable mental health support to anyone with an internet connection?

This is the mission of X2AI, a startup that has built Tess, an AI designed to provide quality mental healthcare to anyone, regardless of income or location.

X2AI calls Tess a “psychological AI.” She provides a range of personalized mental health services, like psychotherapy, psychological coaching, and even cognitive behavioral therapy. Users can communicate with Tess through existing channels like SMS, Facebook Messenger, and many web browsers.

I had the opportunity to demo Tess at last year’s Exponential Medicine Conference in San Diego. I was blown away by how natural the conversation felt. In fact, a few minutes in, I kept forgetting that the “person” on the other side was actually a computer.

Now, a year later at Exponential Medicine 2016, the X2AI team is back and we’re thrilled with their progress. Here’s our interview with CEO and co-founder Michiel Rauws.


Since Tess was first created, how has the AI evolved and advanced? What has the system’s learning process been like?

The accuracy of the emotion algorithms has gone up a lot, and so has the accuracy of the conversation algorithm, which understands the meaning behind what people say.

Is there a capability you are working on creating to take Tess’s conversational abilities to the next level?

We’re about to update our admin panel, so it will be very simple for psychologists to add their own favorite coping mechanisms into Tess. A coping mechanism is a specific way of talking through a specific issue with a patient.

Do you believe that users should know they are speaking with an AI? What are the benefits of having the human absent from a sensitive conversation?

Yes, they should be wholly aware of that.

There’s quite a bit of evidence out there that speaking with a machine takes away a feeling of judgment or social stigma. It’s also available 24/7 and for as long as you want, and you don’t pay by the hour.

The memory of a machine is also far better, because it simply does not forget anything. In this way there is an opportunity to connect dots that a human would not have thought of because they forgot part of the facts. There are also no waiting lists to talk to Tess.

One of the most important aspects is that, from a clinical standpoint, it is a huge advantage that Tess is always consistent and provides the same high-quality work. She never has a bad day and is never tired from a long day of work.

The therapeutic bond is often mentioned as very important to the success of a treatment plan, as is the patient’s match with the therapist; without a good match, he or she needs to look for another therapist. Tess, however, adapts herself to each person to ensure there will always be a match between Tess and the patient. And there is, of course, the fact that Tess is scalable and exponentially improving.

Eugene Bann, co-founder and CTO, testing the AI in the field in Lebanon.

How does Tess handle receiving life-threatening information, such as a suicidal message from a user?

Patient safety always comes first. Whenever Tess is talking to a user, she evaluates how the person is feeling with an emotion algorithm we’ve developed over the past eight years through a research firm called AEIR, now a subsidiary of X2AI.

In that way, Tess always keeps track of how the user has been feeling and whether there is a downward trend, or, of course, whether a very negative conversation is going on. Then there is the conversation algorithm, which uses natural language processing to understand what the user is actually talking about, to pick up expressions like, “I don’t want to wake up anymore in the morning.”

When such a situation requires human intervention, there is a seamless protocol to let either one of our psychologists or one of the clients take over the conversation. You can learn more about this in our explanation of AI ethics and data security.
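To make the general pattern Rauws describes concrete, here is a minimal, purely illustrative sketch in Python of how a triage loop might combine mood-trend tracking with phrase detection to decide when to hand a conversation to a human. X2AI has not published its implementation; the names (`CRISIS_PHRASES`, `downward_trend`, `needs_human_intervention`) and thresholds below are invented for illustration only.

```python
# Hypothetical sketch of a crisis-escalation check; not X2AI's actual code.
from typing import List

# Expressions treated as immediate red flags (illustrative examples only).
CRISIS_PHRASES = [
    "don't want to wake up",
    "end my life",
    "hurt myself",
]

def detect_crisis_phrase(message: str) -> bool:
    """Return True if the message contains a known high-risk expression."""
    text = message.lower()
    return any(phrase in text for phrase in CRISIS_PHRASES)

def downward_trend(mood_scores: List[float], window: int = 5, drop: float = 0.3) -> bool:
    """Flag a sustained fall in estimated mood over the last `window` messages."""
    if len(mood_scores) < window:
        return False
    recent = mood_scores[-window:]
    return (recent[0] - recent[-1]) >= drop

def needs_human_intervention(message: str, mood_scores: List[float]) -> bool:
    """Combine phrase detection and mood-trend tracking into one escalation decision."""
    return detect_crisis_phrase(message) or downward_trend(mood_scores)

# Example: a falling mood history plus an alarming message triggers the handoff.
history = [0.7, 0.65, 0.5, 0.45, 0.3]
if needs_human_intervention("I don't want to wake up anymore in the morning", history):
    print("Escalate: hand the conversation to a psychologist on stand-by.")
```

In a real system of this kind, the mood scores would come from an emotion model rather than hand-entered numbers, and the escalation step would notify an on-call clinician rather than print a message.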

Stress, anxiety, and depression are big issues in the US, and there's a lack of mental health services for those at risk. What needs to happen for Tess to scale to help solve this?

We are very diligent in our approach to safely and responsibly moving toward deploying Tess at scale. Right now we are working with healthcare providers so they can offer Tess to support their treatment.

We also employ psychologists ourselves, who create the content of Tess in the first place. Thanks to these psychologists, we are able to offer behavioral health services directly to large employers or employer health plans: the psychologists can take care of parts of the treatment and stand by whenever additional human intervention is required, not only in the case of an emergency but also when Tess does not manage to figure out how to make the person feel better about a certain difficult situation.

What shifts in public opinion around AI are needed for an AI like Tess to become a societal norm? How far away do you think we are from reaching this point?

AI can be useful today to actually help people and give people access to services they were not able to afford before.

Talking to a robot can give you a better experience than a person would, for the reasons I mentioned above. If tasks that are handled by people now, but that could be handled as well or better by a machine, were actually handled by machines, then people could dedicate more of their time to the problems machines cannot take care of.

In this way the entire healthcare system becomes more sustainable and affordable.

Want to keep up with coverage from Exponential Medicine? Get the latest insights here.

Alison E. Berman (http://www.anchorandleap.com/)
Alison tells the stories of purpose-driven leaders and is fascinated by various intersections of technology and society. When not keeping a finger on the pulse of all things Singularity University, you'll likely find Alison in the woods sipping coffee and reading philosophy (new book recommendations are welcome).