Despite the ever-growing number of individuals with mental illness in America, there is a lack of resources to help clinicians detect, treat, and monitor their patients' health outcomes. Recent statistics show that over half of the adults diagnosed with a mental illness, around 28 million people, do not receive treatment. This figure does not include the unknown number of individuals who have not yet been diagnosed with a mental health condition. Moreover, over 28% of U.S. adults with a diagnosed mental illness are unable to receive the treatment they need, and over 42% of adults with mental illness report that they could not receive care because they were unable to afford it. Among adolescents, statistics show that nearly 60% of American youth diagnosed with major depression are not receiving mental health treatment, and the numbers for minority populations are even worse: about 8 out of 10 Asian American youths diagnosed with depression receive no treatment at all.

In conjunction with the growing rates of mental health issues, a shortage of psychiatrists, psychologists, and other mental health professionals leaves patients with limited to no options for care. Half of the world's population lives in countries with fewer than one psychiatrist per 100,000 people. In the U.S., there is about one psychiatrist for every 350 individuals, a major improvement over most of the world, but still far from adequate. This shortage prevents patients from receiving the care they need, and given the prevalence and severe consequences of mental illness, artificial intelligence interventions may help close the gap in mental healthcare.

In this blog, we will look at the growing use of artificial intelligence in mental health research and development, and at how it can improve accessibility, availability, and outcomes for patients around the world.

 

How does AI work?

Artificial intelligence, or AI, refers to computer systems and algorithms able to carry out tasks that ordinarily require human intelligence. While it sounds promising that AI could take on much of the caseload carried by a limited number of mental healthcare professionals, its 'artificial' nature has made many patients, and even clinicians, hesitant to use it. Even though the ubiquity of technology and cellphones makes AI feasible for many individuals to access, a lot of people remain skeptical of its reliability. Diagnosing mental health issues is quite different from diagnosing a physical condition, because most patient-physician interactions depend on self-reported data and the mental health worker's interpretation of that information. For this reason, many researchers and patients have been wary of whether AI systems can replicate that kind of person-to-person relationship. One thing to note is that AI is not meant to mimic or replace actual patient-professional interactions. Rather, AI has been shown to act as a supplemental mode of support when people need rapid care.

 

AI Collects Important Information

A major crisis within the mental health world is underdiagnosis. According to the World Health Organization, an estimated two-thirds of mental disorders go untreated or even undiagnosed. The application of AI may help reduce that number and allow clinicians to diagnose a wider population in need of attention. In recent years, there has been a massive increase in the amount of genetic data being collected through various biomedical research projects and funding from institutions like the National Human Genome Research Institute. An estimated 2 to 40 billion gigabytes of genetic data are generated each year, and from this copious amount of data, researchers can extract information that tells us more about human health and illness.

Where does AI fit into all of this? AI systems can sort through massive amounts of genetic information, helping researchers identify previously unknown correlations between genes related to mental health. Finding these correlations and refining such large datasets with AI can help monitor the mental health trends of large populations and surface important mental health signals. Other data that AI systems may utilize include phone sensor data, text messages, and social media activity. Monitoring these data sources helps the software make informed predictions about mental health risk. For example, research has shown that AI can parse social media information to detect which individuals may be at higher risk for suicide, or even who might benefit most from cognitive behavioral therapy (CBT) or other forms of therapy from mental healthcare providers. This could prove very useful for early detection and diagnosis of mental illness.

 

AI Can Improve Communication

AI interventions are designed as a tool to support clinicians and providers who struggle to meet with patients in a timely, efficient manner. The sheer volume of demand for care can create long waits for appointments, higher costs, and inefficient care, especially when patients can only see providers in person, and this is where AI-powered communication tools can help. Rather than replacing patient-provider interaction, they support patients who are struggling financially or who cannot book a prompt appointment. In recent years, many companies have created AI-powered chatbots that provide patient-centered care. These chatbots can interact with patients in real time, offering empathetic, carefully curated language that mimics what patients might hear from a provider.

One of these chatbots, Woebot, was designed to be available at all hours, cost patients nothing, and offer anonymity to those who might feel embarrassed about reaching out for help. Even with the rise of mental health awareness in mainstream media, a 2019 poll by the American Psychiatric Association found that only about one in five workers was completely comfortable talking about their mental health. This means roughly 80% of Americans still feel some embarrassment or fear about discussing their mental health issues, which can lead to a lack of care. AI chatbots provide a private space for more honest, sensitive conversations, and chatbots like Woebot can hold these conversations and deliver CBT-related content to individuals when they need it most.

 

AI As a Useful Treatment Tool

Research has shown that AI can be useful in treating patients' mental health symptoms and in providing objective, evidence-based recommendations to clinicians making diagnoses. A study published in May 2023 demonstrated that an AI voice-based virtual coaching program actually helped change brain activity patterns in patients with depression and anxiety. The tool, Lumen, is a voice coach that delivers problem-solving interventions to patients over the course of eight sessions on an iPad. In the study, one group of patients received the Lumen intervention, while a control group remained on a 'waitlist' for care and received no intervention. When the study concluded, participants who used the Lumen app showed lower scores for depression, anxiety, and distress compared with those in the control group. In addition, participants in the Lumen group showed improved problem-solving skills and increased activity in the dorsolateral prefrontal cortex, a brain region involved in planning and overall cognition. This finding is a promising indication that AI-based technology can help close the gap between patient needs and provider availability. Even if something like Lumen provides only a fraction of the improvement in brain activity and mental health scores that a face-to-face session might accomplish, it still works as a means of support.

Another program, Recovery, Engagement, and Coordination for Health-Veterans Enhanced Treatment (REACH VET), is an AI-based application that aims to prevent suicide among U.S. veterans. In this program, researchers use AI to scan the electronic health records of veterans in the Veterans Health Administration for those who may be most at risk of suicide. Coordinators review these records and communicate with providers, who examine the treatment strategies of high-risk patients and conduct outreach initiatives to provide more personally tailored care to those who are struggling most.

An increasing number of AI programs and studies are exploring the ways this technology can shoulder some of the stress clinicians face when trying to care for as many patients as possible. A big hurdle for AI in the mental health space is not just whether a service will work, but whether providers and clinicians are willing to use it regularly. The goal of AI is not to replace the face-to-face interventions that patients with mental health struggles need, but to support clinicians and patients alike.

 

Potential Risks of AI as a Tool for Mental Health Care

While artificial intelligence can provide many benefits and innovations in healthcare, and in mental healthcare specifically, several risks and challenges emerge as developers build AI systems to take on these tasks. These include the risk of harm to patients, such as misdiagnosis or inappropriate treatment resulting from AI system errors, and risks to patient privacy from data breaches, among others. Potential solutions are complex but involve investment in infrastructure for high-quality, representative data; collaborative oversight by the Food and Drug Administration and other healthcare actors; and changes to medical education that prepare providers for shifting roles in an evolving system.
