While artificial intelligence (AI) tools for mental healthcare, or any health-related application of AI, may initially seem like an uneasy pairing in a field where clients seek out humanizing support, there is a growing market of unique, well-integrated AI tools designed to improve therapeutic practice. Two years ago, on this blog, Viviana Perry introduced then-novel breakthroughs in AI mental health innovations, along with their limitations1. In 2022, artificial intelligence was a considerably less researched topic with regard to psychological well-being, siloed more often in tech and commerce spaces. However, as post-COVID providers leverage telehealth for its efficiency and accessibility, rather than simply out of necessity, health spaces have made more room for other technological advancements, AI included2. Whether this trend is a positive shift for patient care is a matter of individual discernment, but it can reasonably be expected that artificial intelligence programs will join more traditional methods in a practitioner’s toolkit. As a continuation of the 2022 article, with two years of research and development in between, this article describes unique AI tools currently available to patients and providers, along with the evolving roles and perceptions of AI in therapy. Rather than exhaustively analyzing each product on the market, the blog spotlights unique services that demonstrate the expanding range of AI applications in mental healthcare. No mention of a product or service in this article is an endorsement; this limited review is purely for educational purposes, and the inclusion of one service over another is subjective. Before making use of any AI-assisted technology, it is important to thoroughly understand its appropriate uses and limitations.

It can be a challenge to make self-care practices and positive mental health habits stick in daily life. While it can be easy to mindlessly scroll, Breathhh is a service that interrupts this to make time for stress relief3. As a Chrome extension added to a user’s browser, the program uses artificial intelligence to analyze an individual’s browsing and internet usage habits in order to identify the ideal times to send a notification with one of Breathhh’s practices. For example, Breathhh’s virtual wellbeing companion may pop up at the edge of a user’s screen to prompt them to start a practice. These practices include mood journaling, guided breathing exercises, and physical stretch warm-ups3. Features like this make it well suited for the workplace, where setting boundaries for mental health can be especially difficult. Another advantage of this product is its ability to adapt to changing preferences and needs: a personalized neural engine integrates usage patterns and practice feedback to present more relevant strategies. No product is perfect, however, so there are drawbacks to this program. For one, it uses a subscription-based pricing model, with only five practices readily available for use, though the company is seeking to grow this catalog3. This pricing model may be a concern if the goal of AI in mental healthcare is lowering access barriers and supporting clients without additional burden. For example, if a provider were to recommend this product as a useful supplement to therapy for a client who struggles with dedicating time to self-care, they would also have to consider the client’s ability to pay for the full scope of the extension’s features. Additionally, the service is largely self-directed, which can be a challenge for someone who already struggles to integrate behavior changes3. That being said, those who are interested in learning how Breathhh works can learn more here. A simplified illustration of the kind of timing logic such a tool might use is sketched below.
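
Breathhh does not publish how its timing model works, so the following is a minimal, hypothetical sketch (written in Python rather than the JavaScript a real browser extension would use) of the general idea: watch for long stretches of continuous browsing and, outside of a cooldown window, surface a prompt to take a practice. The thresholds and structure are assumptions for illustration, not the product’s actual logic.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class ActivityTracker:
    """Toy stand-in for an AI timing model: fixed heuristics instead of learned ones."""
    focus_threshold: timedelta = timedelta(minutes=25)   # assumed "long stretch" length
    prompt_cooldown: timedelta = timedelta(minutes=45)   # assumed minimum gap between prompts
    focus_start: Optional[datetime] = None
    last_prompt: Optional[datetime] = None

    def record_activity(self, now: datetime) -> None:
        """Mark the start of a continuous browsing stretch."""
        if self.focus_start is None:
            self.focus_start = now

    def record_idle(self) -> None:
        """Reset the stretch when the user steps away from the browser."""
        self.focus_start = None

    def should_prompt(self, now: datetime) -> bool:
        """Suggest a practice only after a long stretch and outside the cooldown."""
        if self.focus_start is None:
            return False
        long_stretch = (now - self.focus_start) >= self.focus_threshold
        cooled_down = (
            self.last_prompt is None
            or (now - self.last_prompt) >= self.prompt_cooldown
        )
        if long_stretch and cooled_down:
            self.last_prompt = now
            self.focus_start = now  # begin measuring the next stretch
            return True
        return False
```

A real tool would presumably draw on much richer signals (tab switching, scrolling cadence, time of day, prior practice feedback) and learn per-user thresholds rather than relying on fixed constants like these.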

The rest of the services mentioned in this article are not as patient-facing; instead, they present methods of AI integration that improve the efficiency and thoroughness of a provider’s efforts. In part, this is to showcase a newfound emphasis on AI uptake by clinicians themselves amid a high saturation of AI chatbots that patients can readily access between sessions4. One such provider-facing AI tool is from TalkSpace, a virtual therapy company. The goal of this service is to detect and intervene early in non-suicidal self-injury or suicidal ideation, particularly in telehealth psychotherapy settings5. As someone chats with their therapist, TalkSpace takes that written communication and runs it through a natural language processing (NLP) algorithm capable of assessing suicidality. When the algorithm identifies risk factors, methods, ideation, or plans, clinicians can be notified to intervene with crisis resources. An advantage of this is that it reduces the time it takes for telehealth clinicians to respond to mental health crises, without replacing crisis services5. A drawback of this service, as with all AI training, is that algorithms rely on what they are trained on. Here, a dataset of therapy transcripts was used to train the NLP model5. A universal concern with this setup is representation. When certain groups are represented over others in a training set, it can impact how an artificial intelligence algorithm is able to recognize and assess risk in patients who may have culturally unique ways of communicating that were not accounted for when “teaching” the NLP. To learn more about TalkSpace and its services, read their press release on this tool here. For readers curious about the general shape of such a system, a toy sketch of text-based risk screening follows.
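
TalkSpace’s production model, training data, and alerting rules are not public, so the snippet below is a deliberately toy sketch of the general pattern: a text classifier estimates risk from a message, and anything above a threshold is routed to a clinician rather than handled automatically. The example sentences, labels, and threshold are placeholders; a real system would require clinically validated data, careful evaluation across populations, and human oversight.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data: four invented messages with invented risk labels.
# A real system would be trained and validated on clinically annotated transcripts.
toy_messages = [
    "I had a calm week and kept up with my routines",
    "Work was stressful but I talked it through with a friend",
    "I keep thinking everyone would be better off without me",
    "I have been researching ways to hurt myself",
]
toy_labels = [0, 0, 1, 1]  # 1 = elevated-risk language (placeholder labels)

# Simple bag-of-words classifier standing in for a production NLP model.
risk_model = make_pipeline(TfidfVectorizer(), LogisticRegression())
risk_model.fit(toy_messages, toy_labels)

def screen_message(text: str, threshold: float = 0.5) -> bool:
    """Return True when the estimated probability of risk crosses the threshold."""
    probability = risk_model.predict_proba([text])[0][1]
    return probability >= threshold

def route_message(text: str) -> None:
    """Flag concerning messages for clinician review; the model never responds on its own."""
    if screen_message(text):
        print("ALERT: message flagged for immediate clinician review")
    else:
        print("Message logged; no alert raised")

route_message("Lately I feel like I cannot go on and I have a plan")
```

The design point this illustrates is the one the press release emphasizes: the model’s output triggers a human clinician’s attention rather than an automated response, so it narrows response time without standing in for crisis services.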

Along similar lines of provider-assisting tools for mental healthcare, there is a need to improve the efficiency and capacity of clinicians who are often overwhelmed with the time burden of documentation. This burden can reduce the supply of available providers despite an increasing demand for their services. In an attempt to address this imbalance, Limbic Access, a conversational AI tool used primarily in the United Kingdom’s National Health Service (NHS), aims to make referrals and assessments within the NHS a less administratively burdensome task6. Limbic Access works as a conversational chatbot housed on the service’s website. It collects basic intake information and, if the individual consents, responses to a minimum dataset of clinical questions. These are then automatically integrated into the electronic health record to give providers a head start on referrals and evaluation6. One strength of this tool is that it keeps agency in the hands of the person seeking help by asking for information only when the individual is comfortable providing it. In this way, it also avoids the chatbot overstepping into the functions of a clinician. Overall, this service was found to reduce the time involved in a detailed clinical assessment by front-loading data collection through the AI tool6. A drawback, aside from the service’s localization to the United Kingdom, is that the individual styles of different clinicians may conflict with the questions asked by the AI. There may be information an individual provider prioritizes having that the standard set of questions leaves off, making the service less useful to some. Further, on the patient end, responding to chatbot assessment questions may seem like a mere obstacle before actually interacting with a provider, so individuals may brush off the online questions or answer incompletely, missing a level of detail that might open up in a person-to-person conversation. Some of these drawbacks could be remedied with thorough patient education on the tools during an in-person discussion. For those interested in the results of Limbic Access’ initial study, feel free to read more here. A simplified sketch of a consent-gated intake flow appears below.
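
Limbic Access’s actual question set, consent flow, and NHS integration are proprietary, so the sketch below only illustrates the general pattern described above: a chatbot collects basic intake details, asks the clinical minimum-dataset questions only after explicit consent, and hands the structured result off toward the electronic health record. All field names and questions here are hypothetical.

```python
from typing import Callable, Dict, Optional

# Hypothetical intake questions; Limbic Access's real question set is not public.
INTAKE_QUESTIONS = {
    "full_name": "What name would you like us to use?",
    "contact": "What is the best way to reach you?",
    "reason_for_referral": "In your own words, what brings you here today?",
}

# Hypothetical stand-ins for the clinical minimum-dataset questions,
# asked only after explicit consent.
CLINICAL_QUESTIONS = {
    "phq9_q1": "Over the last two weeks, how often have you had little interest or pleasure in doing things? (0-3)",
    "gad7_q1": "Over the last two weeks, how often have you felt nervous, anxious, or on edge? (0-3)",
}

def run_intake(ask: Callable[[str], str]) -> Dict[str, Optional[str]]:
    """Collect intake answers; clinical questions are gated behind consent."""
    record: Dict[str, Optional[str]] = {}
    for field_name, question in INTAKE_QUESTIONS.items():
        record[field_name] = ask(question)

    consent = ask("May we ask a few brief clinical questions to speed up your referral? (yes/no)")
    if consent.strip().lower() == "yes":
        for field_name, question in CLINICAL_QUESTIONS.items():
            record[field_name] = ask(question)
    else:
        record["clinical_questions_declined"] = "true"
    return record

def push_to_ehr(record: Dict[str, Optional[str]]) -> None:
    """Placeholder for writing the structured record into an electronic health record."""
    print("Draft referral record:", record)

if __name__ == "__main__":
    # Swap `input` for a real chat interface in practice.
    push_to_ehr(run_intake(input))
```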

A widely recognized challenge that providers face is balancing notetaking and accurate documentation with quality, empathetic patient interactions. From a patient standpoint, knowing that the person across from you is dividing their attention between you and their chart can be an obstacle to the therapeutic process. Mentalyc takes audio input from a session recording and generates a progress note using automated software7. Rather than hand-typing information, the clinician can review the generated progress note and make edits and revisions so it accurately reflects their judgment as the provider in the room. This is a great example of finding balance between the role of AI and the role of traditional therapy. Rather than acting as a replacement that threatens the essential function of human providers, the tool supplements providers’ capabilities in a way that still gives them the final deciding power over care. The tool is priced at $39/month, which may or may not be worth the cost and the time of learning new software, depending on individual provider preferences7. To learn more about Mentalyc, more information about their product can be found on their website here. A rough sketch of what such a transcription-to-note pipeline could look like follows.
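
Mentalyc’s pipeline is proprietary, so the outline below is a hypothetical skeleton of how a transcription-to-progress-note tool could be structured: a speech-to-text step, a drafting step that fills in the sections of a note (a SOAP-style layout is assumed here), and a mandatory clinician review step. The transcription and drafting functions are stubbed placeholders rather than working models.

```python
from dataclasses import dataclass

@dataclass
class DraftNote:
    """A draft progress note in a SOAP-style layout (assumed format, not Mentalyc's)."""
    subjective: str
    objective: str
    assessment: str
    plan: str
    clinician_approved: bool = False

def transcribe(audio_path: str) -> str:
    """Placeholder for a speech-to-text step; plug in an ASR backend here."""
    raise NotImplementedError("Transcription backend not included in this sketch")

def draft_progress_note(transcript: str) -> DraftNote:
    """Placeholder for a summarization step that maps a session transcript to note sections."""
    return DraftNote(
        subjective="<summary of the client's reported experience>",
        objective="<clinician's observations during the session>",
        assessment="<clinical impressions>",
        plan="<agreed next steps>",
    )

def clinician_review(note: DraftNote, edits: dict) -> DraftNote:
    """The provider revises the draft so it reflects their own judgment, then approves it."""
    for section, text in edits.items():
        setattr(note, section, text)
    note.clinician_approved = True
    return note
```

The essential design choice, whatever the underlying models, is the last step: nothing enters the record until the clinician has reviewed and approved it.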

While the tools discussed represent the new reaches of AI in the mental healthcare field, there are many innovations that this article was not able to cover. AI or not, technology is deeply impacting how care is provided, from VR exposure therapy to note-taking assistance8. Perceptions of this integration are mixed, with a consistent rift in provider opinions on its use for therapeutic purposes. According to one qualitative study of AI adoption, this often boils down to a need for better training on AI integration in patient care, greater organizational openness and preparedness around AI use, and clearer patient understanding of the scope of AI use in a clinical setting9. The study recommends that providers thoroughly vet tools, favoring those that retain a compassionate, humanistic philosophy, empower clinicians, and meet real needs rather than introducing new risks9. On a technical level, Dr. David Novillo-Ortiz, Regional Adviser on Data and Digital Health at WHO/Europe, states that “Data engineering for AI models seems to be overlooked or misunderstood, and data is often not adequately managed”, a warning with serious implications for mental health practitioners10. With this in mind, there is a clear need for greater collaboration and understanding between those creating artificial intelligence tools and the providers who are looking out for patient privacy and wellbeing. As both audiences become better educated on the mechanisms of emerging tools, the risk of harm from AI integration can be reduced, and the focus of development can shift toward the advancement of accessible psychological treatment.
