The American Psychiatric Association has estimated that there are over 10,000 mental health apps circulating in online app stores.

There are many examples, especially in the United States, of artificial intelligence (AI) bots being used in mental health care. But physicians and researchers have voiced objections to some of the algorithms’ drawbacks, including leaks of private data and bias in the care they deliver. At the center of the debate is how to regulate these systems.

Although large sectors have expressed optimism about the benefits of this technology for patients and its potential to ease the shortage of mental health professionals in many areas, others have warned that using AI for diagnosis and treatment could worsen the global mental health crisis.

A notable example of institutional use of this technology comes from the Veterans Health Administration (VA), a division of the United States Department of Veterans Affairs, which treats thousands of people with serious mental health problems each year. It is estimated that more veterans die by suicide than soldiers are killed in action. In 2017, the VA introduced an initiative called “Reach Vet” that put an algorithm into clinical practice throughout its system.

Each month, the algorithm flags approximately 6,000 patients, including some who are seeking mental health services for the first time. Doctors contact these patients, offer mental health services, ask about stressors, and help with access to food and housing. While this outreach may seem unusual, since veterans are contacted about thoughts they may never have had, the system’s implementation was associated with an 8% reduction in psychiatric admissions among the high-risk individuals identified by the AI and a 5% decrease in documented suicide attempts in this group. However, “Reach Vet” has not been shown to reduce suicide mortality.
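The VA has not published Reach Vet’s internals, but the outreach loop the article describes is straightforward: score every patient, take the highest-risk tier, and hand that list to clinicians each month. Here is a minimal sketch of that loop in Python; the `Patient` class, the `monthly_flag` function, and all identifiers and scores are invented for illustration and are not drawn from the real system.

```python
# Hypothetical sketch of a monthly risk-flagging loop: score patients, select the
# highest-risk tier, and route that list to clinicians for outreach.
# Patient IDs, scores, and the tier size are invented for illustration.
from dataclasses import dataclass


@dataclass
class Patient:
    patient_id: str
    risk_score: float  # output of a predictive model; higher = higher estimated risk


def monthly_flag(patients: list[Patient], tier_size: int) -> list[Patient]:
    """Return the highest-scoring patients for clinician outreach this month."""
    return sorted(patients, key=lambda p: p.risk_score, reverse=True)[:tier_size]


patients = [
    Patient("vet-001", 0.12),
    Patient("vet-002", 0.87),
    Patient("vet-003", 0.45),
    Patient("vet-004", 0.91),
]

for p in monthly_flag(patients, tier_size=2):
    print(f"Contact {p.patient_id} (risk score {p.risk_score:.2f})")
```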

Still, that optimism is tempered by questions about oversight. The American Psychiatric Association estimates that there are over 10,000 mental health apps available in online app stores, the majority of them lacking approval from any regulatory body.

In addition to apps, institutions have also begun using AI-powered chatbots such as Wysa, along with FDA-approved apps, to address the shortage of mental health counselors and reduce substance abuse. These technologies analyze conversations with patients and assess text messages in order to make recommendations; some can predict the risk of addiction or detect mental health disorders such as depression.
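None of these vendors publish their models, but the underlying approach is typically text classification: score a patient’s messages and route high-risk ones to a human. A minimal sketch of that idea using scikit-learn follows; the training messages, labels, `triage` function, and threshold are invented for illustration and are not drawn from any real product.

```python
# Hypothetical sketch: routing messages with a simple text classifier.
# Training messages, labels, and the risk threshold are invented for illustration;
# real systems are trained on large clinical datasets and validated before deployment.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: 1 = message suggests elevated risk, 0 = routine.
messages = [
    "I can't sleep and I feel hopeless about everything",
    "Nothing matters anymore, I don't see the point",
    "Had a good week, back at work and feeling steady",
    "Looking forward to seeing my family this weekend",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, labels)


def triage(message: str, threshold: float = 0.6) -> str:
    """Return a routing decision based on the model's estimated risk score."""
    risk = model.predict_proba([message])[0][1]  # probability of the 'elevated risk' class
    return "escalate to clinician" if risk >= threshold else "routine follow-up"


print(triage("I feel hopeless and can't see the point of anything"))
```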

However, experts have highlighted potential risks associated with AI taking on diagnostic and treatment roles without the presence of a doctor. Tina Hernandez-Boussard, a professor at Stanford University, has used AI to predict the risk of opioid addiction and cautioned that the risks could increase if AI algorithms start making diagnoses or providing treatment without medical supervision. She recommended that health technology developers establish minimum standards for algorithms or AI tools to ensure fairness and accuracy before making them publicly available.

Hernandez-Boussard also expressed concern about bias embedded in algorithms through the way race and sex are represented in their datasets, since this can lead to different predictions that amplify health disparities. Studies have revealed algorithmic bias resulting in lower-quality healthcare for black patients compared to white patients, even when the black patients were at higher risk. Biased AI models have also been found to be more likely to recommend calling the police rather than offering medical aid when black or Muslim men experience a mental health crisis.
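The kind of audit Hernandez-Boussard calls for can be as simple as comparing error rates across demographic groups before a model ships. Below is a minimal sketch under that assumption; the group names, predictions, and labels are invented for illustration and do not come from any study cited in this article.

```python
# Hypothetical bias audit: compare false-negative rates across demographic groups.
# Records are (group, true_label, predicted_label); 1 = needs care, 0 = does not.
# All data here is invented for illustration.
from collections import defaultdict

records = [
    ("group_a", 1, 1), ("group_a", 1, 0), ("group_a", 0, 0), ("group_a", 1, 1),
    ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 1),
]

missed = defaultdict(int)     # patients who needed care but were not flagged
positives = defaultdict(int)  # all patients who truly needed care

for group, truth, pred in records:
    if truth == 1:
        positives[group] += 1
        if pred == 0:
            missed[group] += 1

# A large gap in false-negative rates means one group's needs go undetected more often.
for group in sorted(positives):
    fnr = missed[group] / positives[group]
    print(f"{group}: false-negative rate = {fnr:.2f}")
```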

Tom Zaubler, the chief medical officer of NeuroFlow, acknowledged that AI cannot manage a patient’s case on its own, and said that reputable tech companies do not rely on it alone for that purpose. While AI can streamline workflows and assess a patient’s risk, drawbacks include the possibility that user information is sold or that users are targeted with advertising and unsolicited messages. Last year, it was revealed that popular mental health apps such as BetterHelp and Talkspace had leaked information about users’ mental health histories and suicidal thoughts to third parties.

In the United States, concerns have been raised about the FDA’s control over AI-powered mental health technologies, especially given their increasing role in clinical decisions. There have been instances where AI-powered tools were used as mental health counselors without informing users that the responses were generated by AI, a practice that has drawn criticism from ethicists. The FDA can track and monitor only the apps that undergo its review, leaving the many apps that reach the market without approval outside the agency’s regulatory reach.

Critics argue that the official regulatory system is slow, and that some digital health companies have been able to bypass certain regulatory hurdles because requirements were relaxed during the pandemic. The responsibility for determining the safety and effectiveness of mental health apps therefore falls largely on users and online reviewers, who have little guidance and little transparency into the algorithms these apps use.

Representatives from the United States and the European Union recently met to discuss how to ensure the ethical application of technology, particularly in healthcare, which could lead to further efforts in this area. Experts believe that building confidence in AI as a mental health tool will require a combination of self-policing by the tech industry and agile, effective regulation.
