“At the South by Southwest conference in March, where health startups displayed their products, there was a near-religious conviction that AI could rebuild health care, offering apps and machines that could diagnose and treat all kinds of illnesses, replacing doctors and nurses.
Unfortunately, in the mental health space, evidence of effectiveness is lacking. Few of the many apps on the market have independent outcomes research showing they help; most haven’t been scrutinized at all by the FDA. Though marketed to treat conditions such as anxiety, attention-deficit/hyperactivity disorder, and depression, or to predict suicidal tendencies, many warn users (in small print) that they are “not intended to be medical, behavioral health or other healthcare service” or “not an FDA cleared product.” [..]
Decades ago, Joseph Weizenbaum, a professor at the Massachusetts Institute of Technology who is considered one of the fathers of artificial intelligence, predicted AI would never make a good therapist, though it could be made to sound like one. In fact, his original AI program, created in the 1960s, was a psychotherapist named ELIZA, which used word and pattern recognition combined with natural language processing to sound like a therapist [..].
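The "word and pattern recognition" technique ELIZA relied on can be illustrated with a minimal sketch: match the user's utterance against a list of regular-expression rules and echo captured fragments back inside a canned reflection. The rules below are hypothetical stand-ins for illustration, not Weizenbaum's original DOCTOR script.

```python
import re

# Hypothetical ELIZA-style rules: (pattern, response template).
# Captured text is reflected back into the reply.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]

def respond(utterance: str) -> str:
    """Return a canned reflection for the first matching pattern."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # default when nothing matches

print(respond("I feel lonely these days"))
# -> Why do you feel lonely these days?
```

The effect is purely surface-level: the program understands nothing, yet the reflected phrasing can sound attentive, which is exactly the gap Weizenbaum warned about.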
He foresaw the evolution of far more sophisticated programs like ChatGPT. But “the experiences a computer might gain under such circumstances are not human experiences,” he told me. “The computer will not, for example, experience loneliness in any sense that we understand it.”
The same goes for anxiety or ecstasy, emotions so neurologically complex that scientists have not been able to pinpoint their neural origins. Can a chatbot achieve transference, the empathic flow between patient and doctor that is central to many types of therapy? [..]
While some mental health apps may ultimately prove worthy, there is evidence that some can do harm. One researcher noted that some users faulted these apps for their “scripted nature and lack of adaptability beyond textbook cases of mild anxiety and depression.”
It may prove tempting for insurers to offer up apps and chatbots to meet the mental health parity requirement. After all, that would be a cheap and simple solution, compared with the difficulty of offering a panel of human therapists, especially since many take no insurance because they consider insurers’ payments too low. [..]
The FDA [..] said late last year it “intends to exercise enforcement discretion” over a range of mental health apps, which it will vet as medical devices. So far, not one has been approved. And only a very few have gotten the agency’s breakthrough device designation, which fast-tracks reviews and studies on devices that show potential.
These apps mostly offer what therapists call structured therapy — in which patients have specific problems and the app can respond with a workbook-like approach. For example, Woebot combines exercises for mindfulness and self-care (with answers written by teams of therapists) for postpartum depression. Wysa, another app that has received a breakthrough device designation, delivers cognitive behavioral therapy for anxiety, depression, and chronic pain.
But gathering reliable scientific data about how well app-based treatments function will take time. “The problem is that there is very little evidence now for the agency to reach any conclusions,” said Kedar Mate, head of the Boston-based Institute for Healthcare Improvement.”
Full article: E. Rosenthal, KFF Health News, May 17, 2023