Can AI-powered voice analysis help diagnose mental disorders?

This article is part of a limited series on artificial intelligence’s potential to solve everyday problems.

Imagine a test as quick and easy as taking your temperature or measuring your blood pressure that could reliably detect an anxiety disorder or predict an impending depressive relapse.

While health care providers have many tools for gauging a patient’s physical condition, there is no reliable biomarker for assessing mental health – no objective indicator of a medical condition that can be observed from outside the patient.

But some artificial intelligence researchers now believe that the sound of your voice may be the key to understanding your mental state – and that AI is well suited to detecting changes that are difficult, if not impossible, to perceive otherwise. The result is a set of apps and online tools designed to track your mental status, as well as programs that deliver real-time mental health assessments for telehealth and call-center providers.

Psychologists have long known that some mental health problems can be detected by listening not only to what a person says but to how they say it, said Maria Espinola, a psychologist and assistant professor at the University of Cincinnati College of Medicine.

With depressed patients, Dr. Espinola said, “Their speech is generally more monotone, flatter and softer. They also have a reduced pitch range and lower volume. They take more pauses. They stop more often.”

Patients with anxiety feel more tension in their bodies, which can also change the way their voices sound, she said. “They tend to speak faster. They have more difficulty breathing.”

Today, machine-learning researchers are using such vocal features to predict depression and anxiety, as well as other mental illnesses such as schizophrenia and post-traumatic stress disorder. Deep-learning algorithms can uncover additional patterns and characteristics, captured in short voice recordings, that might not be evident even to trained experts.
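
To make the idea concrete, here is a minimal, hypothetical sketch – not any company’s actual product – of what such a pipeline can look like, assuming the open-source librosa and scikit-learn libraries. It extracts the cues clinicians describe (pitch range, loudness, pauses) from a recording and summarizes them as features a simple classifier could learn from:

```python
# A minimal, hypothetical sketch -- not Sonde's, Ellipsis Health's or
# Kintsugi's actual system. Assumes librosa and scikit-learn are installed.
import numpy as np
import librosa

def vocal_features(path: str) -> np.ndarray:
    """Summarize a short recording with the cues clinicians describe:
    pitch range, loudness and how often the speaker pauses."""
    y, sr = librosa.load(path, sr=16000)

    # Pitch track; unvoiced frames come back as NaN.
    f0, _, _ = librosa.pyin(y, fmin=60, fmax=400, sr=sr)
    voiced = f0[~np.isnan(f0)]
    pitch_range = float(np.ptp(voiced)) if voiced.size else 0.0  # monotone -> small

    # Loudness: root-mean-square energy per frame.
    rms = librosa.feature.rms(y=y)[0]

    # Pauses: fraction of frames far below the recording's peak energy.
    pause_ratio = float(np.mean(rms < 0.1 * rms.max()))

    return np.array([pitch_range, float(rms.mean()), pause_ratio])

# Given recordings labeled by clinicians (both hypothetical here), a
# simple classifier could then be fit on these features:
#   X = np.stack([vocal_features(p) for p in paths])
#   model = LogisticRegression().fit(X, labels)  # from sklearn.linear_model
```

Deep-learning systems go a step further, learning their own representations directly from the raw audio rather than relying on hand-picked features like these – which is how they can surface patterns that are not obvious even to trained experts.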

“There is a lot of excitement around finding biological or more objective indicators of psychiatric diagnoses that go beyond the more subjective forms of assessment traditionally used, such as clinician-rated interviews or self-report measures,” said Kate Bentley, an assistant professor at Harvard Medical School and a clinical psychologist at Massachusetts General Hospital. Other clues researchers are tracking include changes in activity levels, sleep patterns and social media data.

These technological advances come at a time when the need for mental health care is especially acute: according to a report by the National Alliance on Mental Illness, one in five adults in the United States experienced mental illness in 2020. And the numbers continue to climb.

While AI technology won’t address the shortage of qualified mental health care providers – there aren’t nearly enough to meet the country’s demand, Dr. Bentley said – there is hope that it could lower barriers to receiving an accurate diagnosis, help clinicians identify patients who may be hesitant to seek care, and facilitate self-monitoring between visits.

“A lot can happen in between appointments, and technology can really offer us the potential to improve monitoring and assessment in a more continuous way,” Dr. Bentley said.

To test this new technology, I started by downloading the Mental Fitness app from the health technology company Sonde Health to see whether my malaise was a sign of something serious or whether I was simply languishing. Described as “a voice-driven mental fitness tracking and journaling product,” the free app invited me to record my first check-in, a 30-second verbal journal entry that would rank my mental health on a scale of 1 to 100.

A minute later, my score arrived: a not-great 52. “Pay attention,” the app warned.

The app flagged that the level of liveliness detected in my voice was notably low. Did I sound monotone simply because I had been trying to speak quietly? Should I heed the app’s suggestions to improve my mental fitness by going for a walk or decluttering my space? (That first question may point to one of the app’s potential flaws: as a consumer, it can be difficult to know why your voice levels fluctuate.)

Later, feeling uneasy between interviews, I tested another voice-analysis program, this one focused on detecting anxiety levels. The StressWaves Test is a free online tool from Cigna, the health care and insurance conglomerate, developed in collaboration with the AI specialist Ellipsis Health to evaluate stress levels from 60-second samples of recorded speech.

“What keeps you awake at night?” the website prompted. After I spent a minute describing my persistent worries, the program scored my recording and sent me an email declaring: “Your stress level is moderate.” Unlike the Sonde app, Cigna’s email offered no helpful self-improvement tips.

Other technologies add a potentially helpful layer of human interaction, as with Kintsugi, a company based in Berkeley, Calif., that raised $20 million in Series A funding earlier this month. Kintsugi is named after the Japanese practice of mending broken pottery with veins of gold.

Founded by Grace Chang and Rima Seiilova-Olson, who bonded over shared past experiences of struggling to access mental health care, Kintsugi develops technology for telehealth and call-center providers that can help them identify patients who might benefit from further support.

By using Kintsugi’s voice-analysis program, a nurse might be prompted, for example, to take an extra minute to ask a distraught parent about their own well-being.

One concern with the development of this type of machine-learning technology is the issue of bias – ensuring that the programs work equitably for all patients, regardless of age, gender, race, nationality and other demographic characteristics.

“In order for machine learning models to work well, you really need to have a very large and diverse and robust set of data,” Ms. Chang said.
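
One way researchers probe for this kind of bias, sketched here under assumptions – a generic scikit-learn-style model and placeholder group labels, not Kintsugi’s actual system – is to score a trained model separately on each demographic subgroup and compare the results:

```python
# Hypothetical fairness check: per-group AUC for an already-fitted classifier.
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_by_group(model, X, y, groups):
    """Compute AUC separately for each demographic subgroup.
    A large gap between groups is a warning sign that the model
    works better for some populations than for others."""
    return {
        g: roc_auc_score(
            y[groups == g],
            model.predict_proba(X[groups == g])[:, 1],
        )
        for g in np.unique(groups)
    }
```

Roughly equal scores across groups are a necessary, though not sufficient, signal that a model treats patients equitably – and a large, diverse training set makes such gaps less likely in the first place.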

Another major concern in this nascent field is privacy – particularly voice data, which can be used to identify individuals, Dr. Bentley said.

And even when patients agree to be recorded, the question of consent is sometimes twofold. In addition to assessing a patient’s mental health, some voice-analysis programs use the recordings to develop and refine their own algorithms.

Another challenge, Dr. Bentley said, is consumers’ potential mistrust of machine learning and so-called black-box algorithms, which work in ways that even the developers themselves cannot fully explain – particularly which features they use to make predictions.

“There’s creating the algorithm, and then there’s understanding the algorithm,” said Dr. Alexander S. Young, interim director of the Semel Institute for Neuroscience and Human Behavior and chair of psychiatry at the University of California, Los Angeles, echoing the concerns many researchers have about AI and machine learning in general: that little, if any, human oversight is present during the program’s training phase.

For now, Dr. Young is cautiously optimistic about the potential of voice-analysis technology, especially as a tool for patients to monitor themselves.

“I do believe you can model people’s mental health status, or approximate it, in a general way,” he said. “People like being able to self-monitor their status, particularly with chronic illness.”

But before automated voice-analysis technologies enter mainstream use, some are calling for rigorous investigations of their accuracy.

“We really need more validation not only of voice technology, but of AI and machine learning models built on other data streams,” Dr. Bentley said. “And we need to achieve that validation through large-scale, well-designed representative studies.”

Until then, AI-powered voice-analysis technology remains a promising but unproven tool – one that may eventually become an everyday method for taking the temperature of our mental well-being.
