
How Technology Can Help with Mental Health Diagnosis

by Vinith Johnson, September 22nd, 2020

Imagine for a minute that it's morning and you’ve just woken up. Bleary-eyed still, you fumble for your smartphone. Because, well, what else would you do? We're in 2020, in the middle of a global pandemic, and for better or for worse, approximately 81% of the U.S. population owns a smartphone.  

Maybe it’s hiding under your pillow. Or maybe it’s sitting on the nightstand. Perhaps, without realizing it, you’d slipped it under the bed before you finally dozed off last night. Let’s all just admit it: smartphones are now so ubiquitous that their constant presence is hardly a noteworthy phenomenon. Intentionally or otherwise, they’ve evolved to be our digital tethers to the world outside of our homes and our minds.

Fortunately, most of the time, the damned thing leaves you alone. Sure, you get those annoying notifications whenever you receive a new communication in one of your apps: Messages, Gmail, Facebook, WhatsApp, etc. But, generally speaking - and to the extent that you believe you have any choice in the matter at all - you can decide what to do with your device when you wake up. You can stare at it and silently vow, “No. Not today,” opting instead to start your morning off on a healthy note, perhaps with a serving of chia seed pudding and twenty minutes of Vipassana meditation. Or, you could unlock it (now, simply by glancing at the screen no less) and launch the app of your choice, be it Instagram, Twitter, Snapchat, the Amazon shopping app, or Google News.

It’s likely that the app - or series of apps - you first check every morning is something else entirely. If you’re a teen, for example, you might scroll through Instagram or catch up on Snapchat or watch a few amusing videos on TikTok or play the video game du jour for a few minutes before you decide that you absolutely must get up and start the day.

But, what if, when you first wake up, Siri or Google Assistant, without being annoyingly intrusive, greeted you silently on your smartphone’s screen and inquired: “How’s it going, Sam? How are you feeling today?” And you had the opportunity to respond, say, by selecting one of several canned responses, such as:

"Pretty good", "Feeling alright", "Meh", "Not so well, actually", "Terribly"

And, for the sake of discussion, let’s say you click on "Not so well, actually" and a “smart” chat ensues between you and said smart assistant. Being well-designed and meticulously tailored for empathy, this assistant would channel its inner therapist and respond sympathetically: “I’m sorry to hear that. Can you tell me more about how you are feeling? (Pick one)” And let’s say this morning you tap "Down" (because it feels like the most accurate descriptor, given how you are feeling right now).

And, now let’s suppose that the smart assistant responds: “Tell me about that. What’s getting you down?” Assured that your boss has no way of eavesdropping on this conversation (you’re still in your bed, after all), you thumb type: “I can’t stand my job anymore! It’s meaningless, soul-crushing work.”

Harnessing existing technology known as natural language processing, the smart assistant responds, “Sorry to hear that your work is getting you down, Sam. Would you like to try a mental exercise to help you feel better?”
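
(A brief aside for the technically curious: the sketch below is a deliberately crude stand-in for that natural language processing step - it simply matches a few keywords in the free-text reply and picks an empathetic template. The topic lists, templates and function names are our own illustrative assumptions; real assistants rely on trained language models, not keyword tables.)

```python
# Toy stand-in for the NLP step described above: match keywords in the
# user's free-text reply and choose an empathetic reply template.
# Purely illustrative - real assistants use trained language models.

REPLY_TEMPLATES = {
    "work": ("Sorry to hear that your work is getting you down, {name}. "
             "Would you like to try a mental exercise to help you feel better?"),
    "sleep": ("Poor sleep can weigh on anyone, {name}. "
              "Would you like some tips on winding down tonight?"),
    "default": ("That sounds hard, {name}. "
                "Would you like to talk through it a bit more?"),
}

TOPIC_KEYWORDS = {
    "work": {"job", "work", "boss", "career"},
    "sleep": {"sleep", "tired", "insomnia", "exhausted"},
}


def pick_reply(user_message: str, name: str) -> str:
    """Choose a reply template via crude keyword matching."""
    words = {w.strip(".,!?").lower() for w in user_message.split()}
    for topic, keywords in TOPIC_KEYWORDS.items():
        if words & keywords:
            return REPLY_TEMPLATES[topic].format(name=name)
    return REPLY_TEMPLATES["default"].format(name=name)


if __name__ == "__main__":
    print(pick_reply("I can't stand my job anymore! It's meaningless, "
                     "soul-crushing work.", "Sam"))
```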

Today, you decide that you don’t have the time.

But, for the sake of argument, imagine that every morning, Siri or Google Assistant - or whatever catchy name this AI chatbot would be given - checks in with you in the manner described, and for more days than not over the past week, you respond that you feel “Down” or “Depressed”.

And on the eighth day, when you again click on “Depressed”, the assistant responds: “I’m sorry to hear that, Sam. Would you like to look into whether you might be clinically depressed?”

Curious, you tap “Yes”.

You’re now presented with some basic information about clinical depression (Major Depressive Episode) and how it differs from the downs of what we colloquially refer to as the ordinary “ups and downs of life”. Intrigued, you read on. You’re dismayed to find that you recognize in yourself many of the symptoms of depression. The smart assistant then asks you whether you would like to take a screening for clinical depression.

You decide that you have a little extra time on your hands this morning and that, for the sake of your mental health, you ought to. So, you tap "Yes". Without so much as getting out of bed, you are now being administered an engaging version of the Patient Health Questionnaire-9 (PHQ-9), a validated screening instrument for clinical depression that is commonly used in primary care settings and mental health practices.
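
For anyone curious what “administering” such an instrument involves computationally, the arithmetic is refreshingly simple: the PHQ-9 comprises nine items, each answered on a 0-3 scale, and the 0-27 total maps onto conventional severity bands. The sketch below illustrates just that scoring step; it is not a clinical tool, and any real implementation would route concerning answers - especially item 9, which asks about thoughts of self-harm - straight to a human.

```python
# Bare-bones PHQ-9 scoring: nine items, each scored 0-3, summed into a
# 0-27 total that maps to conventional severity bands. Illustrative
# only - not a clinical tool.

SEVERITY_BANDS = [
    (0, 4, "minimal or none"),
    (5, 9, "mild"),
    (10, 14, "moderate"),
    (15, 19, "moderately severe"),
    (20, 27, "severe"),
]


def score_phq9(answers: list) -> dict:
    """Sum nine 0-3 item scores and map the total to a severity band."""
    if len(answers) != 9 or any(a not in (0, 1, 2, 3) for a in answers):
        raise ValueError("PHQ-9 requires nine answers, each between 0 and 3.")
    total = sum(answers)
    severity = next(label for lo, hi, label in SEVERITY_BANDS if lo <= total <= hi)
    return {
        "total": total,
        "severity": severity,
        # Item 9 asks about thoughts of self-harm or suicide; any
        # non-zero answer warrants immediate human follow-up.
        "item9_flagged": answers[8] > 0,
    }


if __name__ == "__main__":
    print(score_phq9([2, 2, 1, 3, 1, 2, 1, 0, 0]))
```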

It’s possible, in other words, that elementary software could be packaged with your smartphone that would allow you to monitor your mental health and decide whether you need to be screened for depression (or any number of mental disorders) utilizing validated clinical instruments.
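
To make “elementary” concrete, the underlying logic could be as simple as the sketch below: log one self-reported mood per day and offer a formal screening once low moods dominate the past week, roughly as in the eighth-day scenario above. The class name, the seven-day window and the four-day threshold are assumptions we’ve made purely for illustration, not rules from any validated protocol.

```python
# Hypothetical "elementary software": log one self-reported mood per day
# and offer a formal screening when low moods dominate the past week.
# The window and threshold are illustrative assumptions only.

from collections import deque
from datetime import date, timedelta

LOW_MOODS = {"Not so well, actually", "Terribly", "Down", "Depressed"}


class MoodCheckIn:
    def __init__(self, window_days: int = 7, low_day_threshold: int = 4):
        self.low_day_threshold = low_day_threshold
        self.history = deque(maxlen=window_days)  # keeps only the last week

    def record(self, day: date, mood: str) -> bool:
        """Log today's mood; return True if a screening should be offered."""
        self.history.append((day, mood))
        low_days = sum(1 for _, m in self.history if m in LOW_MOODS)
        return low_days >= self.low_day_threshold


if __name__ == "__main__":
    checkin = MoodCheckIn()
    start = date(2020, 9, 14)
    week_or_so = ["Meh", "Down", "Pretty good", "Down",
                  "Meh", "Down", "Feeling alright", "Down"]
    for offset, mood in enumerate(week_or_so):
        if checkin.record(start + timedelta(days=offset), mood):
            print(f"Day {offset + 1}: offer a PHQ-9 screening")
```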

Since a sizable majority of the U.S. population owns a smartphone and a single mental illness - depression - is now the most common cause of disability worldwide, an interesting question arises: Why isn’t this a feature that you can opt into already? 

In the San Francisco Bay Area, where we reside, many of us have, for some time now, jumped onto the “quantified self” bandwagon; we routinely monitor different aspects of our health using smart devices, including wearables such as a FitBit and Apple Watch: the number of steps we take daily, how much and how vigorously we exercise otherwise, what our heart rate is throughout the day, and how much and how solidly we sleep each night. Some among the more fanatical of us might even prick our fingertips each morning to measure fasting glucose to determine where along the path to insulin resistance we are.  

Just recently, Apple offered women the ability to track and predict their menstrual cycles with the new Cycle Tracking app on Apple Watch and in the Health app. Undeniably, various aspects of quantified health are now embedded into the major smartphone operating systems.

So, why is it that we’re only allowed to track the number of steps we’ve taken or hours we’ve slept or stairs we’ve climbed? Other than measuring basic parameters of sleep, why is it that we still don’t have an easy way to track our mental health or receive any kind of coaching to improve it? What if Apple and Google - the current market leaders of smartphone operating systems in the U.S. - had the moxie to take the plunge into mental health?  What kind of societal impact would it have?

The statistic is familiar and sobering to many of us who spend our lives immersed, in some way, shape or form, in studying or treating mental illness: approximately one in five adults in the U.S. suffers from a mental disorder on an annual basis. This represents approximately 46.6 million adults annually in the U.S. Among them, almost a quarter have what is considered a serious mental illness - or approximately 10 million adults annually.

What’s striking, however, is that 57% of U.S. adults with any mental disorder do not receive treatment annually. And, even among those with serious mental disorders, over one-third - 35% - do not receive treatment on an annual basis. Naturally, you might wonder: who are these individuals who do not receive treatment and why on Earth don’t they? 

Dr. Ron Kessler, the McNeil Family Professor of Health Care Policy at Harvard Medical School, an epidemiologist who has made a career of studying the social determinants of health, tackled this question head-on in several studies that used face-to-face interviews as part of large mental health surveys. When Kessler and his colleagues analyzed their data, what they found was that among people with a mental illness who decided not to seek treatment, the main reasons cited were:

  1. Low-perceived need for treatment
  2. Desire to handle problems on one’s own even when there was a perceived need for treatment

This was the case particularly for people with mild to moderate mental distress and less so for people with more serious mental distress. A conclusion of these papers was that improving population mental health literacy may be even more important than reducing external (or structural) barriers to treatment.

Given these findings, one could ask: is it feasible for smartphones to help bridge the gap between the burden of mental illness in the population at large and the segment of this population that actually receives treatment? In other words, can smartphones help screen for mental disorders and then provide treatments directly or refer individuals to treatment? Additionally, could smartphones educate individuals about mental disorders and mental well-being in the way that the FitBit app does for physical health? For the uninitiated, FitBit presents small snippets of information about physical health that motivate individuals to make incremental changes in their lives in order to achieve better health outcomes.

We decided that these are important and highly topical questions to contemplate. Consider, for example, the disconcerting finding that the incidence of mental illness in the U.S. appears to be increasing, particularly among adolescents. According to a report from the Centers for Disease Control and Prevention, the suicide rate among individuals in the U.S. between the ages of 10 and 24 rose by an alarming 56% from 2007 to 2017. Indeed, suicide is now the second leading cause of death in the U.S. in this age group. Suicide rates climbed nearly 30 percent from 1999 to 2016 among Americans of all ages and ethnic backgrounds.

What’s more, there exists a popular notion that technology use - especially smartphone use - and social media engagement are contributing to these trends. It’s a hypothesis that is bandied about regularly among lay and professional audiences. Correlational data across multiple studies do link screen time and social media use with poorer mental health, but data supporting a directly causal role of technology in harming mental health are scant.

Unfortunately, it’s the case that designing methodologically sound studies to evaluate causality has proven to be difficult. Nevertheless, given the findings that have been published and the aforementioned notions percolating through the national dialogue, we wondered if there weren’t a moral imperative for technology companies to help mitigate any impending mental health crisis by investing in potential technology-enabled solutions.

Smartphones, by virtue of their ubiquity, can directly liaise with users who may have mental disorders by offering technology-enhanced solutions that provide digital empathy, for example. Such a solution might look like an artificial intelligence-enabled chatbot that checks in regularly on the self-reported mental state of the user as described in the example at the beginning of this article.

Indeed, several such AI chatbots exist currently: Woebot, the brainchild of Dr. Alison Darcy, a clinical research psychologist at Stanford and founder and CEO of Woebot Labs, a San Francisco startup; Tess by X2AI, another startup located in San Francisco; and Youper, yet another San Francisco startup in the AI mental health chatbot space.

Smartphones are also capable of passively collecting many types of data from user devices to make a composite assessment of their mental state, a concept that has been coined “digital phenotyping.” A digital mental health technology startup that is currently employing digital phenotyping as part of its care delivery platform for individuals with more serious mental disorders is Mountain View-based Mindstrong Health. With these two categories of information - user self-report and passively acquired information about the individual - smartphones are indeed well-positioned to monitor users for underlying mental disorders and, importantly, to offer screenings, education, and referral services at a population level in a way that has not been possible before.
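
To give a flavor of what digital phenotyping might look like under the hood, the sketch below blends a self-reported mood score with a few passively collected signals into a single composite number. Every feature, weight and normalization constant here is invented for illustration; none of this reflects Mindstrong’s actual models or any validated instrument.

```python
# Toy illustration of "digital phenotyping": blending self-report with
# passively collected signals into one composite score. All feature
# names, weights and thresholds are invented for illustration only and
# do not reflect any real product's model.

from dataclasses import dataclass


@dataclass
class DailySignals:
    self_reported_mood: float   # 0 (worst) .. 1 (best), from the check-in
    hours_slept: float          # from the phone or a wearable
    screen_time_hours: float    # passively logged
    outgoing_messages: int      # a crude proxy for social engagement
    places_visited: int         # a crude proxy for mobility


def composite_score(s: DailySignals) -> float:
    """Weighted blend of normalized signals; higher means 'doing better'."""
    sleep = min(s.hours_slept / 8.0, 1.0)
    sociability = min(s.outgoing_messages / 20.0, 1.0)
    mobility = min(s.places_visited / 5.0, 1.0)
    screen_penalty = min(s.screen_time_hours / 10.0, 1.0)
    return (0.4 * s.self_reported_mood
            + 0.2 * sleep
            + 0.2 * sociability
            + 0.2 * mobility
            - 0.1 * screen_penalty)


if __name__ == "__main__":
    today = DailySignals(self_reported_mood=0.3, hours_slept=5.5,
                         screen_time_hours=7.0, outgoing_messages=3,
                         places_visited=1)
    print(f"Composite well-being score: {composite_score(today):.2f}")
```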

Other smartphone-based interventions such as applications that provide Cognitive Behavioral Therapy (CBT) have been on the market for a few years now. These apps deliver a kind of didactic psychotherapy that is among the best-studied and most efficacious psychotherapies for a number of mental disorders. In turn, they empower individuals to learn how to manage their own stress, anxiety and depression.

In practice, for moderate to severe conditions, CBT is often paired with pharmaceutical interventions during treatment by mental health clinicians. However, the efficacy of these apps (as opposed to therapist-delivered CBT itself) has been periodically questioned by the scientific community, which has called for better studies comparing these apps with conventionally administered CBT.

Additionally, the integration of these apps into patient treatment within the larger healthcare system could stand to be improved. Beyond building a better evidence base through well-designed, rigorous clinical trials and achieving better integration, there are other concerns, which we identify here, that presumably discourage smartphone operating system leaders such as Apple and Google from offering routine mental health monitoring on their platforms.

Chief among these concerns, we believe, is the potentially calamitous intersection of the stigma associated with mental disorders and the never quite certain privacy of user data. Despite numerous efforts to fight negative bias, the unfortunate reality is that societal stigma still tarnishes the reputation of those living with mental disorders. While people on the whole seem to be more willing to accept the biological underpinnings of mental disorders and to seek treatment relative to times past, a multitude of people still have a negative view of those suffering from mental disorders.

For centuries on end, as Foucault elaborated upon in his doctoral dissertation, human beings have been particularly and harshly judgmental towards those who struggle with mental disorders, especially those illnesses that present with severely non-normative thoughts and behaviors. 

Here, we believe that well-regarded celebrities who live with mental disorders could make ground-breaking and lasting changes to the way society at large views these conditions - and those afflicted by them - by partnering with technology companies on advertising campaigns that discuss how they use technology-enabled solutions to monitor and treat their own mental disorders.

Imagine, for example, if Mariah Carey or Catherine Zeta-Jones did commercials for Apple Watch or HealthKit discussing how they track their moods and sleep cycles to monitor their bipolar disorders. Or Bradley Cooper talked about how he uses his Google Pixel to keep track of cravings and urges to use alcohol or other drugs of abuse. Or Serena Williams discussed how she used her smartphone to voice journal and receive CBT to stave off depression. It’s easy to imagine that this kind of bold campaign could have a trickle-down effect on the conversations of everyday people.

The concern over data privacy, as an extension of the concern about judgment and stigma, is another towering consideration in the willingness of people to use their smartphones to monitor and treat underlying mental disorders and more routine mental distress. Recent scandals involving Facebook and Cambridge Analytica are painful reminders of large-scale misuse of personally identifiable data that were detailed enough to create psychographic profiles of individuals. Earlier this year, a Florida judge’s decision to grant a warrant allowing a law enforcement agency to override customers’ opt-out agreements and search one of the world’s largest online DNA databases offers another cautionary tale about the willingness of individuals to trust technology companies with sensitive health information.

Privacy has become an increasingly critical concern in recent years due to the exponential rate of technology integration in healthcare. Smartphones can track substantially larger amounts of information about a patient than ever before and store vast quantities of sensitive information in easy-to-access repositories such as cloud-based services, making the data vulnerable to security breaches.

Yet another potential impediment to the routine use of smartphones to natively monitor mental health status relates to the putative liability to which technology companies may unwittingly expose themselves. Consider the potential liability that a tech company could face if, for example, a user answered question #9 (the question that inquires about suicidal ideation) affirmatively on the PHQ-9 screening instrument and then went on to die by suicide. This particular quandary also touches upon another matter: response bias in self-reported data. Response bias occurs when patients respond to questions in ways that do not coincide with their actual intent. Thus, response bias can be viewed as a threat to the validity and accuracy of data about an individual’s mood at any given point.

Finally, for smartphone OS leaders, operating systems are not the only technology component in the spotlight. Device manufacturers seem to prefer linking their health kits to hardware that they - or third parties - also manufacture (e.g. wearables such as watches). However, at the time of writing this article, it’s the case, unfortunately, that we still don't have any consumer-grade wearable devices that accurately gauge mental states.

Advancements in non-invasive technologies such as electroencephalography (EEG) are just now beginning to be used to augment the diagnosis of some mental disorders or, in any case, to provide a biosignature, as it were, for various mental disorders. Dr. Amit Etkin, a researcher in the Department of Psychiatry at Stanford University, and colleagues recently published the results of an EEG-based study that examined the likelihood of an individual with Major Depressive Disorder responding to a selective serotonin reuptake inhibitor (SSRI) antidepressant, sertraline (brand name Zoloft), as part of the Establishing Moderators and Biosignatures of Antidepressant Response in Clinical Care (EMBARC) study. In this study, a machine learning algorithm trained on resting-state EEG data was able to predict with impressive accuracy which patients with depression would respond to sertraline versus placebo, thus establishing a sertraline-response-specific EEG signature in Major Depressive Disorder.
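
For a sense of the machinery behind a study like that, the sketch below trains a simple classifier to separate hypothetical “responders” from “non-responders” using tabular, EEG-derived features. It runs scikit-learn on synthetic data; the feature count, labels and model choice are placeholders of our own and in no way reproduce the EMBARC analysis.

```python
# Schematic of EEG-based treatment-response prediction: extract
# per-patient features from resting-state EEG, then train a classifier
# to separate responders from non-responders. Synthetic data and
# placeholder features only; this does not reproduce the EMBARC study.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

n_patients, n_features = 200, 16   # e.g., band power per electrode cluster
X = rng.normal(size=(n_patients, n_features))

# Synthetic ground truth: responders differ slightly on a few features.
true_weights = np.zeros(n_features)
true_weights[:4] = 1.0
y = ((X @ true_weights + rng.normal(scale=1.0, size=n_patients)) > 0).astype(int)

model = make_pipeline(StandardScaler(), LogisticRegression())
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"Cross-validated AUC on synthetic data: {scores.mean():.2f}")
```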

Dr. Etkin recently established Alto Neuroscience to develop brain biomarker assays that can be employed in clinical practice to tailor treatment. On the subject of EEG, several companies have developed low-cost, wearable EEG devices, but these consumer-grade devices have failed to generate the type of data resolution that is required to diagnose or monitor mood effectively. 

In 2019, with wearable devices already a $25 billion industry, companies are hesitant to jump into a market where the consumer-grade hardware ecosystem has not evolved enough to keep up with developments in the software realm.

Today, a good number of startups sit squarely in the mental health-tech space (in spite of the aforementioned concerns), hoping to sell the technology-enabled solutions for mental health that they have developed, including smartphone-based solutions such as AI chatbots for mental health and digital cognitive behavioral therapy (CBT) apps.

For-profit services such as Psychology Today, ZocDoc, Yelp and Google Reviews aim to help treatment-seeking individuals make informed decisions about treatment providers. Other for-profit ventures aim to directly connect treatment-seeking individuals with vetted providers. In the Bay Area, digital mental health technology startups catering to the employee mental health space include Lyra Health, Ginger.io and Modern Health.

Yet other digital mental health startups here have taken a different approach, providing direct-to-consumer solutions. These startups include Two Chairs, Meru Health, Reflect and Brightside Health, each of which provides its customers access to its vetted mental health treatment providers.

While these efforts are to be applauded for their specific contributions to improving the mental health delivery apparatus in this country, what’s clear from the mile-high perspective, unfortunately, is that society at large would be much better served if larger players such as Google and Apple - companies that command a disproportionately large share of smartphone user attention - were actively involved in architecting portals of entry for native mental health monitoring, evaluation and treatment. As we spend more time on their platforms, their power grows. Within that power lies the opportunity to transform the way we diagnose and treat mental disorders.

Contributorship: Vinith Johnson and Girish Subramanyan drafted this article.

Conflicting interests: Girish Subramanyan, co-author, is an employee of Indigo Health.

Funding: This work was not funded.