Two Waves of Algorithmic Accountability for Mental Health Apps

With Frank Pasquale & Emmanuelle Bernheim

Are apps the new normal for accessing mental health care? Are mental health apps regulated at all, or have we found ourselves in a dystopian Wild West of unregulated “care” provided by AI? Can mental health apps be used to increase access to care, or do they create further disparities between vulnerable populations? Such questions were taken up at an online event last month with Professor Frank Pasquale of Brooklyn Law School. Professor Pasquale, an expert on the law of AI algorithms and machine learning, spoke to the future of AI and healthcare as reflected in increasingly popular mental health apps. He was joined by University of Ottawa Professor Emmanuelle Bernheim.

Sarah Lazin

February 13, 2022

It is undeniable that technology will continue to play a growing role in medicine. Professor Pasquale argues that this role is more complicated when dealing with mental, rather than physical, health. We have already seen an explosion of mental health apps geared towards users seeking help with anxiety, depression, addiction, and even specific phobias. Indeed, if you were to search for mental health support in the App Store, you would find hundreds – if not thousands – of apps, the vast majority of which are neither regulated nor vetted. The “appification” of mental health thus sits at the intersection of law and technology, healthcare financing, and bioethics – and raises significant concerns as to whether these apps will be accountable to pluralistic, cultural conceptions of mental health and proper care, or whether they will serve primarily as tools of cost minimization.

The potential shortcomings of mental health apps – and of overreliance on them – are numerous. A fundamental duty in healthcare is the duty to report known risks to safety, whether to the patient or to others. Mental health apps, however, overwhelmingly fail to perform basic therapeutic duties, such as reporting the abuse of minors. This failure may arise from programmers’ lack of awareness of (or even apathy towards) reporting duties, or from ambiguity about whether what these apps provide constitutes “care” at all. Autonomous apps that are not connected to mental health professionals may owe no duty to report, raising the question of how “care” should be defined (and whether “care” necessitates a human provider). Digital services that remain somewhat moored to healthcare professionals also demonstrate legal laxity. Services such as BetterHelp link mental health professionals (mostly based in the United States) to patients across the world; yet medical and legal standards can vary between jurisdictions, limiting the duties of professionals and allowing only some of their actions to traverse borders.

Another oft-cited concern is data use. As Professor Pasquale writes in his recent book, New Laws of Robotics, the “[d]ata-sharing policies of such apps are often incomplete or one sided, risking breaches of confidentiality that a therapist is duty-bound to keep.” Unregulated apps may not abide by notice and consent principles and may not restrict how user data is collected or stored.

Professor Pasquale was joined by University of Ottawa law professor Emmanuelle Bernheim, who spoke to the intersection between digital health technologies and the rights of their users. Pointing to the power dynamics inherent in medicine, she explained that approaches to mental health (and particularly the regulation of AI use in this realm) will disproportionately affect people living in poverty and those who are racialized – that is, those who already face inequitable access to mental health care (particularly care that is culturally or contextually appropriate). Of particular concern to Professor Bernheim is how information acquired by these largely unregulated apps will be used, especially data gathered from marginalized populations.

App-driven vulnerability is another important consideration. It could take many forms, such as apps making diagnoses without medical oversight or offering potentially dangerous suggestions to users. A more subtle example is a payer covering only those apps whose approach to a given problem serves its own interests. In his book, Professor Pasquale raises the example of an employer favouring apps that urge unhappy employees to practice mindfulness and find joy in their current (unhappy) situations, over apps that might counsel assertiveness and suggest the employee either ask for a raise or quit.

A further fundamental limitation is whether AI-driven mental health apps are capable of helping (or even reaching) a broad spectrum of users. A plethora of questions arise here: is data from minority populations included in data sets? How is that data characterized – and is it created or interpreted by members of the community it describes? Is it framed in a way that might reinforce harmful stereotypes (such as that of the “hysterical woman”)? It goes without saying that failing to accurately represent diverse populations in healthcare technologies is problematic, and could lead to missed or inaccurate diagnoses or inappropriate courses of treatment. Similarly, as one attendee noted, digital access to care requires infrastructure that is not universally available – smartphones or computers, secure and reliable internet access, and technological literacy. Disparities in both representation in and access to digital health services will inevitably reduce the beneficial potential of mental health apps and may even create additional barriers to care.

Despite such concerns, mental health apps are often touted as “better than nothing” – a stepping stone towards reducing barriers to mental health care by providing preliminary supports to patients who cannot otherwise access care. Professor Pasquale drew particular attention to the dangers of the “better than nothing” argument while acknowledging the precarious position many mental health app users find themselves in. As University of Ottawa Faculty of Law professor Colleen Flood noted during the Q&A portion of the event, there is a grave maldistribution of psychiatrists in Canada – for example, there are 62.7 psychiatrists per 100,000 people in the Greater Toronto Area, but just 7.2 psychiatrists per 100,000 residents in rural areas. While some psychiatrists are severely overburdened, others may see only a handful of patients each year. On its face, then, a mental health app could reduce some of the barriers to care (such as extensive waitlists or the costs of travelling to see a specialist). The risks, however, are twofold. At the most obvious level, some apps may give poor (or potentially troubling) suggestions – misinformation disguised as medical advice. Equally troubling is the risk that “good enough” or “better than nothing” measures will become all that is available. After all, why should governments pour more time, money, or resources into addressing a problem when a “good enough” solution already exists?

Further, as expressed by Professor Bernheim, many patients already feel they receive insufficient time and support from mental health professionals. An overreliance on AI and mental health apps – she worries – could negatively impact the mental health of these patients by relegating them to the sidelines.

In this vein, the binary responses offered by many technological approaches to healthcare lose crucial elements of human interaction. In particular, Professor Pasquale argues that articulacy (as experienced through talk therapy) is the very point of much mental health care. However, he writes, “[t]he rise of therapy apps risks further marginalizing talk therapy in favour of more behaviouristic approaches, which try to simply end, mute, overwhelm or contradict negative thoughts, rather than exploring their context and ultimate origins.” In this way, narrowly formulated AI-driven mental health apps “both benefit from and reinforce a utilitarian approach to patients, which frames their true problems simply as impediments to productivity.”

Thus, in the face of the further appification of healthcare, we must pursue a humanistic, rather than behaviouristic, approach to mental health, and promote technologies that support (not replace) human care and interaction. Indeed, “[t]he best structural safeguard is to assure that most apps are developed as intelligence augmentation for responsible professionals rather than as AI replacing them.”

Ultimately, emerging technologies bear significant potential for increasing access to mental health care – provided we do not sacrifice our humanity for cost or convenience.  

A recording of the event can be found here.

This event was co-sponsored by the University of Ottawa Centre for Law, Technology and Society, co-organized by Bruyère Research Institute, and funded by AMS Healthcare.
