Prompting compassion: Mitigating stigmatizing language related to mental illness with generative AI
Using stigmatizing language to describe patients with mental illness can cause harm. Stigmas are shared perceptions that certain individuals are less deserving of compassion and care, which can impact the way patients are treated by healthcare providers. Stigmas can be reinforced through health record documentation, when patients are discredited, blamed, or assigned negative characteristics. The…
Exploring the role of technology-enabled connection in the care of young adults living with type 1 diabetes and mental health challenges
Young adults living with type 1 diabetes (T1D) make over 300 decisions each day related to food, activity, and insulin – equal to about one decision every five minutes of the day. As a result of these daily care needs, many young adults living with T1D also live with mental health challenges, including diabetes distress.…
Artificial Intelligence and Human Rights Respecting Mental Healthcare: What Role for Law?
Acting Director, University of Ottawa Centre for Health Law, Policy, and Ethics
Canada urgently needs better access to mental health services. Artificial intelligence (AI) has been embraced by some as a tool for addressing accessibility problems; for example, AI-powered chatbots offer therapy, and AI algorithms harness social media data to detect suicidal ideation. While AI holds…