
Prompting compassion: Mitigating stigmatizing language related to mental illness with generative AI
Marta Maslej
PhD
Award: 2025 Compassion & Artificial Intelligence Grant
- Generative AI
- Mental health
- Stigma
Using stigmatizing language to describe patients with mental illness can cause harm. Stigmas are shared perceptions that certain individuals are less deserving of compassion and care, and they can shape how patients are treated by healthcare providers. Stigmas can be reinforced through health record documentation when patients are discredited, blamed, or assigned negative characteristics. The integration of generative AI documentation tools into health record systems presents an opportunity to incorporate safeguards that mitigate the impact of stigmatizing language in clinical notes. Marta and her team’s proof-of-concept study examines whether generative AI can be prompted to describe patients with severe mental illness using inclusive, non-stigmatizing, and patient-centered language. If implemented, this generative AI capability has the potential to ‘prompt compassion’ between providers and patients, since using inclusive and respectful language is one of the most direct ways to challenge stigma surrounding mental illness.