All’s fair in health and AI: Building fairness into health AI models from the start
When AI entered real-world settings in the 2010s, a hard truth emerged: AI is not neutral. It can reproduce and even amplify bias for marginalized and underserved groups. But bias doesn’t live only in the data. It also arises from how the data were collected, the assumptions built into model design, and the social and institutional contexts in which the model is deployed.
Autocorrect mispredicting a word? Usually harmless. Biased AI in healthcare? High stakes. Life-altering stakes. If we want to prevent harm, fairness cannot be an afterthought. It must sit at the core, guiding every design choice, deployment decision, and interaction with the real world.
One concrete way to address algorithmic bias is to start with the source: ensure that training data are inclusive and representative. Dr. Laura Sikstrom, a 2022 AMS Fellow in Compassion and AI, a Scientist at the Centre for Addiction and Mental Health (CAMH), and the Co-Lead of the Predictive Care Lab at the Krembil Centre for Neuroinformatics, exemplifies this approach. Her research asks how AI can be designed and used to promote fairness and health equity. She examines the social and ethical conditions that support fair use—transparency in how systems work, impartiality in how data are collected and applied, and inclusion of the people most affected by these technologies. The ultimate goal is to design AI systems that acknowledge the complexity of bias while embedding equity, reflexivity, and compassion into healthcare technologies.
Health systems rarely collect robust sociodemographic data, which makes it difficult to evaluate AI models for accuracy or for the risks they pose to particular patient groups. Many of these models are also developed without marginalized groups in mind, as developers may lack training in health equity or inclusive design.
“We’re working hard to bring software engineers and computer scientists into the health system,” said Dr. Sikstrom. “By connecting technical expertise with the realities of clinical care, we can design AI tools that better reflect what patients and providers actually need.”
To address bias in AI models, Dr. Sikstrom took a close look at the training data and co-designed a ‘Fairness Dashboard’ with data scientists, clinicians, healthcare professionals, and individuals with lived and living experience of mental health disorders. Using the wealth of self-reported sociodemographic data collected from patients at CAMH since 2016, her team built a prototype to help developers embed Fair-AI practices.
“The idea was to help AI developers gain the competencies to understand the health system, so we could bridge that ‘Grand Canyon’ gap between what data scientists need to build fair models and what patients need and want from their health system,” she explained.
Dr. Sikstrom identified four key user requirements to support Fair-AI through the dashboard. First, users needed training before accessing the dashboard, along with clear guidance on how to use, analyze, and interpret sociodemographic data; for data involving First Nations patients, completing training in the First Nations principles of ownership, control, access, and possession (OCAP) could serve as an additional safeguard. Second, the dashboard needed pop-up explanations to contextualize sociodemographic variables and help users navigate the data effectively. Third, the dashboard revealed high rates of missing data, particularly for marginalized groups, underscoring that without high-quality, comprehensive datasets, data scientists cannot even evaluate their models for possible fairness-related harms. Finally, a feedback mechanism and clear responsibility for maintaining and updating the dashboard were essential to its sustainability.
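To make the data-quality point concrete, here is a minimal, hypothetical sketch of the kind of check a fairness dashboard might surface: quantifying how much self-reported sociodemographic data is missing before any subgroup evaluation is attempted. The column names and records below are invented for illustration and are not drawn from Dr. Sikstrom’s dashboard or CAMH data.

```python
# Hypothetical sketch (not the CAMH Fairness Dashboard): before computing any
# subgroup fairness metrics, report how much sociodemographic data is missing.
import pandas as pd

# Toy stand-in for self-reported sociodemographic fields in a health record extract.
records = pd.DataFrame({
    "patient_id": [1, 2, 3, 4, 5, 6],
    "gender":     ["woman", None, "man", "non-binary", None, "woman"],
    "ethnicity":  ["South Asian", None, None, "Black", "White", None],
    "language":   ["English", "French", None, "English", None, "English"],
})

def missingness_report(df: pd.DataFrame, id_col: str = "patient_id") -> pd.Series:
    """Return the share of missing values for each sociodemographic field."""
    fields = [c for c in df.columns if c != id_col]
    return df[fields].isna().mean().sort_values(ascending=False)

print(missingness_report(records))
# If, say, ethnicity is missing for half the records, subgroup performance
# metrics computed on that field rest on too little data to be trusted --
# exactly the gap a dashboard can flag for developers before deployment.
```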
Through this work, Dr. Sikstrom is helping to build more equitable AI models and algorithms in healthcare, translating into fairer systems for patients. She emphasizes that it is not enough for health technologies to simply avoid causing harm—they must actively redress harms, particularly for patient groups historically disadvantaged by algorithmic bias.
Since her AMS Fellowship, Dr. Sikstrom has been awarded the 2024 Rising Star Award from the Canadian Institutes of Health Research (CIHR). With $700,000 in funding, she is reimagining how AI models should be developed and evaluated in mental health contexts. She also serves as the Fairness Advisor on a $5 million grant for the Brain Health Data Challenge platform. Additionally, Dr. Sikstrom is leading several projects with various institutions on enhancing health equity competencies for data scientists and engineers, exploring human-AI teaming, and co-developing an Indigenous data sovereignty framework for AI applications in mental health.
“By designing data ecosystems with patients, not just for them, we create technologies that understand people as more than data points, and that’s how compassion takes root in AI and health,” said Dr. Sikstrom.
Dr. Laura Sikstrom is a Scientist with the Krembil Centre for Neuroinformatics and the Office of Education at the Centre for Addiction and Mental Health and an Assistant Professor (status-only) in the Department of Anthropology at the University of Toronto. She co-leads the Predictive Care Lab, where she explores how data is used in clinical applications of digital technologies and AI with a focus on promoting compassionate and equitable care. Dr. Sikstrom has received multiple awards, including being the only Canadian recipient of Google’s Award for Inclusion Research for her work on bias and risk prediction algorithms.
Read Dr. Laura Sikstrom’s Publications:
Sikstrom L, Maslej MM, Hui K, Findlay Z, Buchman DZ, Hill SL. Conceptualising fairness: three pillars for medical algorithms and health equity. BMJ Health Care Inform. 2022 Jan;29(1):e100459. doi: 10.1136/bmjhci-2021-100459.
Szkudlarek P, Kassam I, Kloiber S, Maslej M, Hill S, Sikstrom L. Co-Designing an Electronic Health Record Derived Digital Dashboard to Support Fair-AI Applications in Mental Health. Stud Health Technol Inform. 2025 Feb 18;322:12-16. doi: 10.3233/SHTI250005.
