Racial bias in AI-mediated psychiatric diagnosis and treatment: a qualitative comparison of four large language models
| Published in: | npj Digital Medicine, Volume 8, Issue 1, p. 332 - 6 |
|---|---|
| Main Authors: | |
| Format: | Journal Article |
| Language: | English |
| Published: | England: Nature Publishing Group (Nature Portfolio), 04.06.2025 |
| Subjects: | |
| ISSN: | 2398-6352 |
| Summary: | Artificial intelligence (AI), particularly large language models (LLMs), is increasingly integrated into mental health care. This study examined racial bias in psychiatric diagnosis and treatment across four leading LLMs: Claude, ChatGPT, Gemini, and NewMes-15 (a local, medical-focused LLaMA 3 variant). Ten psychiatric patient cases representing five diagnoses were presented to these models under three conditions: race-neutral, race-implied, and race-explicit (i.e., explicitly stating that the patient is African American). The models' diagnostic recommendations and treatment plans were qualitatively evaluated by a clinical psychologist and a social psychologist, who scored 120 outputs for bias by comparing responses generated under race-neutral, race-implied, and race-explicit conditions. Results indicated that LLMs often proposed inferior treatments when patient race was explicitly or implicitly indicated, though diagnostic decisions demonstrated minimal bias. NewMes-15 exhibited the highest degree of racial bias, while Gemini showed the least. These findings underscore critical concerns about the potential for AI to perpetuate racial disparities in mental healthcare, emphasizing the necessity of rigorous bias assessment in algorithmic medical decision support systems. |
|---|---|
| DOI: | 10.1038/s41746-025-01746-4 |
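The summary describes a 10-case × 3-condition × 4-model design, yielding the 120 outputs that were scored for bias. The sketch below is only a minimal illustration of how such a prompt grid might be enumerated; the vignette text, prompt wording, model list handling, and the `query_model` stub are hypothetical and are not the study's actual materials.

```python
# Illustrative sketch, not the authors' code: enumerate a 10 x 3 x 4 grid of
# prompts (cases x race conditions x models) as described in the summary.
from itertools import product

CASES = [f"case_{i:02d}" for i in range(1, 11)]        # 10 psychiatric vignettes (placeholders)
CONDITIONS = {
    "race_neutral": "",                                 # no racial information added
    "race_implied": "The patient's background implies African American race. ",
    "race_explicit": "The patient is African American. ",
}
MODELS = ["claude", "chatgpt", "gemini", "newmes-15"]   # the four LLMs compared

def build_prompt(case_text: str, condition_prefix: str) -> str:
    """Compose one diagnostic/treatment prompt from a vignette and a race condition."""
    return (
        f"{condition_prefix}{case_text}\n"
        "Provide a likely psychiatric diagnosis and a recommended treatment plan."
    )

def query_model(model_name: str, prompt: str) -> str:
    """Placeholder for a call to the named model; no real API is invoked here."""
    return f"<response from {model_name}>"

# Every case under every condition, sent to every model: 10 * 3 * 4 = 120 outputs.
outputs = []
for case, (cond_name, prefix), model in product(CASES, CONDITIONS.items(), MODELS):
    prompt = build_prompt(f"Vignette text for {case}.", prefix)
    outputs.append({
        "case": case,
        "condition": cond_name,
        "model": model,
        "response": query_model(model, prompt),
    })

print(len(outputs))  # 120 responses, which raters could then score for bias
```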