Higher performance for women than men in MRI-based Alzheimer’s disease detection

Detailed bibliography
Published in: Alzheimer's Research & Therapy, Vol. 15, No. 1, p. 84 (13 pp.)
Main authors: Klingenberg, Malte; Stark, Didem; Eitel, Fabian; Budding, Céline; Habes, Mohamad; Ritter, Kerstin
Format: Journal Article
Language: English
Publication details: London: BioMed Central, 20.04.2023 (Springer Nature B.V.; BMC)
ISSN: 1758-9193
Description
Abstract:
Introduction: Although machine learning classifiers have been frequently used to detect Alzheimer's disease (AD) based on structural brain MRI data, potential bias with respect to sex and age has not yet been addressed. Here, we examine a state-of-the-art AD classifier for potential sex and age bias even in the case of balanced training data.
Methods: Based on an age- and sex-balanced cohort of 432 subjects (306 healthy controls, 126 subjects with AD) extracted from the ADNI database, we trained a convolutional neural network to detect AD in MRI brain scans and performed ten different random training-validation-test splits to increase the robustness of the results. Classifier decisions for single subjects were explained using layer-wise relevance propagation.
Results: The classifier performed significantly better for women (balanced accuracy 87.58 ± 1.14%) than for men (79.05 ± 1.27%). No significant differences were found in clinical AD scores, ruling out a disparity in disease severity as a cause for the performance difference. Analysis of the explanations revealed a larger variance in regional brain areas for male subjects compared to female subjects.
Discussion: The identified sex differences cannot be attributed to an imbalanced training dataset and therefore point to the importance of examining and reporting classifier performance across population subgroups to increase transparency and algorithmic fairness. Collecting more data, especially among underrepresented subgroups, and balancing the dataset are important but do not always guarantee a fair outcome.
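The metric reported above is balanced accuracy (the mean of per-class recall) computed separately for female and male test subjects and aggregated as mean ± standard deviation over ten random training-validation-test splits. The following sketch illustrates how such a subgroup evaluation could be set up; it is not the authors' code, and all variable names and toy numbers are hypothetical.

```python
# Minimal sketch (not the published code): score a binary AD-vs-control
# classifier with balanced accuracy per sex subgroup, then aggregate the
# scores as mean ± standard deviation over repeated random splits.
import numpy as np
from sklearn.metrics import balanced_accuracy_score

def subgroup_balanced_accuracy(y_true, y_pred, sex):
    """Balanced accuracy (mean of per-class recall) for each sex subgroup."""
    return {
        group: balanced_accuracy_score(y_true[sex == group], y_pred[sex == group])
        for group in np.unique(sex)
    }

# Hypothetical per-split results: labels are 0 = healthy control, 1 = AD.
rng = np.random.default_rng(0)
per_split_scores = {"F": [], "M": []}
for split in range(10):                      # ten random train-validation-test splits
    y_true = rng.integers(0, 2, size=80)     # stand-in for true test labels
    y_pred = np.where(rng.random(80) < 0.85, y_true, 1 - y_true)  # stand-in predictions
    sex = rng.choice(["F", "M"], size=80)    # stand-in sex codes for test subjects
    for group, score in subgroup_balanced_accuracy(y_true, y_pred, sex).items():
        per_split_scores[group].append(score)

for group, scores in per_split_scores.items():
    print(f"{group}: {np.mean(scores):.2%} ± {np.std(scores):.2%}")
```

Balanced accuracy is used here because it is insensitive to the control/AD class imbalance within each subgroup, which makes the female and male scores directly comparable.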
DOI: 10.1186/s13195-023-01225-6