Higher performance for women than men in MRI-based Alzheimer’s disease detection

Bibliographic Details
Published in: Alzheimer's Research & Therapy, Vol. 15, No. 1, Art. no. 84 (13 pages)
Main Authors: Klingenberg, Malte; Stark, Didem; Eitel, Fabian; Budding, Céline; Habes, Mohamad; Ritter, Kerstin
Format: Journal Article
Language: English
Published: London: BioMed Central (Springer Nature B.V. / BMC), 20 April 2023
ISSN: 1758-9193
Description
Summary: Introduction: Although machine learning classifiers have frequently been used to detect Alzheimer's disease (AD) from structural brain MRI data, potential bias with respect to sex and age has not yet been addressed. Here, we examine a state-of-the-art AD classifier for potential sex and age bias even in the case of balanced training data.

Methods: Based on an age- and sex-balanced cohort of 432 subjects (306 healthy controls, 126 subjects with AD) extracted from the ADNI database, we trained a convolutional neural network to detect AD in MRI brain scans and performed ten different random training-validation-test splits to increase the robustness of the results. Classifier decisions for single subjects were explained using layer-wise relevance propagation.

Results: The classifier performed significantly better for women (balanced accuracy 87.58 ± 1.14%) than for men (79.05 ± 1.27%). No significant differences were found in clinical AD scores, ruling out a disparity in disease severity as a cause of the performance difference. Analysis of the explanations revealed a larger variance in regional brain areas for male subjects compared to female subjects.

Discussion: The identified sex differences cannot be attributed to an imbalanced training dataset and therefore point to the importance of examining and reporting classifier performance across population subgroups to increase transparency and algorithmic fairness. Collecting more data, especially among underrepresented subgroups, and balancing the dataset are important but do not always guarantee a fair outcome.
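The subgroup comparison in the abstract rests on balanced accuracy, the mean of sensitivity (recall on AD cases) and specificity (recall on healthy controls), computed separately for women and men. A minimal sketch of that per-subgroup evaluation is shown below; the toy labels and the helper `balanced_accuracy` are illustrative assumptions, not the study's ADNI data or code.

```python
def balanced_accuracy(y_true, y_pred):
    """Mean of sensitivity (recall on AD = 1) and specificity (recall on HC = 0)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return (sensitivity + specificity) / 2

# Toy records: (true label, predicted label, sex) -- purely illustrative.
records = [(1, 1, "F"), (0, 0, "F"), (1, 0, "F"), (0, 0, "F"),
           (1, 1, "M"), (0, 1, "M"), (1, 0, "M"), (0, 0, "M")]

# Evaluate the same classifier output separately per subgroup.
for sex in ("F", "M"):
    y_true = [t for t, _, s in records if s == sex]
    y_pred = [p for _, p, s in records if s == sex]
    print(sex, round(balanced_accuracy(y_true, y_pred), 3))
```

Because balanced accuracy weights both classes equally, it remains meaningful here despite the 306/126 control-to-AD imbalance; reporting it per subgroup is what surfaces the performance gap the paper describes.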
DOI: 10.1186/s13195-023-01225-6