Bias analysis of AI models for undergraduate student admissions

Bibliographic Details
Published in: Neural Computing & Applications, Vol. 37, No. 12, pp. 7785-7795
Main Authors: Van Busum, Kelly; Fang, Shiaofen
Format: Journal Article
Language: English
Published: London: Springer London, 01.04.2025
Springer Nature B.V.
ISSN:0941-0643, 1433-3058
Description
Summary: Bias detection and mitigation is an active area of research in machine learning. This work extends previous research done by the authors Van Busum and Fang (Proceedings of the 38th ACM/SIGAPP Symposium on Applied Computing, 2023) to provide a rigorous and more complete analysis of the bias found in AI predictive models. Admissions data spanning six years were used to create an AI model to determine whether a given student would be directly admitted into the School of Science under various scenarios at a large urban research university. During this time, submission of standardized test scores as part of a student's application became optional, which led to interesting questions about the impact of standardized test scores on admission decisions. We developed and analyzed AI models to understand which variables are important in admissions decisions, and how the decision to exclude test scores affects the demographics of the students who are admitted. We then evaluated the predictive models to detect and analyze biases these models may carry with respect to three variables chosen to represent sensitive populations: gender, race, and whether a student was the first in their family to attend college. We also extended our analysis to show that the biases detected were persistent. Finally, we included several fairness metrics in our analysis and discussed the uses and limitations of these metrics.
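The abstract refers to group-fairness metrics without naming them; as a minimal illustrative sketch (not the paper's actual method or data), one widely used metric, the demographic parity difference, compares admit rates across groups defined by a binary sensitive attribute. The function name and toy data below are hypothetical.

```python
# Sketch of the demographic parity difference: the gap in positive-prediction
# (admit) rates between two groups of a binary sensitive attribute.

def demographic_parity_difference(y_pred, sensitive):
    """Absolute difference in admit rates between groups 0 and 1.

    y_pred    -- list of 0/1 model predictions (1 = admitted)
    sensitive -- list of 0/1 group labels, aligned with y_pred
    """
    rates = {}
    for group in (0, 1):
        preds = [p for p, s in zip(y_pred, sensitive) if s == group]
        rates[group] = sum(preds) / len(preds)  # admit rate for this group
    return abs(rates[0] - rates[1])

# Toy example: 8 applicants, grouped by a hypothetical binary attribute
# (e.g., first-generation status). Group 0 admit rate 0.75, group 1 rate 0.25.
preds = [1, 1, 0, 1, 0, 0, 1, 0]
group = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(preds, group))  # 0.5
```

A value of 0 would indicate equal admit rates across the two groups; larger values indicate a larger disparity under this particular metric.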
DOI:10.1007/s00521-024-10762-6