FairSense: Long-Term Fairness Analysis of ML-Enabled Systems

Published in: Proceedings of the International Conference on Software Engineering, pp. 782-794
Main authors: She, Yining; Biswas, Sumon; Kästner, Christian; Kang, Eunsuk
Format: Conference paper
Language: English
Published: IEEE, 26 April 2025
Description
Abstract: Algorithmic fairness of machine learning (ML) models has raised significant concern in recent years. Many testing, verification, and bias mitigation techniques have been proposed to identify and reduce fairness issues in ML models. The existing methods are model-centric and designed to detect fairness issues under static settings. However, many ML-enabled systems operate in a dynamic environment where the predictive decisions made by the system impact the environment, which in turn affects future decision-making. Such a self-reinforcing feedback loop can cause fairness violations in the long term, even if the immediate outcomes are fair. In this paper, we propose a simulation-based framework called FairSense to detect and analyze long-term unfairness in ML-enabled systems. Given a fairness requirement, FairSense performs Monte-Carlo simulation to enumerate evolution traces for each system configuration. Then, FairSense performs sensitivity analysis on the space of possible configurations to understand the impact of design options and environmental factors on the long-term fairness of the system. We demonstrate FairSense's potential utility through three real-world case studies: loan lending, opioid risk scoring, and predictive policing.
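
To make the workflow described in the abstract concrete, the following is a minimal Python sketch of the kind of feedback-loop simulation it refers to, using the loan-lending case as an example. This is not FairSense's actual implementation: the group names, score dynamics, and parameters below are all hypothetical assumptions chosen only to illustrate Monte-Carlo trace enumeration and a crude sensitivity sweep over one configuration option.

import random
import statistics

def simulate_trace(threshold, steps=20, seed=0):
    # One Monte-Carlo evolution trace for a fixed system configuration
    # (here, a single approval threshold). At each step, the lender's
    # decisions feed back into the two groups' score distributions.
    rng = random.Random(seed)
    means = {"A": 0.55, "B": 0.45}  # hypothetical initial group score means
    gaps = []
    for _ in range(steps):
        rates = {}
        for group in ("A", "B"):
            mu = means[group]
            scores = [min(1.0, max(0.0, rng.gauss(mu, 0.1))) for _ in range(500)]
            rates[group] = sum(s >= threshold for s in scores) / len(scores)
            # Illustrative feedback loop: high approval rates raise future
            # scores (repayment builds credit), low rates depress them.
            means[group] = min(1.0, max(0.0, mu + 0.02 * (rates[group] - 0.5)))
        gaps.append(abs(rates["A"] - rates["B"]))  # demographic-parity gap
    return gaps

def long_term_gap(threshold, runs=30):
    # Average the final-step fairness gap over independent Monte-Carlo runs.
    return statistics.mean(simulate_trace(threshold, seed=r)[-1] for r in range(runs))

# Crude sensitivity analysis: sweep one design option (the threshold)
# and compare its impact on long-term fairness.
for t in (0.4, 0.5, 0.6):
    print(f"threshold={t:.1f}  long-term parity gap={long_term_gap(t):.3f}")

A sweep like this shows how a configuration that looks fair in a single round can still drift toward large group disparities as the feedback loop compounds, which is the phenomenon the paper's sensitivity analysis is designed to surface.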
ISSN: 1558-1225
DOI: 10.1109/ICSE55347.2025.00159