No Bias Left Behind: Fairness Testing for Deep Recommender Systems Targeting General Disadvantaged Groups


Detailed bibliography
Published in: Proceedings of the ACM on Software Engineering, Volume 2, Issue ISSTA, pp. 1607-1629
Main authors: Wu, Zhuo; Wang, Zan; Luo, Chuan; Du, Xiaoning; Chen, Junjie
Format: Journal Article
Language: English
Published: New York, NY, USA: ACM, 22 June 2025
ISSN: 2994-970X
Description
Summary: Recommender systems play an increasingly important role in modern society, powering digital platforms that suggest a wide array of content, from news and music to job listings, and influencing many aspects of daily life. To improve personalization, these systems often use demographic information. However, ensuring fairness in recommendation quality across demographic groups is challenging, especially since recommender systems are susceptible to the "rich get richer" Matthew effect due to user feedback loops. With the adoption of deep learning algorithms, uncovering fairness issues has become even more complex. Researchers have started to explore methods for identifying the most disadvantaged user groups using optimization algorithms. Despite this, suboptimal disadvantaged groups remain underexplored, which leaves the risk of bias amplification due to the Matthew effect unaddressed. In this paper, we argue for the necessity of identifying both the most disadvantaged and suboptimal disadvantaged groups. We introduce FairAS, an adaptive sampling based approach, to achieve this goal. Through evaluations on four deep recommender systems and six datasets, FairAS demonstrates an average improvement of 19.2% in identifying the most disadvantaged groups over the state-of-the-art fairness testing approach (FairRec), while reducing testing time by 43.07%. Additionally, the extra suboptimal disadvantaged groups identified by FairAS help improve system fairness, achieving an average improvement of 70.27% over FairRec across all subjects.
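To illustrate the general idea behind adaptive sampling for fairness testing, the sketch below shows a minimal, hypothetical version of the search the abstract describes: repeatedly evaluate recommendation quality on sampled users from candidate demographic groups, concentrate the evaluation budget on groups that currently look worst-served, and report both the most disadvantaged group and the runners-up (the "suboptimal disadvantaged" groups). This is not the authors' FairAS implementation; all group names, the quality values, and the explore/exploit split are invented for the example.

```python
import random

random.seed(0)

# Hypothetical per-group recommendation quality (e.g., an NDCG-like score);
# faked here so the sketch is self-contained.
TRUE_QUALITY = {"g0": 0.82, "g1": 0.55, "g2": 0.60, "g3": 0.78}

def noisy_quality(group):
    """Stand-in for evaluating the recommender on one sampled user of a group."""
    return TRUE_QUALITY[group] + random.gauss(0, 0.05)

def adaptive_sampling(groups, budget=400, explore=0.2):
    """Spend more evaluation budget on groups whose estimated quality is lowest."""
    counts = {g: 0 for g in groups}
    sums = {g: 0.0 for g in groups}
    for g in groups:                       # seed every group with one sample
        sums[g] += noisy_quality(g)
        counts[g] += 1
    for _ in range(budget - len(groups)):
        if random.random() < explore:      # exploration: keep checking all groups
            g = random.choice(groups)
        else:                              # exploitation: sample the currently worst group
            g = min(groups, key=lambda x: sums[x] / counts[x])
        sums[g] += noisy_quality(g)
        counts[g] += 1
    means = {g: sums[g] / counts[g] for g in groups}
    ranked = sorted(means, key=means.get)  # most disadvantaged first
    return ranked, means

ranked, means = adaptive_sampling(list(TRUE_QUALITY))
print("most disadvantaged group:", ranked[0])
print("suboptimal disadvantaged groups:", ranked[1:3])
```

Keeping the full ranking, rather than only the minimum, is what makes the suboptimal disadvantaged groups available for downstream fairness repair, which is the gap the paper highlights.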
DOI:10.1145/3728948