Astraea: Grammar-Based Fairness Testing

Detailed Bibliography
Published in: IEEE Transactions on Software Engineering, Volume 48, Issue 12, pp. 5188-5211
Main Authors: Soremekun, Ezekiel; Udeshi, Sakshi; Chattopadhyay, Sudipta
Format: Journal Article
Language: English
Publication Details: New York: IEEE, 01.12.2022
IEEE Computer Society
ISSN: 0098-5589, 1939-3520
Description
Summary: Software often produces biased outputs. In particular, machine learning (ML) based software is known to produce erroneous predictions when processing discriminatory inputs. Such unfair program behavior can be caused by societal bias. In the last few years, Amazon, Microsoft, and Google have provided software services that produce unfair outputs, mostly due to societal bias (e.g., gender or race). In such events, developers are saddled with the task of conducting fairness testing. Fairness testing is challenging; developers are tasked with generating discriminatory inputs that reveal and explain biases. We propose a grammar-based fairness testing approach (called Astraea) which leverages context-free grammars to generate discriminatory inputs that reveal fairness violations in software systems. Using probabilistic grammars, Astraea also provides fault diagnosis by isolating the cause of observed software bias. Astraea's diagnoses facilitate the improvement of ML fairness. Astraea was evaluated on 18 software systems that provide three major natural language processing (NLP) services. In our evaluation, Astraea generated fairness violations at a rate of about 18%. Astraea generated over 573K discriminatory test cases and found over 102K fairness violations. Furthermore, Astraea improves software fairness by about 76% via model retraining, on average.
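To make the idea concrete, the following is a minimal sketch of grammar-based fairness testing under assumptions of our own: a toy context-free grammar, a hypothetical protected-attribute swap table, and a black-box `predict` function standing in for an NLP service. The abstract does not disclose Astraea's actual grammars or interfaces, so every name here (`GRAMMAR`, `SWAP`, `predict`, `isolate_bias`) is illustrative, not the paper's API.

```python
import random
import re
from collections import Counter

# Toy context-free grammar (hypothetical; not the paper's grammar).
GRAMMAR = {
    "<sentence>": ["<name> is a <occupation>.", "<name> feels <emotion> today."],
    "<name>": ["John", "Ahmed", "Mary", "Fatima"],
    "<occupation>": ["doctor", "nurse", "engineer"],
    "<emotion>": ["happy", "angry"],
}
NONTERMINAL = re.compile(r"<[^<>]+>")

# Hypothetical protected-attribute swap (gendered names).
SWAP = {"John": "Mary", "Ahmed": "Fatima", "Mary": "John", "Fatima": "Ahmed"}

def expand(symbol, rng):
    """Derive a concrete sentence by rewriting the leftmost nonterminal."""
    text = rng.choice(GRAMMAR[symbol])
    while (m := NONTERMINAL.search(text)):
        text = text[:m.start()] + expand(m.group(), rng) + text[m.end():]
    return text

def mutant(sentence):
    """Swap the protected term to get an otherwise-identical input."""
    for old, new in SWAP.items():
        if old in sentence:
            return sentence.replace(old, new)
    return sentence

def test_fairness(predict, trials=1000, seed=0):
    """Flag pairs where the output changes although only the protected
    term differs -- a fairness violation in this metamorphic sense."""
    rng = random.Random(seed)
    violations = []
    for _ in range(trials):
        s = expand("<sentence>", rng)
        t = mutant(s)
        if s != t and predict(s) != predict(t):
            violations.append((s, t))
    return violations

def isolate_bias(violations):
    """Rank tokens by frequency in violating inputs -- a crude stand-in
    for the probabilistic-grammar diagnosis described in the abstract."""
    counts = Counter()
    for s, _ in violations:
        counts.update(tok.strip(".") for tok in s.split())
    return counts.most_common(5)
```

A caller might pass, say, a sentiment classifier's label function as `predict`; ranking tokens that recur in violating pairs then hints at which grammar productions trigger the bias, which is the intuition behind using probabilistic grammars for fault diagnosis.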
DOI: 10.1109/TSE.2022.3141758