Automated Directed Fairness Testing


Detailed Bibliography
Published in: 2018 33rd IEEE/ACM International Conference on Automated Software Engineering (ASE), pp. 98-108
Main Authors: Udeshi, Sakshi; Arora, Pryanshu; Chattopadhyay, Sudipta
Format: Conference Paper
Language: English
Published: ACM, 01.09.2018
ISSN: 2643-1572
Online Access: Get full text
Abstract Fairness is a critical trait in decision making. As machine-learning models are increasingly used in sensitive application domains (e.g. education and employment) for decision making, it is crucial that the decisions computed by such models are free of unintended bias. But how can we automatically validate the fairness of arbitrary machine-learning models? For a given machine-learning model and a set of sensitive input parameters, our Aequitas approach automatically discovers discriminatory inputs that highlight fairness violations. At the core of Aequitas are three novel strategies that employ probabilistic search over the input space with the objective of uncovering fairness violations. Our Aequitas approach leverages the inherent robustness property of common machine-learning models to design and implement scalable test-generation methodologies. An appealing feature of our generated test inputs is that they can be systematically added to the training set of the underlying model to improve its fairness. To this end, we design a fully automated module that is guaranteed to improve the fairness of the model. We implemented Aequitas and evaluated it on six state-of-the-art classifiers. Our subjects also include a classifier that was designed with fairness in mind. We show that Aequitas effectively generates inputs to uncover fairness violations in all the subject classifiers and systematically improves the fairness of the respective models using the generated test inputs. In our evaluation, Aequitas generates up to 70% discriminatory inputs (w.r.t. the total number of inputs generated) and leverages these inputs to improve fairness by up to 94%.
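The abstract's core idea, testing whether an input is discriminatory by flipping only its sensitive attribute, and finding such inputs by randomly sampling the input space, can be sketched as below. This is a minimal illustration, not the paper's implementation: the function names, the toy classifier, and the attribute ranges are all invented for the example, and Aequitas's local perturbation phase and retraining module are omitted.

```python
import random

def is_discriminatory(model, x, sensitive_idx, sensitive_values):
    """An input x exposes a fairness violation if changing ONLY its
    sensitive attribute changes the model's decision."""
    original = model(x)
    for v in sensitive_values:
        if v == x[sensitive_idx]:
            continue
        x_alt = list(x)
        x_alt[sensitive_idx] = v
        if model(x_alt) != original:
            return True
    return False

def global_search(model, ranges, sensitive_idx, trials=500, seed=0):
    """Global phase of a probabilistic search: sample inputs uniformly
    at random and collect the discriminatory ones. (Aequitas would then
    refine these with a local perturbation phase, omitted here.)"""
    rng = random.Random(seed)
    lo, hi = ranges[sensitive_idx]
    sensitive_values = range(lo, hi + 1)
    found = []
    for _ in range(trials):
        x = [rng.randint(lo_i, hi_i) for lo_i, hi_i in ranges]
        if is_discriminatory(model, x, sensitive_idx, sensitive_values):
            found.append(x)
    return found

# Toy "unfair" classifier: the decision depends on attribute 0
# (the sensitive one) whenever attribute 1 is small.
def toy_model(x):
    return int(x[1] > 5 or x[0] == 1)

ranges = [(0, 1), (0, 10)]  # attribute 0 is the sensitive parameter
hits = global_search(toy_model, ranges, sensitive_idx=0)
print(len(hits) > 0)  # discriminatory inputs exist for this model
```

The discriminatory inputs collected this way are exactly the test cases the abstract proposes feeding back into the training set to improve fairness.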
Author Udeshi, Sakshi (Singapore Univ. of Tech. and Design, Singapore)
Arora, Pryanshu (BITS Pilani, India)
Chattopadhyay, Sudipta (Singapore Univ. of Tech. and Design, Singapore)
ContentType Conference Proceeding
DOI 10.1145/3238147.3238165
Discipline Computer Science
EISBN 145035937X
9781450359375
EISSN 2643-1572
EndPage 108
ExternalDocumentID 9000070
Genre orig-research
ISICitedReferencesCount 133
IsPeerReviewed false
IsScholarly true
Language English
PageCount 11
PublicationCentury 2000
PublicationDate 2018-Sept.
PublicationDateYYYYMMDD 2018-09-01
PublicationDecade 2010
PublicationTitle 2018 33rd IEEE/ACM International Conference on Automated Software Engineering (ASE)
PublicationTitleAbbrev ASE
PublicationYear 2018
Publisher ACM
StartPage 98
SubjectTerms Computational modeling
Decision making
Directed Testing
Machine learning
Robustness
Software
Software engineering
Software Fairness
Test pattern generators
Testing
Training
Videos
Title Automated Directed Fairness Testing
URI https://ieeexplore.ieee.org/document/9000070