Herd Accountability of Privacy-Preserving Algorithms: A Stackelberg Game Approach

Published in: IEEE Transactions on Information Forensics and Security, Volume 20, pp. 2237–2251
Main authors: Yang, Ya-Ting; Zhang, Tao; Zhu, Quanyan
Format: Journal Article
Language: English
Published: IEEE, 2025
ISSN: 1556-6013, 1556-6021
Abstract AI-driven algorithmic systems are increasingly adopted across various sectors, yet the lack of transparency can raise accountability concerns about claimed privacy protection measures. While machine-based audits offer one avenue for addressing these issues, they are often costly and time-consuming. Herd audit, on the other hand, offers a promising alternative by leveraging collective intelligence from end-users. However, the presence of epistemic disparity among auditors, resulting in varying levels of domain expertise and access to relevant knowledge, captured by the rational inattention model, may impact audit assurance. An effective herd audit must establish a credible accountability threat for algorithm developers, incentivizing them not to breach user trust. In this work, our objective is to develop a systematic framework that explores the impact of herd audits on algorithm developers through the lens of the Stackelberg game. Our analysis reveals the importance of easy access to information and the appropriate design of rewards, as they increase the auditors' assurance in the audit process. In this context, herd audit serves as a deterrent to negligent behavior. Therefore, by enhancing herd accountability, herd audit contributes to responsible algorithm development, fostering trust between users and algorithms.
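The deterrence argument in the abstract (developers comply only when herd audits pose a credible accountability threat) follows standard Stackelberg backward induction. Below is a minimal toy sketch of that logic; it is not the paper's actual model, and every function, parameter name, and functional form is an illustrative assumption.

```python
import math

# Toy Stackelberg sketch (illustrative only, not the paper's model):
# the algorithm developer (leader) moves first, anticipating how a
# herd of auditors (followers) will respond.

def auditor_detection_prob(reward, info_cost):
    """Followers' response: audit effort rises with the reward and falls
    with the cost of accessing information (a crude stand-in for the
    rational-inattention effect described in the abstract)."""
    effort = max(0.0, reward - info_cost)
    return 1.0 - math.exp(-effort)  # diminishing returns in effort

def developer_best_response(breach_gain, fine, reward, info_cost):
    """Leader's decision by backward induction: breach user trust only
    if the gain exceeds the expected penalty under herd audit."""
    p = auditor_detection_prob(reward, info_cost)
    return "breach" if breach_gain > p * fine else "comply"

# Cheap information and a meaningful reward make the threat credible:
print(developer_best_response(breach_gain=5.0, fine=10.0,
                              reward=2.0, info_cost=0.5))   # comply
# Costly information erodes audit assurance, and deterrence fails:
print(developer_best_response(breach_gain=5.0, fine=10.0,
                              reward=2.0, info_cost=1.9))   # breach
```

Raising the reward or lowering the information cost increases the detection probability, matching the abstract's claim that easy access to information and well-designed rewards increase auditors' assurance and deter negligent behavior.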
Authors:
– Yang, Ya-Ting (ORCID: 0000-0002-9158-1722; yy4348@nyu.edu), Department of Electrical and Computer Engineering, New York University, Brooklyn, NY, USA
– Zhang, Tao (ORCID: 0000-0002-1454-4645; tz636@nyu.edu), Department of Electrical and Computer Engineering, New York University, Brooklyn, NY, USA
– Zhu, Quanyan (ORCID: 0000-0002-0008-2953; qz494@nyu.edu), Department of Electrical and Computer Engineering, New York University, Brooklyn, NY, USA
CODEN: ITIFA6
DOI: 10.1109/TIFS.2025.3540357
Peer reviewed: Yes
Scholarly: Yes
License:
https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
Page count: 15
Publication title: IEEE Transactions on Information Forensics and Security (abbrev. TIFS)
Publication year: 2025
Publisher: IEEE
Subject terms: accountability; Accuracy; Algorithm audit; Costs; Differential privacy; Games; Government; Machine learning algorithms; Privacy; Protection; rational inattention; Stackelberg game; Training; Uniform resource locators
URI: https://ieeexplore.ieee.org/document/10879078
Volume: 20