Herd Accountability of Privacy-Preserving Algorithms: A Stackelberg Game Approach

Detailed bibliography
Published in: IEEE Transactions on Information Forensics and Security, Vol. 20, pp. 2237-2251
Main authors: Yang, Ya-Ting; Zhang, Tao; Zhu, Quanyan
Format: Journal Article
Language: English
Published: IEEE, 2025
ISSN: 1556-6013, 1556-6021
Description
Summary: AI-driven algorithmic systems are increasingly adopted across various sectors, yet their lack of transparency can raise accountability concerns about claimed privacy protection measures. While machine-based audits offer one avenue for addressing these issues, they are often costly and time-consuming. Herd audit, on the other hand, offers a promising alternative by leveraging collective intelligence from end-users. However, epistemic disparity among auditors, i.e., their varying levels of domain expertise and access to relevant knowledge, captured here by the rational inattention model, may impact audit assurance. An effective herd audit must establish a credible accountability threat for algorithm developers, incentivizing them not to breach user trust. In this work, our objective is to develop a systematic framework that explores the impact of herd audits on algorithm developers through the lens of the Stackelberg game. Our analysis reveals the importance of easy access to information and the appropriate design of rewards, as both increase the auditors' assurance in the audit process. In this context, herd audit serves as a deterrent to negligent behavior. Therefore, by enhancing herd accountability, herd audit contributes to responsible algorithm development, fostering trust between users and algorithms.
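
To make the deterrence logic of the abstract concrete, the Python sketch below plays out a heavily simplified two-stage Stackelberg game: a herd auditor (follower) chooses how much costly attention to pay, and the developer (leader), anticipating the resulting detection probability, decides whether breaching user trust is profitable. The quadratic attention cost, the parameter names, and the payoff values are illustrative assumptions standing in for the paper's rational-inattention formulation, not its actual model.

    # Hypothetical sketch; functional forms and parameters are illustrative
    # assumptions, not the model from Yang, Zhang, and Zhu (2025).

    def auditor_best_response(reward: float, info_cost: float) -> float:
        """Follower stage: choose the attention level a in [0, 1] that
        maximizes reward * a - info_cost * a**2, i.e., expected reward
        minus a convex attention cost standing in for the information
        cost of the rational-inattention model."""
        return min(1.0, reward / (2.0 * info_cost))

    def developer_breaches(gain: float, penalty: float, detect_prob: float) -> bool:
        """Leader stage: the developer breaches only if the extra gain
        exceeds the expected penalty under the anticipated detection
        probability."""
        return gain - detect_prob * penalty > 0.0

    if __name__ == "__main__":
        gain, penalty = 2.0, 10.0  # gain from breaching; penalty if caught
        for reward, info_cost in [(0.2, 1.0), (1.0, 1.0), (1.0, 0.5)]:
            a = auditor_best_response(reward, info_cost)
            print(f"reward={reward}, info cost={info_cost}: "
                  f"detection prob={a:.2f}, breach profitable? "
                  f"{developer_breaches(gain, penalty, a)}")

With these illustrative numbers, raising the auditor's reward or lowering the information cost pushes the equilibrium detection probability high enough that breaching becomes unprofitable, which is the deterrence effect the abstract describes.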
DOI: 10.1109/TIFS.2025.3540357