Herd Accountability of Privacy-Preserving Algorithms: A Stackelberg Game Approach

Bibliographic Details
Published in: IEEE Transactions on Information Forensics and Security, Vol. 20, pp. 2237-2251
Main Authors: Yang, Ya-Ting; Zhang, Tao; Zhu, Quanyan
Format: Journal Article
Language: English
Published: IEEE, 2025
ISSN:1556-6013, 1556-6021
Online Access: Full Text
Description
Summary: AI-driven algorithmic systems are increasingly adopted across various sectors, yet the lack of transparency can raise accountability concerns about claimed privacy protection measures. While machine-based audits offer one avenue for addressing these issues, they are often costly and time-consuming. Herd audit, on the other hand, offers a promising alternative by leveraging collective intelligence from end-users. However, epistemic disparity among auditors, i.e., varying levels of domain expertise and access to relevant knowledge as captured by the rational inattention model, may impact audit assurance. An effective herd audit must establish a credible accountability threat for algorithm developers, incentivizing them not to breach user trust. In this work, our objective is to develop a systematic framework that explores the impact of herd audits on algorithm developers through the lens of the Stackelberg game. Our analysis reveals the importance of easy access to information and the appropriate design of rewards, as they increase the auditors' assurance in the audit process. In this context, herd audit serves as a deterrent to negligent behavior. Therefore, by enhancing herd accountability, herd audit contributes to responsible algorithm development, fostering trust between users and algorithms.
DOI:10.1109/TIFS.2025.3540357