Herd Accountability of Privacy-Preserving Algorithms: A Stackelberg Game Approach

Bibliographic Details
Published in: IEEE Transactions on Information Forensics and Security, Vol. 20, pp. 2237–2251
Main Authors: Yang, Ya-Ting; Zhang, Tao; Zhu, Quanyan
Format: Journal Article
Language: English
Published: IEEE, 2025
Subjects:
ISSN: 1556-6013, 1556-6021
Description
Summary: AI-driven algorithmic systems are increasingly adopted across various sectors, yet their lack of transparency can raise accountability concerns about claimed privacy protection measures. While machine-based audits offer one avenue for addressing these issues, they are often costly and time-consuming. Herd audit, on the other hand, offers a promising alternative by leveraging collective intelligence from end-users. However, epistemic disparity among auditors, that is, varying levels of domain expertise and access to relevant knowledge (captured by the rational inattention model), may weaken audit assurance. An effective herd audit must establish a credible accountability threat for algorithm developers, incentivizing them not to breach user trust. In this work, our objective is to develop a systematic framework that explores the impact of herd audits on algorithm developers through the lens of a Stackelberg game. Our analysis reveals the importance of easy access to information and the appropriate design of rewards, as both increase the auditors' assurance in the audit process. In this context, herd audit serves as a deterrent to negligent behavior. Therefore, by enhancing herd accountability, herd audit contributes to responsible algorithm development, fostering trust between users and algorithms.
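The deterrence logic the abstract describes can be illustrated with a toy Stackelberg game solved by backward induction. All payoffs, parameter names, and functional forms below are illustrative assumptions, not taken from the paper: the developer (leader) decides whether to deviate from its privacy claim, and a representative herd auditor (follower) chooses a costly attention level, a crude stand-in for the rational-inattention information cost.

```python
# Toy Stackelberg audit game (illustrative sketch; parameters are assumptions).
# Leader: algorithm developer chooses to comply or deviate.
# Follower: auditor chooses attention a in [0, 1]; more attention raises the
# chance of detecting a deviation but incurs a quadratic attention cost.
import numpy as np

REWARD = 2.0      # auditor's reward for detecting a violation (design lever)
ATTN_COST = 1.0   # auditor's marginal cost of attention
FINE = 5.0        # penalty on the developer if a violation is detected
GAIN = 1.5        # developer's gain from deviating (cutting privacy corners)

attention_grid = np.linspace(0.0, 1.0, 101)

def auditor_payoff(a, deviate):
    # Detection probability equals attention a only when the developer deviates.
    detect = a if deviate else 0.0
    return REWARD * detect - ATTN_COST * a**2

def best_response(deviate):
    # Follower best-responds to the leader's anticipated action.
    payoffs = [auditor_payoff(a, deviate) for a in attention_grid]
    return attention_grid[int(np.argmax(payoffs))]

def developer_payoff(deviate):
    # Leader anticipates the follower's best response (backward induction).
    a = best_response(deviate)
    return GAIN - FINE * a if deviate else 0.0

leader_deviates = max([False, True], key=developer_payoff)
print("developer deviates:", leader_deviates)            # → False
print("attention if deviation expected:", best_response(True))  # → 1.0
```

With these illustrative numbers, the anticipated auditor attention makes deviation unprofitable (1.5 − 5.0 × 1.0 < 0), so the developer complies in equilibrium, mirroring the paper's point that well-designed rewards and low attention costs turn herd audit into a credible accountability threat.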
DOI: 10.1109/TIFS.2025.3540357