Adversarial Attacks on Machine Learning-Aided Visualizations

Bibliographic details
Title: Adversarial Attacks on Machine Learning-Aided Visualizations
Authors: Fujiwara, Takanori; Kucher, Kostiantyn; Wang, Junpeng; Martins, Rafael M.; Kerren, Andreas; Ynnerman, Anders
Source: Journal of Visualization. 28:133-151 (eLLIIT – The Linköping – Lund Initiative on IT and Mobile Communication)
Keywords: ML4VIS, AI4VIS, Visualization, Cybersecurity, Neural networks, Parametric dimensionality reduction, Chart recommendation, Information and software visualization
Description: Research in ML4VIS investigates how to use machine learning (ML) techniques to generate visualizations, and the field is growing rapidly with high societal impact. However, as with any computational pipeline that employs ML processes, ML4VIS approaches are susceptible to a range of ML-specific adversarial attacks. These attacks can manipulate visualization generation, deceiving analysts and impairing their judgment. Because a synthesis from both visualization and ML perspectives is lacking, this security aspect is largely overlooked in the current ML4VIS literature. To bridge this gap, we investigate the potential vulnerabilities of ML-aided visualizations to adversarial attacks through a holistic lens that combines visualization and ML perspectives. We first identify the attack surface (i.e., the attack entry points) that is unique to ML-aided visualizations. We then exemplify five different adversarial attacks. These examples highlight the range of possible attacks when considering the attack surface and multiple adversary capabilities. Our results show that adversaries can mount a variety of attacks, such as creating arbitrary and deceptive visualizations, by systematically identifying the input attributes that are influential in ML inferences. Based on our observations of the attack surface characteristics and the attack examples, we underline the importance of comprehensive studies of security issues and defense mechanisms as an urgent call to the ML4VIS community.
File description: electronic
Access URL: https://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-132853
https://doi.org/10.1007/s12650-024-01029-2
Database: SwePub
ISSN: 1343-8875
DOI: 10.1007/s12650-024-01029-2
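
Illustration (not from the paper): the abstract describes attacks that systematically identify input attributes influential in ML inferences and perturb them to produce arbitrary, deceptive visualizations. Below is a minimal, hypothetical sketch of that idea in PyTorch, assuming a differentiable parametric dimensionality-reduction encoder; the toy MLP, input dimension, perturbation budget, and target position are all illustrative assumptions, not the paper's actual attack implementation.

# Hypothetical sketch (not the paper's method): gradient-based adversarial
# perturbation against a parametric dimensionality-reduction (DR) encoder.
# A toy MLP stands in for a trained DR model f: R^d -> R^2.
import torch
import torch.nn as nn

torch.manual_seed(0)

d = 16  # number of input attributes (illustrative)

# Toy stand-in for a trained parametric DR encoder.
encoder = nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, 2))
encoder.eval()

x = torch.randn(1, d)                 # victim data point
target = torch.tensor([[5.0, 5.0]])   # attacker-chosen 2-D position

# Step 1: identify influential input attributes via input gradients.
x_grad = x.clone().requires_grad_(True)
loss = ((encoder(x_grad) - target) ** 2).sum()
loss.backward()
saliency = x_grad.grad.abs().squeeze()
top_k = saliency.topk(4).indices      # the 4 most influential attributes
mask = torch.zeros(d)
mask[top_k] = 1.0

# Step 2: projected gradient descent on only those attributes, within a
# small L_inf budget eps so the change stays inconspicuous.
eps, step, iters = 0.5, 0.05, 100
x_adv = x.clone()
for _ in range(iters):
    x_adv.requires_grad_(True)
    loss = ((encoder(x_adv) - target) ** 2).sum()
    grad, = torch.autograd.grad(loss, x_adv)
    with torch.no_grad():
        x_adv = x_adv - step * grad.sign() * mask   # masked gradient step
        x_adv = x + (x_adv - x).clamp(-eps, eps)    # project onto budget
    x_adv = x_adv.detach()

print("original projection:   ", encoder(x).detach().numpy())
print("adversarial projection:", encoder(x_adv).detach().numpy())

The saliency mask restricts the perturbation to a few influential attributes, mirroring the abstract's point that adversaries can steer ML inferences, and hence the resulting visualization, by targeting only the inputs that matter most to the model.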