Explainable AI-Based Intrusion Detection Systems for Industry 5.0 and Adversarial XAI: A Systematic Review.

Detailed bibliography
Title: Explainable AI-Based Intrusion Detection Systems for Industry 5.0 and Adversarial XAI: A Systematic Review.
Authors: Khan, Naseem, Ahmad, Kashif, Al Tamimi, Aref, Alani, Mohammed M., Bermak, Amine, Khalil, Issa
Source: Information; Dec 2025, Vol. 16, Issue 12, p1036, 45p
Subjects: INTRUSION detection systems (Computer security), ARTIFICIAL intelligence, MACHINE learning, SCHOLARLY method, DISCLOSURE, INTERNET security, CYBER physical systems
Abstract: Industry 5.0 represents a paradigm shift toward human–AI collaboration in manufacturing, incorporating unprecedented numbers of robots, Internet of Things (IoT) devices, Augmented/Virtual Reality (AR/VR) systems, and smart devices. This extensive interconnectivity introduces significant cybersecurity vulnerabilities. While AI has proven effective for cybersecurity applications, including intrusion detection, malware identification, and phishing prevention, cybersecurity professionals have shown reluctance toward adopting black-box machine learning solutions due to their opacity. This hesitation has accelerated the development of explainable artificial intelligence (XAI) techniques that provide transparency into AI decision-making processes. This systematic review examines XAI-based intrusion detection systems (IDSs) for Industry 5.0 environments. We analyze how explainability impacts cybersecurity through the critical lens of adversarial XAI (Adv-XIDS) approaches. Our comprehensive analysis of 135 studies investigates XAI's influence on both advanced deep learning and traditional shallow architectures for intrusion detection. We identify key challenges, opportunities, and research directions for implementing trustworthy XAI-based cybersecurity solutions in high-stakes Industry 5.0 applications. This rigorous analysis establishes a foundational framework to guide future research in this rapidly evolving domain. [ABSTRACT FROM AUTHOR]
Database: Complementary Index
ISSN: 2078-2489
DOI: 10.3390/info16121036