Three Challenges to Secure AI Systems in the Context of AI Regulations

Saved in:
Detailed bibliography
Title: Three Challenges to Secure AI Systems in the Context of AI Regulations
Authors: Ronan Hamon, Henrik Junklewitz, Josep Soler Garrido, Ignacio Sanchez
Source: IEEE Access, Vol. 12, pp. 61022-61035 (2024)
Publisher information: Institute of Electrical and Electronics Engineers (IEEE), 2024.
Year of publication: 2024
Subjects: cybersecurity, lifecycle management, regulation, adversarial machine learning, artificial intelligence, conformity assessment
Description: This article examines the interplay between artificial intelligence (AI) and cybersecurity in light of forthcoming regulatory requirements on the security of AI systems, focusing on the robustness of high-risk AI systems against cyberattacks in the context of the European Union's AI Act. The paper identifies and analyses three challenges to achieving compliance of AI systems with the cybersecurity requirement: accounting for the diversity and complexity of AI technologies, assessing AI-specific risks, and developing secure-by-design AI systems. The article contributes an overview of AI cybersecurity practices and identifies gaps in current approaches to security conformity assessment for AI systems. The analysis highlights the unique vulnerabilities present in AI systems and the absence of established cybersecurity practices tailored to them, and emphasises the need for continuous alignment between legal requirements and technological capabilities, along with further research and development to address these challenges. It concludes that comprehensive cybersecurity practices must evolve to accommodate the unique aspects of AI, with a collaborative effort from various sectors to ensure effective implementation and standardisation.
Document type: Article
ISSN: 2169-3536
DOI: 10.1109/access.2024.3391021
Access URL: https://doaj.org/article/b2594c30a8e74ffe8dddca552c779131
Rights: CC BY
Accession number: edsair.doi.dedup.....8dbd4b877569cd49d48a58dcf607a97e
Database: OpenAIRE