Detailed bibliography
| Title: |
AI-Powered Vulnerability Detection and Patch Management in Cybersecurity: A Systematic Review of Techniques, Challenges, and Emerging Trends. |
| Authors: |
Malkawi, Malek; Alhajj, Reda
| Source: |
Machine Learning & Knowledge Extraction; Jan 2026, Vol. 8, Issue 1, p19, 27p
| Subjects: |
Artificial intelligence; Deep learning; Machine learning; Software upgrades; Technological innovations; Internet security; Penetration testing (computer security)
| Abstract: |
With the increasing complexity of cyber threats and the inefficiency of traditional vulnerability management, artificial intelligence has been increasingly integrated into cybersecurity. This review provides a comprehensive evaluation of AI-powered strategies, including machine learning, deep learning, and large language models, for identifying cybersecurity vulnerabilities and supporting automated patching. We synthesized and appraised 29 peer-reviewed studies published between 2019 and 2024. Our results indicate that AI methods substantially improve detection precision, scalability, and response speed compared with human-driven and rule-based approaches. We detail the transition from conventional ML classification to deep learning for source code analysis and dynamic network detection. Moreover, we identify advanced mitigation strategies such as AI-powered prioritization, neuro-symbolic AI, deep reinforcement learning, and the generative abilities of LLMs for automated patch suggestions. To strengthen methodological rigor, this review followed a registered protocol and PRISMA-based study selection, and it reports reproducible database searches (exact queries and search dates) and transparent screening decisions. We additionally assessed the quality and risk of bias of the included studies using criteria tailored to AI-driven vulnerability research (dataset transparency, leakage control, evaluation rigor, reproducibility, and external validation), and we used these quality results to contextualize the synthesis. Our critical evaluation indicates that this area remains at an early stage and is characterized by significant gaps: the absence of standard benchmarks, the limited generalizability of models across domains, and the lack of adversarial testing remain obstacles to real-world adoption of these methods.
Furthermore, the research suggests that the black-box nature of most models poses a serious problem for trust; explainable AI (XAI) is therefore particularly pertinent in this context. This paper serves as a thorough guide to the evolution of AI-driven vulnerability management and indicates that next-generation AI systems should be not only more accurate but also transparent, robust, and generalizable. [ABSTRACT FROM AUTHOR] |
|
Copyright of Machine Learning & Knowledge Extraction is the property of MDPI and its content may not be copied or emailed to multiple sites without the copyright holder's express written permission. Additionally, content may not be used with any artificial intelligence tools or machine learning technologies. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.) |
| Database: |
Complementary Index |