Bibliographic Details
| Title: |
DMVL4AVD: a deep multi-view learning model for automated vulnerability detection. |
| Authors: |
Du, Xiaozhi, Zhou, Yanrong, Du, Hongyuan |
| Source: |
Neural Computing & Applications; Mar 2025, Vol. 37, Issue 8, p5873-5889, 17p |
| Subject Terms: |
GRAPH neural networks, ARTIFICIAL intelligence, FEATURE extraction, SOURCE code, IMAGE processing, DEEP learning |
| Abstract: |
Automated vulnerability detection is crucial to protecting software systems. However, state-of-the-art approaches mainly focus on a single view of the source code, which often leads to incomplete code representation and low detection accuracy. To solve these problems, this paper proposes a novel automatic vulnerability detection model, DMVL4AVD, based on deep multi-view learning that represents source code from three distinct views: code sequences, code property graphs, and code metrics. Different deep models are employed to extract features from each view. First, the [CLS] vectors derived from encoder layers 1 to 12 of GraphCodeBERT are used as code sequence features, which contain rich semantic information. Next, a gated graph neural network (GGNN) is used to learn the features of nodes in the code property graph, encompassing both the syntactic and dependency information of the source code. During graph feature extraction, each node's representation is augmented with its degree centrality, along with its code and type attributes, yielding a more comprehensive depiction of the graph's structure. Statistical metrics generated by the code analysis tool SourceMonitor are then processed through a 1-dimensional (1-D) CNN to produce metric features. The fused features from these three views are learned by a multilayer perceptron (MLP) to yield the final classification results. Experimental results demonstrate the superiority of DMVL4AVD over existing approaches: it achieves an average increase of 6.79% in accuracy and an average boost of 6.94% in precision compared to the baseline approaches in the literature. [ABSTRACT FROM AUTHOR] |
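The abstract describes two concrete computations: augmenting each graph node with its degree centrality, and fusing the three view features for classification by an MLP. The sketch below illustrates both in NumPy under stated assumptions; the feature dimensions, layer sizes, and random stand-in parameters are hypothetical (the abstract does not specify them), and this is not the authors' implementation.

```python
import numpy as np

def degree_centrality(adj):
    """Degree centrality of each node in a graph (e.g. a code property
    graph), given a binary adjacency matrix: degree / (n - 1)."""
    n = adj.shape[0]
    return adj.sum(axis=1) / (n - 1)

def relu(x):
    return np.maximum(x, 0.0)

def mlp_classify(seq_feat, graph_feat, metric_feat, w1, b1, w2, b2):
    """Fuse the three view features by concatenation and score them
    with a small 2-layer MLP (late fusion)."""
    fused = np.concatenate([seq_feat, graph_feat, metric_feat])
    hidden = relu(fused @ w1 + b1)
    logit = hidden @ w2 + b2
    # Probability that the code sample is vulnerable.
    return 1.0 / (1.0 + np.exp(-logit))

# Hypothetical per-view feature dimensions (not given in the abstract):
# GraphCodeBERT [CLS] vectors, pooled GGNN node features, 1-D CNN metrics.
SEQ_DIM, GRAPH_DIM, METRIC_DIM = 768, 256, 32

rng = np.random.default_rng(0)
d = SEQ_DIM + GRAPH_DIM + METRIC_DIM
# Randomly initialized stand-ins for trained parameters.
w1, b1 = rng.normal(scale=0.01, size=(d, 64)), np.zeros(64)
w2, b2 = rng.normal(scale=0.01, size=64), 0.0

p = mlp_classify(rng.normal(size=SEQ_DIM), rng.normal(size=GRAPH_DIM),
                 rng.normal(size=METRIC_DIM), w1, b1, w2, b2)
```

For a 3-node path graph, `degree_centrality` gives the middle node a centrality of 1.0 and the endpoints 0.5, matching the usual normalization by `n - 1`.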
| Copyright: |
Copyright of Neural Computing & Applications is the property of Springer Nature and its content may not be copied or emailed to multiple sites without the copyright holder's express written permission. Additionally, content may not be used with any artificial intelligence tools or machine learning technologies. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.) |
| Database: |
Complementary Index |