Keywords
Code (set theory), Computer science, Property (philosophy), Source code, Vulnerability (computing), Programming language, Computer security, Philosophy, Epistemology, Set (abstract data type)
Authors
Ruitong Liu, Yanbin Wang, Haitao Xu, Bin Liu, Sun Jian-guo, Zhenhao Guo, Wenrui Ma
Source
Journal: Cornell University - arXiv
Date: 2024-04-23
Citations: 4
Identifier
DOI: 10.48550/arxiv.2404.14719
Abstract
Code Language Models (codeLMs) and Graph Neural Networks (GNNs) are widely used in code vulnerability detection. However, GNNs often rely on aggregating information from adjacent nodes, limiting structural information propagation across layers. While codeLMs can supplement GNNs with semantic information, existing integration methods underexplore their collaborative potential. To address these challenges, we propose Vul-LMGNNs, integrating pre-trained codeLMs with GNNs to enable cross-layer propagation of semantic and structural information. Vul-LMGNNs leverage Code Property Graphs (CPGs) to incorporate syntax, control flow, and data dependencies, using gated GNNs for structural extraction. An online knowledge distillation (KD) mechanism allows a student GNN to capture structural information from a trained counterpart via alternating training. Additionally, an "implicit-explicit" joint training framework leverages codeLMs to initialize embeddings and propagate code semantics. In the explicit phase, it performs late fusion via linear interpolation. Evaluations on real-world vulnerability datasets show Vul-LMGNNs outperform 17 state-of-the-art approaches. Source code is available at: https://github.com/Vul-LMGNN/vul-LMGNN.
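To make the two mechanisms named in the abstract more concrete, below is a minimal sketch of (a) an online knowledge-distillation term in which a student GNN is pulled toward the predictions of its trained counterpart, and (b) the explicit-phase late fusion of codeLM and GNN logits via linear interpolation. This is not the authors' implementation (see the linked repository for that); the function names, the temperature, and the mixing weight `alpha` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Soft-label KL term for online knowledge distillation: the student GNN
    is nudged toward the (detached) predictions of its trained counterpart.
    `temperature` is an assumed hyperparameter, not taken from the paper."""
    t = temperature
    return F.kl_div(
        F.log_softmax(student_logits / t, dim=-1),
        F.softmax(teacher_logits.detach() / t, dim=-1),
        reduction="batchmean",
    ) * (t * t)

def late_fusion(lm_logits, gnn_logits, alpha=0.5):
    """Explicit-phase late fusion: linear interpolation of the codeLM and
    GNN class logits; `alpha` is an assumed mixing weight."""
    return alpha * lm_logits + (1.0 - alpha) * gnn_logits

# Toy usage on a batch of 4 samples with 2 classes (vulnerable / benign)
lm_logits = torch.randn(4, 2)    # e.g. from a pre-trained codeLM head
gnn_logits = torch.randn(4, 2)   # e.g. from a gated GNN over the Code Property Graph
peer_logits = torch.randn(4, 2)  # predictions of the peer GNN in online KD

kd = distillation_loss(gnn_logits, peer_logits)
probs = torch.softmax(late_fusion(lm_logits, gnn_logits, alpha=0.6), dim=-1)
print(kd.item(), probs)
```

In the paper's framing, the KD term would be applied during alternating training of the two GNNs, while the interpolation combines the semantic (codeLM) and structural (GNN) predictions at inference time.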