Paper Title


MVD: Memory-Related Vulnerability Detection Based on Flow-Sensitive Graph Neural Networks

Authors

Sicong Cao, Xiaobing Sun, Lili Bo, Rongxin Wu, Bin Li, Chuanqi Tao

Abstract


Memory-related vulnerabilities constitute severe threats to the security of modern software. Despite the success of deep learning-based approaches to generic vulnerability detection, they are still limited by their underutilization of flow information when applied to detecting memory-related vulnerabilities, leading to high false positives. In this paper, we propose MVD, a statement-level Memory-related Vulnerability Detection approach based on flow-sensitive graph neural networks (FS-GNN). FS-GNN is employed to jointly embed both unstructured information (i.e., source code) and structured information (i.e., control- and data-flow) to capture implicit memory-related vulnerability patterns. We evaluate MVD on a dataset containing 4,353 real-world memory-related vulnerabilities, and compare our approach with three state-of-the-art deep learning-based approaches as well as five popular static analysis-based memory detectors. The experimental results show that MVD achieves better detection accuracy, outperforming both state-of-the-art DL-based and static analysis-based approaches. Furthermore, MVD strikes a good trade-off between accuracy and efficiency.
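To illustrate the kind of flow-dependent, statement-level pattern the abstract refers to, the following minimal C sketch (an illustrative example, not taken from the paper's dataset) shows a use-after-free: the vulnerable statement can only be pinpointed by tracking the data flow of the pointer from the `free()` call to its later use, which is why flow-insensitive detectors tend to miss it or flag the wrong statement.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative use-after-free (hypothetical example): the defect only
 * becomes visible when the data flow of `buf` from free() to the later
 * printf() is tracked across statements. */
int main(void) {
    char *buf = malloc(16);
    if (buf == NULL)
        return 1;
    strcpy(buf, "hello");
    free(buf);              /* buf is released here ...                    */
    printf("%s\n", buf);    /* ... but dereferenced here: use-after-free,
                               the statement a statement-level detector
                               should flag */
    return 0;
}
```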
