Paper Title
Embedding Java Classes with code2vec: Improvements from Variable Obfuscation
Authors
Abstract
Automatic source code analysis in key areas of software engineering, such as code security, can benefit from Machine Learning (ML). However, many standard ML approaches require a numeric representation of data and cannot be applied directly to source code. Thus, to enable ML, we need to embed source code into numeric feature vectors while maintaining the semantics of the code as much as possible. code2vec is a recently released embedding approach that uses the proxy task of method name prediction to map Java methods to feature vectors. However, experimentation with code2vec shows that it learns to rely on variable names for prediction, causing it to be easily fooled by typos or adversarial attacks. Moreover, it is only able to embed individual Java methods and cannot embed an entire collection of methods such as those present in a typical Java class, making it difficult to perform predictions at the class level (e.g., for the identification of malicious Java classes). Both shortcomings are addressed in the research presented in this paper. We investigate the effect of obfuscating variable names during the training of a code2vec model to force it to rely on the structure of the code rather than specific names and consider a simple approach to creating class-level embeddings by aggregating sets of method embeddings. Our results, obtained on a challenging new collection of source-code classification problems, indicate that obfuscating variable names produces an embedding model that is both impervious to variable naming and more accurately reflects code semantics. The datasets, models, and code are shared for further ML research on source code.
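The two ideas summarized above can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the list of declared variable names is assumed to be supplied by a Java parser (extracting it is out of scope here), and element-wise mean pooling is only one simple choice for aggregating method embeddings into a class-level vector.

```python
import re

def obfuscate_variables(method_source, variable_names):
    # Map each declared variable to a generic placeholder (VAR0, VAR1, ...)
    # so an embedding model cannot rely on naming cues during training.
    mapping = {name: f"VAR{i}" for i, name in enumerate(variable_names)}
    for name, placeholder in mapping.items():
        method_source = re.sub(rf"\b{re.escape(name)}\b", placeholder, method_source)
    return method_source

def class_embedding(method_embeddings):
    # Aggregate per-method embedding vectors into one class-level vector
    # by element-wise averaging (mean pooling).
    if not method_embeddings:
        raise ValueError("class has no methods to embed")
    dim = len(method_embeddings[0])
    n = len(method_embeddings)
    return [sum(vec[i] for vec in method_embeddings) / n for i in range(dim)]

print(obfuscate_variables("int total = a + b; return total;", ["total", "a", "b"]))
# int VAR0 = VAR1 + VAR2; return VAR0;

print(class_embedding([[1.0, 2.0, 3.0], [3.0, 4.0, 5.0]]))
# [2.0, 3.0, 4.0]
```

The regex-based rename is a simplification: a real implementation would rename identifiers on the parsed AST to avoid clashes with strings, comments, or field accesses.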