Paper Title
Natural Language to Code Using Transformers
Paper Authors
Paper Abstract
We tackle the problem of generating code snippets from natural language descriptions using the CoNaLa dataset. We use the self-attention based transformer architecture and show that it performs better than a recurrent attention-based encoder-decoder. Furthermore, we develop a modified form of back-translation and use cycle-consistent losses to train the model in an end-to-end fashion. We achieve a BLEU score of 16.99, beating the previously reported baseline of the CoNaLa challenge.
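The abstract mentions a cycle-consistent back-translation objective without spelling it out. The following is a minimal illustrative sketch (in PyTorch), not the authors' implementation: it assumes two toy transformer seq2seq models (NL-to-code and code-to-NL), a detached greedy round-trip translation, and teacher-forced reconstruction losses; all hyperparameters and the training scheme are assumptions for illustration only.

```python
# Illustrative sketch of a cycle-consistent back-translation objective.
# NOT the paper's code: model sizes, vocab, and the detached greedy
# round-trip are assumptions chosen to keep the example small and runnable.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, D_MODEL, MAX_LEN, PAD, BOS = 1000, 64, 16, 0, 1

class Seq2Seq(nn.Module):
    """Minimal embedding + nn.Transformer + output projection."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, D_MODEL, padding_idx=PAD)
        self.tr = nn.Transformer(d_model=D_MODEL, nhead=4,
                                 num_encoder_layers=2, num_decoder_layers=2,
                                 dim_feedforward=128, batch_first=True)
        self.out = nn.Linear(D_MODEL, VOCAB)

    def forward(self, src, tgt_in):
        # Causal mask so each target position only attends to its past.
        mask = self.tr.generate_square_subsequent_mask(tgt_in.size(1))
        h = self.tr(self.emb(src), self.emb(tgt_in), tgt_mask=mask)
        return self.out(h)

    @torch.no_grad()
    def greedy_decode(self, src, steps=MAX_LEN):
        """Detached greedy translation used as the pseudo-parallel source."""
        ys = torch.full((src.size(0), 1), BOS, dtype=torch.long)
        for _ in range(steps):
            logits = self.forward(src, ys)
            ys = torch.cat([ys, logits[:, -1:].argmax(-1)], dim=1)
        return ys

def reconstruction_loss(model, src, tgt):
    """Teacher-forced cross-entropy for reconstructing `tgt` from `src`."""
    logits = model(src, tgt[:, :-1])
    return F.cross_entropy(logits.reshape(-1, VOCAB), tgt[:, 1:].reshape(-1),
                           ignore_index=PAD)

nl2code, code2nl = Seq2Seq(), Seq2Seq()
opt = torch.optim.Adam(list(nl2code.parameters()) + list(code2nl.parameters()),
                       lr=1e-4)

# Toy monolingual batches standing in for unpaired NL intents / code snippets.
nl = torch.randint(2, VOCAB, (8, MAX_LEN))
code = torch.randint(2, VOCAB, (8, MAX_LEN))

# Cycle 1: NL -> pseudo-code -> NL.  Cycle 2: code -> pseudo-NL -> code.
pseudo_code = nl2code.greedy_decode(nl)
pseudo_nl = code2nl.greedy_decode(code)
loss = reconstruction_loss(code2nl, pseudo_code, nl) + \
       reconstruction_loss(nl2code, pseudo_nl, code)
opt.zero_grad(); loss.backward(); opt.step()
```

In this sketch the intermediate translations are produced without gradients, so each cycle only updates the reconstructing model; how the actual paper couples the two directions end-to-end is not specified in the abstract.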