Paper Title
ST$^2$: Small-data Text Style Transfer via Multi-task Meta-Learning
Paper Authors
Abstract
Text style transfer aims to paraphrase a sentence in one style into another style while preserving content. Due to the lack of parallel training data, state-of-the-art methods are unsupervised and rely on large datasets that share content. Furthermore, existing methods have been applied to very limited categories of styles, such as positive/negative and formal/informal. In this work, we develop a meta-learning framework to transfer between any kinds of text styles, including personal writing styles that are more fine-grained, share less content, and have much smaller training data. While state-of-the-art models fail in the few-shot style transfer task, our framework effectively utilizes information from other styles to improve both language fluency and style transfer accuracy.
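To make the meta-learning idea concrete, the sketch below shows a first-order MAML-style loop on a toy problem. This is only an illustration of the general technique the abstract names, not the paper's actual ST$^2$ architecture: each "task" stands in for one style pair, the per-task loss is a made-up quadratic, and all names (`loss`, `grad`, `maml`) are hypothetical. The point is the structure — an inner adaptation step per task, then an outer update of the shared initialization — which lets a model adapt to a new low-resource style in a few gradient steps.

```python
# Minimal first-order MAML sketch (illustrative only; not the ST^2 model).
# Each "task" is a stand-in for one text style pair; the meta-learner finds
# an initialization that adapts to any task after one inner gradient step.
import numpy as np

def loss(theta, target):
    # Toy per-task objective: squared distance to a task-specific optimum.
    return float(np.sum((theta - target) ** 2))

def grad(theta, target):
    return 2.0 * (theta - target)

def maml(tasks, theta, inner_lr=0.1, meta_lr=0.05, steps=200):
    for _ in range(steps):
        meta_grad = np.zeros_like(theta)
        for target in tasks:
            # Inner loop: one adaptation step on this task alone.
            adapted = theta - inner_lr * grad(theta, target)
            # Outer signal: gradient of the post-adaptation loss
            # (first-order approximation, so no second derivatives).
            meta_grad += grad(adapted, target)
        theta = theta - meta_lr * meta_grad / len(tasks)
    return theta

# Tasks cluster around 1.0, so a good shared initialization sits near
# their mean and adapts to any single task with one inner step.
tasks = [np.array([0.8]), np.array([1.0]), np.array([1.2])]
theta0 = maml(tasks, np.array([5.0]))
```

After meta-training, `theta0` lies near the tasks' common optimum, so fine-tuning on a new, similar task (a new style with little data) needs only a few steps. The real framework applies this idea to neural style-transfer models rather than a scalar parameter.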