Paper Title

vieCap4H-VLSP 2021: Vietnamese Image Captioning for Healthcare Domain using Swin Transformer and Attention-based LSTM

Authors

Nguyen, Thanh Tin; Nguyen, Long H.; Pham, Nhat Truong; Nguyen, Liu Tai; Do, Van Huong; Nguyen, Hai; Nguyen, Ngoc Duy

Abstract

This study presents our approach to automatic Vietnamese image captioning for the healthcare domain in the text processing tasks of the Vietnamese Language and Speech Processing (VLSP) Challenge 2021, as shown in Figure 1. In recent years, image captioning has typically employed a convolutional neural network-based architecture as an encoder and a long short-term memory (LSTM) network as a decoder to generate sentences, and these models perform remarkably well on different datasets. Our proposed model also has an encoder and a decoder, but we instead use a Swin Transformer in the encoder and an LSTM combined with an attention module in the decoder. The study presents our training experiments and techniques used during the competition. Our model achieves a BLEU4 score of 0.293 on the vieCap4H dataset, which ranked 3$^{rd}$ on the private leaderboard. Our code can be found at \url{https://git.io/JDdJm}.
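To make the encoder–decoder coupling concrete, the sketch below shows one step of additive (Bahdanau-style) attention over encoder features, the kind of attention module typically placed between a vision encoder and an LSTM decoder. All dimensions and weight names here are illustrative assumptions, not the authors' implementation; NumPy stands in for a deep learning framework.

```python
import numpy as np

# Hypothetical sizes: 49 spatial patches from the vision encoder (e.g. a
# Swin Transformer feature map), 768-d features, 512-d decoder state.
rng = np.random.default_rng(0)
num_patches, enc_dim, dec_dim, attn_dim = 49, 768, 512, 256

features = rng.normal(size=(num_patches, enc_dim))  # encoder output
hidden = rng.normal(size=(dec_dim,))                # current LSTM hidden state

# Randomly initialized attention parameters (illustrative only).
W_enc = rng.normal(size=(enc_dim, attn_dim)) * 0.02
W_dec = rng.normal(size=(dec_dim, attn_dim)) * 0.02
v = rng.normal(size=(attn_dim,)) * 0.02

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Score every patch against the decoder state, normalize the scores,
# then take the weighted sum of patch features as the context vector
# that is fed into the next LSTM step alongside the word embedding.
scores = np.tanh(features @ W_enc + hidden @ W_dec) @ v  # (num_patches,)
weights = softmax(scores)                                # sums to 1
context = weights @ features                             # (enc_dim,)

print(context.shape)  # (768,)
```

At decoding time this step is repeated for every generated token, so the model can attend to different image regions as the caption unfolds.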
