Paper Title

Multimodal Hate Speech Detection from Bengali Memes and Texts

Authors

Karim, Md. Rezaul; Dey, Sumon Kanti; Islam, Tanhim; Shajalal, Md.; Chakravarthi, Bharathi Raja

Abstract

Numerous machine learning (ML) and deep learning (DL)-based approaches have been proposed to utilize textual data from social media for anti-social behavior analysis, such as cyberbullying, fake news detection, and hate speech identification, mainly for high-resource languages such as English. However, despite their diversity and millions of native speakers, languages like Bengali remain under-resourced due to a lack of computational resources for natural language processing (NLP). Similar to other languages, Bengali social media content also includes images along with text (e.g., multimodal memes are posted by embedding short texts into images on Facebook). Therefore, textual data alone is not enough to judge such content, since the images may provide the extra context needed for a proper judgement. This paper is about hate speech detection from multimodal Bengali memes and texts. We prepared the only multimodal hate speech dataset of its kind for Bengali, which we use to train state-of-the-art neural architectures (Bi-LSTM/Conv-LSTM with word embeddings, ConvNets, and pre-trained language models such as monolingual Bangla BERT, multilingual BERT-cased/uncased, and XLM-RoBERTa) to jointly analyze textual and visual information for hate speech detection. Conv-LSTM and XLM-RoBERTa performed best for texts, yielding F1 scores of 0.78 and 0.82, respectively. As for memes, ResNet-152 and DenseNet-161 yield F1 scores of 0.78 and 0.79, respectively. For multimodal fusion, XLM-RoBERTa + DenseNet-161 performed best, yielding an F1 score of 0.83. Our study suggests that the text modality is the most useful for hate speech detection, while memes are moderately useful.
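To make the multimodal fusion idea mentioned in the abstract concrete, the sketch below concatenates XLM-RoBERTa text features with DenseNet-161 image features and passes them to a small classification head. This is a minimal PyTorch sketch based only on the abstract, not the authors' released implementation; the checkpoint names, layer sizes, and simple concatenation-based fusion are illustrative assumptions.

```python
# Minimal sketch of XLM-RoBERTa + DenseNet-161 late fusion for binary hate speech
# classification. Architecture details here are assumptions, not the paper's code.
import torch
import torch.nn as nn
from torchvision import models
from transformers import AutoModel


class MultimodalHateClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Text encoder: pre-trained multilingual XLM-RoBERTa (base variant assumed).
        self.text_encoder = AutoModel.from_pretrained("xlm-roberta-base")
        # Image encoder: DenseNet-161 convolutional backbone, classifier removed.
        densenet = models.densenet161(weights="DEFAULT")
        self.image_encoder = densenet.features
        self.pool = nn.AdaptiveAvgPool2d(1)

        text_dim = self.text_encoder.config.hidden_size   # 768 for the base model
        image_dim = densenet.classifier.in_features       # 2208 for DenseNet-161

        # Simple late fusion: concatenate both feature vectors, then classify.
        self.classifier = nn.Sequential(
            nn.Linear(text_dim + image_dim, 512),
            nn.ReLU(),
            nn.Dropout(0.2),
            nn.Linear(512, num_classes),
        )

    def forward(self, input_ids, attention_mask, images):
        text_out = self.text_encoder(input_ids=input_ids, attention_mask=attention_mask)
        text_feat = text_out.last_hidden_state[:, 0]       # embedding at the [CLS] position
        img_feat = self.pool(self.image_encoder(images)).flatten(1)
        return self.classifier(torch.cat([text_feat, img_feat], dim=1))
```

In this sketch the two encoders are trained (or fine-tuned) jointly with the fusion head; other fusion strategies, such as averaging per-modality predictions, would fit the abstract's description equally well.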
