Paper Title

Analysis of Social Media Data using Multimodal Deep Learning for Disaster Response

Paper Authors

Ferda Ofli, Firoj Alam, Muhammad Imran

Paper Abstract

Multimedia content on social media platforms provides significant information during disaster events. The types of information shared include reports of injured or deceased people, infrastructure damage, and missing or found people, among others. Although many studies have shown the usefulness of both text and image content for disaster response purposes, past research has mostly focused on analyzing the text modality alone. In this paper, we propose to use both text and image modalities of social media data to learn a joint representation using state-of-the-art deep learning techniques. Specifically, we utilize convolutional neural networks to define a multimodal deep learning architecture with a modality-agnostic shared representation. Extensive experiments on real-world disaster datasets show that the proposed multimodal architecture yields better performance than models trained using a single modality (e.g., either text or image).
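To make the fusion idea concrete, below is a minimal PyTorch sketch of a network that encodes each modality with a CNN and merges the features in a shared dense layer, i.e., a modality-agnostic representation feeding a common classifier. This is an illustration under assumptions, not the authors' implementation: the class names (`MultimodalNet`, `TextCNN`), all layer sizes, and the tiny image encoder are placeholders; the paper builds on deeper image networks in practice.

```python
# Illustrative sketch only: a multimodal network fusing CNN-based image and
# text encoders into a shared, modality-agnostic representation. All sizes
# below are assumptions chosen to keep the example small and self-contained.
import torch
import torch.nn as nn


class TextCNN(nn.Module):
    """1-D convolutional text encoder over word embeddings."""

    def __init__(self, vocab_size=20000, embed_dim=100, num_filters=100,
                 kernel_sizes=(3, 4, 5)):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, num_filters, k) for k in kernel_sizes
        )
        self.out_dim = num_filters * len(kernel_sizes)

    def forward(self, token_ids):                      # (batch, seq_len)
        x = self.embedding(token_ids).transpose(1, 2)  # (batch, embed, seq)
        # Max-pool each conv's feature maps over time, then concatenate.
        feats = [torch.relu(c(x)).max(dim=2).values for c in self.convs]
        return torch.cat(feats, dim=1)                 # (batch, out_dim)


class MultimodalNet(nn.Module):
    """Fuses image-CNN and text-CNN features in a shared dense layer."""

    def __init__(self, num_classes=2, shared_dim=512):
        super().__init__()
        # Tiny image CNN for self-containedness; the paper uses a much
        # deeper pretrained image network as the visual encoder.
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.text_encoder = TextCNN()
        fused_dim = 64 + self.text_encoder.out_dim
        # Shared (modality-agnostic) representation and classifier head.
        self.shared = nn.Sequential(nn.Linear(fused_dim, shared_dim),
                                    nn.ReLU())
        self.classifier = nn.Linear(shared_dim, num_classes)

    def forward(self, images, token_ids):
        fused = torch.cat(
            [self.image_encoder(images), self.text_encoder(token_ids)],
            dim=1,
        )
        return self.classifier(self.shared(fused))


if __name__ == "__main__":
    model = MultimodalNet()
    logits = model(torch.randn(4, 3, 224, 224),
                   torch.randint(0, 20000, (4, 30)))
    print(logits.shape)  # torch.Size([4, 2])
```

Concatenation followed by a shared dense layer is the simplest fusion choice; the point of the joint representation is that the classifier head sees one feature space regardless of which modality contributed the signal.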
