Paper Title
Multi-feature Co-learning for Image Inpainting
Paper Authors
Paper Abstract
Image inpainting has achieved great advances by simultaneously leveraging image structure and texture features. However, due to the lack of effective multi-feature fusion techniques, existing image inpainting methods still show limited improvement. In this paper, we design a deep multi-feature co-learning network for image inpainting, which includes Soft-gating Dual Feature Fusion (SDFF) and Bilateral Propagation Feature Aggregation (BPFA) modules. To be specific, we first use two branches to learn structure features and texture features separately. Then the proposed SDFF module integrates structure features into texture features, and meanwhile uses texture features as auxiliary information for generating structure features. Such a co-learning strategy makes the structure and texture features more consistent. Next, the proposed BPFA module enhances the connection from local features to overall consistency by co-learning contextual attention, channel-wise information, and feature space, which can further refine the generated structures and textures. Finally, extensive experiments are performed on benchmark datasets, including CelebA, Places2, and Paris StreetView. Experimental results demonstrate the superiority of the proposed method over the state of the art. The source code is available at https://github.com/GZHU-DVL/MFCL-Inpainting.
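To make the co-learning idea concrete, below is a minimal PyTorch sketch of what a soft-gating dual feature fusion block could look like: each branch predicts a sigmoid gate from the concatenated structure/texture maps and uses it to inject the partner branch's features. The channel width, the 3x3 gating convolutions, and the residual injection are illustrative assumptions, not the authors' exact SDFF design; see the linked repository for the real implementation.

```python
import torch
import torch.nn as nn


class SoftGatingDualFeatureFusion(nn.Module):
    """Illustrative sketch of a soft-gating dual feature fusion block.

    The gating formulation and layer shapes are assumptions for
    exposition only; the official MFCL-Inpainting repository contains
    the authors' actual SDFF module.
    """

    def __init__(self, channels: int):
        super().__init__()
        # Each gate is predicted from the concatenated feature maps of
        # both branches, then squashed to [0, 1] with a sigmoid.
        self.texture_gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )
        self.structure_gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, f_tex: torch.Tensor, f_str: torch.Tensor):
        fused = torch.cat([f_tex, f_str], dim=1)
        g_t = self.texture_gate(fused)
        g_s = self.structure_gate(fused)
        # Soft gates control how much of the partner branch is mixed in:
        # structure cues refine texture, texture cues assist structure.
        f_tex_out = f_tex + g_t * f_str
        f_str_out = f_str + g_s * f_tex
        return f_tex_out, f_str_out


if __name__ == "__main__":
    sdff = SoftGatingDualFeatureFusion(channels=64)
    tex = torch.randn(1, 64, 32, 32)   # texture-branch features
    stru = torch.randn(1, 64, 32, 32)  # structure-branch features
    tex_out, str_out = sdff(tex, stru)
    print(tex_out.shape, str_out.shape)  # both torch.Size([1, 64, 32, 32])
```

The residual form (feature plus gated partner feature) is one common way to realize such bidirectional fusion; the key property it illustrates is that each branch keeps its own stream while learning how strongly to absorb the other, which is what makes the structure and texture features mutually consistent.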