Paper Title

Adaptive Context-Aware Multi-Modal Network for Depth Completion

Authors

Shanshan Zhao, Mingming Gong, Huan Fu, Dacheng Tao

Abstract

Depth completion aims to recover a dense depth map from sparse depth data and the corresponding single RGB image. The observed pixels provide significant guidance for recovering the depth of the unobserved pixels. However, due to the sparsity of the depth data, the standard convolution operation exploited by most existing methods is not effective for modeling the observed contexts with depth values. To address this issue, we propose to adopt graph propagation to capture the observed spatial contexts. Specifically, we first construct multiple graphs at different scales from the observed pixels. Since the graph structure varies from sample to sample, we then apply the attention mechanism to the propagation, which encourages the network to model the contextual information adaptively. Furthermore, considering the multi-modality of the input data, we exploit graph propagation on the two modalities respectively to extract multi-modal representations. Finally, we introduce the symmetric gated fusion strategy to exploit the extracted multi-modal features effectively. The proposed strategy preserves the original information of one modality and absorbs complementary information from the other by learning adaptive gating weights. Our model, named Adaptive Context-Aware Multi-Modal Network (ACMNet), achieves state-of-the-art performance on two benchmarks, {\it i.e.}, KITTI and NYU-v2, while having fewer parameters than the latest models. Our code is available at: \url{https://github.com/sshan-zhao/ACMNet}.
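The abstract highlights two components: attention-weighted propagation over graphs built from observed pixels, and symmetric gated fusion of the two modalities. The following is a minimal PyTorch-style sketch of the first idea, not the authors' released implementation; the k-NN graph construction and single-head dot-product attention are assumptions made only for illustration.

```python
import torch
import torch.nn.functional as F

def knn_graph_propagation(coords, feats, k=8):
    """Attention-weighted propagation over a k-NN graph of observed pixels.

    coords: (N, 2) pixel coordinates of pixels with valid depth
    feats:  (N, C) features of those pixels
    Assumed design: k-NN graph from spatial distance, dot-product attention.
    """
    # Build a k-NN graph over observed pixels from pairwise spatial distances.
    dist = torch.cdist(coords, coords)                          # (N, N)
    knn_idx = dist.topk(k + 1, largest=False).indices[:, 1:]    # (N, k), drop self
    neigh = feats[knn_idx]                                      # (N, k, C)
    # Attention weights between each node and its neighbours.
    attn = torch.einsum('nc,nkc->nk', feats, neigh) / feats.shape[-1] ** 0.5
    attn = F.softmax(attn, dim=-1)                              # (N, k)
    # Aggregate neighbour features adaptively, keep a residual of the node itself.
    return feats + torch.einsum('nk,nkc->nc', attn, neigh)
```

Likewise, a minimal sketch of the symmetric gated fusion idea, where each modality preserves its own features and absorbs gated complementary features from the other; the 1x1 convolutions and sigmoid gates below are assumptions rather than the paper's exact layers.

```python
import torch
import torch.nn as nn

class SymmetricGatedFusion(nn.Module):
    """Sketch of symmetric gated fusion between depth and RGB feature maps."""

    def __init__(self, channels):
        super().__init__()
        # One gate per direction (depth<-rgb and rgb<-depth), assumed design.
        self.gate_d = nn.Sequential(nn.Conv2d(2 * channels, channels, 1), nn.Sigmoid())
        self.gate_r = nn.Sequential(nn.Conv2d(2 * channels, channels, 1), nn.Sigmoid())

    def forward(self, feat_d, feat_r):
        # Gating weights are predicted jointly from both modalities.
        joint = torch.cat([feat_d, feat_r], dim=1)
        g_d = self.gate_d(joint)
        g_r = self.gate_r(joint)
        # Each branch keeps its original features and adds gated
        # complementary features from the other modality.
        fused_d = feat_d + g_d * feat_r
        fused_r = feat_r + g_r * feat_d
        return fused_d, fused_r
```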
