Paper Title
Multiple Discrimination and Pairwise CNN for View-based 3D Object Retrieval
Paper Authors
Abstract
With the rapid development and wide application of computer, camera, network, and hardware technologies, 3D object (or model) retrieval has attracted widespread attention and has become a hot research topic in the computer vision domain. Deep learning features have been proven to outperform hand-crafted features in 3D object retrieval. However, most existing networks do not take into account the impact of multi-view image selection on network training, and using contrastive loss alone only forces same-class samples to be as close as possible. In this work, a novel solution named Multi-view Discrimination and Pairwise CNN (MDPCNN) is proposed for 3D object retrieval to tackle these issues. It can take multiple batches and multiple views as input simultaneously by adding a Slice layer and a Concat layer. Furthermore, a highly discriminative network is obtained by training on samples that are hard to classify by clustering. Lastly, we deploy the contrastive-center loss together with the contrastive loss as the optimization objective, which yields better intra-class compactness and inter-class separability. Large-scale experiments show that the proposed MDPCNN achieves significant performance gains over state-of-the-art algorithms in 3D object retrieval.
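To make the optimization objective concrete, the following is a minimal NumPy sketch of the two loss terms the abstract names: the classic pairwise contrastive loss (Hadsell et al. form) and a contrastive-center loss in the style of Qi and Su, which divides the distance to a sample's own class center by the summed distances to all other centers. Function names, the margin value, and the small constant `delta` are our illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def contrastive_loss(x1, x2, same_class, margin=1.0):
    """Pairwise contrastive loss: pulls same-class pairs together,
    pushes different-class pairs apart until they exceed `margin`."""
    d = np.linalg.norm(x1 - x2)
    if same_class:
        return 0.5 * d ** 2
    return 0.5 * max(0.0, margin - d) ** 2

def contrastive_center_loss(x, label, centers, delta=1e-6):
    """Contrastive-center-style loss: squared distance to the own-class
    center over the summed squared distances to all other class centers.
    Small means compact within the class and far from other classes."""
    dists = np.sum((centers - x) ** 2, axis=1)  # squared distance to each center
    intra = dists[label]                        # distance to own-class center
    inter = np.sum(dists) - intra               # distances to all other centers
    return 0.5 * intra / (inter + delta)

# Illustrative usage with two 2-D class centers:
centers = np.array([[0.0, 0.0], [3.0, 0.0]])
x = np.array([0.1, 0.0])
loss = contrastive_center_loss(x, 0, centers)  # small: x sits near center 0
```

Minimizing the sum of these two terms pressures embeddings of views of the same 3D object toward a shared center while keeping different objects' centers apart, which is the intra-class compactness / inter-class separability trade-off the abstract describes.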