Paper Title

ASLFeat: Learning Local Features of Accurate Shape and Localization

Paper Authors

Zixin Luo, Lei Zhou, Xuyang Bai, Hongkai Chen, Jiahui Zhang, Yao Yao, Shiwei Li, Tian Fang, Long Quan

Paper Abstract

This work focuses on mitigating two limitations in the joint learning of local feature detectors and descriptors. First, the ability to estimate the local shape (scale, orientation, etc.) of feature points is often neglected during dense feature extraction, while shape-awareness is crucial for acquiring stronger geometric invariance. Second, the localization accuracy of detected keypoints is not sufficient to reliably recover camera geometry, which has become the bottleneck in tasks such as 3D reconstruction. In this paper, we present ASLFeat, with three lightweight yet effective modifications to mitigate the above issues. First, we resort to deformable convolutional networks to densely estimate and apply local transformations. Second, we take advantage of the inherent feature hierarchy to restore spatial resolution and low-level details for accurate keypoint localization. Finally, we use a peakiness measurement to relate feature responses and derive more indicative detection scores. The effect of each modification is thoroughly studied, and the evaluation is conducted extensively across a variety of practical scenarios. State-of-the-art results are reported, demonstrating the superiority of our method.
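To make the first modification more concrete, below is a minimal sketch of shape-aware feature extraction with a deformable convolution: per-location sampling offsets are predicted from the features themselves and then applied through torchvision.ops.DeformConv2d. The module name ShapeAwareConv and the single-layer offset predictor are hypothetical simplifications for illustration, not the authors' exact architecture.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class ShapeAwareConv(nn.Module):
    """Predict per-location sampling offsets and apply a deformable conv,
    so the receptive field adapts to the local shape (scale, orientation).
    Hypothetical module name; a simplification of the idea in the abstract."""

    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        pad = kernel_size // 2
        # Two offsets (dy, dx) for each of the kernel_size**2 sampling taps.
        self.offset_pred = nn.Conv2d(channels, 2 * kernel_size * kernel_size,
                                     kernel_size, padding=pad)
        self.deform_conv = DeformConv2d(channels, channels, kernel_size,
                                        padding=pad)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        offsets = self.offset_pred(x)        # (B, 2*k*k, H, W)
        return self.deform_conv(x, offsets)  # shape-adapted features
```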
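The peakiness measurement can be sketched in the same spirit: a response is a good keypoint candidate when it stands out both across channels and within its local spatial neighborhood. The function below (peakiness_scores, a hypothetical name) combines these two contrasts with a softplus; it follows the abstract's description and may differ in detail from the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def peakiness_scores(features: torch.Tensor, window: int = 3) -> torch.Tensor:
    """Turn a dense feature map (B, C, H, W) into a detection score map
    (B, 1, H, W) by relating each response to its channel-wise and local
    spatial context. Sketch only; not the paper's exact formulation."""
    # Channel-wise peakiness: response vs. mean response over channels.
    channel_mean = features.mean(dim=1, keepdim=True)
    beta = F.softplus(features - channel_mean)

    # Spatial peakiness: response vs. local mean within a small window.
    local_mean = F.avg_pool2d(features, kernel_size=window,
                              stride=1, padding=window // 2)
    alpha = F.softplus(features - local_mean)

    # Keep the most indicative channel at each location as the score.
    score = (alpha * beta).max(dim=1, keepdim=True).values
    return score / (score.max() + 1e-8)  # normalized for ranking / NMS

# Example: scores = peakiness_scores(torch.randn(1, 128, 60, 80))
```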
