Paper Title
Multi-Stage Fusion for One-Click Segmentation
Paper Authors
Paper Abstract
Segmenting objects of interest in an image is an essential building block of applications such as photo editing and image analysis. In interactive settings, the goal is to achieve a good segmentation while minimizing user input. Current deep-learning-based interactive segmentation approaches use early fusion, incorporating user cues at the image input layer. Because segmentation CNNs have many layers, early fusion may weaken the influence of user interactions on the final prediction. We therefore propose a new multi-stage guidance framework for interactive segmentation. By incorporating user cues at different stages of the network, we allow user interactions to influence the final segmentation output more directly. Compared to early-fusion frameworks, our proposed framework adds only a negligible number of parameters. We perform extensive experiments on standard interactive instance segmentation and one-click segmentation benchmarks and report state-of-the-art performance.
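To make the early-fusion vs. multi-stage-guidance distinction concrete, below is a minimal PyTorch sketch. The module names, channel widths, click-map encoding, and the additive 1x1-conv fusion mechanism are all illustrative assumptions for exposition, not the paper's exact architecture; the sketch only shows where in the network user cues enter under each scheme.

```python
# Illustrative sketch: early fusion injects user click maps once at the
# input; multi-stage guidance re-injects them at every stage via cheap
# 1x1 convolutions. All specifics here are assumptions, not the paper's
# actual architecture.
import torch
import torch.nn as nn


class EarlyFusionNet(nn.Module):
    """Early fusion: click maps are concatenated with the RGB image at
    the input layer only, then pass through the whole conv stack."""

    def __init__(self):
        super().__init__()
        # 3 RGB channels + 2 click channels (positive/negative clicks)
        self.backbone = nn.Sequential(
            nn.Conv2d(5, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(64, 1, 1)  # binary mask logits

    def forward(self, image, clicks):
        x = torch.cat([image, clicks], dim=1)
        return self.head(self.backbone(x))


class MultiStageFusionNet(nn.Module):
    """Multi-stage guidance: user cues are re-injected at each stage
    through lightweight 1x1 projections, so they influence deeper
    features directly. The added parameter count is negligible."""

    def __init__(self):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU())
        self.stage2 = nn.Sequential(nn.Conv2d(64, 64, 3, padding=1), nn.ReLU())
        # Per-stage guidance projections: 2 click channels -> stage width
        self.guide1 = nn.Conv2d(2, 64, 1)
        self.guide2 = nn.Conv2d(2, 64, 1)
        self.head = nn.Conv2d(64, 1, 1)

    def forward(self, image, clicks):
        x = self.stage1(image) + self.guide1(clicks)
        x = self.stage2(x) + self.guide2(clicks)
        return self.head(x)


# Usage: image is Bx3xHxW; clicks encode user input as Bx2xHxW maps
# (e.g., distance transforms of positive and negative clicks).
img = torch.randn(1, 3, 128, 128)
clk = torch.zeros(1, 2, 128, 128)
print(MultiStageFusionNet()(img, clk).shape)  # torch.Size([1, 1, 128, 128])
```

In this sketch both stages keep the input resolution; in a real backbone with downsampling, the click maps would need to be resized to each stage's spatial size before fusion.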