Paper Title

Unsupervised segmentation via semantic-apparent feature fusion

Authors

Xi Li, Huimin Ma, Hongbing Ma, Yidong Wang

Abstract

Foreground segmentation is an essential task in the field of image understanding. Under unsupervised conditions, different images and instances always have variable appearances, which makes it difficult to achieve stable segmentation performance based on fixed rules or a single type of feature. To solve this problem, this paper proposes an unsupervised foreground segmentation method based on semantic-apparent feature fusion (SAFF). We find that key regions of a foreground object can be accurately located via semantic features, while apparent features (represented by saliency and edges) provide richer detailed expression. To combine the advantages of the two types of features, an encoding method for unary region features and binary context features is established, which gives a comprehensive description of both types of expression. Then, an adaptive parameter learning method is put forward to compute the most suitable feature weights and generate a foreground confidence score map. Furthermore, a segmentation network is used to learn foreground common features across different instances. By fusing semantic and apparent features, and by cascading the modules of intra-image adaptive feature weight learning and inter-image common feature learning, the method achieves performance that significantly exceeds the baselines on the PASCAL VOC 2012 dataset.
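The pipeline the abstract describes (encode semantic and apparent cues, learn per-image fusion weights, produce a foreground confidence score map) can be illustrated with a minimal sketch. All function names, the toy feature maps, and the least-squares weight fit below are illustrative assumptions, not the paper's actual SAFF formulation:

```python
import numpy as np

def fuse_features(semantic, saliency, edge, weights):
    """Linearly fuse per-pixel feature maps into a foreground
    confidence score map (all inputs are HxW arrays in [0, 1])."""
    w_sem, w_sal, w_edge = weights
    score = w_sem * semantic + w_sal * saliency + w_edge * edge
    # Normalize to [0, 1] so the score reads as a confidence.
    score -= score.min()
    if score.max() > 0:
        score /= score.max()
    return score

def adapt_weights(semantic, saliency, edge, pseudo_label):
    """Least-squares fit of per-image fusion weights to a pseudo
    label (a stand-in for the paper's adaptive parameter learning)."""
    A = np.stack([semantic.ravel(), saliency.ravel(), edge.ravel()], axis=1)
    w, *_ = np.linalg.lstsq(A, pseudo_label.ravel(), rcond=None)
    return w

# Toy example: a bright square as the foreground object.
h = w = 32
semantic = np.zeros((h, w)); semantic[8:24, 8:24] = 1.0   # coarse object response
saliency = np.zeros((h, w)); saliency[6:26, 6:26] = 0.8   # slightly larger salient region
edge = np.zeros((h, w)); edge[8, 8:24] = 1.0; edge[23, 8:24] = 1.0  # object boundary cues
pseudo = (semantic > 0.5).astype(float)  # pseudo label from the semantic cue

weights = adapt_weights(semantic, saliency, edge, pseudo)
score = fuse_features(semantic, saliency, edge, weights)
mask = score > 0.5  # threshold the confidence map into a foreground mask
```

In this toy setup the semantic cue already matches the pseudo label, so the fitted weights concentrate on it; with real features the fit would trade off semantic coverage against the finer detail carried by saliency and edges.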
