Paper Title

EOLO: Embedded Object Segmentation only Look Once

Authors

Longfei Zeng, Mohammed Sabah

Abstract

In this paper, we introduce an anchor-free and single-shot instance segmentation method that is conceptually simple, with 3 independent branches, fully convolutional, and easily embedded into mobile and embedded devices. Our method, referred to as EOLO, reformulates instance segmentation as predicting semantic segmentation and distinguishing overlapping objects, through instance center classification and 4D distance regression on each pixel. Moreover, we propose an effective loss function to handle sampling high-quality center-of-gravity examples and to optimize the 4D distance regression, which significantly improves mAP performance. Without any bells and whistles, EOLO achieves 27.7$\%$ mask mAP at IoU50 and reaches 30 FPS on a 1080Ti GPU, with single-model and single-scale training/testing on the challenging COCO2017 dataset. For the first time, we survey the different understandings of instance segmentation in recent methods, in terms of the top-down, bottom-up, and direct-prediction paradigms. We then describe our model and present related experiments and results. We hope the proposed EOLO framework can serve as a fundamental baseline for single-shot instance segmentation in real-time industrial scenarios.
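To make the "4D distance regression on each pixel" concrete: each pixel predicts its distances (left, top, right, bottom) to the four sides of the box of the instance it belongs to, and these distances can be decoded back into a box. The sketch below illustrates that decoding step only; the function name, array layout, and NumPy implementation are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def decode_boxes(pixels, ltrb):
    """Decode per-pixel 4D distance regressions into boxes (x1, y1, x2, y2).

    pixels: (N, 2) array of pixel coordinates (x, y).
    ltrb:   (N, 4) array of predicted distances (left, top, right, bottom)
            from each pixel to the four sides of its instance's box.
    """
    x, y = pixels[:, 0], pixels[:, 1]
    l, t, r, b = ltrb[:, 0], ltrb[:, 1], ltrb[:, 2], ltrb[:, 3]
    # Subtract left/top distances and add right/bottom distances
    # to recover the box corners relative to the pixel location.
    return np.stack([x - l, y - t, x + r, y + b], axis=1)

# One pixel at (50, 40) predicting distances l=10, t=5, r=20, b=15
boxes = decode_boxes(np.array([[50.0, 40.0]]),
                     np.array([[10.0, 5.0, 20.0, 15.0]]))
# → box [40, 35, 70, 55]
```

Because every foreground pixel regresses a full box this way, overlapping instances can be separated even when their semantic-segmentation masks touch, which is the overlap-handling role the abstract assigns to this branch.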
