Paper Title

ASIST: Annotation-free synthetic instance segmentation and tracking for microscope video analysis

Paper Authors

Quan Liu, Isabella M. Gaeta, Mengyang Zhao, Ruining Deng, Aadarsh Jha, Bryan A. Millis, Anita Mahadevan-Jansen, Matthew J. Tyska, Yuankai Huo

Paper Abstract

Instance object segmentation and tracking provide comprehensive quantification of objects across microscope videos. Recent single-stage pixel-embedding based deep learning approaches have shown superior performance compared with "segment-then-associate" two-stage solutions. However, one major limitation of applying a supervised pixel-embedding based method to microscope videos is the resource-intensive manual labeling, which involves tracing hundreds of overlapping objects and their temporal associations across video frames. Inspired by recent generative adversarial network (GAN) based annotation-free image segmentation, we propose a novel annotation-free synthetic instance segmentation and tracking (ASIST) algorithm for analyzing microscope videos of sub-cellular microvilli. The contributions of this paper are three-fold: (1) proposing a new annotation-free video analysis paradigm; (2) aggregating embedding based instance segmentation and tracking with annotation-free synthetic learning into a holistic framework; and (3) to the best of our knowledge, this is the first study to investigate microvilli instance segmentation and tracking using embedding based deep learning. In the experimental results, the proposed annotation-free method achieved superior performance compared with supervised learning.
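The abstract refers to pixel-embedding based instance segmentation, in which a network predicts a per-pixel embedding vector so that pixels of the same object map to nearby points in embedding space, and instances are recovered by clustering those embeddings. The sketch below illustrates only that clustering step; it is not the authors' ASIST pipeline, and the embedding map, foreground mask, embedding dimension, and mean-shift bandwidth are illustrative assumptions.

```python
# Minimal sketch (not the ASIST method itself): group per-pixel embeddings
# into instance masks by clustering foreground embeddings with mean shift.
import numpy as np
from sklearn.cluster import MeanShift

def embeddings_to_instances(embeddings, foreground, bandwidth=0.5):
    """embeddings: (H, W, D) per-pixel embedding map predicted by a network.
    foreground:  (H, W) boolean mask of pixels belonging to any object.
    Returns an (H, W) integer label map (0 = background, 1..K = instances)."""
    h, w, _ = embeddings.shape
    labels = np.zeros((h, w), dtype=np.int32)
    fg = embeddings[foreground]            # (N, D) embeddings of foreground pixels
    if fg.shape[0] == 0:
        return labels
    clusters = MeanShift(bandwidth=bandwidth).fit_predict(fg)
    labels[foreground] = clusters + 1      # instance IDs start at 1
    return labels

# Toy usage: two well-separated embedding clusters become two instances.
emb = np.zeros((4, 4, 2))
emb[:2, :2] = [1.0, 1.0]
emb[2:, 2:] = [-1.0, -1.0]
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True
mask[2:, 2:] = True
print(embeddings_to_instances(emb, mask))
```

For tracking, the same idea extends across frames: embeddings of the same object in consecutive frames are kept close, so clustering over a short video clip associates instances temporally as well as spatially.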
