Paper Title

Learning to Reconstruct Confocal Microscopy Stacks from Single Light Field Images

Authors

Josue Page, Federico Saltarin, Yury Belyaev, Ruth Lyck, Paolo Favaro

Abstract

We present a novel deep learning approach to reconstruct confocal microscopy stacks from single light field images. To perform the reconstruction, we introduce the LFMNet, a novel neural network architecture inspired by the U-Net design. It is able to reconstruct with high accuracy a 112×112×57.6 $\mu m^3$ volume (1287×1287×64 voxels) in 50 ms given a single light field image of 1287×1287 pixels, thus dramatically reducing the time for confocal scanning of assays at the same volumetric resolution by 720-fold and the required storage by 64-fold. To prove the applicability in life sciences, our approach is evaluated both quantitatively and qualitatively on mouse brain slices with fluorescently labelled blood vessels. Because of the drastic reduction in scan time and storage space, our setup and method are directly applicable to real-time in vivo 3D microscopy. We provide analysis of the optical design, of the network architecture and of our training procedure to optimally reconstruct volumes for a given target depth range. To train our network, we built a data set of 362 light field images of mouse brain blood vessels and the corresponding aligned set of 3D confocal scans, which we use as ground truth. The data set will be made available for research purposes.
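The abstract does not spell out LFMNet's exact layers, so the sketch below is only a minimal, assumed illustration of the general idea it describes: a 2D U-Net-style encoder-decoder that takes a single-channel light field image and emits one output channel per reconstructed depth slice (64 here, matching the 1287×1287×64-voxel stack). All class, variable, and layer-size choices are hypothetical and not the authors' architecture.

```python
# Minimal, illustrative sketch (NOT the authors' LFMNet): a small U-Net-style
# encoder-decoder mapping a single-channel light field image (H x W) to a
# 64-channel output, where each channel is read as one depth slice of the stack.
import torch
import torch.nn as nn

class TinyLFToStack(nn.Module):
    def __init__(self, depth_slices: int = 64, base: int = 32):
        super().__init__()
        def block(c_in, c_out):
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
            )
        self.enc1 = block(1, base)
        self.enc2 = block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = block(base * 2, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = block(base * 2, base)
        # One output channel per reconstructed depth slice.
        self.head = nn.Conv2d(base, depth_slices, 1)

    def forward(self, x):
        e1 = self.enc1(x)                      # (B, base,   H,   W)
        e2 = self.enc2(self.pool(e1))          # (B, 2*base, H/2, W/2)
        b = self.bottleneck(self.pool(e2))     # (B, 4*base, H/4, W/4)
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                   # (B, depth_slices, H, W)

# Usage example: a 256x256 crop of a light field image -> a 64-slice stack.
model = TinyLFToStack()
lf_crop = torch.randn(1, 1, 256, 256)
stack = model(lf_crop)                         # shape (1, 64, 256, 256)
```

Treating depth as output channels of a 2D network is one common way to pose single-image-to-stack reconstruction; whether LFMNet does exactly this is not stated in the abstract, and the crop size here is chosen only so the pooling/upsampling path divides evenly.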
