Paper Title

Learning non-rigid surface reconstruction from spatio-temporal image patches

Paper Authors

Pedone, Matteo; Mostafa, Abdelrahman; Heikkilä, Janne

Abstract

We present a method to reconstruct a dense spatio-temporal depth map of a non-rigidly deformable object directly from a video sequence. The estimation of depth is performed locally on spatio-temporal patches of the video, and then the full depth video of the entire shape is recovered by combining them together. Since the geometric complexity of a local spatio-temporal patch of a deforming non-rigid object is often simple enough to be faithfully represented with a parametric model, we artificially generate a database of small deforming rectangular meshes rendered with different material properties and light conditions, along with their corresponding depth videos, and use such data to train a convolutional neural network. We tested our method on both synthetic and Kinect data and experimentally observed that the reconstruction error is significantly lower than the one obtained using other approaches like conventional non-rigid structure from motion.
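
To make the patch-based pipeline described above concrete, here is a minimal sketch of the idea: estimate depth locally on overlapping spatio-temporal patches with a CNN and then average the overlapping predictions back into a full depth video. The network architecture (`PatchDepthNet`), the patch and stride sizes, and the averaging-based blending are illustrative assumptions, not the paper's actual model or training setup.

```python
import torch
import torch.nn as nn

class PatchDepthNet(nn.Module):
    """Illustrative stand-in CNN: maps a grayscale spatio-temporal patch
    (1, T, H, W) to a depth patch of the same size. The paper's real
    architecture is not specified here."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):       # x: (B, 1, T, H, W)
        return self.net(x)      # depth: (B, 1, T, H, W)


def reconstruct_depth_video(video, model, patch=(8, 32, 32), stride=(4, 16, 16)):
    """Predict depth on overlapping spatio-temporal patches of a video
    tensor (T, H, W) and blend overlaps by simple averaging."""
    T, H, W = video.shape
    pt, ph, pw = patch
    st, sh, sw = stride
    depth_sum = torch.zeros_like(video)
    weight = torch.zeros_like(video)
    model.eval()
    with torch.no_grad():
        for t0 in range(0, T - pt + 1, st):
            for y0 in range(0, H - ph + 1, sh):
                for x0 in range(0, W - pw + 1, sw):
                    patch_in = video[t0:t0+pt, y0:y0+ph, x0:x0+pw]
                    pred = model(patch_in[None, None])[0, 0]
                    depth_sum[t0:t0+pt, y0:y0+ph, x0:x0+pw] += pred
                    weight[t0:t0+pt, y0:y0+ph, x0:x0+pw] += 1.0
    return depth_sum / weight.clamp(min=1.0)


if __name__ == "__main__":
    video = torch.rand(16, 64, 64)            # synthetic grayscale video (T, H, W)
    depth = reconstruct_depth_video(video, PatchDepthNet())
    print(depth.shape)                         # torch.Size([16, 64, 64])
```

In the paper the per-patch network is trained on synthetically rendered deforming rectangular meshes with varying material and lighting; the sketch above only illustrates how such a locally trained model could be swept over a video and its overlapping outputs merged.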
