Title
ReFu: Refine and Fuse the Unobserved View for Detail-Preserving Single-Image 3D Human Reconstruction
Authors
Abstract
Single-image 3D human reconstruction aims to reconstruct the 3D textured surface of the human body given a single image. While implicit function-based methods have recently achieved reasonable reconstruction performance, they still suffer from degraded surface geometry and texture quality from unobserved views. In response, to generate a realistic textured surface, we propose ReFu, a coarse-to-fine approach that refines the projected backside-view image and fuses the refined image to predict the final human body. To suppress the diffused occupancy that causes noise in projected images and reconstructed meshes, we propose to train the occupancy probability by simultaneously utilizing 2D and 3D supervision with occupancy-based volume rendering. We also introduce a refinement architecture that generates detail-preserving backside-view images with front-to-back warping. Extensive experiments demonstrate that our method achieves state-of-the-art performance in 3D human reconstruction from a single image, showing enhanced geometry and texture quality from unobserved views.
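The occupancy-based volume rendering mentioned in the abstract can be illustrated with a minimal sketch. This is an illustrative reading, not the paper's implementation: it treats each per-sample occupancy probability along a camera ray as an alpha value and composites colors front to back, so the accumulated alpha can be supervised with 2D masks while the occupancy field itself is supervised with 3D ground truth. The function name and array shapes are assumptions for illustration.

```python
import numpy as np

def composite_occupancy(occ, rgb):
    """Alpha-composite occupancy probabilities along one camera ray.

    occ: (N,) occupancy probabilities in [0, 1], samples ordered near to far.
    rgb: (N, 3) colors predicted at those samples.

    Treating occupancy as per-sample alpha, the transmittance reaching
    sample i is the product of (1 - occ_j) for all j < i, and the
    compositing weight is w_i = T_i * occ_i. Returns the rendered color
    and the accumulated alpha (usable as a 2D silhouette prediction).
    """
    # Transmittance T_i before each sample (T_0 = 1 for the first sample).
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - occ[:-1])))
    weights = trans * occ                      # per-sample contribution
    color = (weights[:, None] * rgb).sum(axis=0)
    alpha = weights.sum()                      # rendered silhouette value
    return color, alpha
```

Under this formulation, a rendered-alpha loss against foreground masks provides the 2D supervision, while the occupancy values themselves can be compared to 3D occupancy labels, matching the joint supervision the abstract describes.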