Paper Title
UV Volumes for Real-time Rendering of Editable Free-view Human Performance
Paper Authors
Paper Abstract
Neural volume rendering enables photo-realistic renderings of a human performer in free-view, a critical task in immersive VR/AR applications. However, the practice is severely limited by the high computational cost of the rendering process. To solve this problem, we propose UV Volumes, a new approach that can render an editable free-view video of a human performer in real time. It separates the high-frequency (i.e., non-smooth) human appearance from the 3D volume and encodes it into 2D neural texture stacks (NTS). The smooth UV volumes allow much smaller and shallower neural networks to obtain densities and texture coordinates in 3D, while the detailed appearance is captured in the 2D NTS. For editability, the mapping between the parameterized human model and the smooth texture coordinates enables better generalization to novel poses and shapes. Furthermore, the use of NTS enables interesting applications, e.g., retexturing. Extensive experiments on the CMU Panoptic, ZJU Mocap, and H36M datasets show that our model can render 960 × 540 images at 30 FPS on average, with photo-realism comparable to state-of-the-art methods. The project and supplementary materials are available at https://fanegg.github.io/UV-Volumes.
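For concreteness, below is a minimal NumPy sketch of the pipeline the abstract describes: a small MLP maps 3D ray samples to a density and smooth UV coordinates, appearance features are bilinearly sampled from a 2D neural texture stack (NTS) at those UVs and decoded to color, and the samples are alpha-composited along the ray as in standard volume rendering. All function names, network sizes, and the NTS shape here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mlp(x, weights):
    """Tiny ReLU MLP: `weights` is a list of (W, b) layers; last layer is linear."""
    for W, b in weights[:-1]:
        x = np.maximum(x @ W + b, 0.0)
    W, b = weights[-1]
    return x @ W + b

def sample_nts(nts, uv):
    """Bilinearly sample an (H, W, C) neural texture stack at uv in [0, 1]^2."""
    H, W, _ = nts.shape
    x, y = uv[:, 0] * (W - 1), uv[:, 1] * (H - 1)
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1, y1 = np.clip(x0 + 1, 0, W - 1), np.clip(y0 + 1, 0, H - 1)
    wx, wy = (x - x0)[:, None], (y - y0)[:, None]
    return ((1 - wx) * (1 - wy) * nts[y0, x0] + wx * (1 - wy) * nts[y0, x1]
            + (1 - wx) * wy * nts[y1, x0] + wx * wy * nts[y1, x1])

def render_ray(pts, deltas, vol_weights, rgb_weights, nts):
    """Composite one ray: pts (S, 3) are 3D samples, deltas (S,) segment lengths."""
    out = mlp(pts, vol_weights)                # (S, 3): density logit + raw uv
    sigma = np.maximum(out[:, 0], 0.0)         # non-negative density
    uv = 1.0 / (1.0 + np.exp(-out[:, 1:3]))    # squash UVs into [0, 1]^2
    feats = sample_nts(nts, uv)                # (S, C) appearance features from the NTS
    rgb = 1.0 / (1.0 + np.exp(-mlp(feats, rgb_weights)))  # (S, 3) decoded colors
    alpha = 1.0 - np.exp(-sigma * deltas)      # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # transmittance
    w = alpha * trans                          # compositing weights
    return (w[:, None] * rgb).sum(axis=0)      # final pixel color

# Toy usage with random weights (shape-checking only; all sizes hypothetical):
rng = np.random.default_rng(0)
vol_w = [(0.1 * rng.normal(size=(3, 64)), np.zeros(64)),
         (0.1 * rng.normal(size=(64, 3)), np.zeros(3))]
rgb_w = [(0.1 * rng.normal(size=(8, 3)), np.zeros(3))]
nts = rng.normal(size=(256, 256, 8))
pts = rng.uniform(-1.0, 1.0, size=(64, 3))
color = render_ray(pts, np.full(64, 0.02), vol_w, rgb_w, nts)
```

The point of this split is visible in the sketch: the per-sample MLP only has to predict a density and two smooth UV coordinates, so it can stay small and fast, while all high-frequency appearance detail lives in the 2D texture stack and is retrieved by cheap bilinear lookups.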