Paper Title

Deep Graph Learning for Spatially-Varying Indoor Lighting Prediction

Authors

Jiayang Bai, Jie Guo, Chenchen Wan, Zhenyu Chen, Zhen He, Shan Yang, Piaopiao Yu, Yan Zhang, Yanwen Guo

Abstract

Lighting prediction from a single image is becoming increasingly important in many vision and augmented reality (AR) applications in which shading and shadow consistency between virtual and real objects should be guaranteed. However, this is a notoriously ill-posed problem, especially for indoor scenes, because of the complexity of indoor luminaires and the limited information available in a 2D image. In this paper, we propose a graph learning-based framework for indoor lighting estimation. At its core is a new lighting model (dubbed DSGLight) based on depth-augmented Spherical Gaussians (SGs), together with a Graph Convolutional Network (GCN) that infers this lighting representation from a single LDR image with a limited field of view. Our lighting model places 128 evenly distributed SGs over the indoor panorama, with each SG encoding the lighting and the depth around its node. The proposed GCN then learns the mapping from the input image to DSGLight. Compared with existing lighting models, DSGLight encodes both direct lighting and indirect environmental lighting more faithfully and compactly, and it makes network training and inference more stable. The estimated depth distribution enables temporally stable shading and shadows under spatially-varying lighting. Extensive experiments show that our method clearly outperforms existing methods both qualitatively and quantitatively.
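To make the representation concrete, below is a minimal Python sketch of a depth-augmented SG lighting model in the spirit of DSGLight. The abstract only specifies 128 evenly distributed SG nodes that each encode lighting and depth; the node layout (a Fibonacci sphere), the fixed sharpness value, and all names here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

NUM_NODES = 128  # number of SG nodes, as stated in the abstract

def fibonacci_sphere(n):
    """Distribute n roughly uniform unit vectors over the sphere.
    One common choice of even layout; the paper's exact node
    placement is not specified in the abstract."""
    i = np.arange(n)
    golden_angle = np.pi * (3.0 - np.sqrt(5.0))
    y = 1.0 - 2.0 * (i + 0.5) / n          # y in (-1, 1)
    r = np.sqrt(1.0 - y * y)               # radius at height y
    theta = golden_angle * i
    return np.stack([r * np.cos(theta), y, r * np.sin(theta)], axis=-1)

# Hypothetical per-node parameters a network would regress:
axes = fibonacci_sphere(NUM_NODES)          # fixed lobe directions xi_k
sharpness = np.full(NUM_NODES, 30.0)        # lambda_k (assumed fixed here)
amplitude = np.random.rand(NUM_NODES, 3)    # RGB lobe amplitudes mu_k
depth = np.random.rand(NUM_NODES) * 10.0    # per-node scene depth (meters)

def eval_radiance(view_dir):
    """Incident radiance from direction view_dir as a sum of SG lobes:
    L(v) = sum_k mu_k * exp(lambda_k * (dot(v, xi_k) - 1))."""
    v = view_dir / np.linalg.norm(view_dir)
    weights = np.exp(sharpness * (axes @ v - 1.0))  # shape (128,)
    return weights @ amplitude                      # RGB radiance

print(eval_radiance(np.array([0.0, 1.0, 0.0])))  # radiance from straight up
```

SGs are attractive for this task because products and integrals of SG lobes have closed forms, so shading a virtual object does not require sampling an environment map; the per-node depths additionally anchor each lobe in 3D, which is presumably what lets the lighting be re-evaluated at different insertion points for spatially-varying effects.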
