Paper Title

Universal Photometric Stereo Network using Global Lighting Contexts

Paper Authors

Ikehata, Satoshi

Paper Abstract

This paper tackles a new photometric stereo task, named universal photometric stereo. Unlike existing tasks, which assume specific physical lighting models and hence drastically limit their usability, a solution to this task is expected to work for objects with diverse shapes and materials under arbitrary lighting variations without assuming any specific model. To solve this extremely challenging task, we present a purely data-driven method, which eliminates the prior assumption on lighting by replacing the recovery of physical lighting parameters with the extraction of a generic lighting representation, named global lighting contexts. We use these contexts like the lighting parameters in a calibrated photometric stereo network to recover surface normal vectors pixel-wise. To adapt our network to a wide variety of shapes, materials, and lightings, it is trained on a new synthetic dataset that simulates the appearance of objects in the wild. We compare our method with other state-of-the-art uncalibrated photometric stereo methods on our test data to demonstrate its significance.
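The abstract describes extracting a generic "global lighting context" from the input image stack and then consuming it, in place of physical lighting parameters, in a per-pixel normal regressor. The sketch below is a minimal, hypothetical illustration of that pipeline in PyTorch, not the authors' implementation: the module names (GlobalLightingEncoder, PixelwiseNormalRegressor), layer sizes, and the use of max pooling over the lighting dimension are all assumptions made for illustration.

```python
# Illustrative sketch only (assumed architecture, not the paper's network):
# an encoder pools each image into a compact lighting-context vector, and a
# per-pixel regressor fuses a pixel's observations with those contexts to
# predict a unit surface normal, the way a calibrated network would use
# known lighting parameters.
import torch
import torch.nn as nn

class GlobalLightingEncoder(nn.Module):
    """Maps each input image to a compact lighting-context vector."""
    def __init__(self, ctx_dim: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> one vector per image
        )
        self.proj = nn.Linear(64, ctx_dim)

    def forward(self, images):  # images: (B, F, 3, H, W), F images per scene
        B, F, C, H, W = images.shape
        feats = self.features(images.view(B * F, C, H, W)).flatten(1)
        return self.proj(feats).view(B, F, -1)  # (B, F, ctx_dim)

class PixelwiseNormalRegressor(nn.Module):
    """Fuses per-pixel RGB observations with lighting contexts into a unit normal."""
    def __init__(self, ctx_dim: int = 64, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + ctx_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.head = nn.Linear(hidden, 3)

    def forward(self, pixel_obs, contexts):  # (B, F, 3), (B, F, ctx_dim)
        fused = self.mlp(torch.cat([pixel_obs, contexts], dim=-1))
        pooled = fused.max(dim=1).values  # order-invariant pooling over F lightings
        return nn.functional.normalize(self.head(pooled), dim=-1)  # unit normal

# Toy usage: 2 scenes, 8 images each, 32x32 crops; predict the normal at pixel (16, 16).
images = torch.rand(2, 8, 3, 32, 32)
contexts = GlobalLightingEncoder()(images)                 # (2, 8, 64)
pixel_obs = images[:, :, :, 16, 16].contiguous()           # (2, 8, 3)
normals = PixelwiseNormalRegressor()(pixel_obs, contexts)  # (2, 3)
print(normals.shape)
```

In the actual method the lighting representation is learned jointly with the normal predictor on the synthetic training data; this sketch only shows the data flow implied by the abstract.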
