Relighting4D: Neural Relightable Human from Videos
ECCV 2022


Human relighting is a highly desirable yet challenging task. Existing works either require expensive one-light-at-a-time (OLAT) data captured with a light stage or cannot freely change the viewpoint of the rendered body. In this work, we propose a principled framework, Relighting4D, that enables free-viewpoint relighting from only human videos under unknown illumination. Our key insight is that the space-time varying geometry and reflectance of the human body can be decomposed into a set of neural fields of normal, occlusion, diffuse, and specular maps. These neural fields are further integrated into reflectance-aware physically based rendering, where each vertex in the neural field absorbs and reflects light from the environment. The whole framework can be learned from videos in a self-supervised manner, with physically informed priors designed for regularization. Extensive experiments on both real and synthetic datasets demonstrate that our framework is capable of relighting dynamic human actors from free viewpoints.



Given an input video frame at time step t, Relighting4D represents the human as a neural field conditioned on latent vectors anchored to a deformable human model. The value of the neural field at any 3D point x and time t is taken as a latent feature and fed into multilayer perceptrons to obtain geometry and reflectance, namely normal, occlusion, diffuse, and specular maps. Finally, a physically based renderer is employed to render the human subject under the novel illumination specified by the input light probe.
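To make the rendering step concrete, the sketch below shades a single surface point from the four predicted quantities (normal, occlusion/visibility, diffuse albedo, specular strength) by summing contributions over discretized environment-light directions. This is an illustrative simplification with hypothetical names (`render_vertex`, a Lambertian term plus a uniform specular term), not the paper's exact BRDF model:

```python
import numpy as np

def render_vertex(normal, albedo, specular, visibility, light_dirs, light_rgb):
    """Shade one surface point under discretized environment lighting.

    Illustrative sketch: outgoing radiance is accumulated over light
    directions, each weighted by per-direction visibility (occlusion)
    and the cosine foreshortening term, i.e.
        L_o = sum_i  BRDF * L_i * V_i * max(0, n . w_i)
    """
    n = normal / np.linalg.norm(normal)
    cos = np.clip(light_dirs @ n, 0.0, None)        # (L,) foreshortening
    diffuse = albedo[None, :] / np.pi               # Lambertian diffuse BRDF
    brdf = diffuse + specular                       # crude uniform specular add
    radiance = (brdf * light_rgb * (visibility * cos)[:, None]).sum(axis=0)
    return radiance
```

For a fully visible point lit head-on by a single unit white light with white albedo and no specular, this returns the Lambertian value 1/π per channel.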



We train our model using only human videos in a self-supervised manner. During inference, the trained model takes a target HDR light map as input and renders the dynamic human actor from novel viewpoints.
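A common preprocessing step for this kind of image-based relighting is to discretize the target HDR environment map into a set of directional lights. The helper below (a hypothetical sketch, not the paper's exact sampling scheme) converts an equirectangular map of shape (H, W, 3) into direction vectors and solid-angle-weighted radiances:

```python
import numpy as np

def envmap_to_lights(hdr, n_theta=8, n_phi=16):
    """Discretize an equirectangular HDR light map into directional lights.

    Each sample direction is taken at a cell midpoint on the sphere; its
    radiance is the nearest map pixel weighted by the cell's solid angle
    sin(theta) * d_theta * d_phi, so the weights sum to ~4*pi over the sphere.
    """
    h, w, _ = hdr.shape
    dirs, rgbs = [], []
    for i in range(n_theta):
        for j in range(n_phi):
            theta = (i + 0.5) / n_theta * np.pi        # polar angle
            phi = (j + 0.5) / n_phi * 2.0 * np.pi      # azimuth
            d = np.array([np.sin(theta) * np.cos(phi),
                          np.sin(theta) * np.sin(phi),
                          np.cos(theta)])
            # nearest-pixel lookup in the equirectangular map
            px = min(int(phi / (2.0 * np.pi) * w), w - 1)
            py = min(int(theta / np.pi * h), h - 1)
            dw = np.sin(theta) * (np.pi / n_theta) * (2.0 * np.pi / n_phi)
            dirs.append(d)
            rgbs.append(hdr[py, px] * dw)
    return np.stack(dirs), np.stack(rgbs)
```

The resulting `light_dirs` and `light_rgb` arrays can then drive any per-point shading loop over the predicted geometry and reflectance fields.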



This work is supported by the National Research Foundation, Singapore under its AI Singapore Programme, NTU NAP, MOE AcRF Tier 2 (T2EP20221-0033), and under the RIE2020 Industry Alignment Fund - Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contribution from the industry partner(s).

Relighting4D is implemented on top of the NeuralBody codebase.
Thanks to Fangzhou Hong, Jiawei Ren and Tong Wu for proofreading the paper.
The website template is borrowed from Mip-NeRF.