URHand: Universal Relightable Hands
Oral Presentation
CVPR 2024

Abstract

URHand (a.k.a. Your Hand). Our model is a high-fidelity Universal prior for Relightable Hands built upon light-stage data. It generalizes to novel viewpoints, poses, identities, and illuminations, enabling quick personalization from a phone scan.



Framework

Our model takes as input a mean texture, a hand pose, and a coarse mesh for each identity. The physical branch focuses on geometry refinement and provides accurate shading features to the neural branch. The core of the neural branch is the linear lighting model (LLM), which takes as input the physics-inspired shading features from the physical branch. The neural branch learns to predict gain and bias maps over the mean texture. We leverage a differentiable rasterizer for rendering and minimize the losses of both branches against ground-truth images. Here sg(·) denotes the stop-gradient operation.
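The gain-and-bias formulation above can be sketched in a few lines. This is a minimal illustration, not the actual implementation: the function name and shapes are hypothetical, and in the real model the gain and bias maps are predicted by the neural branch from shading features rather than given directly.

```python
import numpy as np

def apply_gain_bias(mean_texture, gain, bias):
    """Hypothetical sketch: modulate the identity's mean texture with
    predicted per-texel gain and bias maps (all arrays of shape (H, W, 3)).
    The relit texture is gain * mean_texture + bias."""
    return gain * mean_texture + bias

# Toy example with constant maps.
mean = np.full((4, 4, 3), 0.5)   # gray mean texture
gain = np.full((4, 4, 3), 2.0)   # brighten
bias = np.full((4, 4, 3), 0.1)   # small offset
relit = apply_gain_bias(mean, gain, bias)
```

Predicting a multiplicative gain and additive bias over a shared mean texture, rather than the texture itself, lets the network focus on illumination-dependent changes while the identity-specific appearance stays in the mean.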



Rendering with Directional Light

Physics-inspired refinement and shading features enable high-quality geometry and specularity:
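As a rough illustration of what "physics-inspired shading features" can mean, the sketch below computes a Lambertian diffuse term and a Blinn-Phong-style specular term per vertex. This is an assumption for exposition: the paper's actual feature set and specular model may differ, and the function and parameter names here are hypothetical.

```python
import numpy as np

def shading_features(normals, light_dir, view_dir, shininess=32.0):
    """Hypothetical per-point shading features.
    normals: (N, 3) unit surface normals.
    light_dir, view_dir: (3,) unit vectors toward the light and camera."""
    # Lambertian cosine term, clamped to the upper hemisphere.
    diffuse = np.clip(normals @ light_dir, 0.0, None)
    # Blinn-Phong half-vector specular lobe.
    half = light_dir + view_dir
    half = half / np.linalg.norm(half)
    specular = np.clip(normals @ half, 0.0, None) ** shininess
    return diffuse, specular

# A normal facing the (shared) light/view direction gets full response.
n = np.array([[0.0, 0.0, 1.0]])
d, s = shading_features(n, np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 1.0]))
```

Feeding such physically grounded terms to the neural branch, instead of raw light directions, gives the network features that already encode geometry-light interaction.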



We conduct detailed ablation studies on the effectiveness of the physical shading features and geometry:



Rendering with Environment Map

The proposed spatially varying linear lighting model achieves generalization to arbitrary illuminations represented by environment maps while training on OLAT sequences only.
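The generalization from OLAT (one-light-at-a-time) training to arbitrary environment maps rests on the linearity of light transport: any illumination is a non-negative weighted sum of point lights, so renders under individual lights can be combined linearly. A minimal sketch of that principle, with hypothetical names and shapes:

```python
import numpy as np

def relight_with_envmap(olat_images, env_weights):
    """Hypothetical sketch of linear relighting.
    olat_images: (L, H, W, 3) renders, one per light-stage light.
    env_weights: (L,) per-light intensities sampled from an environment map.
    By linearity of light transport, the environment render is the
    weighted sum of the OLAT renders."""
    return np.tensordot(env_weights, olat_images, axes=1)

# Toy example: two OLAT frames with uniform weights.
olat = np.stack([np.full((2, 2, 3), 1.0), np.full((2, 2, 3), 2.0)])
img = relight_with_envmap(olat, np.array([1.0, 1.0]))
```

A model that is itself linear in the lighting input inherits this property, which is why training on OLAT sequences alone can suffice for environment-map relighting.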



The different design choices of the linear lighting model also impact the generalization from OLAT to environment maps:



Quick Personalization from a Phone Scan

Our universal relightable prior enables quick adaptation of a relightable personalized hand model from a casual phone scan, which is ready to be photorealistically rendered with arbitrary illuminations:



Full Presentation

Citation

Acknowledgements

The website template is borrowed from Mip-NeRF