Poster
Neural Multi-View Uncalibrated Photometric Stereo without Photometric Stereo Cues
Xu Cao · Takafumi Taketomi
We propose a neural inverse rendering approach to reconstruct 3D shape, spatially varying BRDF, and lighting parameters from multi-view images captured under varying lighting conditions. Unlike conventional multi-view photometric stereo (MVPS) methods, our approach does not rely on geometric, reflectance, or lighting cues derived from single-view photometric stereo. Instead, we jointly optimize all scene properties end-to-end to directly reproduce raw image observations. We represent both geometry and SVBRDF as neural implicit fields and incorporate shadow-aware volume rendering with physics-based shading. Experiments show that our method outperforms MVPS methods guided by high-quality normal maps and enables photorealistic rendering from novel viewpoints under novel lighting conditions. Our method reconstructs intricate surface details for objects with challenging reflectance properties using view-unaligned OLAT images, which conventional MVPS methods cannot handle.
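To give a rough sense of the shadow-aware, physics-based shading that such a renderer reproduces, here is a minimal sketch of single-point Lambertian shading with a visibility (shadow) term. The function name, the single-bounce Lambertian model, and the scalar visibility factor are illustrative assumptions, not the paper's actual SVBRDF or volume-rendering formulation:

```python
import numpy as np

def shade(albedo, normal, light_dir, light_intensity, visibility):
    """Single-point Lambertian shading with a shadow (visibility) term.

    albedo: RGB reflectance in [0, 1] (illustrative stand-in for an SVBRDF)
    normal, light_dir: unit 3-vectors
    light_intensity: scalar light strength
    visibility: 1.0 if the light is unoccluded, 0.0 if fully shadowed
    """
    n_dot_l = max(float(np.dot(normal, light_dir)), 0.0)  # clamp back-facing
    return albedo * light_intensity * n_dot_l * visibility

# Example: light hitting the surface head-on, fully visible
rgb = shade(np.array([0.8, 0.5, 0.3]),
            np.array([0.0, 0.0, 1.0]),
            np.array([0.0, 0.0, 1.0]),
            1.0, 1.0)
```

In an inverse-rendering setup, albedo, normal, and lighting would be unknowns optimized so that such rendered values match the observed pixels; the one-light-at-a-time (OLAT) captures mentioned above correspond to evaluating this model under a single light per image.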