Poster
Learning Neural Scene Representation from iToF Imaging
Wenjie Chang · Hanzhi Chang · Yueyi Zhang · Wenfei Yang · Tianzhu Zhang
Indirect Time-of-Flight (iToF) cameras are popular for 3D perception because they are cost-effective and easy to deploy. They emit modulated infrared signals to illuminate the scene and process the received signals to generate amplitude and phase images. Depth is calculated from the phase using the modulation frequency. However, the obtained depth often suffers from noise caused by multi-path interference (MPI), low signal-to-noise ratio (SNR), and depth wrapping. Building on recent advancements in neural scene representations, which have shown great potential in 3D modeling from multi-view RGB images, we propose leveraging this approach to reconstruct 3D representations from noisy iToF data. Our method exploits the multi-view consistency of amplitude and phase maps, aggregating information from all input views to generate an accurate scene representation. Considering the impact of infrared illumination, we propose a new rendering scheme for amplitude maps based on the signed distance function (SDF) and introduce a neural lighting function to model the appearance variations caused by active illumination. We also incorporate a phase-guided sampling strategy and a wrapping-aware phase-to-depth loss to utilize raw phase information and mitigate depth wrapping. Additionally, we add a noise-weight loss to prevent excessive smoothing of information across noisy multi-view measurements. Experiments conducted on synthetic and real-world datasets demonstrate that the proposed method outperforms state-of-the-art techniques.
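For context, the phase-to-depth relation referenced in the abstract, and the depth-wrapping ambiguity it introduces, can be sketched as below. This is the standard iToF imaging model rather than code from the paper; the modulation frequency and function names are illustrative assumptions.

```python
import numpy as np

C = 299_792_458.0  # speed of light (m/s)

def phase_to_depth(phase, f_mod):
    """Standard iToF conversion: depth = c * phase / (4 * pi * f_mod).

    `phase` is the measured phase shift in [0, 2*pi); any true depth beyond
    the unambiguous range c / (2 * f_mod) wraps back into it (depth wrapping).
    """
    return C * phase / (4.0 * np.pi * f_mod)

def depth_to_wrapped_phase(depth, f_mod):
    """Phase an ideal sensor would report for a given true depth."""
    return np.mod(4.0 * np.pi * f_mod * depth / C, 2.0 * np.pi)

if __name__ == "__main__":
    f_mod = 20e6                             # 20 MHz modulation (illustrative)
    r_max = C / (2.0 * f_mod)                # unambiguous range: ~7.49 m
    true_depth = np.array([2.0, 5.0, 9.0])   # 9.0 m exceeds r_max
    phase = depth_to_wrapped_phase(true_depth, f_mod)
    est_depth = phase_to_depth(phase, f_mod)
    # The 9.0 m point is recovered as 9.0 - r_max (about 1.51 m) due to wrapping.
    print(r_max, est_depth)
```

The wrapped estimate in the last line is exactly the ambiguity that the paper's wrapping-aware phase-to-depth loss and phase-guided sampling are described as addressing.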