Poster
Diff$^2$I2P: Differentiable Image-to-Point Cloud Registration with Diffusion Prior
Juncheng Mu · Chengwei Ren · Weixiang Zhang · Liang Pan · Xiao-Ping Zhang · Yue Gao
Abstract:
Learning cross-modal correspondences is essential for image-to-point cloud (I2P) registration. Existing methods mostly rely on metric learning to enforce feature alignment across modalities, disregarding the inherent modality gap between image and point data. Consequently, this paradigm struggles to ensure accurate cross-modal correspondences. Motivated by the cross-modal generation success of recent large diffusion models, we propose **Diff$^2$I2P**, a fully **Diff**erentiable **I2P** registration framework that leverages a novel and effective **Diff**usion prior to bridge the modality gap. Specifically, we propose a Control-Side Score Distillation (CSD) technique that distills knowledge from a depth-conditioned diffusion model to directly optimize the predicted transformation. However, gradients on the transformation cannot backpropagate to the cross-modal features, because correspondence retrieval and the PnP solver are non-differentiable. We therefore further propose a Deformable Correspondence Tuning (DCT) module that estimates correspondences in a differentiable way, followed by transformation estimation with a differentiable PnP solver. With these two designs, the diffusion model serves as a strong prior that guides cross-modal feature learning for image and point cloud, forming robust correspondences and significantly improving registration. Extensive experimental results demonstrate that **Diff$^2$I2P** consistently outperforms state-of-the-art I2P registration methods, achieving over 7% improvement in registration recall on the 7-Scenes benchmark. Moreover, **Diff$^2$I2P** exhibits robust and superior scene-agnostic registration performance.
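To make the differentiability argument concrete, below is a minimal PyTorch sketch of the general idea: soft (differentiable) correspondence retrieval followed by an unrolled, differentiable PnP-style pose refinement, so a loss on the estimated pose backpropagates to the image and point features. All names, shapes, and the small-angle/unrolled-gradient solver are illustrative assumptions, not the authors' CSD or DCT implementation.

```python
# Hypothetical stand-in for differentiable correspondences + PnP;
# not the paper's actual DCT module or solver.
import torch

def soft_correspondences(img_feats, pix_2d, pc_feats, pts_3d, tau=0.07):
    """Soft correspondence retrieval.
    img_feats: (N, C) pixel features, pix_2d: (N, 2) pixel coords
    pc_feats:  (M, C) point features, pts_3d: (M, 3) 3D points
    A softmax over feature similarities yields a soft 3D match per pixel,
    so gradients reach both feature sets (hard argmax would block them).
    """
    sim = img_feats @ pc_feats.T / tau          # (N, M) similarities
    w = torch.softmax(sim, dim=1)               # soft assignment
    return pix_2d, w @ pts_3d                   # (N, 2), (N, 3)

def reproject(pts_3d, pose6, K):
    """Project 3D points under a 6-DoF pose (small-angle rotation)."""
    rx, ry, rz, tx, ty, tz = pose6
    # First-order rotation keeps the sketch short; a real solver
    # would use a proper exponential map.
    R = torch.eye(3) + torch.stack([
        torch.stack([torch.zeros(()), -rz, ry]),
        torch.stack([rz, torch.zeros(()), -rx]),
        torch.stack([-ry, rx, torch.zeros(())])])
    cam = pts_3d @ R.T + torch.stack([tx, ty, tz])
    uv = cam @ K.T
    return uv[:, :2] / uv[:, 2:].clamp(min=1e-6)

def differentiable_pnp(pix_2d, pts_3d, K, iters=5, lr=1e-2):
    """Unrolled gradient-descent PnP: every step is a torch op, so the
    estimated pose stays differentiable w.r.t. the (soft) matches."""
    pose = torch.zeros(6, requires_grad=True)
    for _ in range(iters):
        resid = reproject(pts_3d, pose, K) - pix_2d
        loss = (resid ** 2).mean()
        (grad,) = torch.autograd.grad(loss, pose, create_graph=True)
        pose = pose - lr * grad                 # unrolled update
    return pose

# Toy end-to-end check: a loss on the pose reaches the features.
N, M, C = 32, 64, 16
img_feats = torch.randn(N, C, requires_grad=True)
pc_feats = torch.randn(M, C, requires_grad=True)
pix_2d, pts_3d = torch.rand(N, 2), torch.rand(M, 3) + 1.0
K = torch.eye(3)

uv, xyz = soft_correspondences(img_feats, pix_2d, pc_feats, pts_3d)
pose = differentiable_pnp(uv, xyz, K)
pose.sum().backward()   # stand-in for a CSD-style loss on the pose
assert img_feats.grad is not None and pc_feats.grad is not None
```

The final `backward()` call stands in for the diffusion-prior loss on the transformation: because both retrieval and pose estimation are composed of differentiable operations, its gradients flow all the way back to the cross-modal features.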