Poster
Genflow3D: Generative scene flow estimation and prediction on point cloud sequences
Hanlin Li · Wenming Weng · Yueyi Zhang · Zhiwei Xiong
Scene flow provides fundamental information about scene dynamics. Existing scene flow estimation methods typically rely on the correlation between only a single consecutive point cloud pair, limiting them to the instantaneous state of the scene and causing them to face challenges in real-world scenarios with factors such as occlusion, noise, and diverse motion of background and foreground. In this paper, we study joint sequential scene flow estimation and future scene flow prediction on point cloud sequences. The expanded sequential input introduces long-term and high-order motion information. We propose GenFlow3D, a recurrent neural network that integrates diffusion into the decoder to better unify the two tasks and enhance the ability to extract general motion patterns. A transformer-based denoising network is adopted to help capture useful information. Depending on the input point clouds, discriminative condition signals are generated to guide the diffusion decoder to switch among modes specific to scene flow estimation and prediction in a multi-scale manner. GenFlow3D is evaluated on the real-world datasets nuScenes and Argoverse 2, and demonstrates superior performance compared with existing methods.
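To make the described pipeline concrete, the following is a minimal sketch of the recurrent estimate-then-predict loop with a condition-guided iterative denoising decoder. All function names, the toy denoiser, and the choice of the frame displacement as the condition signal are illustrative assumptions, not the paper's actual implementation (which uses a learned transformer denoiser and multi-scale conditioning).

```python
import numpy as np

def denoise_step(flow_noisy, cond, t):
    # Hypothetical stand-in for the transformer-based denoiser: it pulls
    # the noisy flow toward a condition-dependent target. The real model
    # is a learned network; this is an assumption for illustration.
    return flow_noisy + 0.5 * (cond - flow_noisy)

def diffusion_decode(cond, steps=8, seed=0):
    # Iterative denoising of an initial Gaussian flow field, guided by a
    # condition signal that selects the mode (estimation vs. prediction).
    rng = np.random.default_rng(seed)
    flow = rng.standard_normal(cond.shape)
    for t in range(steps):
        flow = denoise_step(flow, cond, t)
    return flow

def genflow3d_sketch(clouds):
    # clouds: list of (N, 3) point arrays with corresponding points.
    # Estimate a flow for each consecutive pair, then predict one future
    # flow conditioned on the last estimate (a stand-in for the model's
    # recurrent state carrying long-term motion information).
    flows = []
    for p0, p1 in zip(clouds[:-1], clouds[1:]):
        cond = p1 - p0  # illustrative condition: per-point displacement
        flows.append(diffusion_decode(cond))
    pred = diffusion_decode(flows[-1])  # prediction mode reuses last flow
    return flows, pred
```

With constant per-frame motion, the predicted future flow matches the last estimated flow, illustrating how sequential input supplies the motion pattern that a single point cloud pair cannot.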