Poster

PriorMotion: Generative Class-Agnostic Motion Prediction with Raster-Vector Motion Field Priors

Kangan Qian · Jinyu Miao · Xinyu Jiao · Ziang Luo · Zheng Fu · Yining Shi · Yunlong Wang · Kun Jiang · Diange Yang


Abstract: Reliable spatial and motion perception is essential for safe autonomous navigation. Recently, class-agnostic motion prediction on bird's-eye view (BEV) cell grids derived from LiDAR point clouds has gained significant attention. However, existing frameworks typically perform cell classification and motion prediction on a per-pixel basis, neglecting important motion field priors such as rigidity constraints, temporal consistency, and future interactions between agents. These limitations lead to degraded performance, particularly in sparse and distant regions. To address these challenges, we introduce $\textbf{PriorMotion}$, a generative framework for class-agnostic motion prediction that integrates essential motion priors by modeling them as distributions within a structured latent space. Specifically, our method captures structured motion priors using raster-vector representations and employs a variational autoencoder with distinct dynamic and static components to learn future motion distributions in the latent space. Experiments on the nuScenes dataset demonstrate that $\textbf{PriorMotion}$ outperforms state-of-the-art methods on both traditional metrics and our newly proposed evaluation criteria. Notably, we achieve an improvement of approximately 15.24\% in accuracy for fast-moving objects, a 3.59\% increase in generalization, a 0.0163 reduction on the motion stability metric, and a 31.52\% reduction in prediction errors in distant regions. Further validation on FMCW LiDAR sensors confirms the robustness of our approach.
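To make the architecture the abstract describes more concrete, below is a minimal, hypothetical sketch of a VAE over BEV feature grids with distinct static and dynamic latent components, decoding to a per-cell future motion field. All module names, tensor shapes, loss form, and hyperparameters are illustrative assumptions rather than the authors' implementation; in particular, the paper's raster-vector prior encoding is elided, and the latents here are regularized toward a standard normal instead of the learned motion-prior distributions the paper proposes.

```python
# Minimal sketch (not the authors' code): a VAE with separate "static" and
# "dynamic" latent branches over a BEV grid, predicting per-cell (dx, dy)
# displacements for a short future horizon.
import torch
import torch.nn as nn


class LatentHead(nn.Module):
    """Maps BEV features to the mean/log-variance of a diagonal Gaussian."""

    def __init__(self, in_ch: int, latent_dim: int):
        super().__init__()
        self.mu = nn.Conv2d(in_ch, latent_dim, kernel_size=1)
        self.logvar = nn.Conv2d(in_ch, latent_dim, kernel_size=1)

    def forward(self, feat):
        mu, logvar = self.mu(feat), self.logvar(feat)
        # Reparameterization trick: z = mu + sigma * eps
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return z, mu, logvar


class PriorMotionSketch(nn.Module):
    """Toy VAE: shared BEV encoder, distinct static/dynamic latents,
    and a decoder producing a per-cell future motion field."""

    def __init__(self, in_ch=32, hidden=64, latent_dim=16, horizon=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
        )
        self.static_head = LatentHead(hidden, latent_dim)   # scene-layout prior
        self.dynamic_head = LatentHead(hidden, latent_dim)  # agent-motion prior
        self.decoder = nn.Sequential(
            nn.Conv2d(2 * latent_dim, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, 2 * horizon, 1),  # (dx, dy) per future step
        )

    def forward(self, bev):
        feat = self.encoder(bev)
        z_s, mu_s, lv_s = self.static_head(feat)
        z_d, mu_d, lv_d = self.dynamic_head(feat)
        motion = self.decoder(torch.cat([z_s, z_d], dim=1))
        # KL terms regularize both latents toward a standard normal prior
        # (a stand-in for the learned motion-prior distributions).
        kl = sum(-0.5 * torch.mean(1 + lv - mu.pow(2) - lv.exp())
                 for mu, lv in [(mu_s, lv_s), (mu_d, lv_d)])
        return motion, kl


bev = torch.randn(1, 32, 128, 128)         # batch of BEV feature grids
motion_field, kl_loss = PriorMotionSketch()(bev)
print(motion_field.shape)                  # torch.Size([1, 6, 128, 128])
```

Splitting the latent space in two lets static structure (drivable layout, rigid background) and dynamic behavior (agent trajectories) be modeled by separate distributions, which is the core idea behind the "distinct dynamic and static components" mentioned in the abstract.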
