Poster
SparseLaneSTP: Leveraging Spatio-Temporal Priors with Sparse Transformers for 3D Lane Detection
Maximilian Pittner · Joel Janai · Mario Faigle · Alexandru Condurache
3D lane detection has emerged as a critical challenge in autonomous driving, encompassing the identification and localization of lane markings and the 3D road surface. Conventional 3D methods detect lanes from dense Bird's-Eye-View (BEV) features, though erroneous transformations often result in a poor feature representation misaligned with the true 3D road surface. While recent sparse lane detectors have outperformed dense BEV approaches, they remain simple adaptations of the standard detection transformer, completely ignoring valuable lane-specific priors. Furthermore, existing methods fail to utilize historic lane observations, which have the potential to resolve ambiguities in situations of poor visibility. To address these challenges, we present SparseLaneSTP, a novel method that integrates both geometric properties of the lane structure and temporal information into a sparse lane transformer. It introduces a new lane-specific spatio-temporal attention mechanism, a continuous lane representation tailored for sparse architectures, as well as temporal regularization. Identifying the weaknesses of existing 3D lane datasets, we further introduce a precise and consistent 3D lane dataset using a simple yet effective auto-labeling strategy. Our experiments demonstrate the benefits of our contributions and show state-of-the-art performance across all detection and error metrics on existing 3D lane detection benchmarks as well as on our novel dataset. We aim to release code and data by the publication date.