Tutorial
Benchmarking Egocentric Visual-Inertial SLAM at City Scale
Shaohui Liu · Anusha Krishnan · Jakob Engel · Marc Pollefeys
Simultaneous localization and mapping (SLAM) is a fundamental technique with applications spanning robotics, spatial AI, and autonomous navigation. It addresses two tightly coupled challenges: localizing the device while incrementally building a coherent map of its surroundings. Localization, or positioning, involves estimating a 6 Degrees-of-Freedom (6-DoF) pose for each image in a continuous sequence, typically aided by other sensor data, while mapping involves constructing an evolving representation of the surrounding environment. This tutorial specifically addresses the task of accurate positioning for large-scale egocentric data using visual-inertial SLAM and visual-inertial odometry (VIO). It offers a comprehensive overview of the challenges that VIO/SLAM methods face on egocentric data and introduces a new dataset and benchmark that serves as a robust testbed for evaluating these systems. Drawing on the expertise of invited speakers, the tutorial explores the new benchmarking approach by analyzing failure cases, identifying limitations, and highlighting open problems in open-source academic VIO/SLAM systems. It also provides hands-on experience with the dataset and evaluation tools so that researchers can get started with their own SLAM evaluations.
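The tutorial's own dataset and evaluation tools are not reproduced here; as a rough illustration of what benchmarking positioning accuracy typically involves, the sketch below computes the widely used absolute trajectory error (ATE): estimated positions are aligned to ground truth with a Sim(3) Umeyama fit, and the RMSE of the position residuals is reported. The function names and the synthetic trajectory are illustrative assumptions, not part of the benchmark.

```python
import numpy as np


def umeyama_alignment(gt: np.ndarray, est: np.ndarray, with_scale: bool = True):
    """Fit a similarity transform (Umeyama, 1991) mapping est (N x 3) onto gt (N x 3).

    Returns scale s, rotation R (3 x 3), translation t (3,) such that
    s * R @ est[i] + t approximates gt[i].
    """
    mu_gt, mu_est = gt.mean(axis=0), est.mean(axis=0)
    gt_c, est_c = gt - mu_gt, est - mu_est

    # Cross-covariance between the two centered point sets.
    cov = gt_c.T @ est_c / gt.shape[0]
    U, D, Vt = np.linalg.svd(cov)

    # Reflection handling keeps R a proper rotation (det(R) = +1).
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0

    R = U @ S @ Vt
    var_est = (est_c ** 2).sum(axis=1).mean()
    s = np.trace(np.diag(D) @ S) / var_est if with_scale else 1.0
    t = mu_gt - s * R @ mu_est
    return s, R, t


def ate_rmse(gt: np.ndarray, est: np.ndarray, with_scale: bool = True) -> float:
    """Absolute trajectory error: RMSE over positions after alignment."""
    s, R, t = umeyama_alignment(gt, est, with_scale)
    est_aligned = (s * (R @ est.T)).T + t
    return float(np.sqrt(((gt - est_aligned) ** 2).sum(axis=1).mean()))


if __name__ == "__main__":
    # Toy example: a rotated, scaled, noisy copy of a synthetic trajectory.
    rng = np.random.default_rng(0)
    gt = np.cumsum(rng.normal(size=(500, 3)), axis=0)  # random-walk "ground truth"
    theta = 0.3
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                       [np.sin(theta),  np.cos(theta), 0.0],
                       [0.0, 0.0, 1.0]])
    est = 1.1 * (R_true @ gt.T).T + np.array([5.0, -2.0, 1.0])
    est += rng.normal(scale=0.05, size=est.shape)  # simulated estimation noise
    print(f"ATE RMSE after Sim(3) alignment: {ate_rmse(gt, est):.3f} m")
```

Sim(3) alignment is used here because monocular VIO trajectories are often only recovered up to scale; passing with_scale=False restricts the fit to a rigid SE(3) alignment, which is the usual choice when metric scale is observable.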