Multi-modal Localization and Mapping
Timothy D Barfoot, Luca Carlone, Daniel Cremers, Frank Dellaert, Ayoung Kim, Yan Xia, Niclas Zeller
Abstract
Multi-modal localization and mapping is an essential component of computer vision, with diverse applications in fields such as autonomous robotics, augmented reality, and beyond. This workshop aims to unite researchers, practitioners, and enthusiasts to explore the latest advancements, challenges, and innovations in multi-modal localization and mapping. By fusing information from multiple sensing modalities (e.g., cameras, IMUs, LiDAR, radar, and language), multi-modal approaches can significantly improve localization and mapping accuracy in complex environments.