

Workshop

Foundation Models for V2X-Based Cooperative Autonomous Driving

Walter Zimmer, Ross Greer, Max Ronecker, Lars Ullrich, Arpita Vats, Chuheng Wei, Haibao Yu, Rui Song, Jiajie Zhang, Julie Stephany Berrio Perez, Zewei Zhou, Tianhui Cai, Yifan Liu, Haoxuan Ma, Xingcheng Zhou, Rahul Raja, Zhengzhong Tu, Holger Caesar, Alina Roitberg, Guoyuan Wu, Jiaqi Ma, Daniel Watzenig, Mohan Trivedi, Alois Knoll

Sun 19 Oct, 11 a.m. PDT

DriveX explores the integration of foundation models and V2X-based cooperative systems to improve perception, planning, and decision-making in autonomous vehicles. While traditional single-vehicle systems have advanced tasks such as 3D object detection, emerging challenges such as holistic scene understanding and 3D occupancy prediction require more comprehensive solutions. Collaborative driving systems, which leverage V2X communication and roadside infrastructure, extend sensory range, provide hazard warnings, and improve decision-making through shared data. At the same time, Vision-Language Models (VLMs) offer strong generalization abilities, enabling zero-shot learning, open-vocabulary recognition, and scene explanation in novel scenarios. DriveX aims to bring together experts to explore these technologies, address open challenges, and advance road safety.
