Workshop
End-to-End 3D Learning
Zhiwen Fan, Qianqian Wang, Yuanbo Xiangli, Wenyan Cong, Yiqing Liang, Jiachen Li, Zhengzhong Tu, Georgios Pavlakos, Yan Wang, Achuta Kadambi
Sun 19 Oct, 4 p.m. PDT
End-to-End 3D Learning (E2E3D) investigates unified, fully differentiable frameworks that map raw sensor data into comprehensive 3D representations. By merging multiple handcrafted stages into a single trainable pipeline, E2E3D aims to make spatial understanding scalable. Topics include self-supervised pretraining of large-scale 3D foundation models, efficient real-time inference on resource-limited platforms, and automated, high-fidelity 3D annotation methods. We showcase applications in autonomous driving, robotics, AR/VR, and scientific imaging, demonstrating how integrated 3D systems enhance perception, content generation, and scientific discovery. Through cross-disciplinary talks, posters, and panels, participants will help define the next generation of robust, real-world 3D AI.