Workshop
From Street to Space: 3D Vision Across Altitudes
Yujiao Shi, Yuanbo Xiangli, Zuzana Kukelova, Bo Dai, Richard Hartley, Hongdong Li
Mon 20 Oct, 11:30 a.m. PDT
As large-scale 3D scene modeling becomes increasingly important for applications such as urban planning, robotics, autonomous navigation, and virtual simulations, the need for diverse, high-quality visual data is greater than ever. However, acquiring dense, high-resolution ground-level imagery at scale is often impractical due to access limitations, cost, and environmental variability. Aerial and satellite imagery, in contrast, provide broader spatial coverage but lack the fine-grained detail needed for many downstream applications. Combining images from multiple altitudes, from ground cameras to aerial drones and satellites, offers a promising way to overcome these limitations, enabling richer, more complete 3D reconstructions.

How can we achieve coherent and accurate 3D scene modeling when our visual world is captured from vastly different altitudes (ground, aerial, and satellite) under varying conditions? Each altitude offers distinct advantages, but cross-altitude data fusion introduces significant challenges: sparse and incomplete views, visual ambiguities, spatio-temporal inconsistencies, variations in image quality, dynamic scene changes, and environmental factors that alter scene topology over time. Traditional 3D reconstruction methods, optimized for dense and structured inputs, struggle with such heterogeneous multi-altitude data. Advances in multi-scale feature alignment, neural scene representations, and robust cross-view fusion offer promising solutions, but key challenges remain.