Poster
Active Learning Meets Foundation Models: Fast Remote Sensing Data Annotation for Object Detection
Marvin Burges · Philipe Dias · Dalton Lunga · Carson Woody · Sarah Walters
Object detection in remote sensing demands extensive, high-quality annotations, a process that is both labor-intensive and time-consuming. In this work, we introduce a real-time active learning and semi-automated labeling framework that leverages foundation models to streamline dataset annotation for object detection in remote sensing imagery. Specifically, by integrating the Segment Anything Model (SAM), our approach generates mask-based bounding boxes that serve as the basis for dual sampling: (a) uncertainty estimation to pinpoint challenging samples, and (b) diversity assessment to ensure broad data coverage. Furthermore, our Dynamic Box Switching Module (DBS) addresses the well-known cold start problem for object detection models by replacing the detector's suboptimal initial predictions with SAM-derived masks, thereby enhancing early-stage localization accuracy. Extensive evaluations on multiple remote sensing datasets, together with a real-world user study, demonstrate that our framework not only reduces annotation effort but also significantly boosts detection performance compared to traditional active learning sampling methods. The code for training and the user interface will be made available.
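To make the pipeline concrete, the sketch below illustrates the two ingredients the abstract describes: turning SAM instance masks into bounding boxes, and combining per-image uncertainty with feature-space diversity when picking the next batch to annotate. This is a minimal illustration under assumed interfaces, not the authors' released code: the masks are assumed to arrive as binary arrays (the actual SAM calls are omitted), and the function names, the `alpha` weighting, and the greedy max-min diversity rule are placeholders, since the abstract does not specify the exact scoring.

```python
import numpy as np

def masks_to_boxes(masks):
    """Convert binary instance masks of shape (N, H, W) into xyxy bounding boxes."""
    boxes = []
    for m in masks:
        ys, xs = np.nonzero(m)
        if ys.size == 0:
            continue  # skip empty masks
        boxes.append([xs.min(), ys.min(), xs.max() + 1, ys.max() + 1])
    return np.asarray(boxes, dtype=np.float32)

def predictive_entropy(class_probs):
    """Image-level uncertainty: mean entropy of the detector's per-box class posteriors (M, C)."""
    if len(class_probs) == 0:
        return 0.0
    eps = 1e-12
    ent = -(class_probs * np.log(class_probs + eps)).sum(axis=1)
    return float(ent.mean())

def select_batch(features, uncertainties, k, alpha=0.5):
    """Greedy hybrid acquisition (illustrative): score = alpha * uncertainty + (1 - alpha) * diversity,
    where diversity is the normalized distance to the nearest already-selected image embedding."""
    feats = np.asarray(features, dtype=np.float32)
    unc = np.asarray(uncertainties, dtype=np.float32)
    unc = (unc - unc.min()) / (unc.ptp() + 1e-12)   # normalize uncertainty to [0, 1]
    div = np.ones(len(feats), dtype=np.float32)     # nothing selected yet, so everything is "far"
    picked = []
    for _ in range(min(k, len(feats))):
        score = alpha * unc + (1.0 - alpha) * div
        score[picked] = -np.inf                     # never re-pick an image
        i = int(score.argmax())
        picked.append(i)
        d = np.linalg.norm(feats - feats[i], axis=1)
        div = np.minimum(div, d / (d.max() + 1e-12))  # shrink diversity near the new pick
    return picked
```

In this sketch, `masks_to_boxes` stands in for the SAM-based proposal step, and `select_batch` would be called once per active-learning round on the unlabeled pool's detector outputs and image embeddings; the Dynamic Box Switching step would then substitute such SAM-derived boxes for the detector's low-confidence early predictions.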