

Poster

TorchAdapt: Towards Light-Agnostic Real-Time Visual Perception

Khurram Azeem Hashmi · Karthik Suresh · Didier Stricker · Muhammad Zeshan Afzal


Abstract:

Low-light conditions significantly degrade the performance of high-level vision tasks. Existing approaches either enhance low-light images without considering normal-illumination scenarios, leading to poor generalization, or are tailored to specific tasks. We propose TorchAdapt, a real-time adaptive feature enhancement framework that generalizes robustly across varying illumination conditions without degrading performance in well-lit scenarios. TorchAdapt consists of two complementary modules: the Torch module enhances semantic features beneficial for downstream tasks, while the Adapt module dynamically modulates these enhancements based on input content. Leveraging a novel light-agnostic learning strategy, TorchAdapt aligns the feature representations of enhanced and well-lit images to produce powerful illumination-invariant features. Extensive experiments on multiple high-level vision tasks, including object detection, face detection, instance segmentation, semantic segmentation, and video object detection, demonstrate that TorchAdapt consistently outperforms state-of-the-art low-light enhancement and task-specific methods in both low-light and light-agnostic settings. TorchAdapt thus provides a unified, flexible solution for robust visual perception across diverse lighting conditions. Code and models are provided as supplementary material.
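
The abstract does not give implementation details, but the design it describes (a feature-enhancement branch whose output is modulated by a content-dependent branch, trained with a feature-alignment objective against well-lit images) can be sketched roughly as below. This is a minimal, hypothetical PyTorch sketch: the class names, the residual-plus-gate formulation, and the MSE alignment loss are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TorchBranch(nn.Module):
    """Hypothetical 'Torch' module: predicts an additive residual that
    strengthens semantic features for downstream tasks."""
    def __init__(self, channels: int):
        super().__init__()
        self.enhance = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # Enhancement residual with the same shape as the input features.
        return self.enhance(feat)


class AdaptBranch(nn.Module):
    """Hypothetical 'Adapt' module: predicts a per-channel gate in [0, 1]
    that scales the enhancement according to the input content."""
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # (B, C, 1, 1) gate, broadcast over spatial dimensions.
        return self.gate(feat)


class TorchAdaptSketch(nn.Module):
    """Combines the two branches: output = input + gate * enhancement."""
    def __init__(self, channels: int):
        super().__init__()
        self.torch_branch = TorchBranch(channels)
        self.adapt_branch = AdaptBranch(channels)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        residual = self.torch_branch(feat)
        alpha = self.adapt_branch(feat)
        return feat + alpha * residual


def alignment_loss(enhanced_feat: torch.Tensor,
                   well_lit_feat: torch.Tensor) -> torch.Tensor:
    """Illustrative light-agnostic alignment objective: pull features of
    the enhanced low-light image toward those of a well-lit counterpart."""
    return F.mse_loss(enhanced_feat, well_lit_feat)


# Example usage on a backbone feature map with 256 channels:
# feats = backbone(low_light_images)        # (B, 256, H, W)
# enhanced = TorchAdaptSketch(256)(feats)   # illumination-adapted features
```

Because the Adapt gate can shrink toward zero on already well-lit inputs, a formulation like this would leave normal-illumination features largely unchanged, which is consistent with the paper's claim of no degradation in well-lit scenarios.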
