

Poster

Toward Fair and Accurate Cross-Domain Medical Image Segmentation: A VLM-Driven Active Domain Adaptation Paradigm

Hongqiu Wang · Wu Chen · Xiangde Luo · Zhaohu Xing · Lihao Liu · Jing Qin · Shaozhi Wu · Lei Zhu


Abstract:

Fairness in AI-assisted medical image analysis is crucial for equitable healthcare, but it is often neglected, especially in cross-domain scenarios (diverse patient demographics and imaging protocols) that are prevalent in medical applications. Effective and equitable deployment of AI models in these scenarios is critical, yet traditional Unsupervised Domain Adaptation (UDA) methods yield limited improvements. Emerging Active Domain Adaptation (ADA) approaches offer more effective enhancements, but they all ignore fairness, exacerbating biased outcomes. Therefore, in this work, we propose the first fairness-aware ADA paradigm that simultaneously achieves both enhanced fairness and superior overall performance. Our method leverages the multimodal alignment capability of Vision-Language Models (VLMs): by jointly learning from medical images (vision) and sensitive attributes (language), the VLM inherently captures semantic correlations between visual features and protected attributes, enabling explicit attribute representation. Building on this foundation, we further devise an attribute-aware strategy (FairAP) that dynamically adapts to diverse patient demographics, promoting equitable and high-quality outcomes by considering both Attribute and Polysemy. Extensive experiments on the FairDomain benchmark demonstrate that our method significantly reduces bias while maintaining state-of-the-art segmentation performance, outperforming existing UDA and ADA methods. This work pioneers a VLM-driven ADA paradigm for fair cross-domain medical segmentation, offering a blueprint for effective and equitable AI deployment in clinical practice. Code will be released.
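The general idea described above can be illustrated with a minimal sketch: use VLM image-text similarity against sensitive-attribute prompts to infer each target sample's protected attribute, score ambiguity over those attributes, and spend the annotation budget evenly across attribute groups. This is a hypothetical illustration, not the paper's released FairAP implementation; the prompts, function names, and the random placeholder embeddings (standing in for actual VLM encoder outputs) are assumptions for demonstration only.

# Hypothetical sketch: VLM-driven, attribute-aware active sample selection.
# Assumes a CLIP-style VLM provides image and text embeddings; here both
# are random placeholders. All names are illustrative, not the paper's API.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

num_target_images, embed_dim = 1000, 512
attribute_prompts = ["a fundus image of a male patient",
                     "a fundus image of a female patient"]

# Placeholder embeddings standing in for the VLM's image/text encoders.
image_emb = F.normalize(torch.randn(num_target_images, embed_dim), dim=-1)
text_emb = F.normalize(torch.randn(len(attribute_prompts), embed_dim), dim=-1)

# Vision-language similarity -> soft sensitive-attribute assignment.
attr_probs = (image_emb @ text_emb.T / 0.07).softmax(dim=-1)   # (N, A)
attr_pred = attr_probs.argmax(dim=-1)

# Ambiguity proxy: entropy of the attribute distribution, i.e. samples whose
# attribute semantics are unclear to the VLM.
entropy = -(attr_probs * attr_probs.clamp_min(1e-8).log()).sum(dim=-1)

# Attribute-aware selection: split the annotation budget evenly across
# predicted attribute groups, preferring the most ambiguous samples in each.
budget = 40
per_group = budget // len(attribute_prompts)
selected = []
for a in range(len(attribute_prompts)):
    idx = (attr_pred == a).nonzero(as_tuple=True)[0]
    order = entropy[idx].argsort(descending=True)
    selected.extend(idx[order[:per_group]].tolist())

print(f"selected {len(selected)} target-domain samples for annotation")

In a real pipeline the placeholder embeddings would come from the VLM's encoders on target-domain scans, and the selected samples would be annotated and used to adapt the segmentation model; the group-balanced budget is what ties the selection step to the fairness objective.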
