

Poster

CoSMIC: Continual Self-supervised Learning for Multi-Domain Medical Imaging via Conditional Mutual Information Maximization

Yihang Liu · Ying Wen · Longzhen Yang · Lianghua He · Heng Tao Shen


Abstract:

Medical foundation models, pre-trained on diverse data sources, have shown significant potential for multi-domain medical imaging tasks. However, domain shifts across different anatomical types significantly hinder their performance compared to domain-specific models. To address this challenge, we propose CoSMIC, a Continual Self-supervised learning framework for Multi-domain medIcal image analysis, with the core idea of Conditional mutual information maximization. Specifically, CoSMIC (i) acquires domain-specific knowledge sequentially, bypassing the domain shifts caused by joint pre-training; and (ii) enhances generalized representations through a novel conditional contrastive loss that prevents catastrophic forgetting. This loss hierarchically aligns multi-view features within the current domain, maximizing their mutual information conditioned on domain-invariant representations extracted from prior domains via Anatomy-Guided Calibration. We pre-train CoSMIC across four medical domains and evaluate it on fifteen downstream datasets from five domains: Retinoscopy, Radiography, Ophthalmoscopy, Dermoscopy, and Histopathology (unseen). Experimental results show that CoSMIC (i) achieves feature extraction ability comparable to domain-specific models, (ii) exhibits exceptional generalization, significantly surpassing SOTA medical foundation models, and (iii) demonstrates superior transferability to new domains, outperforming current continual pre-training methods.
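The abstract does not give the exact form of the conditional contrastive loss. As a rough illustration only, the PyTorch sketch below shows one way an InfoNCE-style objective could condition current-domain views on a bank of prior-domain anchors; the function name `conditional_contrastive_loss`, the anchor bank, and the linear-projection conditioning step are assumptions for illustration and are not taken from the paper (in particular, Anatomy-Guided Calibration is not reproduced here).

```python
# Hypothetical sketch of a conditional InfoNCE-style contrastive loss,
# not the authors' implementation.
import torch
import torch.nn.functional as F

def conditional_contrastive_loss(z1, z2, anchors, temperature=0.1):
    """z1, z2:  (B, D) two-view features from the current domain.
    anchors:   (K, D) domain-invariant representations from prior domains.
    Returns an InfoNCE loss on features conditioned on the anchors."""
    # Conditioning assumption: remove the component of each view that lies
    # in the span of the prior-domain anchors, keeping only new information.
    q, _ = torch.linalg.qr(F.normalize(anchors, dim=1).T)  # (D, K) orthonormal basis

    def condition(z):
        proj = (z @ q) @ q.T                # component inside the anchor span
        return F.normalize(z - proj, dim=1) # residual orthogonal to prior domains

    c1, c2 = condition(z1), condition(z2)

    # Standard NT-Xent over the conditioned views: positives are the two
    # views of the same image; negatives are the other images in the batch.
    logits = c1 @ c2.T / temperature        # (B, B) similarity matrix
    labels = torch.arange(z1.size(0), device=z1.device)
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.T, labels))
```

In this sketch, maximizing agreement between the conditioned views corresponds to maximizing a lower bound on the mutual information of the two views given the prior-domain subspace; the actual hierarchical alignment used by CoSMIC may differ.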
