

Poster

Towards a Universal 3D Medical Multi-modality Generalization via Learning Personalized Invariant Representation

Zhaorui Tan · Xi Yang · Tan Pan · TIANYI LIU · Chen Jiang · Xin Guo · Qiufeng Wang · Anh Nguyen · Yuan Qi · Kaizhu Huang · Yuan Cheng


Abstract: Variations in medical imaging modalities and individual anatomical differences pose challenges to cross-modality generalization in multi-modal tasks. Existing methods often concentrate exclusively on common anatomical patterns, thereby neglecting individual differences and limiting their generalization performance. This paper emphasizes the critical role of learning individual-level invariance, i.e., a personalized representation $\mathbb{X}_h$, in enhancing multi-modality generalization under both homogeneous and heterogeneous settings. It reveals that the mappings from individual anatomy to the different medical modalities remain static across the population, a property implicitly exploited by the personalization process. We propose a two-stage approach: pre-training with the invariant representation $\mathbb{X}_h$ for personalization, followed by fine-tuning for diverse downstream tasks. We provide both theoretical and empirical evidence for the feasibility and advantages of personalization, showing that our approach yields greater generalizability and transferability across diverse multi-modal medical tasks than methods lacking personalization. Extensive experiments further validate that our approach significantly improves performance in various generalization scenarios.
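To make the two-stage recipe concrete, below is a minimal PyTorch sketch of one plausible instantiation: a shared encoder that maps any modality of a subject to a personalized code $\mathbb{X}_h$, trained with an invariance term plus per-modality reconstruction through population-static decoders, followed by task fine-tuning on top of the learned representation. The architecture, loss terms, and names (`AnatomyEncoder`, `ModalityDecoder`, the segmentation head) are illustrative assumptions and not the paper's actual implementation.

```python
# Hypothetical sketch of the two-stage approach described in the abstract.
# All module names, losses, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class AnatomyEncoder(nn.Module):
    """Maps a volume of any modality to a shared, personalized code X_h."""
    def __init__(self, in_ch=1, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, dim, 3, padding=1), nn.ReLU(),
            nn.Conv3d(dim, dim, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

class ModalityDecoder(nn.Module):
    """One decoder per modality: a population-static map from anatomy to appearance."""
    def __init__(self, dim=64, out_ch=1):
        super().__init__()
        self.net = nn.Conv3d(dim, out_ch, 3, padding=1)
    def forward(self, h):
        return self.net(h)

encoder = AnatomyEncoder()
decoders = nn.ModuleDict({m: ModalityDecoder() for m in ["T1", "T2"]})

# ---- Stage 1: pre-train a personalized invariant representation X_h ----
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoders.parameters()), lr=1e-4
)
t1 = torch.randn(2, 1, 16, 16, 16)  # toy paired volumes for a subject batch
t2 = torch.randn(2, 1, 16, 16, 16)
h_t1, h_t2 = encoder(t1), encoder(t2)
# Invariance: both modalities of the same subject should yield the same X_h;
# reconstruction: static decoders should recover each modality from X_h.
loss = ((h_t1 - h_t2) ** 2).mean() \
     + ((decoders["T1"](h_t1) - t1) ** 2).mean() \
     + ((decoders["T2"](h_t2) - t2) ** 2).mean()
opt.zero_grad(); loss.backward(); opt.step()

# ---- Stage 2: fine-tune for a downstream task on top of X_h ----
seg_head = nn.Conv3d(64, 4, 1)  # e.g., a 4-class segmentation head
task_opt = torch.optim.Adam(seg_head.parameters(), lr=1e-4)
logits = seg_head(encoder(t1).detach())  # frozen encoder here; could also be unfrozen
```

Under this reading, the per-modality decoders are shared across all subjects (the "static mapping" claim), while all subject-specific information is forced into $\mathbb{X}_h$; the downstream head then consumes only the personalized, modality-invariant code.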
