

Poster

SAMora: Enhancing SAM through Hierarchical Self-Supervised Pre-Training for Medical Images

Shuhang Chen · Hangjie Yuan · Pengwei Liu · Hanxue Gu · Tao Feng · Dong Ni


Abstract:

The Segment Anything Model (SAM) has demonstrated significant potential in medical image segmentation, yet its performance is limited when only a small amount of labeled data is available, even though medical data contain abundant, valuable, yet often overlooked hierarchical information. To address this limitation, we draw inspiration from self-supervised learning and propose SAMora, an innovative framework that captures hierarchical medical knowledge by applying complementary self-supervised learning objectives at the image, patch, and pixel levels. To fully exploit the complementarity of the hierarchical knowledge captured in the resulting LoRA modules, we introduce HL-Attn, a hierarchical fusion module that integrates multi-scale features while preserving their distinct characteristics. SAMora is compatible with various SAM variants, including SAM2, SAMed, and H-SAM. Experimental results on the Synapse, LA, and PROMISE12 datasets demonstrate that SAMora outperforms existing SAM variants, achieving state-of-the-art performance in both few-shot and fully supervised settings while reducing fine-tuning epochs by 90%.
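To make the general idea concrete, below is a minimal PyTorch sketch of fusing several level-specific LoRA adapters with a small attention module. This is an illustration under stated assumptions, not the authors' implementation: the class names (LoRALinear, HLAttnFusion), the rank, and the tensor shapes are hypothetical, and the actual HL-Attn design is described in the paper.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    # Standard LoRA adapter on a frozen linear layer: y = Wx + B(Ax).
    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # the backbone weights stay frozen
        self.down = nn.Linear(base.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.up.weight)       # adapter starts as a no-op

    def forward(self, x):
        return self.base(x) + self.up(self.down(x))

class HLAttnFusion(nn.Module):
    # Hypothetical stand-in for HL-Attn: learn per-token attention weights
    # over the level-specific outputs, so each level keeps its distinct
    # features until the final weighted combination.
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, level_feats):
        # level_feats: list of (batch, tokens, dim) tensors, one per level
        stacked = torch.stack(level_feats, dim=2)            # (B, N, L, D)
        weights = torch.softmax(self.score(stacked), dim=2)  # (B, N, L, 1)
        return (weights * stacked).sum(dim=2)                # (B, N, D)

# Usage: three adapters, one per image-, patch-, and pixel-level objective.
base = nn.Linear(256, 256)
adapters = [LoRALinear(base, rank=4) for _ in range(3)]
fusion = HLAttnFusion(dim=256)
x = torch.randn(2, 196, 256)
y = fusion([lora(x) for lora in adapters])                   # (2, 196, 256)

The key design point this sketch captures is that the per-level adapters are never averaged blindly; an attention score decides, token by token, how much each hierarchical level contributes to the fused representation.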
