Poster
An OpenMind for 3D medical vision self-supervised learning
Tassilo Wald · Constantin Ulrich · Jonathan Suprijadi · Sebastian Ziegler · Michal Nohel · Robin Peretzke · Gregor Koehler · Klaus Maier-Hein
The field of self-supervised learning (SSL) for 3D medical images lacks consistency and standardization. While many methods have been developed, it is impossible to identify the current state-of-the-art due to i) small and varying pre-training datasets, ii) varying architectures, and iii) evaluation on differing downstream datasets. In this paper, we bring clarity to this field and lay the foundation for further method advancements through three key contributions: We a) publish the largest publicly available pre-training dataset, comprising 114k brain MRI volumes, enabling all practitioners to pre-train at scale. We b) benchmark existing 3D self-supervised learning methods on this dataset for a state-of-the-art CNN and Transformer architecture, clarifying the state of 3D SSL pre-training. Among many findings, we show that pre-trained models can exceed a strong from-scratch nnU-Net ResEnc-L baseline. Lastly, we c) publish the code of our pre-training and fine-tuning frameworks and provide the pre-trained models created during the benchmarking process to facilitate rapid adoption and reproduction.
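To make the pre-train-then-fine-tune workflow described above concrete, here is a minimal PyTorch sketch: a pre-trained 3D encoder is loaded from a checkpoint and fine-tuned on a downstream segmentation task with a freshly initialized head. The `Simple3DUNet` class, the checkpoint filename, and the state-dict layout are illustrative assumptions, not the paper's actual framework or API.

```python
import torch
import torch.nn as nn

class Simple3DUNet(nn.Module):
    """Toy stand-in for a 3D segmentation backbone (e.g. a ResEnc-style CNN)."""
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(in_ch, 32, 3, padding=1), nn.InstanceNorm3d(32), nn.LeakyReLU(),
            nn.Conv3d(32, 64, 3, padding=1), nn.InstanceNorm3d(64), nn.LeakyReLU(),
        )
        # Segmentation head is trained from scratch: the downstream label
        # space generally differs from the pre-training objective.
        self.head = nn.Conv3d(64, n_classes, 1)

    def forward(self, x):
        return self.head(self.encoder(x))

model = Simple3DUNet()

# Load pre-trained encoder weights (hypothetical checkpoint file);
# strict=False tolerates keys the encoder does not share with the checkpoint.
state = torch.load("pretrained_encoder.pt", map_location="cpu")
model.encoder.load_state_dict(state, strict=False)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.99, nesterov=True)
loss_fn = nn.CrossEntropyLoss()

# One illustrative fine-tuning step on a dummy batch of shape (B, C, D, H, W).
x = torch.randn(2, 1, 64, 64, 64)
y = torch.randint(0, 2, (2, 64, 64, 64))
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```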