

Poster

CoralSRT: Revisiting Coral Reef Semantic Segmentation by Feature Rectifying via Self-supervised Guidance

Zheng Ziqiang · Wong Kwan · Binh-Son Hua · Jianbo Shi · Sai-Kit Yeung


Abstract:

We investigate coral reef semantic segmentation, where coral growth is governed by multifaceted factors such as genes, environmental changes, and internal interactions. Unlike segmenting structural units/instances, which are predictable and follow set patterns (also referred to as commonsense or priors), segmenting coral reefs requires modeling the \textit{self-repeated}, \textit{asymmetric}, and \textit{amorphous} distribution of elements, \emph{e.g.}, corals can grow in almost any shape and appearance. We revisited existing segmentation approaches and found that neither the computer vision nor the coral reef research community has incorporated the intrinsic properties of corals into model design. In this work, we propose a simple formulation for coral reef semantic segmentation: the \textit{segment} as the basis for modeling both \textit{within-segment} and \textit{cross-segment} affinities. We propose \textbf{CoralSRT}, a feature rectification module via self-supervised guidance, to reduce the stochasticity of coral features extracted by powerful foundation models (FMs), as demonstrated in Fig.~\ref{fig:teaser}. We incorporate the intrinsic properties of corals to strengthen within-segment affinity by guiding the features within self-supervised segments to align with their centrality. We find that features from FMs, optimized by various pretext tasks on significantly large-scale unlabeled or labeled data, already contain rich information for modeling both within-segment and cross-segment affinity, enabling the adaptation of FMs to coral segmentation. CoralSRT rectifies features from FMs into features that are more effective for label propagation, leading to significant further gains in semantic segmentation performance, all without requiring additional human supervision, retraining/finetuning of FMs, or even domain-specific data. These advantages reduce human effort and the need for domain expertise in data collection and labeling.
Our method is easy to implement, and is both \textit{method-} and \textit{model-}agnostic. CoralSRT bridges self-supervised pre-training and supervised training in the feature space, and also offers insights for segmenting elements/stuff (\emph{e.g.}, grass, plants, cells, and biofouling).
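To make the "align within-segment features with their centrality" idea concrete, here is a minimal, hypothetical sketch (not the paper's actual implementation): given per-pixel features from a foundation model and segment labels from a self-supervised segmenter, each feature is pulled toward its segment's mean feature. The function and parameter names (`rectify_features`, `alpha`) are illustrative assumptions.

```python
import numpy as np

def rectify_features(features, segment_ids, alpha=0.5):
    """Pull each feature toward its segment's mean (centrality).

    features:    (N, D) per-pixel features from a foundation model.
    segment_ids: (N,) integer segment label per pixel, e.g. from a
                 self-supervised segmenter.
    alpha:       rectification strength in [0, 1]; 0 keeps the original
                 features, 1 collapses each segment to its centroid.
    These names and the interpolation scheme are illustrative only.
    """
    rectified = features.astype(float).copy()
    for seg in np.unique(segment_ids):
        mask = segment_ids == seg
        centroid = features[mask].mean(axis=0)
        # Interpolate toward the segment centroid: strengthens
        # within-segment affinity while retaining per-pixel detail.
        rectified[mask] = (1 - alpha) * features[mask] + alpha * centroid
    return rectified
```

Under this sketch, increasing `alpha` makes features within one self-supervised segment more similar to each other (higher within-segment affinity), which is the property the abstract argues benefits label propagation.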
