Poster

Details Matter for Indoor Open-vocabulary 3D Instance Segmentation

Sanghun Jung · Jingjing Zheng · Ke Zhang · Nan Qiao · Albert Y. C. Chen · Lu Xia · Chi Liu · Yuyin Sun · Xiao Zeng · Hsiang-Wei Huang · Byron Boots · Min Sun · Cheng-Hao Kuo


Abstract:

Unlike closed-vocabulary 3D instance segmentation, which is trained end-to-end, open-vocabulary 3D instance segmentation (OV-3DIS) leverages vision-language models (VLMs) to generate 3D instance proposals and classify them. While various concepts have been proposed in prior research, we observe that these individual concepts are not mutually exclusive but complementary. In this paper, we propose a new state-of-the-art solution for OV-3DIS by carefully designing a recipe that combines these concepts and refines them to address key challenges. Our solution follows a two-stage scheme: 3D proposal generation and instance classification. We employ robust 3D tracking-based proposal aggregation to generate 3D proposals and remove overlapping or partial proposals via iterative merging and removal. For the classification stage, we replace the standard CLIP model with Alpha-CLIP, which incorporates object masks as an alpha channel to reduce background noise and obtain object-centric representations. Additionally, we introduce the standardized maximum similarity (SMS) score to normalize text-to-proposal similarity, effectively filtering out false positives and boosting precision. Our framework achieves state-of-the-art performance on ScanNet200, S3DIS, and Replica across all AP and AR metrics, even surpassing an end-to-end closed-vocabulary method.
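The abstract does not define the SMS score precisely; the following is a minimal sketch of one plausible reading, assuming "standardized" means z-score normalization of each proposal's maximum text similarity across all proposals, so that a fixed threshold can filter low-confidence (false-positive) proposals. The function name and threshold are illustrative, not taken from the paper.

```python
import numpy as np

def standardized_max_similarity(sim, eps=1e-8):
    """Hypothetical sketch of an SMS-style score (not the paper's exact method).

    sim: array of shape (num_proposals, num_text_prompts) holding
         text-to-proposal cosine similarities.

    For each proposal, take its maximum similarity over the prompts,
    then standardize (z-score) those maxima across proposals.
    """
    max_sim = sim.max(axis=1)                 # best matching prompt per proposal
    mu, sigma = max_sim.mean(), max_sim.std()
    return (max_sim - mu) / (sigma + eps)     # standardized scores, mean ~0, std ~1

# Usage: keep only proposals scoring above a chosen cutoff
sim = np.array([[0.12, 0.91],
                [0.33, 0.48],
                [0.21, 0.74]])
sms = standardized_max_similarity(sim)
keep = sms > 0.0  # e.g., retain above-average proposals only
```

Because the scores are standardized per scene, the cutoff is comparable across scenes with different overall similarity levels, which is what makes a single global threshold viable for filtering false positives.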
