Poster
GARF: Learning Generalizable 3D Reassembly for Real-World Fractures
Sihang Li · Zeyu Jiang · Grace Chen · Chenyang Xu · Siqi Tan · Xue Wang · Irving Fang · Kristof Zyskowski · Shannon McPherron · Radu Iovita · Chen Feng · Jing Zhang
3D reassembly is a challenging spatial intelligence task with broad applications across scientific domains. While large-scale synthetic datasets have fueled promising learning-based approaches, their generalizability across domains remains limited. Critically, it is uncertain whether models trained on synthetic data can generalize to real-world fractures, where breakage patterns are far more complex. To bridge this gap, we propose GARF, a Generalizable 3D reAssembly framework for Real-world Fractures. GARF leverages fracture-aware pretraining to learn fracture features from individual fragments, while flow matching enables precise 6-DoF alignments. At inference time, we introduce one-step preassembly, improving robustness to unseen objects and varying numbers of fractures. In collaboration with archaeologists, paleoanthropologists, and ornithologists, we curate a diverse dataset for the vision and learning communities, featuring real-world fracture types across ceramics, bones, eggshells, and lithics. Comprehensive experiments demonstrate that our approach consistently outperforms state-of-the-art methods on both synthetic and real-world datasets, achieving 82.87% lower rotation error and 25.15% higher part accuracy. This work sheds light on training on synthetic data to advance real-world 3D puzzle solving, showcasing strong generalization across unseen object shapes and diverse fracture types.
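The flow-matching idea referenced above can be illustrated with a minimal toy sketch. This is not the paper's implementation: it uses a closed-form linear-interpolant velocity field on 3-DoF translations (the real method trains a network over full 6-DoF poses, with rotations handled on SO(3)), and the names `target_velocity` and `integrate` are hypothetical. With a linear interpolant the ground-truth velocity is constant, which is the intuition behind why a single Euler step (one-step preassembly) can already land near the target pose.

```python
import numpy as np

def target_velocity(x_t, x0, x1, t):
    # Linear-interpolant flow matching: x_t = (1 - t) * x0 + t * x1,
    # so the ground-truth velocity dx/dt is constant: x1 - x0.
    # (A trained model would predict this from x_t and t alone.)
    return x1 - x0

def integrate(x0, x1, n_steps):
    # Euler integration of dx/dt = v(x, t) from t = 0 to t = 1.
    x = x0.copy()
    for i in range(n_steps):
        t = i / n_steps
        x = x + target_velocity(x, x0, x1, t) / n_steps
    return x

rng = np.random.default_rng(0)
x0 = rng.normal(size=3)           # noisy initial fragment translation
x1 = np.array([0.5, -0.2, 1.0])   # ground-truth fragment translation

pose_many = integrate(x0, x1, 100)  # many-step ODE solve
pose_one = integrate(x0, x1, 1)     # single Euler step ("one-step")
```

Here both `pose_many` and `pose_one` recover `x1` exactly because the toy velocity field is constant; with a learned, imperfect velocity field the one-step result is only a coarse preassembly that subsequent steps refine.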