Poster
Split-and-Combine: Enhancing Style Augmentation for Single Domain Generalization
Lichuan Gu · Shuai Yang · Qianlong Dang · Zhize Wu
Single domain generalization aims to learn a model with strong generalization ability from only a single source domain. Recent advances in this field focus on increasing the diversity of the training data through style (e.g., color and texture) augmentation. However, most existing methods apply uniform perturbations to the entire image, failing to simulate complex images with multiple distinct stylistic regions. To address this, we propose a "Split-and-Combine" (SAC) strategy to enhance style diversity. Specifically, SAC first performs patch-aware augmentation: it splits an image into multiple patches and applies style augmentation independently to each patch, enabling distinct color variations across regions. SAC then combines these patches to reconstruct a complete image and applies adaptive random convolutions, which use a deformable convolution layer with randomly sampled Gaussian filters to enhance texture diversity while preserving object integrity. Notably, SAC uses entropy as a risk-assessment criterion to adaptively decide whether a sample should undergo further augmentation within the iterative random-convolution process, preventing excessive augmentation. Furthermore, SAC introduces an energy-based distribution discrepancy score to quantify out-of-distribution likelihood, systematically expanding the distribution of the augmented data. SAC can serve as a plug-and-play component to improve the performance of recent methods. Extensive experiments on four datasets demonstrate its effectiveness.
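The split-and-combine idea can be illustrated with a minimal NumPy sketch: split an image into a grid of patches, jitter each patch's color statistics independently, recombine, and then apply one random convolution for texture diversity. This is an assumption-laden simplification, not the authors' implementation: the grid size, the jitter ranges, and the plain (non-deformable) random convolution are all placeholders, and the entropy-based gating and energy score are omitted.

```python
import numpy as np

def patch_aware_style_aug(img, grid=2, rng=None):
    """Split img (H, W, C) into a grid x grid set of patches and perturb
    each patch's color independently (sketch of SAC's 'split' step;
    grid size and jitter strength are illustrative assumptions)."""
    rng = np.random.default_rng() if rng is None else rng
    h, w, c = img.shape
    ph, pw = h // grid, w // grid
    out = img.astype(np.float32).copy()
    for i in range(grid):
        for j in range(grid):
            # Per-patch style jitter: random channel-wise scale and shift,
            # so different regions receive distinct color variations.
            scale = rng.uniform(0.8, 1.2, size=(1, 1, c))
            shift = rng.uniform(-0.1, 0.1, size=(1, 1, c))
            out[i*ph:(i+1)*ph, j*pw:(j+1)*pw] = (
                out[i*ph:(i+1)*ph, j*pw:(j+1)*pw] * scale + shift
            )
    # 'Combine': the patches already sit in one array, so clipping
    # the recombined image back to [0, 1] finishes the step.
    return np.clip(out, 0.0, 1.0)

def random_conv(img, k=3, rng=None):
    """Texture perturbation with one Gaussian-sampled k x k filter shared
    across channels (simplified stand-in for SAC's adaptive random
    convolutions; the deformable layer and entropy gate are not modeled)."""
    rng = np.random.default_rng() if rng is None else rng
    kernel = rng.normal(0.0, 1.0, size=(k, k))
    kernel /= np.abs(kernel).sum()  # keep the output in a sane range
    pad = k // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="reflect")
    out = np.zeros_like(img, dtype=np.float32)
    for di in range(k):          # direct sliding-window accumulation
        for dj in range(k):
            out += kernel[di, dj] * padded[di:di + img.shape[0],
                                           dj:dj + img.shape[1]]
    return np.clip(out, 0.0, 1.0)
```

Usage: `random_conv(patch_aware_style_aug(img))` on an image normalized to [0, 1] yields one augmented view; drawing fresh random parameters per call produces the diverse styles the strategy relies on.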