Poster
MaTe: Images Are All You Need for Material Transfer via Diffusion Transformer
Nisha Huang · Henglin Liu · Yizhou Lin · Kaer Huang · Chubin Chen · Jie Guo · Tong-Yee Lee · Xiu Li
Recent diffusion-based methods for material transfer rely on image fine-tuning or on complex architectures with assistive networks, and consequently suffer from text dependency, extra computational cost, and feature misalignment. To address these limitations, we propose MaTe, a streamlined diffusion framework that eliminates textual guidance and reference networks. MaTe integrates input images at the token level, enabling unified processing via multi-modal attention in a shared latent space. This design removes the need for additional adapters, ControlNet, inversion sampling, or model fine-tuning. Extensive experiments demonstrate that MaTe achieves high-quality material generation in a zero-shot, training-free setting. It outperforms state-of-the-art methods in both visual quality and efficiency while preserving precise detail alignment, and it substantially simplifies the prerequisites for inference.
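To make the token-level integration concrete, here is a minimal PyTorch sketch of the idea the abstract describes: both the content image and the material reference are flattened into token sequences and processed jointly by a single shared self-attention, so no reference network, adapter, or cross-attention branch is needed. All names, dimensions, and the module structure below are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class MultiModalAttention(nn.Module):
    """Joint self-attention over concatenated content and material tokens.

    Both image streams live in one shared latent space, so material
    features are injected simply by attending over the combined sequence
    (a sketch of the concept; not MaTe's actual code).
    """

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(
        self, content_tokens: torch.Tensor, material_tokens: torch.Tensor
    ) -> torch.Tensor:
        # Concatenate the two token sequences: (B, N_c + N_m, D).
        tokens = torch.cat([content_tokens, material_tokens], dim=1)
        tokens = self.norm(tokens)
        # Full self-attention lets content tokens attend to material tokens
        # (and vice versa) without any auxiliary reference network.
        out, _ = self.attn(tokens, tokens, tokens)
        # Keep only the content positions as the denoised stream.
        return out[:, : content_tokens.shape[1]]

# Example: a 32x32 latent grid gives 1024 tokens per image (hidden size 768
# is an assumed value).
if __name__ == "__main__":
    block = MultiModalAttention(dim=768)
    content = torch.randn(1, 1024, 768)   # noisy content latents as tokens
    material = torch.randn(1, 1024, 768)  # clean material reference tokens
    print(block(content, material).shape)  # torch.Size([1, 1024, 768])
```

Because the material tokens enter through the same attention as the content tokens, this design needs no extra trainable components at inference time, which is consistent with the zero-shot, training-free claim.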