Poster
HADES: Human Avatar with Dynamic Explicit Hair Strands
Zhanfeng Liao · Hanzhang Tu · Cheng Peng · Hongwen Zhang · Boyao Zhou · Yebin Liu
We introduce HADES, the first framework to seamlessly integrate dynamic hair into human avatars. HADES represents hair as strands bound to 3D Gaussians, with the strand roots attached to the scalp. By modeling inertia- and velocity-aware motion, HADES simulates realistic hair dynamics that naturally align with body movements. To enhance avatar fidelity, we incorporate multi-scale data and address color inconsistencies across cameras with a lightweight MLP-based correction module that generates per-camera color correction matrices for consistent color tones. In addition, we resolve rendering artifacts, such as hair dilation during zoom-out, through a 2D Mip filter and physically constrained hair radii. Furthermore, a temporal fusion module ensures temporal coherence by modeling historical motion states. Experimental results demonstrate that HADES achieves high-fidelity avatars with physically plausible hair dynamics, outperforming existing state-of-the-art solutions in realism and robustness.
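As a rough illustration of the color-correction idea described above, the following PyTorch sketch shows a lightweight MLP that maps a learned per-camera embedding to an affine (3×3 matrix plus bias) color correction applied to rendered RGB values. All names and dimensions here (`CameraColorCorrector`, `embed_dim`, the 3×4 parameterization) are hypothetical assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a lightweight MLP-based color correction module.
# Assumes: a learned embedding per camera, and a 3x3 matrix + 3-vector bias
# predicted by a small MLP and applied to rendered colors. Names and sizes
# are illustrative, not taken from the paper.
import torch
import torch.nn as nn

class CameraColorCorrector(nn.Module):
    def __init__(self, num_cameras: int, embed_dim: int = 16, hidden: int = 64):
        super().__init__()
        self.embed = nn.Embedding(num_cameras, embed_dim)
        self.mlp = nn.Sequential(
            nn.Linear(embed_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 12),  # 9 matrix entries + 3 bias entries
        )

    def forward(self, rgb: torch.Tensor, cam_id: torch.Tensor) -> torch.Tensor:
        # rgb: (N, 3) rendered colors; cam_id: (N,) camera indices
        params = self.mlp(self.embed(cam_id))   # (N, 12)
        mat = params[:, :9].view(-1, 3, 3)      # predicted 3x3 residual matrix
        bias = params[:, 9:]                    # predicted bias
        # Parameterize around the identity so the untrained module
        # starts close to a no-op correction.
        eye = torch.eye(3, device=rgb.device).expand_as(mat)
        corrected = torch.bmm(eye + mat, rgb.unsqueeze(-1)).squeeze(-1) + bias
        return corrected.clamp(0.0, 1.0)

# Usage: correct colors rendered from camera index 2
corrector = CameraColorCorrector(num_cameras=8)
rgb = torch.rand(1024, 3)
cam = torch.full((1024,), 2, dtype=torch.long)
out = corrector(rgb, cam)
```

The identity-residual parameterization is one common way to keep such a correction stable early in training; the paper may use a different formulation.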