

Poster

Im2Haircut: Single-view Strand-based Hair Reconstruction for Human Avatars

Vanessa Skliarova · Egor Zakharov · Malte Prinzler · Giorgio Becherini · Michael Black · Justus Thies


Abstract:

We present a novel approach for hair reconstruction from single photographs based on a global hair prior combined with local optimization. Capturing strand-based hair geometry from single photographs is challenging due to the variety and geometric complexity of hairstyles and the lack of ground-truth training data. Classical reconstruction methods like multi-view stereo only reconstruct the visible hair strands, missing the inner structure of hair and hampering realistic hair simulation. To address this, existing methods leverage hairstyle priors trained on synthetic data. Such data, however, is limited in both quantity and quality, since it requires manual work from skilled artists to model the 3D hairstyles and create nearly photorealistic renderings. To overcome this limitation, we propose a novel approach that uses both real and synthetic data to learn an effective hairstyle prior. Specifically, we train a transformer-based prior model on synthetic data to obtain knowledge of the internal hairstyle geometry and introduce real data into the learning process to model the outer structure. This training scheme enables the model to capture the visible hair strands depicted in an input image while preserving the general structure of hairstyles. We exploit this prior in a Gaussian-splatting-based reconstruction method that recovers hairstyles from one or more images. Through qualitative and quantitative comparisons with existing reconstruction pipelines, we demonstrate the effectiveness and superior performance of our method in capturing detailed hair orientation, overall silhouette, and backside consistency.
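The abstract describes a transformer-based prior that maps an input image to strand-based hair geometry. Below is a minimal, hypothetical sketch of what such an architecture could look like in PyTorch; the query-based decoder design, all dimensions, and all names (HairPriorSketch, strand_queries, etc.) are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of a transformer-based hairstyle prior: learnable
# per-strand queries attend to image feature tokens, and each query is
# decoded into one hair strand as a 3D polyline. Purely illustrative.
import torch
import torch.nn as nn

class HairPriorSketch(nn.Module):
    def __init__(self, feat_dim=256, n_strands=1024, pts_per_strand=32):
        super().__init__()
        # One learnable query per hair strand (assumed design choice).
        self.strand_queries = nn.Parameter(torch.randn(n_strands, feat_dim))
        decoder_layer = nn.TransformerDecoderLayer(
            d_model=feat_dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(decoder_layer, num_layers=6)
        # Regress the 3D points of each strand from its query embedding.
        self.strand_head = nn.Linear(feat_dim, pts_per_strand * 3)
        self.pts_per_strand = pts_per_strand

    def forward(self, image_tokens):
        # image_tokens: (B, N_tokens, feat_dim) from any image backbone.
        B = image_tokens.shape[0]
        queries = self.strand_queries.unsqueeze(0).expand(B, -1, -1)
        strand_feats = self.decoder(queries, image_tokens)
        strands = self.strand_head(strand_feats)
        # Return strands as (B, n_strands, pts_per_strand, 3) polylines.
        return strands.view(B, -1, self.pts_per_strand, 3)

# Toy usage: random tokens stand in for real encoder features.
model = HairPriorSketch()
tokens = torch.randn(2, 196, 256)   # e.g. a 14x14 feature grid
strands = model(tokens)             # (2, 1024, 32, 3)
print(strands.shape)
```

Under the training scheme the abstract outlines, such a prior would be supervised with full 3D strand geometry on synthetic data, while real images could only supervise the visible outer strands (e.g. via rendered orientation and silhouette losses); the Gaussian-splatting-based reconstruction stage is not sketched here.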
