

Poster

imHead: A large-scale implicit morphable model for localized head modeling

Rolandos Alexandros Potamias · Stathis Galanakis · Jiankang Deng · Athanasios Papaioannou · Stefanos Zafeiriou


Abstract:

Over the last few years, 3D morphable models (3DMMs) have emerged as a state-of-the-art methodology for modeling and generating expressive 3D avatars. However, given their reliance on a fixed topology, along with their linear nature, they struggle to represent complex full-head shapes. Following the advent of deep implicit functions (DIFs), we propose imHead, a novel implicit 3DMM that not only models expressive 3D head avatars but also facilitates localized editing of facial features. Previous methods directly divided the latent space into local components accompanied by an identity encoding to capture global shape variations, leading to prohibitively large latent sizes. In contrast, we retain a single compact identity space and introduce an intermediate region-specific latent representation to enable local edits. To train imHead, we curate a large-scale dataset of over 4,500 identities, taking a step towards large-scale 3D head modeling. Through a series of experiments, we demonstrate the expressive power of the proposed model in representing diverse identities and expressions, outperforming previous approaches. Additionally, the proposed approach provides an interpretable solution for 3D face manipulation, allowing users to make localized edits. Models and data will be made publicly available for research purposes.
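The core architectural idea (one compact identity code mapped to intermediate region-specific latents that condition an implicit decoder) can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: all dimensions, the number of regions, and the tiny random-weight networks below are illustrative assumptions, and the decoder stands in for a learned signed-distance function.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes -- imHead's actual latent dimensions are not stated here.
D_ID, D_REGION, N_REGIONS = 64, 32, 5  # e.g. eyes, nose, mouth, ears, skull

# A single compact identity code (instead of one independent latent per region).
z_id = rng.standard_normal(D_ID)

# Learned (here: random, untrained) maps from the identity code to
# the intermediate region-specific latent representation.
W_region = rng.standard_normal((N_REGIONS, D_REGION, D_ID)) * 0.1

def region_latents(z):
    """Derive all region-specific latents from one global identity code."""
    return np.tanh(W_region @ z)  # shape: (N_REGIONS, D_REGION)

# Toy implicit decoder: a signed distance at 3D point x,
# conditioned on the latent of the region that x belongs to.
W1 = rng.standard_normal((D_REGION + 3, 16)) * 0.1
W2 = rng.standard_normal((16, 1)) * 0.1

def sdf(x, z_r):
    h = np.tanh(np.concatenate([x, z_r]) @ W1)
    return h @ W2[:, 0]  # scalar signed-distance value

z_regions = region_latents(z_id)

# Localized edit: perturb only one region's latent; all other regions,
# and the identity code itself, are left untouched.
edited = z_regions.copy()
edited[2] += 0.5  # edit the hypothetical "mouth" region

x = np.array([0.1, -0.2, 0.05])  # a query point inside the edited region
assert sdf(x, z_regions[2]) != sdf(x, edited[2])            # geometry changes locally
assert np.allclose(z_regions[[0, 1, 3, 4]], edited[[0, 1, 3, 4]])  # rest unchanged
```

The point of the intermediate representation is visible in the last two assertions: edits act on a per-region latent, so they stay local, while identity is still captured by a single compact code rather than a concatenation of per-region identity latents.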
