

Poster

SpinMeRound: Consistent Multi-View Identity Generation Using Diffusion Models

Stathis Galanakis · Alexandros Lattas · Stylianos Moschoglou · Bernhard Kainz · Stefanos Zafeiriou


Abstract:

Despite recent progress in diffusion models, generating realistic head portraits from novel viewpoints remains a significant challenge in computer vision. Most current approaches are constrained to limited angular ranges, predominantly focusing on frontal or near-frontal views. Moreover, although recently emerging large-scale diffusion models have proven robust in handling 3D scenes, they underperform on facial data, given the complex structure of faces and the pitfalls of the uncanny valley. In this paper, we propose SpinMeRound, a diffusion-based approach designed to generate consistent and accurate head portraits from novel viewpoints. By leveraging a number of input views alongside an identity embedding, our method effectively synthesizes diverse viewpoints of a subject whilst robustly preserving their unique identity features. Through experiments, we showcase our model's generation capabilities in full head synthesis, outperforming current state-of-the-art multi-view diffusion models.
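To make the conditioning pattern described above concrete, the following is a minimal, hypothetical sketch of a denoiser that fuses an identity embedding and encoded reference views into the noisy target-view features. All module names, dimensions, and the simple additive fusion are assumptions chosen for illustration; they are not the authors' actual architecture.

```python
import torch
import torch.nn as nn

class ConditionedDenoiser(nn.Module):
    """Toy denoiser conditioned on reference views and an identity embedding.

    This is an illustrative sketch only: a real multi-view diffusion model
    would use a full U-Net or transformer backbone with attention-based
    fusion rather than the additive scheme used here.
    """
    def __init__(self, img_channels=3, feat_dim=64, id_dim=512):
        super().__init__()
        self.encode = nn.Conv2d(img_channels, feat_dim, 3, padding=1)
        # Project the identity embedding (e.g., from a face recognition
        # network -- an assumption) into the feature space.
        self.id_proj = nn.Linear(id_dim, feat_dim)
        # Encode the reference input views into conditioning features.
        self.view_enc = nn.Conv2d(img_channels, feat_dim, 3, padding=1)
        self.decode = nn.Conv2d(feat_dim, img_channels, 3, padding=1)

    def forward(self, noisy_target, ref_views, id_embed):
        # noisy_target: (B, 3, H, W)   -- noisy novel-view image
        # ref_views:    (B, V, 3, H, W) -- V reference input views
        # id_embed:     (B, id_dim)     -- identity embedding
        b, v, c, h, w = ref_views.shape
        x = self.encode(noisy_target)
        # Average features across the V reference views (a simple fusion choice).
        ref = self.view_enc(ref_views.reshape(b * v, c, h, w))
        ref = ref.reshape(b, v, -1, h, w).mean(dim=1)
        # Broadcast the projected identity code over spatial dimensions.
        ident = self.id_proj(id_embed)[:, :, None, None]
        return self.decode(x + ref + ident)  # predicted noise residual

# Usage: predict noise for a batch of 2 subjects with 4 reference views each.
model = ConditionedDenoiser()
noise_pred = model(
    torch.randn(2, 3, 64, 64),
    torch.randn(2, 4, 3, 64, 64),
    torch.randn(2, 512),
)
print(noise_pred.shape)  # torch.Size([2, 3, 64, 64])
```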
