Poster
MemoryTalker: Personalized Speech-Driven 3D Facial Animation via Audio-Guided Stylization
Hyung Kyu Kim · Sangmin Lee · HAK GU KIM
Abstract:
Speech-driven 3D facial animation aims to synthesize realistic facial motion sequences from given audio that match the speaker's speaking style. However, previous works often require priors such as speaker class labels or additional 3D facial meshes at inference, which prevents them from reflecting the speaking style and limits their practical use. To address these issues, we propose MemoryTalker, which enables realistic and accurate 3D facial motion synthesis by reflecting speaker style with audio input alone, maximizing usability in applications. Our framework consists of two training stages: the first stage stores and retrieves general motion (i.e., Memorizing), and the second stage performs personalized facial motion synthesis (i.e., Animating) with the motion memory stylized by an audio-driven speaking-style feature. In this second stage, our model learns which facial motion types should be emphasized for a particular piece of audio. As a result, MemoryTalker can generate reliable personalized facial animation without additional prior information. With quantitative and qualitative evaluations, as well as a user study, we show the effectiveness of our model and its performance gains for personalized facial animation over state-of-the-art methods. Our source code will be released to facilitate further research.
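The sketch below illustrates the two-stage idea from the abstract, not the authors' released implementation: a learnable key-value motion memory queried by audio features (Memorizing), and an audio-derived speaking-style feature that stylizes the stored motion values before retrieval (Animating). The module names, dimensions, and the FiLM-like gating used for stylization are assumptions made for illustration only.

```python
# Minimal sketch of a stylizable motion memory (assumed design, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class MotionMemory(nn.Module):
    """Key-value memory: audio features attend over key slots to read
    a weighted combination of value (motion) slots."""

    def __init__(self, num_slots=64, key_dim=256, motion_dim=256):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(num_slots, key_dim))
        self.values = nn.Parameter(torch.randn(num_slots, motion_dim))

    def forward(self, audio_feat, style_feat=None):
        # audio_feat: (B, T, key_dim); style_feat: (B, motion_dim) or None
        attn = F.softmax(audio_feat @ self.keys.t(), dim=-1)  # (B, T, S)
        if style_feat is None:
            # Stage 1 (Memorizing): retrieve general motion features.
            return attn @ self.values                           # (B, T, motion_dim)
        # Stage 2 (Animating): stylize stored values with an audio-driven
        # speaking-style feature (gating is an assumption for illustration).
        styled = self.values.unsqueeze(0) * torch.sigmoid(style_feat).unsqueeze(1)
        return torch.einsum("bts,bsd->btd", attn, styled)       # (B, T, motion_dim)


if __name__ == "__main__":
    memory = MotionMemory()
    audio_feat = torch.randn(2, 50, 256)            # e.g., from a speech encoder
    general_motion = memory(audio_feat)              # stage-1 style retrieval
    style_feat = torch.randn(2, 256)                 # audio-derived style embedding
    personalized = memory(audio_feat, style_feat)    # stage-2 stylized retrieval
    print(general_motion.shape, personalized.shape)
```

In this reading, stage 1 trains the memory to reconstruct general facial motion from audio alone, and stage 2 adds the style branch so retrieval is personalized per speaker without any class label or extra mesh at inference.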