
Poster

GestureHYDRA: Semantic Co-speech Gesture Synthesis via Hybrid Modality Diffusion Transformer and Cascaded-Synchronized Retrieval-Augmented Generation

Quanwei Yang · Luying Huang · Kaisiyuan Wang · Jiazhi Guan · Shengyi He · Fengguo Li · Hang Zhou · Lingyun Yu · Yingying Li · Haocheng Feng · Hongtao Xie


Abstract:

While increasing attention has been paid to human gesture synthesis, most previous works concentrate on holistic body movements without investigating hand gestures that carry explicit and essential semantics. In this paper, we study co-speech gesture generation with an emphasis on activating specific hand gestures, which can deliver more instructional information than common body movements. To achieve this, we first build a high-quality dataset of 3D human body movements that includes a set of semantically explicit hand gestures commonly used by live streamers. We then present a gesture generation system built upon a hybrid-modality diffusion transformer architecture with newly designed motion-style injective transformer layers, which enables advanced gesture modeling and versatile gesture operations. To guarantee that these specific hand gestures are activated, we introduce a cascaded retrieval-augmented generation strategy built upon a semantic gesture repository annotated for each subject, together with an adaptive audio-gesture synchronization mechanism, which substantially improves semantic gesture activation and production efficiency. Quantitative and qualitative experiments demonstrate that our approach outperforms all counterparts.
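To make the retrieval-augmented activation idea concrete, below is a minimal Python sketch of one plausible first cascade stage: matching time-aligned transcript words against a keyword-annotated gesture repository and placing retrieved clips on the audio timeline. All names (GestureClip, TimedWord, retrieve_gesture, activate_gestures) are hypothetical illustrations, not the paper's API; the actual cascaded strategy and adaptive synchronization mechanism are more involved.

```python
# Illustrative sketch only: keyword-based gesture retrieval plus a naive
# audio-timeline placement. Frames not covered by a retrieved gesture would
# be filled by the diffusion model (not shown here).
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class GestureClip:
    keyword: str          # semantic trigger word, e.g. "subscribe"
    frames: List[list]    # per-frame 3D pose parameters (placeholder)

@dataclass
class TimedWord:
    word: str
    start: float          # word onset in seconds (e.g. from forced alignment)
    end: float

def retrieve_gesture(word: str, repo: List[GestureClip]) -> Optional[GestureClip]:
    """First cascade stage: exact keyword match against the repository."""
    for clip in repo:
        if clip.keyword == word.lower():
            return clip
    return None

def activate_gestures(transcript: List[TimedWord],
                      repo: List[GestureClip],
                      fps: int = 30):
    """Place each retrieved clip at the frame where its trigger word starts."""
    activations = []
    for tw in transcript:
        clip = retrieve_gesture(tw.word, repo)
        if clip is not None:
            start_frame = int(tw.start * fps)
            activations.append((start_frame, clip))
    return activations

# Usage: one spoken keyword triggers a retrieval at frame 12 (0.4 s * 30 fps).
repo = [GestureClip("subscribe", [[0.0]] * 45), GestureClip("discount", [[0.0]] * 30)]
words = [TimedWord("please", 0.0, 0.4), TimedWord("subscribe", 0.4, 1.0)]
print(activate_gestures(words, repo))
```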
