Poster
FreeDance: Towards Harmonic Free-Number Group Dance Generation via a Unified Framework
Yiwen Zhao · Yang Wang · Liting Wen · Hengyuan Zhang · Xingqun Qi
Abstract:
Generating harmonic and diverse human motions from music signals, especially for multi-person group dance, is a practical yet challenging task in virtual avatar creation. Existing methods model group dance only with a fixed number of dancers and lack the flexibility to generate group movements for an arbitrary number of individuals. To fulfill this goal, we propose $\textbf{\textit{FreeDance}}$, a novel unified framework capable of synthesizing a free number of dancers harmonically aligned with the given music. To ensure the plausibility of arbitrary-number dancer generation while preserving the diverse dynamics of multiple individuals, we build the framework upon collaborative masked token modeling in a 2D discrete space. In particular, we devise a $\textbf{\textit{Cross-modality Residual Alignment Module (CRAM)}}$ to diversify the movement of each individual and strengthen its alignment with the music. CRAM captures the spatial motion deformation of each dancer via residual learning and integrates it with a rhythmic representation into a joint embedding. We leverage this joint embedding to enhance cross-entity alignment while reinforcing the intrinsic connection between motion and music. Moreover, recognizing that generated multi-dancer motions require interactive coordination, we design a $\textbf{\textit{Temporal Interaction Module (TIM)}}$. Benefiting from masked 2D motion tokens, TIM effectively models the temporal correlation between each individual and its neighboring dancers as interaction guidance, fostering stronger inter-dancer dependencies. Extensive experiments demonstrate that our approach generates harmonic group dance with any number of individuals, outperforming state-of-the-art methods adapted from fixed-number counterparts.
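The abstract gives no implementation details, so the sketch below is a minimal, hypothetical PyTorch illustration of the two described ideas, not the paper's code. The class names `CRAMSketch` and `TIMSketch`, the layer choices, and the tensor shapes (per-dancer token features of shape batch x dancers x time x dim, plus a frame-level music embedding) are all assumptions made for illustration: CRAM is sketched as a learned per-dancer deformation residual fused with a broadcast rhythm embedding, and TIM as per-frame attention across dancers.

```python
import torch
import torch.nn as nn

class CRAMSketch(nn.Module):
    # Hypothetical sketch: fuse a learned per-dancer motion deformation
    # residual with a broadcast music (rhythm) embedding into a joint feature.
    def __init__(self, dim: int):
        super().__init__()
        self.res_proj = nn.Linear(dim, dim)   # residual branch over motion tokens
        self.fuse = nn.Linear(2 * dim, dim)   # joint motion-music embedding

    def forward(self, motion: torch.Tensor, music: torch.Tensor) -> torch.Tensor:
        # motion: (B, N, T, D) per-dancer token features; music: (B, T, D)
        residual = self.res_proj(motion)                 # spatial deformation residual
        rhythm = music.unsqueeze(1).expand_as(motion)    # broadcast rhythm to each dancer
        joint = self.fuse(torch.cat([residual, rhythm], dim=-1))
        return motion + joint                            # residual-aligned motion features

class TIMSketch(nn.Module):
    # Hypothetical sketch: per-frame attention across dancers as
    # interaction guidance; length-agnostic along the dancer axis.
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, N, T, D) -> each dancer attends to all dancers at each frame
        b, n, t, d = x.shape
        q = x.permute(0, 2, 1, 3).reshape(b * t, n, d)
        out, _ = self.attn(q, q, q)
        return out.reshape(b, t, n, d).permute(0, 2, 1, 3) + x  # residual connection

# Usage: the same modules apply to any number of dancers N.
cram, tim = CRAMSketch(64), TIMSketch(64)
motion = torch.randn(2, 5, 30, 64)   # 5 dancers, 30 frames
music = torch.randn(2, 30, 64)
out = tim(cram(motion, music))       # (2, 5, 30, 64)
```

Note that the attention in `TIMSketch` places no constraint on the dancer-axis length, so the same weights serve groups of any size; this length-agnostic property is what a free-number framework needs, though the paper's actual mechanism may differ.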