Poster

MDD: A Dataset for Text-and-Music Conditioned Duet Dance Generation

Prerit Gupta · Jason Alexander Fotso-Puepi · Zhengyuan Li · Jay Mehta · Aniket Bera


Abstract:

We introduce Multimodal DuetDance (MDD), a diverse multimodal benchmark dataset designed for text-controlled and music-conditioned 3D duet dance motion generation. Our dataset comprises 620 minutes of high-quality motion capture data performed by professional dancers, synchronized with music, and annotated with over 10K fine-grained natural language descriptions. The annotations capture a rich movement vocabulary, detailing spatial relationships, body movements, and rhythm, making MDD the first dataset to seamlessly integrate human motions, music, and text for duet dance synthesis. We introduce two novel tasks supported by our dataset: (1) Text-to-Duet, where, given music and a textual prompt, both the leader's and follower's dance motions are generated; and (2) Text-to-Dance Accompaniment, where, given music, a textual prompt, and the leader's motion, the follower's motion is generated in a cohesive, text-aligned manner.
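The two tasks differ only in their conditioning signals. A minimal sketch of the corresponding interfaces, assuming joint-position motion tensors and raw audio features (all names, shapes, and types below are hypothetical illustrations, not from the paper):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class DuetSample:
    """One hypothetical MDD example: synchronized motion, music, and text."""
    music: np.ndarray            # (T_audio,) waveform or audio features (assumed shape)
    text: str                    # fine-grained natural-language description
    leader_motion: np.ndarray    # (T, J, 3) leader joint positions (assumed shape)
    follower_motion: np.ndarray  # (T, J, 3) follower joint positions (assumed shape)

def text_to_duet(music: np.ndarray, text: str) -> tuple[np.ndarray, np.ndarray]:
    """Task 1: generate both leader and follower motions from music + text."""
    raise NotImplementedError  # model-specific; not specified by the dataset

def text_to_dance_accompaniment(
    music: np.ndarray, text: str, leader_motion: np.ndarray
) -> np.ndarray:
    """Task 2: generate the follower's motion conditioned on music, text,
    and the leader's motion."""
    raise NotImplementedError  # model-specific; not specified by the dataset
```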
