Poster
Motion-2-to-3: Leveraging 2D Motion Data for 3D Motion Generation
Ruoxi Guo · Huaijin Pi · Zehong Shen · Qing Shuai · Zechen Hu · Zhumei Wang · Yajiao Dong · Ruizhen Hu · Taku Komura · Sida Peng · Xiaowei Zhou
Text-driven human motion synthesis has shown its potential to revolutionize motion design in the movie and game industries. Existing methods often rely on 3D motion capture data, which requires special setups and incurs high acquisition costs, ultimately limiting the diversity and scope of the captured motion. In contrast, 2D human videos offer a vast and accessible source of motion data, covering a wider range of styles and activities. In this paper, we explore the use of 2D human motion extracted from videos as an alternative data source to improve text-driven 3D motion generation. Our approach introduces a novel framework that disentangles local joint motion from global movement, enabling efficient learning of local motion priors from 2D data. We first train a single-view 2D local motion generator on a large dataset of text-2D motion pairs. We then fine-tune the generator with 3D data, transforming it into a multi-view generator that predicts view-consistent local joint motion and root dynamics. Evaluations on widely used benchmark datasets and novel text prompts demonstrate that our method efficiently utilizes 2D data and supports a wider range of realistic 3D human motion generation.
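To make the two-stage training schedule described above concrete, here is a minimal PyTorch sketch of a text-conditioned local motion generator trained first on single-view 2D data and then fine-tuned into a multi-view generator with root dynamics. All module names, tensor shapes, the fixed clip length, and the plain regression losses are illustrative assumptions, not the authors' implementation (the actual generator may use, e.g., a diffusion or transformer backbone).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

T = 16  # frames per motion clip (illustrative assumption)

class LocalMotionGenerator(nn.Module):
    """Text-conditioned generator for root-relative (local) joint motion.

    Stage 1 trains it to predict a single 2D view; stage 2 widens it to
    predict V view-consistent 2D projections plus per-frame root dynamics.
    """

    def __init__(self, text_dim=512, joints=22, views=1, hidden=256):
        super().__init__()
        self.views, self.joints = views, joints
        self.backbone = nn.Sequential(nn.Linear(text_dim, hidden), nn.ReLU())
        # Per-frame, per-view, per-joint 2D positions relative to the root.
        self.motion_head = nn.Linear(hidden, T * views * joints * 2)
        # Per-frame root dynamics (e.g., planar velocity + yaw rate).
        self.root_head = nn.Linear(hidden, T * 3)

    def grow_views(self, views):
        """Stage-2 surgery: re-initialize the motion head for V views."""
        self.views = views
        hidden = self.backbone[0].out_features
        self.motion_head = nn.Linear(hidden, T * views * self.joints * 2)

    def forward(self, text_emb):
        h = self.backbone(text_emb)
        local = self.motion_head(h).view(-1, T, self.views, self.joints, 2)
        root = self.root_head(h).view(-1, T, 3)
        return local, root

def stage1_step(model, opt, text_emb, motion_2d):
    """Train on text-2D motion pairs: supervise the single view only."""
    local, _ = model(text_emb)
    loss = F.mse_loss(local[:, :, 0], motion_2d)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def stage2_step(model, opt, text_emb, multiview_2d, root_gt):
    """Fine-tune on 3D data projected to V views, adding root supervision."""
    local, root = model(text_emb)
    loss = F.mse_loss(local, multiview_2d) + F.mse_loss(root, root_gt)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Usage with random stand-in data (real inputs would be CLIP-style text
# embeddings, 2D keypoints from video, and multi-view projections of mocap):
model = LocalMotionGenerator()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
text = torch.randn(8, 512)
stage1_step(model, opt, text, torch.randn(8, T, 22, 2))
model.grow_views(4)  # switch to 4-view prediction before stage 2
opt = torch.optim.Adam(model.parameters(), lr=1e-5)  # new head => new optimizer
stage2_step(model, opt, text, torch.randn(8, T, 4, 22, 2), torch.randn(8, T, 3))
```

The key design point the sketch mirrors is the disentanglement: the 2D-pretrained pathway only ever learns root-relative joint motion, so the root dynamics head can be attached and supervised later with the much smaller 3D dataset.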