Poster
FineMotion: A Dataset and Benchmark with both Spatial and Temporal Annotation for Fine-grained Motion Generation and Editing
Bizhu Wu · Jinheng Xie · Meidan Ding · Zhe Kong · Jianfeng Ren · Ruibin Bai · Rong Qu · Linlin Shen
Generating realistic human motions from textual descriptions has advanced significantly owing to the prevalence of digital humans. Although recent studies have achieved notable success on this task, they omit specific body part movements and their timing. In this paper, we address this issue by enriching textual descriptions with finer details. Specifically, we propose the FineMotion dataset, which contains over 442k human motion snippets (short segments of human motion sequences) paired with detailed descriptions of the corresponding body part movements. The dataset also includes about 95k detailed paragraphs describing body part movements throughout entire motion sequences. Experimental results demonstrate the value of our dataset for text-driven fine-grained human motion generation, with a remarkable +15.3% improvement in Top-3 accuracy for the MDM network. Notably, we further support a zero-shot pipeline for fine-grained motion editing, enabling detailed text-driven edits in both the spatial and temporal dimensions. The dataset and code will be released on GitHub.