Poster
Who is a Better Talker: Subjective and Objective Quality Assessment for AI-Generated Talking Heads
Yingjie Zhou · Jiezhang Cao · Zicheng Zhang · Farong Wen · Yanwei Jiang · Jun Jia · Xiaohong Liu · Xiongkuo Min · Guangtao Zhai
Speech-driven portrait animation methods are figuratively known as "Talkers" because of their ability to synthesize speaking mouth shapes and facial movements. With the rapid development of Text-to-Image (T2I) models in particular, AI-Generated Talking Heads (AGTHs) have gradually become an emerging form of digital human media. However, challenges persist regarding the quality of these talkers and the AGTHs they generate, and comprehensive studies addressing these issues remain limited. To address this gap, this paper presents THQA-10K, the largest AGTH quality assessment dataset to date, built by selecting 12 prominent T2I models and 14 advanced talkers to generate AGTHs for 14 prompts. After excluding instances where AGTH generation fails, the THQA-10K dataset contains 10,457 AGTHs, providing rich material for AGTH quality assessment. Volunteers were then recruited to subjectively rate the AGTHs and label the corresponding distortion categories. In our analysis of the subjective experimental results, we evaluate the performance of the talkers in terms of generalizability and quality, and also expose the distortions present in existing AGTHs. Finally, an objective quality assessment method based on the first frame, the Y-T slice, and tone-lip consistency is proposed. Experimental results show that this method achieves state-of-the-art (SOTA) performance in AGTH quality assessment. The work in this paper will be released.
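The abstract does not detail how the Y-T slice feature is computed; a common interpretation in video quality assessment is to fix one horizontal (x) position and stack that vertical column of pixels across all frames, yielding a 2-D image whose axes are vertical position (Y) and time (T), so temporal jitter shows up as spatial texture. The sketch below illustrates that interpretation with NumPy; the function name `yt_slice` and the toy video tensor are assumptions, not part of the paper's released code.

```python
import numpy as np

def yt_slice(video: np.ndarray, x: int) -> np.ndarray:
    """Assumed Y-T slice: video is a (T, H, W, C) array; fixing
    column x in every frame and stacking over time gives an
    (H, T, C) image (vertical position vs. time)."""
    # video[:, :, x, :] has shape (T, H, C); swap axes to (H, T, C)
    return video[:, :, x, :].transpose(1, 0, 2)

# Toy example: 8 frames of 4x6 RGB video
video = np.arange(8 * 4 * 6 * 3, dtype=np.uint8).reshape(8, 4, 6, 3)
slice_img = yt_slice(video, x=3)
print(slice_img.shape)  # (4, 8, 3)
```

Column t of the resulting slice image is simply the fixed pixel column of frame t, so unstable head or lip motion in an AGTH would appear as irregular streaks along the time axis.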