Poster
MR-FIQA: Face Image Quality Assessment with Multi-Reference Representations from Synthetic Data Generation
Fu-Zhao Ou · Chongyi Li · Shiqi Wang · Sam Kwong
Recent advances in Face Image Quality Assessment (FIQA) models trained on large-scale real face datasets have been pivotal in guaranteeing accurate face recognition in unconstrained scenarios. Unfortunately, privacy concerns have led to the discontinuation of real datasets, underscoring the pressing need for a synthetic dataset tailored to the FIQA task. However, creating a satisfactory synthetic dataset for FIQA is challenging: it requires not only controlling intra-class degradation across different quality factors (e.g., pose, blur, occlusion) during pseudo-identity generation, but also designing an effective quality characterization method for quality annotation. This paper takes the pioneering step of establishing a Synthetic dataset for FIQA (SynFIQA), based on the hypothesis that accurate quality labeling can be achieved by exploiting quality priors across the diverse domains involved in quality-controllable generation. To validate this, we tailor the generation of reference and degraded samples by aligning pseudo-identity image features in the Stable Diffusion latent space, editing 3D facial parameters, and customizing dual text prompts and post-processing. Furthermore, we propose a novel quality characterization method that thoroughly examines the relationships among Multi-Reference representations from the recognition-embedding, spatial, and visual-language domains to acquire the annotations needed to fit FIQA models (MR-FIQA). Extensive experiments confirm the validity of our hypothesis and demonstrate the advantages of our SynFIQA data and MR-FIQA method. Our dataset, source code, and models will be publicly available.
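The abstract does not give the exact labeling formulation, but the multi-reference idea can be illustrated with a minimal sketch: a degraded sample is compared against its pristine reference in several feature domains (recognition embedding, spatial, visual-language), and the similarities are fused into a pseudo quality label. The function names, weighting scheme, and random stand-in features below are assumptions for illustration only, not the authors' method.

```python
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two 1-D feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))


def multi_reference_quality_label(
    degraded_feats: dict,
    reference_feats: dict,
    weights: dict,
) -> float:
    """Hypothetical pseudo-label: a weighted average of degraded-to-reference
    similarities computed per feature domain. The actual MR-FIQA fusion is not
    specified in the abstract; this is only an illustrative stand-in."""
    score = sum(
        w * cosine_similarity(degraded_feats[d], reference_feats[d])
        for d, w in weights.items()
    )
    return score / sum(weights.values())


# Toy usage with random vectors standing in for real feature-extractor outputs.
rng = np.random.default_rng(0)
domains = ["recognition_embedding", "spatial", "visual_language"]
degraded = {d: rng.normal(size=512) for d in domains}
reference = {d: rng.normal(size=512) for d in domains}
label = multi_reference_quality_label(degraded, reference, {d: 1.0 for d in domains})
print(f"pseudo quality label: {label:.4f}")
```

In practice, each domain's features would come from the corresponding model (e.g., a face recognition encoder, a spatial feature map, a vision-language encoder), and the resulting labels would be used as regression targets when fitting an FIQA model.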