

Poster

FPEM: Face Prior Enhanced Facial Attractiveness Prediction for Live Videos with Face Retouching

Hui Li · Xiaoyu Ren · Hongjiu Yu · Ying Chen · Kai Li · L Wang · Xiongkuo Min · Huiyu Duan · Guangtao Zhai · Xu Liu


Abstract:

Facial attractiveness prediction (FAP) has long been an important computer vision task with broad applications in live videos with facial retouching. However, previous FAP datasets are either small or closed-source, and the corresponding FAP models exhibit limited generalization and adaptation ability. To overcome these limitations, we introduce LiveBeauty, the first large-scale FAP dataset specifically designed for live video scenarios, where face images may be processed in real time for aesthetic purposes. LiveBeauty contains 10,000 face images collected directly from a live streaming platform, together with 200,000 attractiveness annotations obtained from a well-devised subjective experiment, making it the largest open-access FAP dataset. Based on this dataset, we propose a novel FAP method named Facial Prior Enhanced Multi-modal model (FPEM) to measure the attractiveness of facial images. Extensive experiments on both LiveBeauty and other open-source FAP datasets demonstrate that the proposed method achieves state-of-the-art performance. The dataset will be made available soon.
