Poster
OneGT: One-Shot Geometry-Texture Neural Rendering for Head Avatars
Jinshu Chen · Bingchuan Li · Fan Zhang · Songtao Zhao · Qian HE
Existing solutions for creating high-fidelity digital head avatars face various obstacles. Traditional rendering tools produce realistic results but demand expert skills, whereas neural rendering methods are more efficient yet often trade off fidelity against flexibility. We present OneGT, which, for the first time, adheres to the framework of traditional rendering tools while restructuring individual stages of the rendering pipeline with neural networks. OneGT thus retains high systemic interpretability while inheriting the superior performance of neural rendering approaches. Specifically, OneGT comprises a skeleton-anchoring stage and a texture-rendering stage: well-designed Transformers learn the geometric transformations, and the proposed reference-perceptible DiT renders the textures. Our framework learns geometric consistency from innovatively introduced synthetic data, achieving superior performance while requiring only 10%-30% of the real-world data typically used by competitive methods. Experimental results demonstrate that OneGT produces high-fidelity portrait avatars while maintaining the flexibility of editing.
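The abstract names the two stages only at a high level. Below is a minimal, hypothetical PyTorch sketch of how such a two-stage geometry-texture pipeline could be wired together; all module names, token shapes, the anchor-based parameterization, and the cross-attention reading of "reference-perceptible" are assumptions for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class SkeletonAnchoringStage(nn.Module):
    """Assumed stage 1: a Transformer maps driving-signal tokens to
    per-anchor geometric transformations on a head skeleton."""
    def __init__(self, dim=256, heads=8, layers=4, num_anchors=64):
        super().__init__()
        self.anchors = nn.Parameter(torch.randn(num_anchors, dim))
        enc = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                         batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, num_layers=layers)
        self.to_transform = nn.Linear(dim, 12)  # 3x4 rigid transform per anchor

    def forward(self, driving_tokens):             # (B, T, dim)
        B = driving_tokens.shape[0]
        anchors = self.anchors.expand(B, -1, -1)   # (B, A, dim)
        x = torch.cat([anchors, driving_tokens], dim=1)
        x = self.encoder(x)[:, :anchors.shape[1]]  # keep anchor outputs
        return self.to_transform(x)                # (B, A, 12)

class ReferencePerceptibleDiTBlock(nn.Module):
    """Assumed stage 2: one DiT-style block that denoises texture latents
    while cross-attending to one-shot reference-image features, so the
    identity texture stays visible to the renderer (assumed mechanism)."""
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))
        self.norm1, self.norm2, self.norm3 = (nn.LayerNorm(dim),
                                              nn.LayerNorm(dim),
                                              nn.LayerNorm(dim))

    def forward(self, latent_tokens, ref_tokens):
        x = latent_tokens
        h = self.norm1(x)
        x = x + self.self_attn(h, h, h)[0]
        h = self.norm2(x)
        x = x + self.cross_attn(h, ref_tokens, ref_tokens)[0]
        return x + self.mlp(self.norm3(x))

if __name__ == "__main__":
    geo = SkeletonAnchoringStage()
    tex = ReferencePerceptibleDiTBlock()
    transforms = geo(torch.randn(2, 16, 256))       # driving-signal tokens
    texture = tex(torch.randn(2, 1024, 256),        # noisy texture latents
                  torch.randn(2, 256, 256))         # reference-image tokens
    print(transforms.shape, texture.shape)          # (2, 64, 12) (2, 1024, 256)
```

In this reading, the skeleton-anchoring stage supplies explicit, interpretable geometry (per-anchor transforms) that the texture stage consumes, mirroring the abstract's claim that OneGT keeps the staged structure of a rendering pipeline while each stage is learned.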