Poster
UniGS: Modeling Unitary 3D Gaussians for Novel View Synthesis from Sparse-view Images
Jiamin WU · Kenkun Liu · Xiaoke Jiang · Yuan Yao · Lei Zhang
In this work, we introduce UniGS, a novel 3D Gaussian reconstruction and novel view synthesis model that predicts a high-fidelity representation of 3D Gaussians from an arbitrary number of posed sparse-view images. Previous methods often regress 3D Gaussians locally on a per-pixel basis for each view, then transform them into world space and merge them through point concatenation. In contrast, our approach models unitary 3D Gaussians in world space and updates them layer by layer. To leverage information from the multi-view inputs when updating the unitary 3D Gaussians, we develop a DETR (DEtection TRansformer)-like framework that treats the 3D Gaussians as queries and updates their parameters by performing multi-view cross-attention (MVDFA) over the input images, which serve as keys and values. This approach effectively avoids the 'ghosting' issue and allocates more 3D Gaussians to complex regions. Moreover, since the number of 3D Gaussians used as decoder queries is independent of the number of input views, our method accepts an arbitrary number of multi-view images as input without causing memory explosion or requiring retraining. Extensive experiments validate the advantages of our approach, showcasing superior performance over existing methods both quantitatively (improving PSNR by 4.2 dB when trained on Objaverse and tested on the GSO benchmark) and qualitatively.
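To make the DETR-like design concrete, the following is a minimal, hypothetical sketch (not the authors' code): a fixed set of learnable 3D Gaussian queries is refined layer by layer through cross-attention over multi-view image features, so the query count, and hence memory, does not grow with the number of views. All class names, tensor sizes, and the 14-channel Gaussian parameterization are illustrative assumptions, and plain multi-head cross-attention stands in for the paper's MVDFA module.

```python
import torch
import torch.nn as nn


class GaussianDecoderLayer(nn.Module):
    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, queries, view_tokens):
        # queries:     (B, N_gaussians, dim)   -- unitary 3D Gaussians in world space
        # view_tokens: (B, V * N_patches, dim) -- features from all input views (keys/values)
        attn_out, _ = self.cross_attn(queries, view_tokens, view_tokens)
        queries = self.norm1(queries + attn_out)
        queries = self.norm2(queries + self.ffn(queries))
        return queries


class UniGSDecoderSketch(nn.Module):
    def __init__(self, num_gaussians: int = 2048, dim: int = 256, num_layers: int = 6):
        super().__init__()
        # The query count is fixed and independent of the number of input views.
        self.gaussian_queries = nn.Parameter(torch.randn(num_gaussians, dim))
        self.layers = nn.ModuleList(GaussianDecoderLayer(dim) for _ in range(num_layers))
        # Assumed 14 channels: xyz (3) + scale (3) + rotation quaternion (4) + opacity (1) + RGB (3)
        self.gaussian_head = nn.Linear(dim, 14)

    def forward(self, view_tokens):
        b = view_tokens.shape[0]
        q = self.gaussian_queries.unsqueeze(0).expand(b, -1, -1)
        for layer in self.layers:        # update the same Gaussians layer by layer
            q = layer(q, view_tokens)
        return self.gaussian_head(q)     # (B, N_gaussians, 14) Gaussian parameters


# Usage: features from any number of views are flattened into one key/value token set.
tokens = torch.randn(1, 4 * 196, 256)       # e.g. 4 views x 196 patch tokens each
gaussians = UniGSDecoderSketch()(tokens)    # -> torch.Size([1, 2048, 14])
```

Because the decoder queries (not per-pixel predictions) carry the scene, adding or removing input views only changes the length of the key/value sequence, which is why no retraining is needed for a different view count.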