

Poster

HDR Image Generation via Gain Map Decomposed Diffusion

Yuanshen Guan · Ruikang Xu · Yinuo Liao · Mingde Yao · Lizhi Wang · Zhiwei Xiong


Abstract:

While diffusion models have demonstrated significant success in standard dynamic range (SDR) image synthesis, generating high dynamic range (HDR) images with higher luminance and broader color gamuts remains challenging. This arises primarily from two factors: (1) the incompatibility between pretrained SDR image auto-encoders and high-bit-depth HDR images; (2) the lack of large-scale HDR image datasets for effective learning and supervision. In this paper, we propose a novel framework for HDR image generation with two key innovations: (1) Decomposed HDR Image Generation: We leverage a double-layer HDR image format to decompose the HDR image into two low-bit-depth components: an SDR image with a corresponding Gain Map (GM). This format is inherently compatible with pretrained SDR auto-encoders, motivating the decomposition of HDR image generation into SDR image and GM prediction. (2) Unsupervised Data Construction: We develop an automated pipeline to construct "Text-SDR-GM" triplets from large-scale text-image datasets via brightness-aware compression and gamut-constrained reduction, enabling unsupervised learning of GMs without ground-truth data. Building upon these innovations, we adapt the Stable Diffusion model to jointly predict GMs and SDR images, enabling high-quality decomposed HDR image generation. Experiments show that our framework excels in HDR image generation and SDR-to-HDRTV up-conversion, generalizing well across diverse scenes and conditions.
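The abstract does not spell out the recombination formula, but double-layer gain-map formats (e.g. the Adobe/ISO 21496-1 convention) typically store per-pixel log2 gains normalized to a fixed range, so that the HDR image is recovered as the SDR base multiplied by the exponentiated gain. The following sketch illustrates that convention; the `gm_min_log2`/`gm_max_log2` range and the helper names are assumptions for illustration, not the paper's exact parameterization.

```python
import numpy as np

def reconstruct_hdr(sdr, gain_map, gm_min_log2=0.0, gm_max_log2=4.0):
    """Recombine an SDR base image with a gain map (GM) into linear HDR.

    Assumes the common double-layer convention: the GM stores per-pixel
    log2 gains normalized to [0, 1] over [gm_min_log2, gm_max_log2].
    The paper's exact convention may differ; this is an illustrative sketch.
    """
    log2_gain = gm_min_log2 + gain_map * (gm_max_log2 - gm_min_log2)
    return sdr * np.exp2(log2_gain)

def decompose_hdr(hdr, sdr, gm_min_log2=0.0, gm_max_log2=4.0, eps=1e-6):
    """Inverse direction: derive a normalized GM from an HDR/SDR pair."""
    log2_gain = np.log2((hdr + eps) / (sdr + eps))
    return np.clip((log2_gain - gm_min_log2) / (gm_max_log2 - gm_min_log2),
                   0.0, 1.0)
```

Because both components are low-bit-depth, each fits a pretrained SDR auto-encoder, which is the compatibility property the decomposition exploits.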
