Single-Image HDR Reconstruction by Multi-Exposure Generation

Abstract

High dynamic range (HDR) imaging is an indispensable technique in modern photography. Traditional methods focus on HDR reconstruction from multiple images, solving the core problems of image alignment, fusion, and tone mapping, yet no perfect solution exists due to ghosting and other visual artifacts in the reconstruction. Recent attempts at single-image HDR reconstruction show a promising alternative: by learning to map pixel values to their irradiance using a neural network, one can bypass the align-and-merge pipeline completely yet still obtain a high-quality HDR image. In this work, we propose a weakly supervised learning method that inverts the physical image formation process for HDR reconstruction by learning to generate multiple exposures from a single image. Our neural network inverts the camera response to reconstruct pixel irradiance before synthesizing multiple exposures and hallucinating details in under- and over-exposed regions from a single input image. To train the network, we propose a representation loss, a reconstruction loss, and a perceptual loss applied to pairs of under- and over-exposed images, and thus do not require HDR images for training. Our experiments show that our proposed model can effectively reconstruct HDR images. Our qualitative and quantitative results show that our method achieves state-of-the-art performance on the DrTMO dataset. Our code is available at this link.
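The core idea of inverting the camera response to recover irradiance and then re-exposing it can be illustrated with a simple sketch. In the paper this inverse mapping is learned by a neural network; here a fixed gamma camera response function (CRF) is a stand-in assumption, and the function name and stop values are illustrative only:

```python
import numpy as np

def synthesize_exposures(image, stops=(-2.0, 0.0, 2.0), gamma=2.2):
    """Toy multi-exposure generation: invert an assumed gamma CRF to
    recover relative irradiance, rescale the exposure by a number of
    stops, then reapply the CRF. The paper learns the inverse mapping
    with a neural network and hallucinates clipped details; this gamma
    model is only a hand-crafted approximation of that idea.
    """
    image = np.clip(image, 0.0, 1.0)        # normalized LDR input in [0, 1]
    irradiance = image ** gamma             # inverse CRF (assumed gamma curve)
    exposures = []
    for s in stops:
        scaled = irradiance * (2.0 ** s)    # shift exposure by s stops
        ldr = np.clip(scaled, 0.0, 1.0) ** (1.0 / gamma)  # reapply CRF, clip
        exposures.append(ldr)
    return exposures
```

The resulting exposure stack could then be merged by any standard multi-exposure HDR fusion method; note that, unlike the learned model, this sketch cannot recover detail lost to clipping in the original image.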

Publication
In Winter Conference on Applications of Computer Vision 2023

Results from our weakly supervised single-image HDR reconstruction method. DrTMO and Deep Recursive HDRI produce artifacts in saturated regions. Our method yields a more visually pleasing HDR image and outperforms both previous methods quantitatively.

Phuoc-Hieu Le