
Single-Image HDR Reconstruction by Multi-Exposure Generation

Phuoc-Hieu Le1           Quynh Le1, 2           Rang Nguyen1           Binh-Son Hua1

1VinAI Research 2University of California San Diego

Winter Conference on Applications of Computer Vision (WACV), 2023

Results from our weakly supervised single-image HDR reconstruction method. DrTMO and Deep Recursive HDRI produce artifacts in saturated regions. Our method yields a more visually pleasing HDR image and outperforms both previous methods quantitatively.

Abstract

High dynamic range (HDR) imaging is an indispensable technique in modern photography. Traditional methods focus on HDR reconstruction from multiple images, solving the core problems of image alignment, fusion, and tone mapping, yet lacking a perfect solution due to ghosting and other visual artifacts in the reconstruction. Recent attempts at single-image HDR reconstruction show a promising alternative: by learning to map pixel values to their irradiance using a neural network, one can bypass the align-and-merge pipeline completely yet still obtain a high-quality HDR image. In this work, we propose a weakly supervised learning method that inverts the physical image formation process for HDR reconstruction via learning to generate multiple exposures from a single image. Our neural network can invert the camera response to reconstruct pixel irradiance before synthesizing multiple exposures and hallucinating details in under- and over-exposed regions from a single input image. To train the network, we propose a representation loss, a reconstruction loss, and a perceptual loss applied on pairs of under- and over-exposure images, and thus do not require HDR images for training. Our experiments show that our proposed model can effectively reconstruct HDR images. Our qualitative and quantitative results show that our method achieves state-of-the-art performance on the DrTMO dataset.
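For intuition, below is a minimal NumPy sketch of the classical exposure-stack pipeline that our learned method replaces: mapping LDR pixels back to irradiance, re-exposing, and merging the synthesized stack into an HDR estimate. The fixed gamma-2.2 response and the hat weighting are illustrative assumptions only; in our method the inverse camera response and the exposure synthesis are learned by neural networks, which also hallucinate details in under- and over-exposed regions.

import numpy as np

GAMMA = 2.2  # assumed global camera response; our method learns this inversion instead

def inverse_crf(ldr):
    """Map LDR pixel values in [0, 1] back to (relative) irradiance."""
    return np.clip(ldr, 0.0, 1.0) ** GAMMA

def apply_crf(irradiance):
    """Map irradiance back to LDR pixel values in [0, 1]."""
    return np.clip(irradiance, 0.0, 1.0) ** (1.0 / GAMMA)

def synthesize_exposure(ldr, ev):
    """Re-expose a single LDR image by +/- ev stops (2**ev scaling of irradiance)."""
    irradiance = inverse_crf(ldr)
    return apply_crf(irradiance * (2.0 ** ev))

def merge_exposures(ldr_stack, evs, eps=1e-6):
    """Weighted merge of an exposure stack into a relative HDR radiance map.

    Well-exposed pixels (away from 0 and 1) receive higher weight, as in a
    classic Debevec-style merge.
    """
    numerator = np.zeros_like(ldr_stack[0], dtype=np.float64)
    denominator = np.zeros_like(ldr_stack[0], dtype=np.float64)
    for ldr, ev in zip(ldr_stack, evs):
        weight = 1.0 - np.abs(2.0 * ldr - 1.0)      # hat weighting
        radiance = inverse_crf(ldr) / (2.0 ** ev)   # undo the exposure scaling
        numerator += weight * radiance
        denominator += weight
    return numerator / (denominator + eps)

if __name__ == "__main__":
    ldr = np.random.rand(64, 64, 3)                 # stand-in for a single input image
    evs = [-2.0, 0.0, 2.0]
    stack = [synthesize_exposure(ldr, ev) for ev in evs]
    hdr = merge_exposures(stack, evs)
    print(hdr.shape, hdr.min(), hdr.max())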

Video

Coming Soon!

Materials

Proposed Network

Training pipeline of our proposed framework. Given a pair of images at two different exposures, we predict a latent exposure-invariant representation by enforcing the exposure pair (X̂1, X̂2) to share the same representation up to a scaling factor (network N1). This representation can then be scaled and passed to the Up/Down-Exposure Nets (N2 and N3) to reconstruct images at different exposures.
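The sketch below illustrates this training signal in simplified PyTorch: a toy convolutional encoder stands in for N1 and a toy decoder for the Up-Exposure Net, the representation loss encourages the latents of an exposure pair to agree up to the exposure scale, and a reconstruction loss supervises the re-exposed output. The architectures, the exact form of the scaling, the perceptual loss, and the loss weights are simplified or omitted here; see the paper for the actual networks and objectives.

import torch
import torch.nn as nn

class TinyEncoder(nn.Module):
    """Toy stand-in for the representation network N1."""
    def __init__(self, channels=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class TinyExposureNet(nn.Module):
    """Toy stand-in for the Up/Down-Exposure networks N2/N3."""
    def __init__(self, channels=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)

def training_step(n1, n2_up, x1, x2, scale):
    """One illustrative step: x1 and x2 show the same scene, with x2 brighter
    than x1 by a known exposure factor `scale`."""
    z1, z2 = n1(x1), n1(x2)
    # Representation loss: the two exposures should share one latent up to the exposure scale.
    loss_repr = nn.functional.l1_loss(z1 * scale, z2)
    # Reconstruction loss: the up-exposure net maps x1's scaled latent to x2.
    x2_hat = n2_up(z1 * scale)
    loss_rec = nn.functional.l1_loss(x2_hat, x2)
    return loss_repr + loss_rec

if __name__ == "__main__":
    n1, n2_up = TinyEncoder(), TinyExposureNet()
    x1 = torch.rand(1, 3, 64, 64)
    x2 = torch.clamp(x1 * 2.0, 0.0, 1.0)   # toy "brighter" exposure of the same scene
    loss = training_step(n1, n2_up, x1, x2, scale=2.0)
    loss.backward()
    print(float(loss))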


Qualitative Results

Comparison of tone-mapped HDR images between different methods. DrTMO and Deep Recursive HDRI produce artifacts in regions with extremely high dynamic range, SingleHDR exhibits checkerboard artifacts, while our method recovers details in these regions with visually pleasing results.


Quantitative Results
Citation
@inproceedings{le2023singlehdr,
  title={Single-Image HDR Reconstruction by Multi-Exposure Generation},
  author={Phuoc-Hieu Le and Quynh Le and Rang Nguyen and Binh-Son Hua},
  booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  month={January},
  year={2023},
}

Acknowledgements

This work was done while Quynh Le was a resident in the AI Residency program at VinAI Research.
The website is adapted from this template.