Zixuan Chen1,3
·
Yujin Wang1
·
Xin Cai2
·
Zhiyuan You2
·
Zheming Lu3
·
Fan Zhang1
·
Shi Guo1
·
Tianfan Xue2,1
1Shanghai AI Laboratory, 2The Chinese University of Hong Kong,
3Zhejiang University
- 2025.4.23: Inference code, benchmark, and results are released.
- 2025.4.5: UltraFusion is selected as a ✨highlight✨ at CVPR 2025.
- 2025.2.27: Accepted by CVPR 2025 🎉🎉🎉.
- 2025.1.21: Feel free to try our online demos at Hugging Face and OpenXLab 😊.
- Release training codes.
- Release inference codes and pre-trained model.
- Release UltraFusion benchmark and visual results.
- Release more visual comparisons on our project page.
We capture 100 challenging real-world HDR scenes for performance evaluation. Our benchmark and results (including those of competing methods) are available at Google Drive and Baidu Disk. We also provide results of our method and the comparison methods on RealHDRV and MEFB.
Note: The HDR reconstruction methods perform poorly in some scenes because we follow their original setup to retrain 2-exposure versions, while the training set they used only provides ground truth for the middle exposure, limiting the dynamic range. We believe that training data with a higher dynamic range would improve their performance.
Installation
# clone this repo
git clone https://github.com/OpenImagingLab/UltraFusion.git
cd UltraFusion
# create environment
conda create -n UltraFusion python=3.10
conda activate UltraFusion
pip install -r requirements.txt
Prepare Data and Pre-trained Model
Download raft-sintel.pth, v2-1_512-ema-pruned.ckpt, fcb.pt, and ultrafusion.pt, and put them in the ckpts folder. Download the three benchmarks (Google Drive or Baidu Disk) and put them in the data folder.
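Once the checkpoints are downloaded, a quick sanity check can confirm everything is in place before running inference. This is an illustrative sketch, not part of the repo; it only assumes the file names and `ckpts` folder layout listed above:

```python
from pathlib import Path

# Checkpoint files listed above; ckpts/ is the folder this README asks for.
REQUIRED_CKPTS = [
    "raft-sintel.pth",
    "v2-1_512-ema-pruned.ckpt",
    "fcb.pt",
    "ultrafusion.pt",
]

def missing_ckpts(ckpt_dir="ckpts"):
    """Return the required checkpoint files not found in ckpt_dir."""
    root = Path(ckpt_dir)
    return [name for name in REQUIRED_CKPTS if not (root / name).exists()]

if __name__ == "__main__":
    missing = missing_ckpts()
    if missing:
        print("Missing checkpoints:", ", ".join(missing))
    else:
        print("All checkpoints found.")
```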
Inference
Run one of the following commands for inference.
# UltraFusion Benchmark
python inference.py --dataset UltraFusion --output results --tiled --tile_size 512 --tile_stride 256 --prealign --save_all
# RealHDRV
python inference.py --dataset RealHDRV --output results --tiled --tile_size 512 --tile_stride 256 --prealign --save_all
# MEFB (disable pre-alignment for static scenes)
python inference.py --dataset MEFB --output results --tiled --tile_size 512 --tile_stride 256 --save_all
You can also use val_nriqa.py for evaluation.
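The `--tiled`, `--tile_size`, and `--tile_stride` flags process large images in overlapping windows so the model never sees more than a 512×512 crop at once. As a rough sketch of how such tile offsets can be generated (illustrative only; the actual tiling logic lives inside `inference.py`):

```python
def tile_coords(length, tile_size=512, stride=256):
    """1-D start offsets of overlapping tiles covering [0, length).

    The final tile is shifted back so it ends exactly at `length`,
    guaranteeing full coverage without padding.
    """
    if length <= tile_size:
        return [0]
    starts = list(range(0, length - tile_size + 1, stride))
    if starts[-1] + tile_size < length:
        starts.append(length - tile_size)
    return starts

# For a 1024x768 image with tile_size=512, tile_stride=256:
print(tile_coords(768))   # row offsets:    [0, 256]
print(tile_coords(1024))  # column offsets: [0, 256, 512]
```

With a stride smaller than the tile size, neighboring tiles overlap by `tile_size - stride` pixels; overlapping regions are typically blended to hide seams between tiles.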
This project is developed on the codebase of DiffBIR. We appreciate their great work!
If you find our paper and repo helpful for your research, please consider citing:
@article{chen2025ultrafusion,
title={UltraFusion: Ultra High Dynamic Imaging using Exposure Fusion},
author={Chen, Zixuan and Wang, Yujin and Cai, Xin and You, Zhiyuan and Lu, Zheming and Zhang, Fan and Guo, Shi and Xue, Tianfan},
journal={arXiv preprint arXiv:2501.11515},
year={2025}
}