A MULTI-EXPOSURE IMAGE FUSION BASED ON THE ADAPTIVE WEIGHTS
REFLECTING THE RELATIVE PIXEL INTENSITY AND GLOBAL GRADIENT
Sang-hoon Lee, Jae Sung Park, Nam Ik Cho
Seoul National University
Department of Electrical and Computer Engineering
1, Gwanak-ro, Gwanak-gu, Seoul, 08826, Republic of Korea
ABSTRACT
This paper presents a new multi-exposure image fusion algorithm. The conventional approach is to define a weight map for each of the multi-exposure images and then obtain the fused image as their weighted sum. Most existing methods focus on finding weight functions that assign larger weights to pixels in better-exposed regions. Whereas conventional methods apply the same function to each of the multi-exposure images independently, we propose functions that consider all the multi-exposure images simultaneously, reflecting the relative intensity between the images and the global gradients. Specifically, we define two kinds of weight functions. The first measures the importance of a pixel value relative to the overall brightness and the neighboring exposure images. The second reflects the importance of a pixel value when it lies in a range with a relatively large global gradient compared to the other exposures. The proposed method has modest computational complexity owing to its simple weight functions, yet it achieves visually pleasing results and high scores on an image quality measure.
Index Terms— Image fusion, dynamic range, image enhancement
1. INTRODUCTION
The dynamic range of a camera is usually narrower than that of most scenes we wish to capture. Whatever its bit-depth, a camera is considered to have a relatively low dynamic range (LDR) compared to scenes with high dynamic range (HDR). Hence, the most common approach for capturing an HDR scene with an LDR camera is to take several pictures while changing the exposure time from short to long [1] and merge them into an HDR image.
On the other hand, to display the synthesized HDR image on an LDR display device, we need a tone-mapping process that compresses the HDR into the LDR [2, 3]. When the target displays are only LDR ones, we may instead directly synthesize a tone-mapped-like LDR image from the multi-exposure images. The most common approach for this purpose is to define a weight map for each of the multi-exposure images and synthesize the final tone-mapped-like image as a weighted sum of the images, which is called multi-exposure image fusion (MEF).

Fig. 1. Demonstration of MEF: (a) a multi-exposure image sequence courtesy of Tom Mertens and (b) the result of our fusion method.
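To make the weighted-sum formulation concrete, a common pixel-wise form can be written as follows (the notation here is ours, chosen for illustration):

$$
Y(p) = \sum_{n=1}^{N} \hat{W}_n(p)\, X_n(p), \qquad
\hat{W}_n(p) = \frac{W_n(p)}{\sum_{m=1}^{N} W_m(p)},
$$

where $X_n$ is the $n$-th exposure, $W_n$ its weight map, and $Y$ the fused image; the normalization ensures that the weights sum to one at every pixel $p$.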
The conventional MEF methods are mostly pixel-wise, i.e., the weight maps are also images of the same size as the input, and each weight reflects the importance of the corresponding pixel in the input images. Hence, finding appropriate weight maps is the most important task in this approach. For example, Burt et al. [4] used a Laplacian pyramid decomposition and computed weight maps from the local energy of the pyramid coefficients and the correlation between the pyramids. Mertens et al. [5] defined several metrics that reflect pixel quality, such as contrast, saturation, and well-exposedness. Raman et al. [6] and Zhang et al. [7] detected regions with rich information in an image, obtained by computing gradient magnitudes [6] or by a bilateral filtering process [7].
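As a concrete illustration of such a quality metric, the well-exposedness measure of [5] favors intensities near the middle of the range using a Gaussian curve. A minimal NumPy sketch follows; the sigma value of 0.2 follows [5], while the function name and interface are our own:

```python
import numpy as np

def well_exposedness(img, sigma=0.2):
    """Per-pixel weight favoring intensities near mid-gray.

    img: float array in [0, 1]; shape (H, W) for grayscale or
         (H, W, 3) for color (channel responses are multiplied, as in [5]).
    """
    w = np.exp(-((img - 0.5) ** 2) / (2.0 * sigma ** 2))
    if w.ndim == 3:  # combine the per-channel responses
        w = np.prod(w, axis=2)
    return w
```

In [5], this measure is multiplied with contrast and saturation measures to form the final weight map for each exposure.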
Vonikakis et al. [8] computed weight maps by illumination estimation. Since weight maps are often noisy, Li et al. [9, 10] refined them using edge-preserving filters such as the recursive filter [11] or the guided filter [12]. Shen et al. proposed a random walk approach to fuse images. There are also