Guided Image Filtering
Kaiming He, Member, IEEE, Jian Sun, Member, IEEE, and Xiaoou Tang, Fellow, IEEE
Abstract—In this paper, we propose a novel explicit image filter called guided filter. Derived from a local linear model, the guided filter
computes the filtering output by considering the content of a guidance image, which can be the input image itself or another different
image. The guided filter can be used as an edge-preserving smoothing operator like the popular bilateral filter [1], but it has better
behaviors near edges. The guided filter is also a more generic concept beyond smoothing: It can transfer the structures of the guidance
image to the filtering output, enabling new filtering applications like dehazing and guided feathering. Moreover, the guided filter
naturally has a fast and nonapproximate linear time algorithm, regardless of the kernel size and the intensity range. Currently, it is one
of the fastest edge-preserving filters. Experiments show that the guided filter is both effective and efficient in a great variety of
computer vision and computer graphics applications, including edge-aware smoothing, detail enhancement, HDR compression, image
matting/feathering, dehazing, joint upsampling, etc.
Index Terms—Edge-preserving filtering, bilateral filter, linear time filtering
1 INTRODUCTION
Most applications in computer vision and computer
graphics involve image filtering to suppress and/or
extract content in images. Simple linear translation-invariant
(LTI) filters with explicit kernels, such as the mean,
Gaussian, Laplacian, and Sobel filters [2], have been widely
used in image restoration, blurring/sharpening, edge
detection, feature extraction, etc. Alternatively, LTI filters
can be implicitly performed by solving a Poisson equation
as in high dynamic range (HDR) compression [3], image
stitching [4], image matting [5], and gradient domain
manipulation [6]. The filtering kernels are implicitly defined
by the inverse of a homogeneous Laplacian matrix.
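For concreteness, the explicit-kernel filters mentioned above amount to plain 2D convolutions with small fixed kernels. The sketch below is illustrative only; the helper name and kernels are not taken from the paper, and the convolution is written as a direct (unflipped, "valid"-region) correlation for clarity rather than speed.

```python
import numpy as np

def convolve2d(img, kernel):
    """Direct 'valid'-region 2D filtering (correlation-style: the
    kernel is not flipped, which matters only for asymmetric kernels)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Two of the classic explicit LTI kernels:
mean3 = np.ones((3, 3)) / 9.0              # 3x3 mean (box) filter
sobel_x = np.array([[-1.0, 0.0, 1.0],
                    [-2.0, 0.0, 2.0],
                    [-1.0, 0.0, 1.0]])     # Sobel horizontal-gradient filter
```

Note that both kernels are spatially invariant: the same weights are applied at every pixel, independent of the image content, which is exactly the limitation the guidance-based filters below address.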
The LTI filtering kernels are spatially invariant and
independent of image content. But usually one may want
to consider additional information from a given guidance
image. The pioneer work of anisotropic diffusion [7] uses the
gradients of the filtering image itself to guide a diffusion
process, avoiding smoothing edges. The weighted least
squares (WLS) filter [8] utilizes the filtering input (instead
of intermediate results, as in [7]) as the guidance, and
optimizes a quadratic function, which is equivalent to
anisotropic diffusion with a nontrivial steady state. The
guidance image can also be another image besides the
filtering input in many applications. For example, in
colorization [9] the chrominance channels should not bleed
across luminance edges; in image matting [10] the alpha
matte should capture the thin structures in a composite
image; in haze removal [11] the depth layer should be
consistent with the scene. In these cases, we regard the
chrominance/alpha/depth layers as the image to be
filtered, and the luminance/composite/scene as the gui-
dance image, respectively. The filtering process in [9], [10],
and [11] is achieved by optimizing a quadratic cost
function weighted by the guidance image. The solution is
given by solving a large sparse linear system whose matrix
solely depends on the guide. This inhomogeneous matrix
implicitly defines a translation-variant filtering kernel. While
these optimization-based approaches [8], [9], [10], [11] often
yield state-of-the-art quality, they come at the price of
long computation time.
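To make the optimization-based idea concrete, the 1D sketch below minimizes a WLS-style quadratic cost: a data term keeps the output close to the input p, while a smoothness term penalizes differences between neighbors, down-weighted wherever the guidance g has an edge. The function name, the weight formula, and all parameter values are assumptions for illustration, not the exact formulation of [8]; a dense solver stands in for the sparse one used in practice.

```python
import numpy as np

def wls_like_1d(p, g, lam=1.0, eps=1e-4):
    """Minimize sum_i (x_i - p_i)^2 + lam * sum_i w_i (x_{i+1} - x_i)^2,
    where w_i = 1 / (|g_{i+1} - g_i| + eps) is small across guidance
    edges, so smoothing is suppressed there (a translation-variant kernel)."""
    n = len(p)
    w = 1.0 / (np.abs(np.diff(g)) + eps)       # guidance-dependent weights
    # D: forward-difference operator, shape (n-1, n)
    D = np.zeros((n - 1, n))
    for i in range(n - 1):
        D[i, i], D[i, i + 1] = -1.0, 1.0
    # Normal equations: (I + lam * D^T W D) x = p; the system matrix is the
    # inhomogeneous (guidance-weighted) Laplacian plus the identity.
    A = np.eye(n) + lam * D.T @ (np.diag(w) @ D)
    return np.linalg.solve(A, p)
```

The key point is that the system matrix depends only on the guidance g, not on the signal p being filtered, which is exactly the structure described above for [9], [10], [11].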
Another way to take advantage of the guidance image is
to explicitly build it into the filter kernels. The bilateral
filter, independently proposed in [12], [13], and [1] and
later generalized in [14], is perhaps the most popular of
such explicit filters. Its output at a pixel is a weighted
average of the nearby pixels, where the weights depend on
the intensity/color similarities in the guidance image. The
guidance image can be the filter input itself [1] or another
image [14]. The bilateral filter can smooth small fluctuations
while preserving edges. Though this filter is
effective in many situations, it may have unwanted gradient
reversal artifacts [15], [16], [8] near edges (discussed in
Section 3.4). The fast implementation of the bilateral filter
is also a challenging problem. Recent techniques [17], [18],
[19], [20], [21] rely on quantization methods to accelerate
but may sacrifice accuracy.
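A minimal 1D sketch of the (joint) bilateral filter described above: each output value is a normalized weighted average of nearby input values, with a spatial Gaussian on distance and a range Gaussian on guidance differences. Passing g = p gives the classic self-guided bilateral filter [1]; all parameter names and default values here are illustrative choices, not from the paper.

```python
import numpy as np

def bilateral_1d(p, g, sigma_s=2.0, sigma_r=0.1, radius=4):
    """Joint/cross bilateral filter on a 1D signal p, with weights
    computed from a guidance signal g (use g = p for the classic filter)."""
    n = len(p)
    out = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        d = np.arange(lo, hi) - i
        # Spatial Gaussian on pixel distance, range Gaussian on guidance
        # intensity difference; their product is the bilateral weight.
        w = np.exp(-d**2 / (2 * sigma_s**2)
                   - (g[lo:hi] - g[i])**2 / (2 * sigma_r**2))
        out[i] = np.sum(w * p[lo:hi]) / np.sum(w)
    return out
```

Because the range weight collapses across large guidance differences, pixels on the far side of an edge contribute almost nothing, which is what preserves the edge; this brute-force form is O(N * radius), which is why the fast approximations cited above exist.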
In this paper, we propose a novel explicit image filter
called guided filter. The filtering output is locally a linear
transform of the guidance image. On one hand, the guided
filter has good edge-preserving smoothing properties like the
bilateral filter, but it does not suffer from the gradient
reversal artifacts. On the other hand, the guided filter can be
used beyond smoothing: With the help of the guidance
image, it can make the filtering output more structured and
less smoothed than the input. We demonstrate that the
guided filter performs very well in a great variety of
applications, including image smoothing/enhancement,
HDR compression, flash/no-flash imaging, matting/feath-
ering, dehazing, and joint upsampling. Moreover, the guided
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. 35, NO. X, XXXXXXX 2013 1
. K. He and J. Sun are with the Visual Computing Group, Microsoft
Research Asia, Microsoft Building 2, #5 Dan Leng Street, Hai Dian
District, Beijing 100080, China. E-mail: {kahe, jiansun}@microsoft.com.
. X. Tang is with the Department of Information Engineering, Chinese
University of Hong Kong, 809 SHB, Shatin, N.T., Hong Kong.
E-mail: xtang@ie.cuhk.edu.hk.
Manuscript received 13 June 2012; revised 6 Sept. 2012; accepted 16 Sept.
2012; published online 1 Oct. 2012.
Recommended for acceptance by J. Jia.
For information on obtaining reprints of this article, please send e-mail to:
tpami@computer.org, and reference IEEECS Log Number
TPAMI-2012-06-0447.
Digital Object Identifier no. 10.1109/TPAMI.2012.213.
0162-8828/13/$31.00 © 2013 IEEE Published by the IEEE Computer Society