Super-Resolution Restoration of Motion Blurred Images
Qinchun Qian† and Bahadir K. Gunturk†‡
† School of Electrical Engineering and Computer Science, Louisiana State University, Baton Rouge, LA 70803
‡ College of Engineering and Natural Sciences, Istanbul Medipol University, Istanbul, Turkey
ABSTRACT
In this paper, we investigate super-resolution image restoration from multiple images that are possibly degraded by large motion blur. The blur kernel for each input image is estimated separately, unlike in many existing super-resolution algorithms, which assume an identical blur kernel for all input images. We also place no restrictions on the motion fields among images; that is, we estimate a dense motion field without simplifications such as parametric motion. We present a two-step algorithm: in the first step, each input image is deblurred using its estimated blur kernel; in the second step, super-resolution restoration is applied to the deblurred images. Because the estimated blur kernels may not be accurate, we propose a weighted cost function for the super-resolution restoration step, where the weight associated with an input image reflects the reliability of the corresponding kernel estimate and deblurred image. We provide experimental results from real video data captured with a hand-held camera, and show that the proposed weighting scheme is robust to motion deblurring errors.
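The second step of the two-step scheme described above can be sketched as a weighted least-squares fusion. This is a minimal illustration, assuming the input frames have already been deblurred and registered, a simple block-averaging downsampling model, and plain gradient descent; all function names and parameter choices here are our own, not the paper's.

```python
import numpy as np

def forward(x, s):
    """Downsampling model: average each s-by-s block of the high-res image."""
    h, w = x.shape
    return x.reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def adjoint(y, s):
    """Adjoint of the block-averaging operator (spreads each low-res
    residual back over its s-by-s block)."""
    return np.kron(y, np.ones((s, s))) / (s * s)

def weighted_sr(frames, weights, s, n_iter=200, step=1.0):
    """Estimate a high-res image x by gradient descent on the weighted
    cost  sum_k w_k * ||D x - z_k||^2 ,
    where z_k are the deblurred, registered low-res frames and the
    weight w_k reflects how reliable frame k's deblurring was."""
    h, w = frames[0].shape
    x = np.zeros((h * s, w * s))
    wsum = sum(weights)
    for _ in range(n_iter):
        grad = np.zeros_like(x)
        for z, wk in zip(frames, weights):
            # gradient of w_k * ||D x - z_k||^2 is 2 w_k D^T (D x - z_k)
            grad += 2.0 * wk * adjoint(forward(x, s) - z, s)
        x -= step * grad / wsum
    return x
```

A frame with a poorly estimated kernel would simply receive a small weight, so its deblurring artifacts contribute less to the fused estimate.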
Keywords: blind super-resolution, motion blur
1. INTRODUCTION
Super-resolution (SR) image restoration [1-6] has been extensively studied over the last two decades; however, there are still open problems and challenges that need further research. One of the most important challenges in multi-frame super-resolution restoration has been the motion estimation problem. An accurate motion field is essential to the success of SR restoration; many super-resolution papers report results with data where the motion field is restricted to a parametric model, such as a perspective or even a translational model. This is obviously very limiting and may not apply in most real-life situations. To handle more realistic motion, block-based motion estimation or optical flow methods [7, 8] can be utilized. In recent years, highly accurate and practical optical flow methods [4, 9] have been proposed, and these optical flow methods clearly improve the performance
in super-resolution applications. A second challenge in super-resolution restoration is blur kernel estimation.
The majority of existing super-resolution algorithms assume an identical blur kernel for all input images; the blur kernel is typically modeled as a symmetric Gaussian function, whose standard deviation is estimated empirically or by some parametric estimation method. The assumption of an identical blur kernel does not hold when there are fast-moving objects in the scene or when the camera is shaken during the exposure time. Ideally, the blur kernel should be estimated for each input image separately. There are few super-resolution methods that estimate a blur kernel for each image. In [10], region-based matching is first used to track the moving object of interest in the image sequence, and then the motion blur direction and magnitude are estimated from the tracked displacements. The motion field is limited to an affine model, and so is the motion blur kernel. In [11], a Bayesian approach is proposed for adaptive super-resolution that incorporates high-resolution image restoration, optical flow, noise level, and blur kernel estimation. The estimation process is reduced to solving for an individual component given the other terms, and the Bayesian inference iterates between optical flow, noise estimation, blur estimation, and image restoration. The drawback of the method is that the blur kernels are limited to Gaussian functions with possibly different standard deviations. In [12], the motion field is used to construct the motion blur for each frame; however, such an approach cannot account for intra-frame motion blur.
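The contrast drawn above can be made concrete: a symmetric Gaussian kernel is the same for every frame, while a motion-blur kernel depends on the motion during that frame's exposure. The sketch below is illustrative only; the linear-motion model and all parameter names are our assumptions, not the paper's.

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Symmetric Gaussian blur kernel: the identical-kernel model that
    most existing SR methods assume for every input frame."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def linear_motion_kernel(size, angle_deg, length):
    """Linear motion-blur kernel: a simple per-frame blur model for
    camera shake, whose direction and length vary from frame to frame."""
    k = np.zeros((size, size))
    c = size // 2
    theta = np.deg2rad(angle_deg)
    # rasterize a line segment of the given length through the center
    for t in np.linspace(-length / 2.0, length / 2.0, 10 * size):
        x = int(round(c + t * np.cos(theta)))
        y = int(round(c + t * np.sin(theta)))
        if 0 <= x < size and 0 <= y < size:
            k[y, x] = 1.0
    return k / k.sum()
```

Two frames shaken in different directions would need, e.g., `linear_motion_kernel(15, 0, 9)` and `linear_motion_kernel(15, 60, 5)` — which a single shared Gaussian kernel cannot represent.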
Contact information: bahadir@ece.lsu.edu.
Digital Photography X, edited by Nitin Sampat, Radka Tezaur, et al., Proc. of SPIE-IS&T Electronic Imaging, SPIE Vol. 9023, 90230F · © 2014 SPIE-IS&T