An Online Coupled Dictionary Learning Approach
for Remote Sensing Image Fusion
Min Guo, Hongyan Zhang, Member, IEEE, Jiayi Li, Student Member, IEEE,
Liangpei Zhang, Senior Member, IEEE, and Huanfeng Shen, Senior Member, IEEE
Abstract—Most earth observation satellites, such as IKONOS,
QuickBird, GeoEye, and WorldView-2, provide a high spatial
resolution (HR) panchromatic (Pan) image and a multispectral
(MS) image at a lower spatial resolution (LR). Image fusion is an
effective way to acquire the HR MS images that are widely used in
various applications. In this paper, we propose an online coupled
dictionary learning (OCDL) approach for image fusion, in which a
superposition strategy is applied to construct the coupled dictio-
naries. The constructed coupled dictionaries are further developed
via an iterative update to ensure that the HR MS image patch can be
almost identically reconstructed by multiplying the HR dictionary
and the sparse coefficient vector, which is solved by sparsely
representing its counterpart LR MS image patch over the LR
dictionary. The fusion results from IKONOS and WorldView-2
data show that the proposed fusion method is competitive or even
superior to the other state-of-the-art fusion methods.
Index Terms—Coupled dictionary, image fusion, remote sensing
imagery, sparse representation (SR).
I. INTRODUCTION
AS A POWERFUL quality improvement technique, data
fusion has been gradually improved in recent years. In [1],
data fusion is defined as a formal framework which includes
expressed means and tools for combining and utilizing data
originating from different sources. Accounting for most of the
data fusion studies, image fusion is the integration of different
information sources by taking advantage of the complementary
spatial/spectral resolution characteristics of remote sensing im-
agery. For most earth observation satellites, such as IKONOS,
QuickBird, GeoEye, and WorldView-2, the data provided are
composed of a high spatial resolution (HR) panchromatic (Pan)
image and a low spatial resolution (LR) multispectral (MS)
image. The process of acquiring an HR MS image by blending
an HR Pan image and its corresponding LR MS image is referred
to as “image pan-sharpening.” In practice, images with high
spectral and spatial resolutions are useful in an increasing
number of applications, such as feature detection [2], segmenta-
tion/classification [3], [4], and so on.
During the past two decades, a large number of image fusion
methods have been developed [5]–[7]. In [8] and [9], the
fusion methods are grouped into three categories: 1) projection-
substitution methods, 2) relative spectral contribution methods,
and 3) methods that belong to the Amélioration de la Résolution
Spatiale par Injection de Structures (ARSIS) concept. Projection-
substitution methods, which transform the MS image into an-
other space and exchange one structural component with the Pan
image, are widely used and have been integrated into some
commercial software packages. Among these methods, the most
popular are intensity hue saturation (IHS) transformation [10],
[11], principal component analysis (PCA) [12], and the Gram-
Schmidt transform-based methods [13]. The relative spectral
contribution methods are based on the assumption that the LR
Pan image can be written as a linear combination of the original
MS image, of which the Brovey transform [14] and the method of [15] are two successful application instances. These two
types of methods can produce a clear improvement in visual quality with good geometrical fidelity, but a major drawback comes from the non-negligible spectral distortion. As for the
ARSIS concept-based methods, it is assumed that the missing
spatial information in the LR MS image can be inferred from the
high frequencies of the HR Pan image. To be specific, details
extracted from the HR Pan image by certain multi-scale or multi-
resolution decomposition algorithms are injected into the LR MS
image [8], [16]. The significant advantage of the ARSIS concept-
based methods is the preservation of the spectral content of the
original MS image. The à trous wavelet pan-sharpening (AWLP)
[17] method and the context-based decision (CBD) [18] method
are two effective ARSIS concept-based methods, which both
lead to good fusion results.
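To make the detail-injection idea concrete, the following is a minimal sketch in Python. It assumes a single MS band, a Gaussian low-pass as a stand-in for an à trous wavelet decomposition, and unweighted additive injection; it illustrates the general ARSIS idea only and is not the AWLP or CBD algorithm.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def inject_details(pan_hr, ms_lr, ratio=4, sigma=2.0):
    """Toy ARSIS-style fusion: add Pan high frequencies to an upsampled MS band."""
    # Upsample the LR MS band to the Pan grid (cubic spline as a bicubic stand-in).
    ms_up = zoom(ms_lr, ratio, order=3)
    # Low-pass the Pan band to roughly the MS resolution; the residual is the
    # "missing" high-frequency spatial detail.
    pan_low = gaussian_filter(pan_hr, sigma=sigma)
    details = pan_hr - pan_low
    # Inject the details additively (real ARSIS methods weight them per band/context).
    return ms_up + details

# Synthetic example: a 64x64 Pan band and a 16x16 MS band (placeholder data).
pan = np.random.rand(64, 64)
ms = np.random.rand(16, 16)
fused = inject_details(pan, ms, ratio=4)
print(fused.shape)  # (64, 64)
```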
Recently, a new image fusion branch, which recasts the image fusion problem as an image-related inverse problem solved with the help of sparse representation (SR) and com-
pressed sensing (CS) theory, has emerged and shown impressive
fusion performances. Li and Yang [19] were the first to perform
the remote sensing image fusion task from the perspective of CS
[20] theory. Subsequently, Jiang et al. [21] extended the above
model by learning a joint dictionary from the LR MS image and
Pan image to make it more practical. Nevertheless, these CS-
based methods require a large collection of images to train the
dictionary, which is computationally expensive. To deal with this
problem, Li et al. [22] developed a restoration-based remote
sensing image fusion method with sparsity regularization, in
which the dictionary is adaptively learned with the source image.
Manuscript received November 01, 2013; revised January 27, 2014; accepted
February 28, 2014. Date of publication April 03, 2014; date of current version
April 18, 2014. This work was supported in part by the National Basic Research
Program of China (973 Program) under Grant 2011CB707105, in part by the 863
program under Grant 2013AA12A301, and in part by the National Natural
Science Foundation of China under Grant 61201342 and Grant 61261130587.
(Corresponding author: H. Zhang.)
M. Guo, H. Zhang, J. Li, and L. Zhang are with the State Key Laboratory of
Information Engineering in Surveying, Mapping, and Remote Sensing, Wuhan
University, Wuhan 430079, China (e-mail: zhanghongyan@whu.edu.cn).
H. Shen is with the School of Resource and Environmental Science, Wuhan
University, Wuhan 430079, China.
Color versions of one or more of the figures in this paper are available online at
http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/JSTARS.2014.2310781
Although this method achieves effective and robust performance, it still needs to assume a spectral composition model, which is somewhat complicated to implement. In [23], Zhu and Bamler proposed an
image fusion method named sparse fusion of images (SparseFI)
which explores the same sparse coefficient vector of the corre-
sponding HR/LR MS image patches over the coupled dictionar-
ies, which are constructed offline from the Pan image and its down-sampled LR version. Due to its ease of implementation and
the fact that it requires no external image data, SparseFI has been considered a promising approach with a broad application range. Recently, a two-step sparse coding strategy for the
pan-sharpening of remote sensing images was proposed in [24]
on the basis of the SparseFI method.
In this paper, we propose an online coupled dictionary learn-
ing (OCDL) approach for image fusion, in which we make full
use of the available LR MS image and the HR Pan image to
decrease the spectral distortion and preserve the spatial informa-
tion of the LR MS image. In the proposed OCDL method, a
superposition strategy is adopted to produce two intermediate
images for the coupled dictionary construction for each band. In
order to ensure that the HR MS image patch can be almost
identically reconstructed by multiplying the HR dictionary and
the same sparse coefficient vector, which is solved by sparsely
representing its counterpart LR MS image patch over the LR dictionary, an iterative update method is utilized to update the
coupled dictionaries, which can be referred to as an online
dictionary learning process. The theoretical analyses and experi-
mental results in this paper indicate that the proposed method can
produce competitive fusion results, even if the Pan image has a
low correlation with some of the MS bands.
The rest of the paper is structured as follows. Section II briefly
describes SR in image processing and the coupled dictionary
model for image fusion. Thereafter, the scheme of the proposed
algorithm is reported in Section III. In Section IV, experiments
with two IKONOS data sets and one WorldView-2 data set verify
the effectiveness of the proposed method, with respect to the
visual, spatial, and spectral quality. Finally, the conclusions are
drawn in Section V.
II. RELATED WORKS
A. SR in Image Processing
Sparsity has recently been the subject of intensive research,
and the field of image processing has benefited greatly from the
progress in both theory and practice [25], [26]. In the image
processing approach, each signal $\mathbf{x} \in \mathbb{R}^{n}$, lexicographically stacking the pixels, can be sparsely represented by a suitable overcomplete dictionary $\mathbf{D} \in \mathbb{R}^{n \times K}$ ($K > n$) [27], each column of which corresponds to a possible image patch (also lexicographically stacking the pixel values in this patch as a vector). That is to say, the signal can be represented as $\mathbf{x} = \mathbf{D}\boldsymbol{\alpha}$, which simultaneously assumes the sparsity of the coefficient vector $\boldsymbol{\alpha}$. This problem can be formulated as
$$\min_{\boldsymbol{\alpha}} \|\boldsymbol{\alpha}\|_{0} \quad \text{s.t.} \quad \mathbf{x} = \mathbf{D}\boldsymbol{\alpha} \tag{1}$$
where $\|\boldsymbol{\alpha}\|_{0}$ denotes the number of nonzero components in $\boldsymbol{\alpha}$. This optimization problem is NP-hard. It has been shown that the $\ell_{0}$-norm optimization problem can be converted to an $\ell_{1}$-norm minimization problem if the desired coefficient $\boldsymbol{\alpha}$ is sufficiently sparse [28], which converts (1) to
$$\min_{\boldsymbol{\alpha}} \|\boldsymbol{\alpha}\|_{1} \quad \text{s.t.} \quad \mathbf{x} = \mathbf{D}\boldsymbol{\alpha}. \tag{2}$$
A large number of solution algorithms have been developed to solve the $\ell_{1}$-norm optimization problem [29], [30], with one of the classic algorithms being the LASSO algorithm [30].
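As an illustration of how such an $\ell_{1}$ problem can be solved in practice, the following is a minimal sketch of the iterative soft-thresholding algorithm (ISTA) applied to the Lagrangian form $\min_{\boldsymbol{\alpha}} \frac{1}{2}\|\mathbf{x}-\mathbf{D}\boldsymbol{\alpha}\|_{2}^{2} + \lambda\|\boldsymbol{\alpha}\|_{1}$; the dictionary and signal are random placeholders, and this is a generic stand-in for the LASSO-type solvers cited above rather than the specific solver used in the paper.

```python
import numpy as np

def ista_lasso(D, x, lam=0.1, n_iter=200):
    """Minimize 0.5*||x - D a||_2^2 + lam*||a||_1 by iterative soft-thresholding."""
    # Step size from the Lipschitz constant of the gradient (largest eigenvalue of D^T D).
    L = np.linalg.norm(D, ord=2) ** 2
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)                                # gradient of the quadratic term
        z = a - grad / L                                        # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft-threshold (l1 proximal step)
    return a

# Toy example: a random overcomplete dictionary (n = 64 pixels, K = 256 atoms).
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)             # unit-norm atoms
x = 1.5 * D[:, 5] - 0.7 * D[:, 42]         # a signal built from two atoms
alpha = ista_lasso(D, x, lam=0.05)
print(np.count_nonzero(np.abs(alpha) > 1e-3))  # only a few coefficients remain active
```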
B. The Coupled Dictionary Model and Its Application
in Image Fusion
The coupled dictionary model was designed to solve the cross-
style image synthesis problem [31], in which each style for the
scene can be mutually transferred by learning the underlying
mapping from the example image pairs. Suppose that we have
some example image pairs from the coupled feature spaces. For convenience, we assume that the images in one space follow the style $\mathcal{X}$, and the images in the other space follow style $\mathcal{Y}$. The image cross-style synthesis problem can then be formulated as follows: recover the image in style $\mathcal{Y}$ when its corresponding description in style $\mathcal{X}$ is given.
The working mechanism of this model is that there is a
corresponding relationship between the counterpart atoms in
the coupled dictionaries, which leads to a mapping function
between the sparse coefficient vectors of the image patch pairs in
the coupled feature spaces. Clearly, the coupled dictionaries play
an important role in this model. In general, the coupled dictionaries are simply generated by randomly sampling raw patches
from the training image pairs of the same scene in the coupled
spaces, or learned from the above raw patch dictionaries. Once
the coupled dictionaries are constructed, each patch of style $\mathcal{X}$ is sparsely represented over the dictionary in the space $\mathcal{X}$. The
commonly used and effective mapping function refers to the
assumption that the sparse coefficient vectors in different styles
should be the same, with respect to the delicate coupled dictio-
nary construction [22], [23], [32]. Therefore, the associated patch of style $\mathcal{Y}$ can be reconstructed with the same sparse coefficient vector and the dictionary in the space $\mathcal{Y}$.
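A minimal sketch of this shared-coefficient assumption is given below: a patch of style $\mathcal{X}$ is sparse-coded over the style-$\mathcal{X}$ dictionary (here with scikit-learn's Lasso as a generic $\ell_{1}$ solver), and its style-$\mathcal{Y}$ counterpart is reconstructed with the same coefficient vector over the style-$\mathcal{Y}$ dictionary. The dictionaries, patch sizes, and regularization weight are placeholder assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n_x, n_y, K = 64, 256, 512          # patch dimensions in the two spaces, number of atoms

# Placeholder coupled dictionaries: column k of D_x and column k of D_y are
# assumed to describe the same underlying atom in the two styles/spaces.
D_x = rng.standard_normal((n_x, K)); D_x /= np.linalg.norm(D_x, axis=0)
D_y = rng.standard_normal((n_y, K)); D_y /= np.linalg.norm(D_y, axis=0)

# A style-X patch synthesized from a few atoms, so the coupled assumption holds here.
true_alpha = np.zeros(K); true_alpha[[3, 77, 310]] = [1.0, -0.5, 0.8]
patch_x = D_x @ true_alpha

# 1) Sparse-code the style-X patch over D_x (alpha is the l1 sparsity weight).
coder = Lasso(alpha=1e-3, fit_intercept=False, max_iter=5000)
coder.fit(D_x, patch_x)
shared_coef = coder.coef_

# 2) Reconstruct the style-Y counterpart with the SAME coefficients over D_y.
patch_y_hat = D_y @ shared_coef
print(patch_y_hat.shape)            # (256,)
```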
Image resolution enhancement is one of the classic cross-style
image synthesis problems, where the coupled dictionaries refer
to coupled spaces: the high- and low-resolution signal spaces in
the patch-based SR [32]. Image fusion is a common method of
image resolution enhancement, and the coupled dictionary
model can be used to solve this problem. The SparseFI [23]
method has recently been proposed as an application of the
coupled dictionary model in image fusion. Since an HR Pan
image
and its down-sampled LR version can be directly
utilized, we are able to directly construct the coupled dictionaries
without an extra image data set. In this way, in the SparseFI
method, the coupled dictionaries, which consist of an LR dictio-
nary $\mathbf{D}_{l}$ and an HR dictionary $\mathbf{D}_{h}$, are directly constructed from
the Pan image and its down-sampled LR version. To be specific,
we down-sample the HR Pan image to the same scale as the LR
MS image by using bicubic interpolation, and we then get an LR
Pan image. The LR dictionary $\mathbf{D}_{l}$ is generated by sampling raw patches from the LR Pan image with overlapping areas.
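The construction just described can be sketched roughly as follows; the patch size, sampling step, and the use of scipy's cubic-spline zoom as a stand-in for bicubic interpolation are placeholder assumptions, not the settings of the SparseFI method itself.

```python
import numpy as np
from scipy.ndimage import zoom

def extract_patches(img, patch, step):
    """Collect overlapping patch x patch blocks, each flattened into a column."""
    cols = []
    for i in range(0, img.shape[0] - patch + 1, step):
        for j in range(0, img.shape[1] - patch + 1, step):
            cols.append(img[i:i + patch, j:j + patch].ravel())
    return np.stack(cols, axis=1)

ratio, lr_patch, lr_step = 4, 5, 2           # placeholder settings
pan_hr = np.random.rand(128, 128)            # stand-in for the HR Pan band

# Down-sample the HR Pan image to the MS scale (cubic spline ~ bicubic).
pan_lr = zoom(pan_hr, 1.0 / ratio, order=3)  # 32 x 32

# LR dictionary: overlapping raw patches of the LR Pan image.
D_l = extract_patches(pan_lr, lr_patch, lr_step)
# HR dictionary: the co-located HR patches (same grid positions, scaled by `ratio`).
D_h = extract_patches(pan_hr, lr_patch * ratio, lr_step * ratio)

print(D_l.shape, D_h.shape)  # (25, n_atoms) and (400, n_atoms), same number of atoms
```

Because the two dictionaries are sampled on the same grid (the HR positions are the LR positions scaled by the resolution ratio), the k-th column of D_l and the k-th column of D_h depict the same scene patch at the two resolutions, which is exactly the coupling the shared-coefficient assumption relies on.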