Yang Xiaoli¹, Lin Suzhen¹*, Lu Xiaofei², Wang Lifang¹, Li Dawei¹, Wang Bin¹
¹College of Big Data, North University of China, Taiyuan 030051, Shanxi, China;
²Jiuquan Satellite Launch Center, Jiuquan 735000, Gansu, China
Abstract: To address the difficulty of selecting multi-scale geometric tools and designing fusion rules in multi-modal image fusion, an image fusion method based on Generative Adversarial Networks (GANs) is proposed, achieving end-to-end adaptive fusion of multi-modal images. First, the multi-modal source images are fed simultaneously into the generative network, whose structure is the residual-based convolutional neural network proposed in this paper; the fused image is produced through the network's adaptive learning. The fused image and the label image are then fed separately into the discriminative network, whose feature representation and classification gradually optimize the generator. The final fused image is obtained at the dynamic equilibrium between the generator and the discriminator. Compared with current representative fusion methods, experimental results show that the fusion results of the proposed method are cleaner, free of artifacts, and of better visual quality.
Keywords: image fusion; multi-modal image; deep learning; generative adversarial networks
CLC number: TP391    Document code: A
Multi-Modal Image Fusion Based on Generative Adversarial Networks
Yang Xiaoli¹, Lin Suzhen¹*, Lu Xiaofei², Wang Lifang¹, Li Dawei¹, Wang Bin¹
¹College of Big Data, North University of China, Taiyuan 030051, Shanxi, China;
²Jiuquan Satellite Launch Center, Jiuquan 735000, Gansu, China
Abstract: To overcome the excessive reliance on prior knowledge in selecting multi-scale geometric tools and designing fusion rules for multi-modal image fusion, a new network based on Generative Adversarial Networks (GANs) is proposed, which achieves end-to-end image fusion. First, the multi-modal source images are fed simultaneously into the generative network, whose structure is the residual-based convolutional neural network proposed in this paper; the network generates the fused image through adaptive learning. Second, the fused image and the label image are fed separately into the discriminator; through the discriminator's feature representation and classification, the generator is gradually optimized. The final fused image is then obtained at the dynamic equilibrium of the generator and the discriminator. Compared with existing representative fusion methods, experimental results demonstrate that the fusion results of the proposed algorithm are cleaner, free of artifacts, and of better visual quality.
Key words: image fusion; multi-modal image; deep learning; generative adversarial networks
OCIS codes: 100.4996; 100.2960
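The adversarial setup described in the abstract (a residual-based generator that fuses the source images, and a discriminator that separates fused images from label images) can be sketched as follows. This is a minimal illustration assuming PyTorch; the network depths, channel counts (`feat=32`), two-channel input, and the plain BCE adversarial loss are illustrative placeholders, not the authors' actual configuration.

```python
# Hypothetical sketch of the generator/discriminator fusion scheme; layer
# sizes and losses are assumptions for illustration, not the paper's design.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two convolutions with an identity shortcut (the 'residual-based' unit)."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))
    def forward(self, x):
        return x + self.body(x)

class Generator(nn.Module):
    """Maps channel-concatenated multi-modal sources to one fused image."""
    def __init__(self, in_ch=2, feat=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(inplace=True),
            ResidualBlock(feat), ResidualBlock(feat),
            nn.Conv2d(feat, 1, 3, padding=1), nn.Tanh())
    def forward(self, sources):
        return self.net(sources)

class Discriminator(nn.Module):
    """Scores an image: label image -> real, generated fusion -> fake."""
    def __init__(self, feat=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, feat, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(feat, feat, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(feat, 1))
    def forward(self, img):
        return self.net(img)

def train_step(G, D, opt_g, opt_d, sources, label):
    """One round of the generator/discriminator game."""
    bce = nn.BCEWithLogitsLoss()
    real = torch.ones(label.size(0), 1)
    fake = torch.zeros(label.size(0), 1)
    # 1) Discriminator update: classify label images vs. detached fusions.
    fused = G(sources).detach()
    d_loss = bce(D(label), real) + bce(D(fused), fake)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # 2) Generator update: produce fusions the discriminator accepts as real.
    fused = G(sources)
    g_loss = bce(D(fused), real)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

Iterating `train_step` drives the two networks toward the dynamic equilibrium mentioned in the abstract, at which point the generator's output is taken as the final fused image.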