# 基于非同源数据的SAR图像生成方法研究(Research on SAR image generation method based on non-homologous data)
## 1. Introduction
Synthetic aperture radar (SAR) is a major research focus in remote sensing: it is widely used in civilian applications and plays an increasingly important role in military ones. Because SAR can acquire data in all weather conditions, around the clock, from multiple viewing angles, and at multiple resolutions, SAR image acquisition and processing have become hot research topics. Existing approaches to SAR image processing often rely on auxiliary data or auxiliary information to augment the original data set or enhance its features. Generative Adversarial Networks (GANs) are flexible and efficient generative models that learn the distribution of real data. CycleGAN is a GAN variant that learns to translate images from a source domain X to a target domain Y without requiring paired training images; although the images it generates are of lower quality than those of supervised, conditional-GAN models such as pix2pix, it applies to far more scenarios. After comparing the characteristics of optical and SAR images and evaluating several GAN models, this work uses optical images as auxiliary, non-homologous source information and applies CycleGAN-based image-to-image translation to convert optical target images into SAR target images. A quantitative evaluation scheme for the generated images is also designed, so that recognition performance on the generated data improves to some degree over the SAR baseline data.
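CycleGAN's ability to train without paired images rests on its cycle-consistency constraint: translating an image to the other domain and back should recover the original. With generators $G: X \to Y$ and $F: Y \to X$, the cycle-consistency loss from the original CycleGAN formulation is

$$
\mathcal{L}_{\mathrm{cyc}}(G, F) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\lVert F(G(x)) - x \rVert_1\big] + \mathbb{E}_{y \sim p_{\mathrm{data}}(y)}\big[\lVert G(F(y)) - y \rVert_1\big],
$$

which is added to the two adversarial losses and weighted by hyper-parameters (exposed in this code base as `lambda_A` and `lambda_B`).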
## 2. Experiment procedure
### Environment setup
1. Install Anaconda
2. Download the CycleGAN PyTorch source code: [CycleGAN](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix)
3. Create the virtual environment by running:
```
cd pytorch-CycleGAN-and-pix2pix
conda env create -f environment.yml
```
4. If visdom fails to install, run the following inside the virtual environment:
```
pip install visdom
python -m visdom.server
```
If the first launch hangs while downloading static assets and eventually fails, work around it as follows:
① Go to the visdom directory inside Anaconda: D:\anaconda3\Lib\site-packages\visdom
② Replace its `static` folder with a working copy
③ Open the Python file `server.py` and comment out the `download_scripts()` call (around line 1917)
Screenshot of the visdom installation failure:
<img src=./images/visdom-fail.png>
### CycleGAN
#### Dataset
1. Dataset structure
```
datasets/opt2sar/
├── trainA
├── trainB
├── testA
└── testB
```
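The directory layout above can be created with a short helper script (a hypothetical convenience script, not part of the repository; `datasets/opt2sar` matches the `--dataroot` value used below):

```python
from pathlib import Path

# Create the four sub-folders CycleGAN expects:
# trainA/testA hold optical images, trainB/testB hold SAR images.
root = Path("datasets/opt2sar")
for sub in ("trainA", "trainB", "testA", "testB"):
    (root / sub).mkdir(parents=True, exist_ok=True)

print(sorted(p.name for p in root.iterdir()))  # → ['testA', 'testB', 'trainA', 'trainB']
```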
2. Preparing the data
Optical data set: [LEVIR](http://levir.buaa.edu.cn/Code.htm)
SAR data set: [SAR-ship-detection](https://aistudio.baidu.com/aistudio/datasetdetail/54361)
Other data sets can also be used:
Overview of optical data sets: [optical](https://blog.csdn.net/qq_27930679/article/details/110631002)
Overview of SAR data sets: [SAR](https://blog.csdn.net/qq_40181592/article/details/120276322)
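Whichever data sets are used, the downloaded images must be divided between the train and test folders. A minimal sketch of such a split (the 9:1 ratio, folder names, and `split_dataset` helper are illustrative assumptions, not part of the repository):

```python
import random
import shutil
from pathlib import Path

def split_dataset(src_dir, train_dir, test_dir, test_ratio=0.1, seed=0):
    """Randomly move images from src_dir into train_dir / test_dir."""
    src = Path(src_dir)
    files = sorted(src.glob("*.png")) + sorted(src.glob("*.jpg"))
    random.Random(seed).shuffle(files)  # fixed seed keeps the split reproducible
    n_test = int(len(files) * test_ratio)
    for dst, subset in ((test_dir, files[:n_test]), (train_dir, files[n_test:])):
        Path(dst).mkdir(parents=True, exist_ok=True)
        for f in subset:
            shutil.move(str(f), str(Path(dst) / f.name))

# Example: split the optical images into trainA / testA.
# split_dataset("raw/optical", "datasets/opt2sar/trainA", "datasets/opt2sar/testA")
```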
#### Training and testing with CycleGAN
Run the following commands to train (start the visdom server in a separate terminal, since it blocks):
`cd C:\Users\amin\Desktop\pytorch-CycleGAN-and-pix2pix-master`
`conda activate pytorch-CycleGAN-and-pix2pix`
`python -m visdom.server`
`python train.py --dataroot ./datasets/opt2sar --name opt2sar_cyclegan --model cycle_gan`
To resume training after an interruption (loading the latest checkpoint and continuing from, e.g., epoch 4):
`python train.py --dataroot ./datasets/opt2sar --name opt2sar_cyclegan --model cycle_gan --continue_train --epoch_count 4`
To test:
`python test.py --dataroot ./datasets/opt2sar --name opt2sar_cyclegan --model cycle_gan`
To deactivate the environment:
`conda deactivate`
Some tips: [tips](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/blob/master/docs/tips.md#training%20test-tips)
Screenshot of the training process:
<img src=./images/cyclegan-train-success.png>
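Hyper-parameters can often be overridden on the command line rather than by editing source files, because the repository builds its options with `argparse`. A minimal standalone sketch of that mechanism (this is an illustration, not the repository's actual option classes; flag names are assumptions):

```python
import argparse

# Options are declared with defaults; any value passed on the
# command line overrides the default.
parser = argparse.ArgumentParser()
parser.add_argument("--lr", type=float, default=0.0002, help="initial learning rate")
parser.add_argument("--n_epochs", type=int, default=100, help="epochs at the initial learning rate")

opt = parser.parse_args(["--lr", "0.0001"])  # e.g. `python train.py --lr 0.0001`
print(opt.lr, opt.n_epochs)  # → 0.0001 100
```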
#### Modifying parameters
Examples:
1. Changing `lambda_A` and `lambda_B`
Open `models/cycle_gan_model.py` and edit the following code:
```python
parser.set_defaults(no_dropout=True)  # default CycleGAN did not use dropout
if is_train:
    parser.add_argument('--lambda_A', type=float, default=10.0, help='weight for cycle loss (A -> B -> A)')
    parser.add_argument('--lambda_B', type=float, default=10.0, help='weight for cycle loss (B -> A -> B)')
    parser.add_argument('--lambda_identity', type=float, default=0.5, help='use identity mapping. Setting lambda_identity other than 0 has an effect of scaling the weight of the identity mapping loss. For example, if the weight of the identity loss should be 10 times smaller than the weight of the reconstruction loss, please set lambda_identity = 0.1')

return parser
```
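`lambda_A` and `lambda_B` weight the two cycle-consistency terms against the adversarial losses in the combined generator objective. A toy numeric sketch of how they combine (the loss values are made up, the identity term is omitted for brevity, and the variable names mirror rather than copy the repository):

```python
# Stand-in scalar values for the individual loss terms (in the real
# model these come from the discriminators and the L1 cycle losses).
loss_G_A = 0.7       # adversarial loss for G_A: A -> B
loss_G_B = 0.6       # adversarial loss for G_B: B -> A
loss_cycle_A = 0.05  # ||G_B(G_A(a)) - a||_1
loss_cycle_B = 0.04  # ||G_A(G_B(b)) - b||_1

lambda_A, lambda_B = 10.0, 10.0  # the defaults shown above

# Larger lambdas push training toward faithful reconstruction;
# smaller lambdas favor fooling the discriminators.
loss_G = loss_G_A + loss_G_B + lambda_A * loss_cycle_A + lambda_B * loss_cycle_B
print(round(loss_G, 4))  # → 2.2
```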
2. Changing the number of epochs, the learning rate, etc.
Open `options/train_options.py` and edit the following code:
```python
def initialize(self, parser):
    parser = BaseOptions.initialize(self, parser)
    # visdom and HTML visualization parameters
    parser.add_argument('--display_freq', type=int, default=400, help='frequency of showing training results on screen')
    parser.add_argument('--display_ncols', type=int, default=4, help='if positive, display all images in a single visdom web panel with certain number of images per row.')
    parser.add_argument('--display_id', type=int, default=1, help='window id of the web display')
    parser.add_argument('--display_server', type=str, default="http://localhost", help='visdom server of the web display')
    parser.add_argument('--display_env', type=str, default='main', help='visdom display environment name (default is "main")')
    parser.add_argument('--display_port', type=int, default=8097, help='visdom port of the web display')
    parser.add_argument('--update_html_freq', type=int, default=1000, help='frequency of saving training results to html')
    parser.add_argument('--print_freq', type=int, default=100, help='frequency of showing training results on console')
    parser.add_argument('--no_html', action='store_true', help='do not save intermediate training results to [opt.checkpoints_dir]/[opt.name]/web/')
    # network saving and loading parameters
    parser.add_argument('--save_latest_freq', type=int, default=5000, help='frequency of saving the latest results')
    parser.add_argument('--save_epoch_freq', type=int, default=5, help='frequency of saving checkpoints at the end of epochs')
```