# SRGAN-tensorflow
### Introduction
This project is a TensorFlow implementation of the impressive work [Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network](https://arxiv.org/pdf/1609.04802.pdf). <br />
The results are obtained following the same settings as the v5 edition of the [paper on arXiv](https://arxiv.org/pdf/1609.04802.pdf). However, due to limited resources, I trained the network on the [RAISE dataset](http://mmlab.science.unitn.it/RAISE/), which contains 8156 high-resolution images captured by high-quality cameras. As the results below show, the performance is close to that reported in the paper without using the ImageNet training set. <br />
Results on BSD100, Set14, and Set5 will be reported later. The code is heavily inspired by [pix2pix-tensorflow](https://github.com/affinelayer/pix2pix-tensorflow).
#### Some results:
* A comparison of results from my implementation and from the paper
<table >
<tr >
<td><center>Inputs</center></td>
<td><center>Our result</center></td>
<td><center>SRGAN result</center></td>
<td><center>Original</center></td>
</tr>
<tr>
<td>
<center><img src="./pic/SRGAN/comic_LR.png" height="280"></center>
</td>
<td>
<center><img src="./pic/images/img_005-outputs.png" height="280"></center>
</td>
<td>
<center><img src="./pic/SRGAN/comic_SRGAN-VGG54.png" height="280"></center>
</td>
<td>
<center><img src="./pic/SRGAN/comic_HR.png" height="280"></center>
</td>
</tr>
<tr>
<td><center>Inputs</center></td>
<td><center>Our result</center></td>
<td><center>SRGAN result</center></td>
<td><center>Original</center></td>
</tr>
<tr>
<td>
<center><img src="./pic/SRGAN/baboon_LR.png" height="200"></center>
</td>
<td>
<center><img src="./pic/images/img_001-outputs.png" height="200"></center>
</td>
<td>
<center><img src="./pic/SRGAN/baboon_SRGAN-VGG54.png" height="200"></center>
</td>
<td>
<center><img src="./pic/images/img_001-targets.png" height="200"></center>
</td>
</tr>
</table>
### Dependencies
* python2.7
* tensorflow (tested on r1.0, r1.2)
* Download and extract the pre-trained model from my [google drive](https://drive.google.com/a/gapp.nthu.edu.tw/uc?id=0BxRIhBA0x8lHNDJFVjJEQnZtcmc&export=download)
* Download the VGG19 weights from the [TF-slim models](http://download.tensorflow.org/models/vgg_19_2016_08_28.tar.gz)
* The code is tested on:
  * Ubuntu 14.04 LTS with CPU architecture x86_64 + Nvidia Titan X
  * Ubuntu 16.04 LTS with CPU architecture x86_64 + Nvidia GTX 1080, GTX 1080 Ti, or Titan X
### Recommended
* Ubuntu 16.04 with tensorflow GPU edition
### Getting Started
Throughout this guide, we denote the directory into which you cloned the repo as `SRGAN-tensorflow_ROOT`.<br />
* #### Run the test using the pre-trained model
```bash
# clone the repository from github
git clone https://github.com/brade31919/SRGAN-tensorflow.git
cd $SRGAN-tensorflow_ROOT/
# Download the pre-trained model from the google-drive
# Go to https://drive.google.com/a/gapp.nthu.edu.tw/uc?id=0BxRIhBA0x8lHNDJFVjJEQnZtcmc&export=download
# Download the pre-trained model to SRGAN-tensorflow/
tar xvf SRGAN_pre-trained.tar
# Run the test mode
sh test_SRGAN.sh
#The result can be viewed at $SRGAN-tensorflow_ROOT/result/images/
```
<br />
* #### Run inference on your own images using the pre-trained model
```bash
cd $SRGAN-tensorflow_ROOT/
# Download the pre-trained model from the google-drive
# Go to https://drive.google.com/a/gapp.nthu.edu.tw/uc?id=0BxRIhBA0x8lHNDJFVjJEQnZtcmc&export=download
# Download the pre-trained model to SRGAN-tensorflow/
tar xvf SRGAN_pre-trained.tar
# Put your PNG images in a directory of your own
# For example:
mkdir myImages
# put some images in it
```
Modify `inference_SRGAN.sh`; in particular, point `--input_dir_LR` to your image directory:
```bash
#!/usr/bin/env bash
CUDA_VISIBLE_DEVICES=0 python main.py \
--output_dir ./result/ \
--summary_dir ./result/log/ \
--mode inference \
--is_training False \
--task SRGAN \
--input_dir_LR ./data/myImages/ \
--num_resblock 16 \
--perceptual_mode VGG54 \
--pre_trained_model True \
--checkpoint ./SRGAN_pre-trained/model-200000
```
```bash
# Run the inference mode
sh inference_SRGAN.sh
#The result can be viewed at $SRGAN-tensorflow_ROOT/result/images/
```
<br />
* #### Run the training process
#### Data and checkpoint preparation
Running the training process is a bit more involved, so follow the steps below carefully.<br />
Go to the project root directory and download the VGG19 weights from the [TF-slim model](http://download.tensorflow.org/models/vgg_19_2016_08_28.tar.gz):<br />
```bash
# make the directory to put the vgg19 pre-trained model
mkdir vgg19/
cd vgg19/
wget http://download.tensorflow.org/models/vgg_19_2016_08_28.tar.gz
tar xvf ./vgg_19_2016_08_28.tar.gz
```
Download the training dataset. The dataset contains 8156 images from the RAISE dataset. I preprocessed all the TIFF images into PNG with a 5× downscale to produce the high-resolution images; each low-resolution image is obtained by a further 4× downscale of its high-resolution counterpart. <br />
Download the two files from the Google Drive links: <br />
[High-resolution images](https://drive.google.com/file/d/0BxRIhBA0x8lHYXNNVW5YS0I2eXM/view?usp=sharing)<br />
[Low-resolution images](https://drive.google.com/file/d/0BxRIhBA0x8lHNnJFVUR1MjdMWnc/view?usp=sharing)<br />
Put the two .tar files into `$SRGAN-tensorflow_ROOT/data/`, then go back to the project root.<br />
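The preprocessing described above (5× downscale for the HR images, a further 4× downscale for the LR images) can be sketched with Pillow. This is an illustrative sketch, not the repo's actual conversion tool; the function name and paths are hypothetical:

```python
from PIL import Image

def make_pair(src_path, hr_path, lr_path, hr_scale=5, lr_scale=4):
    """Create an HR/LR training pair: downscale the source by hr_scale
    for the HR image, then downscale the HR image by lr_scale for LR."""
    img = Image.open(src_path).convert("RGB")
    hr = img.resize((img.width // hr_scale, img.height // hr_scale), Image.BICUBIC)
    lr = hr.resize((hr.width // lr_scale, hr.height // lr_scale), Image.BICUBIC)
    hr.save(hr_path, "PNG")
    lr.save(lr_path, "PNG")
```

The 4× ratio between HR and LR matches the 4× super-resolution factor the network is trained for.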
Typically, we follow the training procedure from the paper:
1. Train SRResnet for 1,000,000 iterations.
2. [optional] Train SRGAN for 500,000 iterations using the **MSE loss**, initializing the generator with the weights from SRResnet.
3. Train SRGAN for 200,000 iterations using the **VGG54 perceptual loss**, initializing the generator and discriminator with the weights from step 2.
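The main difference between the stages is the content loss: "MSE" compares raw pixels, while "VGG54" compares VGG19 conv5_4 feature maps. A minimal NumPy sketch of the two content losses (the VGG feature extractor itself is assumed, not shown):

```python
import numpy as np

def mse_content_loss(hr, sr):
    """Pixel-space MSE content loss (used in steps 1-2)."""
    return np.mean((hr - sr) ** 2)

def vgg_content_loss(phi_hr, phi_sr):
    """VGG54 perceptual loss (step 3): MSE between VGG19 conv5_4
    feature maps of the ground-truth and super-resolved images."""
    return np.mean((phi_hr - phi_sr) ** 2)
```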
#### Train SRResnet
Edit the train_SRResnet.sh
```bash
#!/usr/bin/env bash
# Set CUDA_VISIBLE_DEVICES correctly if you use a multi-GPU system.
# --output_dir / --summary_dir: where checkpoints and logs go; any location works.
# --flip / --random_crop: online data-augmentation options.
# --input_dir_LR / --input_dir_HR: set the training-data paths correctly.
# --perceptual_mode MSE: we use the MSE loss for SRResnet training.
# --queue_thread: CPU threads for the data provider; we suggest >4 to speed up training.
CUDA_VISIBLE_DEVICES=0 python main.py \
--output_dir ./experiment_SRResnet/ \
--summary_dir ./experiment_SRResnet/log/ \
--mode train \
--is_training True \
--task SRResnet \
--batch_size 16 \
--flip True \
--random_crop True \
--crop_size 24 \
--input_dir_LR ./data/RAISE_LR/ \
--input_dir_HR ./data/RAISE_HR/ \
--num_resblock 16 \
--name_queue_capacity 4096 \
--image_queue_capacity 4096 \
--perceptual_mode MSE \
--queue_thread 12 \
--ratio 0.001 \
--learning_rate 0.0001 \
--decay_step 500000 \
--decay_rate 0.1 \
--stair True \
--beta 0.9 \
--max_iter 1000000 \
--save_freq 20000
```
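With `--stair True`, the `--learning_rate`, `--decay_step`, and `--decay_rate` flags correspond to a staircase exponential-decay schedule (as in `tf.train.exponential_decay`); a small pure-Python sketch for illustration:

```python
def staircase_lr(step, base_lr=1e-4, decay_step=500000, decay_rate=0.1):
    """Staircase exponential decay: the learning rate drops by a factor
    of decay_rate every decay_step iterations."""
    return base_lr * decay_rate ** (step // decay_step)
```

With the values above, the rate starts at 1e-4 and drops to 1e-5 after 500,000 iterations.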
After checking the configuration, execute the script:
```bash
# Executing the script
cd $SRGAN-tensorflow_ROOT/
sh train_SRResnet.sh
```
Launch TensorBoard to monitor the training process:
```bash
# Launch the tensorboard
cd ./experiment_SRResnet/log/
tensorboard --logdir .
# Now you can navigate to tensorboard in your browser
```
The training curves in TensorBoard should look like this:
<table>
<tr>
<td><center>PSNR</center></td>
<td><center>content loss</center></td>
</tr>
<tr>
<td>
<center><img src="./pic/result/SRResnet_PSNR.png" width="500"></center>
</td>
<td>
<center><img src="./pic/result/SRResnet_content_loss.png" width="500"></center>
</td>
</tr>
</table>
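The PSNR curve above is computed from the pixel-wise MSE between the network output and the ground truth; a minimal NumPy sketch (assuming 8-bit images with peak value 255):

```python
import numpy as np

def psnr(img1, img2, peak=255.0):
    """Peak signal-to-noise ratio in dB between two uint8-range images."""
    mse = np.mean((img1.astype(np.float64) - img2.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```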
#### Train SRGAN with MSE loss
Edit the train_SRGAN.sh
```bash
#!/usr/bin/env bash
# The flags below follow the same pattern as train_SRResnet.sh, with the values
# for step 2 above (MSE content loss, 500000 iterations, generator initialized
# from the trained SRResnet checkpoint). Adjust paths and values to your setup.
CUDA_VISIBLE_DEVICES=0 python main.py \
--output_dir ./experiment_SRGAN_MSE/ \
--summary_dir ./experiment_SRGAN_MSE/log/ \
--mode train \
--is_training True \
--task SRGAN \
--batch_size 16 \
--flip True \
--random_crop True \
--crop_size 24 \
--input_dir_LR ./data/RAISE_LR/ \
--input_dir_HR ./data/RAISE_HR/ \
--num_resblock 16 \
--perceptual_mode MSE \
--ratio 0.001 \
--learning_rate 0.0001 \
--decay_step 250000 \
--decay_rate 0.1 \
--stair True \
--beta 0.9 \
--max_iter 500000 \
--pre_trained_model True \
--checkpoint ./experiment_SRResnet/model-1000000
```