# DeblurGAN-v2: Deblurring (Orders-of-Magnitude) Faster and Better
Code for the paper [DeblurGAN-v2: Deblurring (Orders-of-Magnitude) Faster and Better](https://arxiv.org/abs/1908.03826)
Orest Kupyn, Tetiana Martyniuk, Junru Wu, Zhangyang Wang
In ICCV 2019
## Overview
We present a new end-to-end generative adversarial network (GAN) for single image motion deblurring, named
DeblurGAN-v2, which considerably boosts state-of-the-art deblurring efficiency, quality, and flexibility. DeblurGAN-v2
is based on a relativistic conditional GAN with a double-scale discriminator. For the first time, we introduce the
Feature Pyramid Network into deblurring, as a core building block in the generator of DeblurGAN-v2. It can flexibly
work with a wide range of backbones, to navigate the balance between performance and efficiency. The plug-in of
sophisticated backbones (e.g., Inception-ResNet-v2) can lead to solid state-of-the-art deblurring. Meanwhile,
with light-weight backbones (e.g., MobileNet and its variants), DeblurGAN-v2 runs 10-100 times faster than
the nearest competitors while staying close to state-of-the-art in quality, making real-time
video deblurring a practical option. We demonstrate that DeblurGAN-v2 achieves very competitive performance on several popular
benchmarks, in terms of deblurring quality (both objective and subjective) as well as efficiency. In addition,
we show the architecture to be effective for general image restoration tasks.
<!---We also study the effect of DeblurGAN-v2 on the task of general image restoration - enhancement of images degraded
jointly by noise, blur, compression, etc. The picture below shows the visual quality superiority of DeblurGAN-v2 with
Inception-ResNet-v2 backbone over DeblurGAN. It is drawn from our new synthesized Restore Dataset
(refer to Datasets subsection below).-->
![](./doc_images/kohler_visual.png)
![](./doc_images/restore_visual.png)
![](./doc_images/gopro_table.png)
![](./doc_images/lai_table.png)
<!---![](./doc_images/dvd_table.png)-->
<!---![](./doc_images/kohler_table.png)-->
## DeblurGAN-v2 Architecture
![](./doc_images/pipeline.jpg)
<!---Our architecture consists of an FPN backbone from which we take five final feature maps of different scales as the
output. Those features are later up-sampled to the same 1/4 input size and concatenated into one tensor which contains
the semantic information on different levels. We additionally add two upsampling and convolutional layers at the end of
the network to restore the original image size and reduce artifacts. We also introduce a direct skip connection from
the input to the output, so that the learning focuses on the residue. The input images are normalized to \[-1, 1\].
We also use a **tanh** activation layer to keep the output in the same range.-->
<!---The new FPN-embeded architecture is agnostic to the choice of feature extractor backbones. With this plug-and-play
property, we are entitled with the flexibility to navigate through the spectrum of accuracy and efficiency.
By default, we choose ImageNet-pretrained backbones to convey more semantic-related features.-->
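In the pipeline above, FPN feature maps from several scales are up-sampled to a common resolution and concatenated before the final restoration layers. As a toy, framework-free sketch of the nearest-neighbour up-sampling step (the repository's actual code uses PyTorch; this is only an illustration of the idea):

```python
def upsample_nearest(feature_map, factor):
    """Nearest-neighbour up-sampling of a 2-D single-channel feature map,
    given as a list of rows. Each value is repeated `factor` times along
    both dimensions, bringing a coarse FPN level up to a finer resolution."""
    out = []
    for row in feature_map:
        wide = [v for v in row for _ in range(factor)]    # repeat along width
        out.extend([list(wide) for _ in range(factor)])   # repeat along height
    return out

# A 1x2 coarse map up-sampled by 2 becomes 2x4:
# upsample_nearest([[1, 2]], 2) -> [[1, 1, 2, 2], [1, 1, 2, 2]]
```

Once all levels share the same spatial size, they can be concatenated channel-wise into the single multi-scale tensor the generator operates on.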
## Datasets
The datasets for training can be downloaded via the links below:
- [DVD](https://drive.google.com/file/d/1bpj9pCcZR_6-AHb5aNnev5lILQbH8GMZ/view)
- [GoPro](https://drive.google.com/file/d/1KStHiZn5TNm2mo3OLZLjnRvd0vVFCI0W/view)
- [NFS](https://drive.google.com/file/d/1Ut7qbQOrsTZCUJA_mJLptRMipD8sJzjy/view)
## Training
#### Command
```shell
python train.py
```
The training script loads its configuration from `config/config.yaml`.
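The backbone and discriminator are selected through that file. An illustrative fragment (the key names besides `g_name` are assumptions here; consult `config/config.yaml` in the repository for the authoritative set):

```yaml
# Illustrative fragment only -- see config/config.yaml for the full set of keys.
model:
  g_name: fpn_inception   # generator backbone, e.g. fpn_mobilenet for the light-weight variant
  d_name: double_gan      # double-scale discriminator, as listed in the pre-trained models table
```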
#### Tensorboard visualization
![](./doc_images/tensorboard2.png)
## Testing
To test on a single image,
```shell
python predict.py IMAGE_NAME.jpg
```
By default, the Predictor loads the pretrained model 'best_fpn.h5'; this can be changed in the code via the 'weights_path' argument. It assumes the fpn_inception backbone is used. To run with weights pretrained on a different backbone, also specify that backbone under ['model']['g_name'] in config/config.yaml.
## Pre-trained models
<table align="center">
<tr>
<th>Dataset</th>
<th>G Model</th>
<th>D Model</th>
<th>Loss Type</th>
<th>PSNR/ SSIM</th>
<th>Link</th>
</tr>
<tr>
<td rowspan="3">GoPro Test Dataset</td>
<td>InceptionResNet-v2</td>
<td>double_gan</td>
<td>ragan-ls</td>
<td>29.55/ 0.934</td>
<td><a href="https://drive.google.com/open?id=1UXcsRVW-6KF23_TNzxw-xC0SzaMfXOaR">https://drive.google.com/open?id=1UXcsRVW-6KF23_TNzxw-xC0SzaMfXOaR</a></td>
</tr>
<tr>
<td>MobileNet</td>
<td>double_gan</td>
<td>ragan-ls</td>
<td>28.17/ 0.925</td>
<td><a href="https://drive.google.com/open?id=1JhnT4BBeKBBSLqTo6UsJ13HeBXevarrU">https://drive.google.com/open?id=1JhnT4BBeKBBSLqTo6UsJ13HeBXevarrU</a></td>
</tr>
<tr>
<td>MobileNet-DSC</td>
<td>double_gan</td>
<td>ragan-ls</td>
<td>28.03/ 0.922</td>
<td><a href=""></a></td>
</tr>
</table>
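The PSNR figures in the table follow the standard definition, PSNR = 10 log10(MAX² / MSE). As a reference, a minimal pure-Python computation (a sketch for intuition, not the repository's `util/metrics.py` implementation):

```python
import math

def psnr(reference, restored, max_val=255.0):
    """Peak signal-to-noise ratio between two equally sized images,
    given here as flat sequences of pixel intensities in [0, max_val]."""
    mse = sum((r - d) ** 2 for r, d in zip(reference, restored)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images: no noise at all
    return 10.0 * math.log10(max_val ** 2 / mse)
```

SSIM, the second metric reported, is a structural measure computed over local windows and is not reducible to a one-liner like this.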
## Parent Repository
The code is based on <a href="https://github.com/KupynOrest/RestoreGAN">https://github.com/KupynOrest/RestoreGAN</a>, which contains flexible pipelines for different image restoration tasks.
## Citation
If you use this code for your research, please cite our paper.
```
@InProceedings{Kupyn_2019_ICCV,
author = {Orest Kupyn and Tetiana Martyniuk and Junru Wu and Zhangyang Wang},
title = {DeblurGAN-v2: Deblurring (Orders-of-Magnitude) Faster and Better},
booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
month = {Oct},
year = {2019}
}
```
---
The following packages need to be installed:
```shell
python3 -m pip install glog
python3 -m pip install albumentations
python3 -m pip install tensorboardX
python3 -m pip install pretrainedmodels
python3 -m pip install torchsummary
```