# pix2pix-tensorflow
Based on [pix2pix](https://phillipi.github.io/pix2pix/) by Isola et al.
[Article about this implementation](https://affinelayer.com/pix2pix/)
Tensorflow implementation of pix2pix. Learns a mapping from input images to output images, like these examples from the original paper:
<img src="docs/examples.jpg" width="900px"/>
This port is based directly on the torch implementation, and not on an existing Tensorflow implementation. It is meant to be a faithful implementation of the original work and so does not add anything. The processing speed on a GPU with cuDNN was equivalent to the Torch implementation in testing.
## Setup
### Prerequisites
- Tensorflow 1.0.0
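To confirm you have the right version, you can pin it with pip and print it back (a minimal sketch; swap in the `tensorflow-gpu` package if you want the GPU build):
```sh
# install the pinned CPU build (use tensorflow-gpu for the GPU build)
pip install tensorflow==1.0.0
# print the installed version to confirm
python -c "import tensorflow as tf; print(tf.__version__)"
```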
### Recommended
- Linux with Tensorflow GPU edition + cuDNN
### Getting Started
```sh
# clone this repo
git clone https://github.com/affinelayer/pix2pix-tensorflow.git
cd pix2pix-tensorflow
# download the CMP Facades dataset (generated from http://cmp.felk.cvut.cz/~tylecr1/facade/)
python tools/download-dataset.py facades
# train the model (this may take 1-8 hours depending on GPU, on CPU you will be waiting for a bit)
python pix2pix.py \
--mode train \
--output_dir facades_train \
--max_epochs 200 \
--input_dir facades/train \
--which_direction BtoA
# test the model
python pix2pix.py \
--mode test \
--output_dir facades_test \
--input_dir facades/val \
--checkpoint facades_train
```
The test run will output an HTML file at `facades_test/index.html` that shows input/output/target image sets.
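Since the report is a static HTML file, you can open it directly in a browser, for example (assuming a Linux desktop; use `open` on macOS):
```sh
# open the generated test report in the default browser
xdg-open facades_test/index.html
```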
If you have Docker installed, you can use the provided Docker image to run pix2pix without installing the correct version of Tensorflow:
```sh
# train the model
python tools/dockrun.py python pix2pix.py \
--mode train \
--output_dir facades_train \
--max_epochs 200 \
--input_dir facades/train \
--which_direction BtoA
# test the model
python tools/dockrun.py python pix2pix.py \
--mode test \
--output_dir facades_test \
--input_dir facades/val \
--checkpoint facades_train
```
## Datasets and Trained Models
The data format used by this program is the same as the original pix2pix format, which places the input and desired output images side by side in a single file, like:
<img src="docs/ab.png" width="256px"/>
For example:
<img src="docs/418.png" width="256px"/>
Some datasets have been made available by the authors of the pix2pix paper. To download those datasets, use the included script `tools/download-dataset.py`. There are also links to pre-trained models alongside each dataset. Note that these pre-trained models require the Tensorflow 0.12.1 version of `pix2pix.py`, since they have not been regenerated for the 1.0.0 release:
| dataset | example |
| --- | --- |
| `python tools/download-dataset.py facades` <br> 400 images from [CMP Facades dataset](http://cmp.felk.cvut.cz/~tylecr1/facade/). (31MB) <br> Pre-trained: [BtoA](https://mega.nz/#!2xpyQBoK!GVtkZN7lqY4aaZltMFdZsPNVE6bUsWyiVUN6RwJtIxQ) | <img src="docs/facades.jpg" width="256px"/> |
| `python tools/download-dataset.py cityscapes` <br> 2975 images from the [Cityscapes training set](https://www.cityscapes-dataset.com/). (113M) <br> Pre-trained: [AtoB](https://mega.nz/#!rxByxK6S!W9ZBUqgdGTFDWVlOE_ljVt1G3bU89bdu_nS9Bi1ujiA) [BtoA](https://mega.nz/#!b1olDbhL!mxsYC5AF_WH64CXoukN0KB-nw15kLQ0Etii-F-HDTps) | <img src="docs/cityscapes.jpg" width="256px"/> |
| `python tools/download-dataset.py maps` <br> 1096 training images scraped from Google Maps (246M) <br> Pre-trained: [AtoB](https://mega.nz/#!i8pkkBJT!3NKLar9sUr-Vh_vNVQF-xwK9-D9iCqaCmj1T27xRf4w) [BtoA](https://mega.nz/#!r8xwCBCD!lNBrY_2QO6pyUJziGj7ikPheUL_yXA8xGXFlM3GPL3c) | <img src="docs/maps.jpg" width="256px"/> |
| `python tools/download-dataset.py edges2shoes` <br> 50k training images from [UT Zappos50K dataset](http://vision.cs.utexas.edu/projects/finegrained/utzap50k/). Edges are computed by [HED](https://github.com/s9xie/hed) edge detector + post-processing. (2.2GB) <br> Pre-trained: [AtoB](https://mega.nz/#!OoYT3QiQ!8y3zLESvhOyeA6UsjEbcJphi3_uEt534waSL5_f_D4Y) | <img src="docs/edges2shoes.jpg" width="256px"/> |
| `python tools/download-dataset.py edges2handbags` <br> 137K Amazon Handbag images from [iGAN project](https://github.com/junyanz/iGAN). Edges are computed by [HED](https://github.com/s9xie/hed) edge detector + post-processing. (8.6GB) <br> Pre-trained: [AtoB](https://mega.nz/#!KlpBHKrZ!iJ3x6xzgk0wnJkPiAf0UxPzhYSmpC3kKH1DY5n_dd0M) | <img src="docs/edges2handbags.jpg" width="256px"/> |
The `facades` dataset is the smallest and easiest to get started with.
### Creating your own dataset
#### Example: creating images with blank centers for [inpainting](https://people.eecs.berkeley.edu/~pathak/context_encoder/)
<img src="docs/combine.png" width="900px"/>
```sh
# Resize source images
python tools/process.py \
--input_dir photos/original \
--operation resize \
--output_dir photos/resized
# Create images with blank centers
python tools/process.py \
--input_dir photos/resized \
--operation blank \
--output_dir photos/blank
# Combine resized images with blanked images
python tools/process.py \
--input_dir photos/resized \
--b_dir photos/blank \
--operation combine \
--output_dir photos/combined
# Split into train/val set
python tools/split.py \
--dir photos/combined
```
The folder `photos/combined` will now have `train` and `val` subfolders that you can use for training and testing.
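You can spot-check the split from the shell (a quick sanity check, assuming a Unix-like shell):
```sh
# peek at a few files from each split
ls photos/combined/train | head -n 3
ls photos/combined/val | head -n 3
```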
#### Creating image pairs from existing images
If you have two directories `a` and `b` with corresponding images (same name, same dimensions, different data), you can combine them with `process.py`:
```sh
python tools/process.py \
--input_dir a \
--b_dir b \
--operation combine \
--output_dir c
```
This puts the images into the side-by-side combined format that `pix2pix.py` expects.
#### Colorization
For colorization, your images should ideally all be the same aspect ratio. You can resize and crop them with the resize command:
```sh
python tools/process.py \
--input_dir photos/original \
--operation resize \
--output_dir photos/resized
```
No other processing is required; the colorization mode (see the Training section below) uses single images instead of image pairs.
## Training
### Image Pairs
For normal training with image pairs, you need to specify which directory contains the training images and which direction to train on. The direction options are `AtoB` or `BtoA`:
```sh
python pix2pix.py \
--mode train \
--output_dir facades_train \
--max_epochs 200 \
--input_dir facades/train \
--which_direction BtoA
```
### Colorization
`pix2pix.py` includes special code to handle colorization with single images instead of pairs; usage looks like this:
```sh
python pix2pix.py \
--mode train \
--output_dir photos_train \
--max_epochs 200 \
--input_dir photos/train \
--lab_colorization
```
In this mode, image A is the black and white image (lightness only), and image B contains the color channels of that image (no lightness information).
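To build intuition for this split, you can separate any photo into its Lab channels yourself (a sketch assuming ImageMagick is installed; `photo.png` is a placeholder filename):
```sh
# write the L (lightness), a and b (color) channels as three grayscale
# images: lab_0.png = L, lab_1.png = a, lab_2.png = b
convert photo.png -colorspace Lab -separate lab_%d.png
```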
### Tips
You can look at the loss and computation graph using TensorBoard:
```sh
tensorboard --logdir=facades_train
```
<img src="docs/tensorboard-scalar.png" width="250px"/> <img src="docs/tensorboard-image.png" width="250px"/> <img src="docs/tensorboard-graph.png" width="250px"/>
If you wish to write in-progress pictures as the network is training, use `--display_freq 50`. This will update `facades_train/index.html` every 50 steps with the current training inputs and outputs.
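For example, to train on the facades dataset while writing progress images every 50 steps:
```sh
python pix2pix.py \
--mode train \
--output_dir facades_train \
--max_epochs 200 \
--input_dir facades/train \
--which_direction BtoA \
--display_freq 50
```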
## Testing
Testing is done with `--mode test`. You should specify the checkpoint to use with `--checkpoint`; this should point to the `output_dir` that you created previously with `--mode train`:
```sh
python pix2pix.py \
--mode test \
--output_dir facades_test \
--input_dir facades/val \
--checkpoint facades_train
```
The testing mode will load some of the configuration options from the checkpoint provided, so you do not need to specify them again.