<img src='imgs/horse2zebra.gif' align="right" width=384>
<br><br><br>
# CycleGAN and pix2pix in PyTorch
We provide PyTorch implementations for both unpaired and paired image-to-image translation.
The code was written by [Jun-Yan Zhu](https://github.com/junyanz) and [Taesung Park](https://github.com/taesung), and supported by [Tongzhou Wang](https://ssnl.github.io/).
This PyTorch implementation produces results comparable to or better than our original Torch software. If you would like to reproduce the same results as in the papers, check out the original [CycleGAN Torch](https://github.com/junyanz/CycleGAN) and [pix2pix Torch](https://github.com/phillipi/pix2pix) code.
**Note**: The current software works well with PyTorch 0.4+. Check out the older [branch](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/tree/pytorch0.3.1) that supports PyTorch 0.1-0.3.
You may find useful information in [training/test tips](docs/tips.md) and [frequently asked questions](docs/qa.md).
**CycleGAN: [Project](https://junyanz.github.io/CycleGAN/) | [Paper](https://arxiv.org/pdf/1703.10593.pdf) | [Torch](https://github.com/junyanz/CycleGAN)**
<img src="https://junyanz.github.io/CycleGAN/images/teaser_high_res.jpg" width="800"/>
**Pix2pix: [Project](https://phillipi.github.io/pix2pix/) | [Paper](https://arxiv.org/pdf/1611.07004.pdf) | [Torch](https://github.com/phillipi/pix2pix)**
<img src="https://phillipi.github.io/pix2pix/images/teaser_v3.png" width="800px"/>
**[EdgesCats Demo](https://affinelayer.com/pixsrv/) | [pix2pix-tensorflow](https://github.com/affinelayer/pix2pix-tensorflow) | by [Christopher Hesse](https://twitter.com/christophrhesse)**
<img src='imgs/edges2cats.jpg' width="400px"/>
If you use this code for your research, please cite:
Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks
[Jun-Yan Zhu](https://people.eecs.berkeley.edu/~junyanz/)\*, [Taesung Park](https://taesung.me/)\*, [Phillip Isola](https://people.eecs.berkeley.edu/~isola/), [Alexei A. Efros](https://people.eecs.berkeley.edu/~efros)
In ICCV 2017. (* equal contributions) [[Bibtex]](https://junyanz.github.io/CycleGAN/CycleGAN.txt)
Image-to-Image Translation with Conditional Adversarial Networks
[Phillip Isola](https://people.eecs.berkeley.edu/~isola), [Jun-Yan Zhu](https://people.eecs.berkeley.edu/~junyanz), [Tinghui Zhou](https://people.eecs.berkeley.edu/~tinghuiz), [Alexei A. Efros](https://people.eecs.berkeley.edu/~efros)
In CVPR 2017. [[Bibtex]](http://people.csail.mit.edu/junyanz/projects/pix2pix/pix2pix.bib)
## Course
CycleGAN course assignment [code](http://www.cs.toronto.edu/~rgrosse/courses/csc321_2018/assignments/a4-code.zip) and [handout](http://www.cs.toronto.edu/~rgrosse/courses/csc321_2018/assignments/a4-handout.pdf) designed by Prof. [Roger Grosse](http://www.cs.toronto.edu/~rgrosse/) for [CSC321](http://www.cs.toronto.edu/~rgrosse/courses/csc321_2018/) "Intro to Neural Networks and Machine Learning" at University of Toronto. Please contact the instructor if you would like to adopt it in your course.
## Other implementations
### CycleGAN
<p><a href="https://github.com/leehomyc/cyclegan-1"> [Tensorflow]</a> (by Harry Yang),
<a href="https://github.com/architrathore/CycleGAN/">[Tensorflow]</a> (by Archit Rathore),
<a href="https://github.com/vanhuyz/CycleGAN-TensorFlow">[Tensorflow]</a> (by Van Huy),
<a href="https://github.com/XHUJOY/CycleGAN-tensorflow">[Tensorflow]</a> (by Xiaowei Hu),
<a href="https://github.com/LynnHo/CycleGAN-Tensorflow-Simple"> [Tensorflow-simple]</a> (by Zhenliang He),
<a href="https://github.com/luoxier/CycleGAN_Tensorlayer"> [TensorLayer]</a> (by luoxier),
<a href="https://github.com/Aixile/chainer-cyclegan">[Chainer]</a> (by Yanghua Jin),
<a href="https://github.com/yunjey/mnist-svhn-transfer">[Minimal PyTorch]</a> (by yunjey),
<a href="https://github.com/Ldpe2G/DeepLearningForFun/tree/master/Mxnet-Scala/CycleGAN">[Mxnet]</a> (by Ldpe2G),
<a href="https://github.com/tjwei/GANotebooks">[lasagne/keras]</a> (by tjwei)</p>
### pix2pix
<p><a href="https://github.com/affinelayer/pix2pix-tensorflow"> [Tensorflow]</a> (by Christopher Hesse),
<a href="https://github.com/Eyyub/tensorflow-pix2pix">[Tensorflow]</a> (by Eyyüb Sariu),
<a href="https://github.com/datitran/face2face-demo"> [Tensorflow (face2face)]</a> (by Dat Tran),
<a href="https://github.com/awjuliani/Pix2Pix-Film"> [Tensorflow (film)]</a> (by Arthur Juliani),
<a href="https://github.com/kaonashi-tyc/zi2zi">[Tensorflow (zi2zi)]</a> (by Yuchen Tian),
<a href="https://github.com/pfnet-research/chainer-pix2pix">[Chainer]</a> (by mattya),
<a href="https://github.com/tjwei/GANotebooks">[tf/torch/keras/lasagne]</a> (by tjwei),
<a href="https://github.com/taey16/pix2pixBEGAN.pytorch">[Pytorch]</a> (by taey16)
</p>
## Prerequisites
- Linux or macOS
- Python 2 or 3
- CPU or NVIDIA GPU + CUDA CuDNN
## Getting Started
### Installation
- Clone this repo:
```bash
git clone https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix
cd pytorch-CycleGAN-and-pix2pix
```
- Install PyTorch 0.4+ and torchvision from http://pytorch.org, along with other dependencies (e.g., [visdom](https://github.com/facebookresearch/visdom) and [dominate](https://github.com/Knio/dominate)). You can install all of them by running
```bash
pip install -r requirements.txt
```
- For Conda users, we include a script `./scripts/conda_deps.sh` to install PyTorch and other libraries.
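If you prefer to set things up by hand instead of running `./scripts/conda_deps.sh`, the steps might look roughly like the sketch below. The environment name is illustrative, and the exact PyTorch install command depends on your platform and CUDA version (see http://pytorch.org for the command that matches your setup):

```shell
# create and activate a fresh environment (name is arbitrary)
conda create -n pix2pix python=3.6 -y
conda activate pix2pix

# install PyTorch and torchvision from the official channel
conda install pytorch torchvision -c pytorch -y

# remaining dependencies are available via pip
pip install visdom dominate
```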
### CycleGAN train/test
- Download a CycleGAN dataset (e.g. maps):
```bash
bash ./datasets/download_cyclegan_dataset.sh maps
```
- Train a model:
```bash
#!./scripts/train_cyclegan.sh
python train.py --dataroot ./datasets/maps --name maps_cyclegan --model cycle_gan
```
- To view training results and loss plots, run `python -m visdom.server` and open http://localhost:8097 in your browser. To see more intermediate results, check out `./checkpoints/maps_cyclegan/web/index.html`.
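If you would rather monitor losses from a terminal, the visualizer also appends them to a plain-text log under the checkpoints directory. A quick way to follow it (assuming the default `--checkpoints_dir` and the `maps_cyclegan` experiment name from above):

```shell
# follow the text loss log written alongside the visdom plots
tail -f ./checkpoints/maps_cyclegan/loss_log.txt
```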
- Test the model:
```bash
#!./scripts/test_cyclegan.sh
python test.py --dataroot ./datasets/maps --name maps_cyclegan --model cycle_gan
```
- The test results will be saved to an HTML file: `./results/maps_cyclegan/latest_test/index.html`.
### pix2pix train/test
- Download a pix2pix dataset (e.g., facades):
```bash
bash ./datasets/download_pix2pix_dataset.sh facades
```
- Train a model:
```bash
#!./scripts/train_pix2pix.sh
python train.py --dataroot ./datasets/facades --name facades_pix2pix --model pix2pix --direction BtoA
```
- To view training results and loss plots, run `python -m visdom.server` and open http://localhost:8097 in your browser. To see more intermediate results, check out `./checkpoints/facades_pix2pix/web/index.html`.
- Test the model (`bash ./scripts/test_pix2pix.sh`):
```bash
#!./scripts/test_pix2pix.sh
python test.py --dataroot ./datasets/facades --name facades_pix2pix --model pix2pix --direction BtoA
```
- The test results will be saved to an HTML file: `./results/facades_pix2pix/test_latest/index.html`. You can find more scripts in the `scripts` directory.
### Apply a pre-trained model (CycleGAN)
- You can download a pretrained model (e.g., horse2zebra) with the following script:
```bash
bash ./scripts/download_cyclegan_model.sh horse2zebra
```
- The pretrained model is saved at `./checkpoints/{name}_pretrained/latest_net_G.pth`. Check [here](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/blob/master/scripts/download_cyclegan_model.sh#L3) for all the available CycleGAN models.
- To test the model, you also need to download the horse2zebra dataset:
```bash
bash ./datasets/download_cyclegan_dataset.sh horse2zebra
```
- Then generate the results with:
```bash
python test.py --dataroot datasets/horse2zebra/testA --name horse2zebra_pretrained --model test --no_dropout
```
- The option `--model test` is used for generating CycleGAN results in only one direction. It automatically sets `--dataset_mode single`, which loads images from only one set. In contrast, `--model cycle_gan` requires loading and generating results in both directions, which is sometimes unnecessary. The results will be saved at `./results/`. Use `--results_dir {directory_path_to_save_result}` to specify a different results directory.
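To make the `--dataset_mode single` behavior concrete, here is a rough, self-contained sketch of the idea. This is not the repository's actual `single_dataset.py`, and the extension list is illustrative; the point is simply that one image folder is walked directly, rather than the paired A/B layout the other dataset modes expect:

```python
import os

# illustrative set of image extensions; the real loader recognizes more
IMG_EXTENSIONS = {'.jpg', '.jpeg', '.png', '.bmp'}

def make_single_dataset(root):
    """Collect image paths from a single directory, as dataset_mode=single does conceptually."""
    return sorted(
        os.path.join(root, name)
        for name in os.listdir(root)
        if os.path.splitext(name)[1].lower() in IMG_EXTENSIONS
    )
```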