# TFill
[paper](https://openaccess.thecvf.com/content/CVPR2022/html/Zheng_Bridging_Global_Context_Interactions_for_High-Fidelity_Image_Completion_CVPR_2022_paper.html) | [arXiv](https://arxiv.org/abs/2104.00845) | [Project](https://chuanxiaz.com/tfill/) | [Video](https://www.youtube.com/watch?v=efB1fw0jiLs&feature=youtu.be)
This repository implements the training, testing and editing tools for "Bridging Global Context Interactions for High-Fidelity Image Completion (CVPR2022, scores: 1, 1, 2, 2)" by [Chuanxia Zheng](https://www.chuanxiaz.com), [Tat-Jen Cham](https://personal.ntu.edu.sg/astjcham/), [Jianfei Cai](https://jianfei-cai.github.io/) and [Dinh Phung](https://research.monash.edu/en/persons/dinh-phung). Given masked images, the proposed **TFill** model is able to generate high-fidelity, plausible results under various settings.
## Examples
![teaser](images/example.png)
## Object Removal
![teaser](images/tfill_removal.gif)
## Object Repair
![teaser](images/tfill_repair.gif)
## Framework
We propose a two-stage image completion framework, where the upper content inference network (TFill-*Coarse*) generates semantically correct content by using a transformer encoder to directly capture global context information, while the lower appearance refinement network (TFill-*Refined*) copies both global visible and generated features to the holes.
![teaser](images/framework.png)
# Getting started
- Clone this repo:
```
git clone https://github.com/lyndonzheng/TFill
cd TFill
```
## Requirements
The original model was trained and evaluated with PyTorch v1.9.1, which is no longer listed among the current [PyTorch previous versions](https://pytorch.org/get-started/previous-versions/). We therefore created a new environment with PyTorch v1.10.0 to test the model; the performance is the same.
A suitable [conda](https://conda.io/) environment named `TFill` can be created and activated with:
```
conda env create -f environment.yaml
conda activate TFill
```
## Running pretrained models
Download the pre-trained models using the following links ([CelebA-HQ](https://drive.google.com/drive/folders/1ntbVDjJ7-nAt4nLGuu7RNi3QpLfh40gk?usp=sharing), [FFHQ](https://drive.google.com/drive/folders/1xuAsShrw9wI5Be0sQka3vZEsfwnq0pPT?usp=sharing), [ImageNet](https://drive.google.com/drive/folders/1B4RswBUD6_jXAu3MVz3LtuNfoV4wTmGf?usp=sharing), [Places2](https://drive.google.com/drive/folders/154ikacQ8A2JLC8iIGda8jiZN-ysL1xh5?usp=sharing)) and put them under the ```checkpoints/``` directory. It should have the following structure:
```
./checkpoints/
├── celeba
│ ├── latest_net_D.pth
│ ├── latest_net_D_Ref.pth
│ ├── latest_net_E.pth
│ ├── latest_net_G.pth
│ ├── latest_net_G_Ref.pth
│ ├── latest_net_T.pth
├── ffhq
│ ├── ...
├── ...
```
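As a convenience (this helper is not part of the repo), the following sketch verifies that all six checkpoint files for a model, here *celeba*, landed in the expected layout shown above:

```shell
# Sanity-check the downloaded celeba checkpoints against the layout above.
missing=0
for net in D D_Ref E G G_Ref T; do
    f="checkpoints/celeba/latest_net_${net}.pth"
    if [ ! -f "$f" ]; then
        echo "missing: $f"
        missing=$((missing + 1))
    fi
done
echo "$missing checkpoint file(s) missing"
```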
- Test the model
```
sh ./scripts/test.sh
```
To test a different model, users only need to modify lines 2-4 of the script, including ```name```, ```img_file``` and ```mask_file```. For instance, replace *celeba* with *imagenet*.
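As an illustration, the edited lines of ```scripts/test.sh``` might look as follows after switching from *celeba* to *imagenet*. The paths and the exact flag syntax are placeholders; check your copy of the script for the actual format:

```shell
# Hypothetical sketch of lines 2-4 of scripts/test.sh for the imagenet model;
# list-file paths below are placeholders, other options are omitted.
python test.py \
    --name imagenet \
    --img_file ./datasets/imagenet/test_list.txt \
    --mask_file ./datasets/imagenet/mask_list.txt
```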
The default results will be stored under the ```results/``` folder, in which:
- ```examples/```: shows original and masked images;
- ```img_out/```: shows upsampled *Coarse* outputs;
- ```img_ref_out/```: shows the final *Refined* outputs.
## Datasets
- ```face dataset```:
- 24,183 training images and 2,824 test images from [CelebA](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html); the high-resolution CelebA-HQ dataset is obtained using the algorithm of [Growing GANs](https://github.com/tkarras/progressive_growing_of_gans).
- 60,000 training images and 10,000 test images from [FFHQ](https://github.com/NVlabs/ffhq-dataset) provided by [StyleGAN](https://github.com/NVlabs/stylegan).
- ```natural scenery```: original training and val images from [Places2](http://places2.csail.mit.edu/).
- ```object```: original training images from [ImageNet](http://www.image-net.org/).
## Training
- Train a model (two stage: *Coarse* and *Refinement*)
```
sh ./scripts/train.sh
```
The default setting trains the top *Coarse* network. Users only need to replace *coarse* with *refine* at line 6; the model can then continue training for high-resolution image completion.
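For example, the relevant line of ```scripts/train.sh``` might be changed as sketched below. The option name here is hypothetical; use whatever flag appears at line 6 of your copy of the script:

```shell
# Hypothetical sketch of the stage switch in scripts/train.sh;
# change the stage value from "coarse" to "refine" for the second stage.
python train.py \
    --name celeba \
    --coarse_or_refine refine
```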
More hyper-parameters can be found in ```options/```.
The coarse results produced by the transformer and the restrictive CNN are impressive, providing plausible completions for both **foreground** objects and **background** scenes.
![teaser](images/center_imagenet.jpg)
![teaser](images/center_places2.jpg)
# GUI
The GUI is similar to our previous GUI in [PIC](https://github.com/lyndonzheng/Pluralistic-Inpainting); the operation steps are the same.
Basic usage is:
```
sh ./scripts/ui.sh
```
In ```gui/ui_model.py```, users can modify ```img_root``` (line 30) and the corresponding ```img_files``` (line 31) to edit randomly selected images from the testing dataset.
## Editing Examples
- **Results (original, output) for face editing**
![teaser](images/free_face.jpg)
- **Results (original, masked input, output) for nature scene editing**
![teaser](images/free_nature.jpg)
## Next
- Higher-resolution pluralistic image completion
## License
This work is licensed under an MIT License.
This software is for educational and academic research purposes only. If you wish to obtain a commercial royalty bearing license to this software, please contact us at chuanxia001@e.ntu.edu.sg.
## Citation
The code also builds on our previous work [PIC](https://github.com/lyndonzheng/Pluralistic-Inpainting). If you use this code for your research, please cite our papers.
```
@InProceedings{Zheng_2022_CVPR,
author = {Zheng, Chuanxia and Cham, Tat-Jen and Cai, Jianfei and Phung, Dinh},
title = {Bridging Global Context Interactions for High-Fidelity Image Completion},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {11512-11522}
}
@inproceedings{zheng2019pluralistic,
title={Pluralistic Image Completion},
author={Zheng, Chuanxia and Cham, Tat-Jen and Cai, Jianfei},
booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
pages={1438--1447},
year={2019}
}
@article{zheng2021pluralistic,
title={Pluralistic Free-Form Image Completion},
author={Zheng, Chuanxia and Cham, Tat-Jen and Cai, Jianfei},
journal={International Journal of Computer Vision},
pages={1--20},
year={2021},
publisher={Springer}
}
```