# TokenFlow: Consistent Diffusion Features for Consistent Video Editing
## [<a href="https://diffusion-tokenflow.github.io/" target="_blank">Project Page</a>]
[![arXiv](https://img.shields.io/badge/arXiv-TokenFlow-b31b1b.svg)](https://arxiv.org/abs/2307.10373) [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/weizmannscience/tokenflow)
![Pytorch](https://img.shields.io/badge/PyTorch->=1.10.0-Red?logo=pytorch)
https://github.com/omerbt/TokenFlow/assets/52277000/93dccd63-7e9a-4540-a941-31962361b0bb
**TokenFlow** is a framework that enables consistent video editing, using a pre-trained text-to-image diffusion model, without any further training or finetuning.
>The generative AI revolution has been recently expanded to videos. Nevertheless, current state-of-the-art video models are still lagging behind image models in terms of visual quality and user control over the generated content. In this work, we present a framework that harnesses the power of a text-to-image diffusion model for the task of text-driven video editing. Specifically, given a source video and a target text-prompt, our method generates a high-quality video that adheres to the target text, while preserving the spatial layout and dynamics of the input video. Our method is based on our key observation that consistency in the edited video can be obtained by enforcing consistency in the diffusion feature space. We achieve this by explicitly propagating diffusion features based on inter-frame correspondences, readily available in the model. Thus, our framework does not require any training or fine-tuning, and can work in conjunction with any off-the-shelf text-to-image editing method. We demonstrate state-of-the-art editing results on a variety of real-world videos.
For more details, see the [project webpage](https://diffusion-tokenflow.github.io).
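The core idea can be illustrated with a short, conceptual sketch (this is our simplification, not the repository's actual implementation; the function name, tensor names, and shapes are illustrative): edited features from a keyframe are copied to the other frames via nearest-neighbor correspondences computed on the source features.

```
import torch
import torch.nn.functional as F

def propagate_features(src_feats, edited_src_feats, tgt_feats):
    # src_feats:        (N, C) diffusion-feature tokens of a source keyframe
    # edited_src_feats: (N, C) the same token locations after the image edit
    # tgt_feats:        (M, C) tokens of another frame of the *source* video
    #
    # For every target token, find its nearest source token in feature space
    # (the inter-frame correspondence), then copy over the *edited* feature
    # at that location, so the edit stays consistent across frames.
    src = F.normalize(src_feats, dim=-1)
    tgt = F.normalize(tgt_feats, dim=-1)
    nn_idx = (tgt @ src.T).argmax(dim=-1)  # (M,) nearest-neighbor field
    return edited_src_feats[nn_idx]        # (M, C) propagated edited tokens

# Toy usage with random tokens:
src = torch.randn(64, 320)
edited = src + 0.1 * torch.randn(64, 320)
tgt = torch.randn(64, 320)
out = propagate_features(src, edited, tgt)  # -> shape (64, 320)
```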
## Sample results
<img src="assets/videos.gif">
## Environment
```
conda create -n tokenflow python=3.9
conda activate tokenflow
pip install -r requirements.txt
```
## Preprocess
Preprocess your video using the following command:
```
python preprocess.py --data_path <data/myvideo.mp4> \
--inversion_prompt <'' or a string describing the video content>
```
Additional arguments:
```
--save_dir <latents>
--H <video height>
--W <video width>
--sd_version <Stable-Diffusion version>
--steps <number of inversion steps>
--save_steps <number of sampling steps that will be used later for editing>
--n_frames <number of frames>
```
More information on the arguments can be found in ``preprocess.py``.
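For example, a full invocation using one of the bundled sample videos might look like this (the flag values here are illustrative; choose ones that match your video):
```
python preprocess.py --data_path data/woman-running.mp4 \
                     --inversion_prompt 'a woman running' \
                     --save_dir latents \
                     --steps 500 \
                     --save_steps 50 \
                     --n_frames 40
```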
### Note:
The video reconstruction will be saved as ``inverted.mp4``. A good reconstruction is required for successful editing with our method.
## Editing
- TokenFlow is designed for structure-preserving video edits.
- Our method is built on top of an image editing technique (e.g., Plug-and-Play, ControlNet, etc.); it is therefore important to ensure that the edit works with the chosen base technique.
- The LDM decoder may introduce some jitter, depending on the original video.
To edit your video, first create a yaml config as in ``configs/config_pnp.yaml``.
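For reference, a minimal config might look like the sketch below (the key names are illustrative; treat ``configs/config_pnp.yaml`` in the repo as the authoritative template):
```
# Illustrative sketch only -- the real key names live in configs/config_pnp.yaml
data_path: data/woman-running.mp4   # source video
latents_path: latents               # directory produced by preprocess.py
prompt: a marble sculpture of a woman running
sd_version: '2.1'                   # Stable-Diffusion version used for inversion
n_timesteps: 50                     # sampling steps for editing
guidance_scale: 7.5
```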
Then run
```
python run_tokenflow_pnp.py
```
Similarly, to use ControlNet or SDEdit, create a yaml config as in ``configs/config_controlnet.yaml`` or ``configs/config_sdedit.yaml`` and run ``python run_tokenflow_controlnet.py`` or ``python run_tokenflow_sdedit.py``, respectively.
## Citation
```
@article{tokenflow2023,
title = {TokenFlow: Consistent Diffusion Features for Consistent Video Editing},
author = {Geyer, Michal and Bar-Tal, Omer and Bagon, Shai and Dekel, Tali},
journal = {arXiv preprint arXiv:2307.10373},
year = {2023}
}
```