# Sequence to Sequence models with PyTorch
This repository contains implementations of Sequence to Sequence (Seq2Seq) models in PyTorch.
At present it has implementations for:
* Vanilla Sequence to Sequence models
* Attention based Sequence to Sequence models from https://arxiv.org/abs/1409.0473 and https://arxiv.org/abs/1508.04025
* Faster attention mechanisms using dot products between the **final** encoder and decoder hidden states
* Sequence to Sequence autoencoders (experimental)
## Sequence to Sequence models
A vanilla sequence to sequence model, as presented in https://arxiv.org/abs/1409.3215 and https://arxiv.org/abs/1406.1078, consists of using a recurrent neural network such as an LSTM (http://dl.acm.org/citation.cfm?id=1246450) or GRU (https://arxiv.org/abs/1412.3555) to encode a sequence of words or characters in a *source* language into a fixed-length vector representation, and then decoding from that representation using another RNN in the *target* language.
![Sequence to Sequence](/images/Seq2Seq.png)
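As a concrete illustration, here is a minimal, hypothetical PyTorch sketch of this encode-then-decode structure (module and variable names are illustrative, not the repo's `model.py`; teacher forcing is assumed):

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Minimal encoder-decoder sketch: encode the source into a fixed-length
    state, then decode the target conditioned on that state."""
    def __init__(self, src_vocab, trg_vocab, emb_dim=32, hid_dim=64):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb_dim)
        self.trg_emb = nn.Embedding(trg_vocab, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.decoder = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, trg_vocab)

    def forward(self, src, trg):
        # Encode the source; keep only the final (h, c) as the summary vector.
        _, state = self.encoder(self.src_emb(src))
        # Decode the target conditioned on that summary (teacher forcing).
        dec_out, _ = self.decoder(self.trg_emb(trg), state)
        return self.out(dec_out)  # (batch, trg_len, trg_vocab) logits

model = Seq2Seq(src_vocab=100, trg_vocab=120)
src = torch.randint(0, 100, (4, 7))   # batch of 4 source sequences, length 7
trg = torch.randint(0, 120, (4, 5))   # batch of 4 target prefixes, length 5
logits = model(src, trg)
print(logits.shape)  # torch.Size([4, 5, 120])
```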
An extension of sequence to sequence models that incorporates an attention mechanism was presented in https://arxiv.org/abs/1409.0473; it uses information from the encoder RNN's hidden states in the source language at each time step of the decoder RNN. This attention mechanism significantly improves performance on tasks like machine translation. A few variants of the attention model for the task of machine translation are presented in https://arxiv.org/abs/1508.04025.
![Sequence to Sequence with attention](/images/Seq2SeqAttention.png)
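A single step of such an attention mechanism can be sketched as follows, using the dot-product scoring variant from https://arxiv.org/abs/1508.04025 (the function name and shapes are illustrative, not the repo's API):

```python
import torch
import torch.nn.functional as F

def dot_attention_step(dec_hidden, enc_outputs):
    """One decoder step of dot-product attention (illustrative sketch).
    dec_hidden: (batch, hid), enc_outputs: (batch, src_len, hid)."""
    # Score each encoder state by its dot product with the decoder state.
    scores = torch.bmm(enc_outputs, dec_hidden.unsqueeze(2)).squeeze(2)  # (batch, src_len)
    weights = F.softmax(scores, dim=1)                                   # attention distribution
    # Context vector: attention-weighted sum of encoder states.
    context = torch.bmm(weights.unsqueeze(1), enc_outputs).squeeze(1)    # (batch, hid)
    return context, weights

batch, src_len, hid = 4, 7, 64
enc_outputs = torch.randn(batch, src_len, hid)
dec_hidden = torch.randn(batch, hid)
context, weights = dot_attention_step(dec_hidden, enc_outputs)
print(context.shape)  # torch.Size([4, 64]); weights sum to 1 per example
```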
The repository also contains a simpler and faster variant of the attention mechanism that doesn't attend over the encoder's hidden states at each decoder time step. Instead, it computes a single batched dot product between all the hidden states of the decoder and encoder once, after the decoder has processed all inputs in the target. This comes at a minor cost in model performance. One advantage of this variant is that the cuDNN LSTM can be used in the attention-based decoder as well, since the attention is computed after running through all the decoder inputs.
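Because attention is deferred until after the full decoder pass, the whole computation collapses into one batched matrix multiplication over all decoder steps at once. A hypothetical sketch (shapes and names are illustrative):

```python
import torch
import torch.nn.functional as F

# All hidden states from a full encoder and decoder pass. Running the decoder
# to completion first is what allows a fused cuDNN LSTM to produce dec_outputs.
batch, src_len, trg_len, hid = 4, 7, 5, 64
enc_outputs = torch.randn(batch, src_len, hid)
dec_outputs = torch.randn(batch, trg_len, hid)

# One batched matmul yields every (decoder step, encoder step) score at once.
scores = torch.bmm(dec_outputs, enc_outputs.transpose(1, 2))  # (batch, trg_len, src_len)
weights = F.softmax(scores, dim=2)
# Context vectors for all decoder steps in a single batched matmul.
contexts = torch.bmm(weights, enc_outputs)                    # (batch, trg_len, hid)
print(contexts.shape)  # torch.Size([4, 5, 64])
```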
## Results on English - French WMT14
The following presents the model architecture and results obtained when training on the WMT14 English-French dataset. The training data is the English-French bitext from Europarl-v7. The validation dataset is newstest2011.
The model was trained with the following configuration:
* Source and target word embedding dimensions - 512
* Source and target LSTM hidden dimensions - 1024
* Encoder - 2 Layer Bidirectional LSTM
* Decoder - 1 Layer LSTM
* Optimization - Adam with a learning rate of 0.0001 and a batch size of 80
* Decoding - Greedy decoding (argmax)
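These settings map naturally onto a JSON config file such as `config_en_fr_attention_wmt14.json`. The key names below are purely illustrative (the actual schema is whatever `nmt.py` reads):

```json
{
  "model": {
    "src_emb_dim": 512,
    "trg_emb_dim": 512,
    "src_hidden_dim": 1024,
    "trg_hidden_dim": 1024,
    "n_layers_src": 2,
    "bidirectional": true,
    "n_layers_trg": 1
  },
  "training": {
    "optimizer": "adam",
    "lrate": 0.0001,
    "batch_size": 80
  }
}
```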
| Model | BLEU | Train Time Per Epoch |
| ------------- | ------------- | ------------- |
| Seq2Seq | 11.82 | 2h 50min |
| Seq2Seq FastAttention | 18.89 | 3h 45min |
| Seq2Seq Attention | 22.60 | 4h 47min |
Times reported are on a pre-2016 NVIDIA GeForce Titan X.
## Running
To run, edit the config file and execute `python nmt.py --config <your_config_file>`
NOTE: This only runs on a GPU for now.