[![Build Status](https://api.travis-ci.org/OpenNMT/OpenNMT-tf.svg?branch=master)](https://travis-ci.org/OpenNMT/OpenNMT-tf) [![PyPI version](https://badge.fury.io/py/OpenNMT-tf.svg)](https://badge.fury.io/py/OpenNMT-tf) [![Documentation](https://img.shields.io/badge/docs-latest-blue.svg)](https://opennmt.net/OpenNMT-tf/) [![Gitter](https://badges.gitter.im/OpenNMT/OpenNMT-tf.svg)](https://gitter.im/OpenNMT/OpenNMT-tf?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge)
# OpenNMT-tf
OpenNMT-tf is a general-purpose sequence learning toolkit using TensorFlow 2.0. While neural machine translation is the main target task, it has been designed to more generally support:
* sequence to sequence mapping
* sequence tagging
* sequence classification
* language modeling
The project is production-oriented and comes with [backward compatibility guarantees](https://github.com/OpenNMT/OpenNMT-tf/blob/master/CHANGELOG.md).
## Key features
### Modular model architecture
Models are described with code to allow training custom architectures and overriding default behavior. For example, the following instance defines a sequence to sequence model with 2 concatenated input features, a self-attentional encoder, and an attentional RNN decoder sharing its input and output embeddings:
```python
import opennmt
import tensorflow_addons as tfa

opennmt.models.SequenceToSequence(
    source_inputter=opennmt.inputters.ParallelInputter(
        [opennmt.inputters.WordEmbedder(embedding_size=256),
         opennmt.inputters.WordEmbedder(embedding_size=256)],
        reducer=opennmt.layers.ConcatReducer(axis=-1)),
    target_inputter=opennmt.inputters.WordEmbedder(embedding_size=512),
    encoder=opennmt.encoders.SelfAttentionEncoder(num_layers=6),
    decoder=opennmt.decoders.AttentionalRNNDecoder(
        num_layers=4,
        num_units=512,
        attention_mechanism_class=tfa.seq2seq.LuongAttention),
    share_embeddings=opennmt.models.EmbeddingsSharingLevel.TARGET)
```
The [`opennmt`](https://opennmt.net/OpenNMT-tf/package/opennmt.html) package exposes other building blocks that can be used to design:
* [multiple input features](https://opennmt.net/OpenNMT-tf/package/opennmt.inputters.ParallelInputter.html)
* [mixed embedding representation](https://opennmt.net/OpenNMT-tf/package/opennmt.inputters.MixedInputter.html)
* [multi-source context](https://opennmt.net/OpenNMT-tf/package/opennmt.inputters.ParallelInputter.html)
* [cascaded](https://opennmt.net/OpenNMT-tf/package/opennmt.encoders.SequentialEncoder.html) or [multi-column](https://opennmt.net/OpenNMT-tf/package/opennmt.encoders.ParallelEncoder.html) encoder
* [hybrid sequence to sequence models](https://opennmt.net/OpenNMT-tf/package/opennmt.models.SequenceToSequence.html)
Standard models such as the Transformer are defined in a [model catalog](https://github.com/OpenNMT/OpenNMT-tf/blob/master/opennmt/models/catalog.py) and can be used without additional configuration.
*Find more information about model configuration in the [documentation](https://opennmt.net/OpenNMT-tf/model.html).*
### Full TensorFlow 2.0 integration
OpenNMT-tf is fully integrated in the TensorFlow 2.0 ecosystem:
* Reusable layers extending [`tf.keras.layers.Layer`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Layer)
* Multi-GPU training with [`tf.distribute`](https://www.tensorflow.org/api_docs/python/tf/distribute)
* Mixed precision support via a [graph optimization pass](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/train/experimental/enable_mixed_precision_graph_rewrite)
* Visualization with [TensorBoard](https://www.tensorflow.org/tensorboard)
* `tf.function` graph tracing that can be [exported to a SavedModel](https://opennmt.net/OpenNMT-tf/serving.html) and served with [TensorFlow Serving](https://github.com/OpenNMT/OpenNMT-tf/tree/master/examples/serving/tensorflow_serving) or [Python](https://github.com/OpenNMT/OpenNMT-tf/tree/master/examples/serving/python)
### Dynamic data pipeline
OpenNMT-tf does not require compiling the data before training. Instead, it can read text files directly and preprocess the data as needed during training. This allows [on-the-fly tokenization](https://opennmt.net/OpenNMT-tf/tokenization.html) and data augmentation by injecting random noise.
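The idea can be illustrated outside of TensorFlow: each raw line is tokenized and optionally noised lazily, at iteration time, instead of in a separate compilation step. A minimal Python sketch of the concept (not OpenNMT-tf's actual implementation; the word-dropout noise here is just one example of random noise injection):

```python
import random

def noisy_examples(lines, dropout=0.1, seed=0):
    """Yield tokenized examples, randomly dropping words as noise.

    Tokenization and noise are applied on the fly while iterating,
    mimicking a dynamic data pipeline over raw text files.
    """
    rng = random.Random(seed)
    for line in lines:
        tokens = line.split()  # stand-in for a real tokenizer
        kept = [t for t in tokens if rng.random() >= dropout]
        yield kept or tokens  # never emit an empty example

corpus = ["hello world !", "on the fly tokenization"]
examples = list(noisy_examples(corpus))
```

Because preprocessing happens per epoch, the same source line can yield a different noised example each time it is read.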
### Model fine-tuning
OpenNMT-tf supports model fine-tuning workflows:
* Model weights can be transferred to new word vocabularies, e.g. to inject domain terminology before fine-tuning on in-domain data
* [Contrastive learning](https://ai.google/research/pubs/pub48253/) to reduce word omission errors
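The first point, transferring weights to a new word vocabulary, can be sketched in a few lines (a simplified illustration, not OpenNMT-tf's actual `update_vocab` implementation): embedding rows are kept for words present in both vocabularies, while new words such as injected domain terminology get freshly initialized rows.

```python
import random

def transfer_embeddings(old_vocab, old_embeddings, new_vocab, dim, seed=0):
    """Map embedding rows from an old vocabulary to a new one.

    Shared words keep their trained vector; words only in the new
    vocabulary receive a small random initialization.
    """
    rng = random.Random(seed)
    old_index = {word: i for i, word in enumerate(old_vocab)}
    new_embeddings = []
    for word in new_vocab:
        if word in old_index:
            new_embeddings.append(old_embeddings[old_index[word]])
        else:
            new_embeddings.append([rng.uniform(-0.1, 0.1) for _ in range(dim)])
    return new_embeddings

old_vocab = ["the", "cat", "sat"]
old_emb = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
new_emb = transfer_embeddings(old_vocab, old_emb, ["cat", "catheter"], dim=2)
```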
### Source-target alignment
Sequence to sequence models can be trained with [guided alignment](https://arxiv.org/abs/1607.01628), and alignment information is returned as part of the translation API.
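Concretely, guided alignment training adds a loss term that pushes the model's attention weights toward reference alignments. A simplified version of that cross-entropy term, following the linked paper rather than OpenNMT-tf's exact code:

```python
import math

def guided_alignment_loss(attention, alignment, epsilon=1e-9):
    """Cross-entropy between attention weights and a reference alignment.

    `attention[t][s]` is the attention weight on source position s when
    producing target position t; `alignment` is the 0/1 reference matrix.
    """
    num_targets = len(attention)
    total = 0.0
    for attn_row, align_row in zip(attention, alignment):
        for weight, ref in zip(attn_row, align_row):
            if ref:
                total -= math.log(weight + epsilon)
    return total / num_targets

# Attention that matches the reference alignment incurs (near) zero loss.
perfect = guided_alignment_loss([[1.0, 0.0], [0.0, 1.0]],
                                [[1, 0], [0, 1]])
```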
---
OpenNMT-tf also implements most of the techniques commonly used to train and evaluate sequence models, such as:
* automatic evaluation during training
* multiple decoding strategies: greedy search, beam search, random sampling
* N-best rescoring
* gradient accumulation
* scheduled sampling
* checkpoint averaging
* ... and more!
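As one concrete example from this list, checkpoint averaging simply averages the value of each variable over several checkpoints (OpenNMT-tf exposes this as the `average_checkpoints` run type). A minimal sketch over plain Python dicts standing in for checkpoint variables:

```python
def average_checkpoints(checkpoints):
    """Average each variable across a list of checkpoints.

    Each checkpoint is a dict mapping variable names to lists of floats;
    a real implementation does the same element-wise over tensors.
    """
    n = len(checkpoints)
    return {
        name: [sum(values) / n
               for values in zip(*(ckpt[name] for ckpt in checkpoints))]
        for name in checkpoints[0]
    }

averaged = average_checkpoints([
    {"w": [1.0, 2.0]},
    {"w": [3.0, 4.0]},
])
```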
*See the [documentation](https://opennmt.net/OpenNMT-tf/) to learn how to use these features.*
## Usage
OpenNMT-tf requires:
* Python >= 3.5
We recommend installing it with `pip`:
```bash
pip install --upgrade pip
pip install OpenNMT-tf
```
*See the [documentation](https://opennmt.net/OpenNMT-tf/installation.html) for more information.*
### Command line
OpenNMT-tf comes with several command line utilities to prepare data, train, and evaluate models.
For all tasks involving a model execution, OpenNMT-tf uses a single entry point: `onmt-main`. A typical OpenNMT-tf run consists of 3 elements:
* the **model** type
* the **parameters** described in a YAML file
* the **run** type such as `train`, `eval`, `infer`, `export`, `score`, `average_checkpoints`, or `update_vocab`
that are passed to the main script:
```
onmt-main --model_type <model> --config <config_file.yml> --auto_config <run_type> <run_options>
```
*For more information and examples on how to use OpenNMT-tf, please visit [our documentation](https://opennmt.net/OpenNMT-tf).*
### Library
OpenNMT-tf also exposes [well-defined and stable APIs](https://opennmt.net/OpenNMT-tf/package/opennmt.html), from high-level training utilities to low-level model layers and dataset transformations.
For example, the `Runner` class can be used to train and evaluate models with a few lines of code:
```python
import opennmt

config = {
    "model_dir": "/data/wmt-ende/checkpoints/",
    "data": {
        "source_vocabulary": "/data/wmt-ende/joint-vocab.txt",
        "target_vocabulary": "/data/wmt-ende/joint-vocab.txt",
        "train_features_file": "/data/wmt-ende/train.en",
        "train_labels_file": "/data/wmt-ende/train.de",
        "eval_features_file": "/data/wmt-ende/valid.en",
        "eval_labels_file": "/data/wmt-ende/valid.de",
    }
}
model = opennmt.models.TransformerBase()
runner = opennmt.Runner(model, config, auto_config=True)
runner.train(num_devices=2, with_eval=True)
```
Here is another example using OpenNMT-tf to run efficient beam search with a self-attentional decoder:
```python
import opennmt
import tensorflow as tf

# `memory` is the encoder output of shape [batch, time, depth], and
# `memory_sequence_length` holds the length of each source sequence.
decoder = opennmt.decoders.SelfAttentionDecoder(num_layers=6)
decoder.initialize(vocab_size=32000)
initial_state = decoder.initial_state(
    memory=memory,
    memory_sequence_length=memory_sequence_length)
batch_size = tf.shape(memory)[0]
start_ids = tf.fill([batch_size], opennmt.START_OF_SENTENCE_ID)
# `target_embedding` is the embedding variable of the target vocabulary.
decoding_result = decoder.dynamic_decode(
    target_embedding,
    start_ids=start_ids,
    initial_state=initial_state,
    decoding_strategy=opennmt.utils.BeamSearch(4))
```
More examples using OpenNMT-tf as a library can be found online:
* The directory [examples/library](https://github.com/OpenNMT/OpenNMT-tf/tree/master/examples/library) contains additional examples that use OpenNMT-tf as a library
* [nmt-wizard-docker](https://github.com/OpenNMT/nmt-wizard-docker) uses the high-level `opennmt.Runner` API to wrap OpenNMT-tf with a custom interface for training, translating, and serving models