# BERT
**\*\*\*\*\* New May 31st, 2019: Whole Word Masking Models \*\*\*\*\***
This is a release of several new models which were the result of an improvement
in the pre-processing code.
In the original pre-processing code, we randomly select WordPiece tokens to
mask. For example:
`Input Text: the man jumped up , put his basket on phil ##am ##mon ' s head`
`Original Masked Input: [MASK] man [MASK] up , put his [MASK] on phil [MASK] ##mon ' s head`
The new technique is called Whole Word Masking. In this case, we always mask
*all* of the tokens corresponding to a word at once. The overall masking
rate remains the same.
`Whole Word Masked Input: the man [MASK] up , put his basket on [MASK] [MASK] [MASK] ' s head`
The training is identical -- we still predict each masked WordPiece token
independently. The improvement comes from the fact that the original prediction
task was too 'easy' for words that had been split into multiple WordPieces.
This can be enabled during data generation by passing the flag
`--do_whole_word_mask=True` to `create_pretraining_data.py`.
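As a rough illustration of what the flag changes, the sketch below groups `##`-prefixed WordPiece continuations with the token that starts the word and masks whole words together. This is a minimal sketch with hypothetical helper names and defaults, not the actual logic in `create_pretraining_data.py`:

```python
import random

def whole_word_mask(tokens, mask_rate=0.15, seed=12345):
    """Sketch of whole word masking over an already-WordPiece-tokenized list.

    Example input: ["the", "man", "jumped", "up", ",", "put", "his",
                    "basket", "on", "phil", "##am", "##mon", "'", "s", "head"]
    """
    rng = random.Random(seed)

    # Group token indices so "##" continuations stay with the word they extend.
    words = []
    for i, token in enumerate(tokens):
        if token.startswith("##") and words:
            words[-1].append(i)
        else:
            words.append([i])

    # Keep the same overall masking rate, but choose whole words at a time.
    budget = max(1, int(round(len(tokens) * mask_rate)))
    rng.shuffle(words)

    output = list(tokens)
    masked = 0
    for group in words:
        if masked + len(group) > budget:
            continue
        for i in group:
            output[i] = "[MASK]"  # every piece of the chosen word is masked
        masked += len(group)
    return output
```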
Pre-trained models with Whole Word Masking are linked below. The data and
training were otherwise identical, and the models have identical structure and
vocab to the original models. We only include BERT-Large models. When using
these models, please make it clear in the paper that you are using the Whole
Word Masking variant of BERT-Large.
* **[`BERT-Large, Uncased (Whole Word Masking)`](https://storage.googleapis.com/bert_models/2019_05_30/wwm_uncased_L-24_H-1024_A-16.zip)**:
24-layer, 1024-hidden, 16-heads, 340M parameters
* **[`BERT-Large, Cased (Whole Word Masking)`](https://storage.googleapis.com/bert_models/2019_05_30/wwm_cased_L-24_H-1024_A-16.zip)**:
24-layer, 1024-hidden, 16-heads, 340M parameters
Model | SQuAD 1.1 F1/EM | MultiNLI Accuracy
---------------------------------------- | :-------------: | :----------------:
BERT-Large, Uncased (Original) | 91.0/84.3 | 86.05
BERT-Large, Uncased (Whole Word Masking) | 92.8/86.7 | 87.07
BERT-Large, Cased (Original) | 91.5/84.8 | 86.09
BERT-Large, Cased (Whole Word Masking) | 92.9/86.7 | 86.46
**\*\*\*\*\* New February 7th, 2019: TfHub Module \*\*\*\*\***
BERT has been uploaded to [TensorFlow Hub](https://tfhub.dev). See
`run_classifier_with_tfhub.py` for an example of how to use the TF Hub module,
or run an example in the browser on
[Colab](https://colab.sandbox.google.com/github/google-research/bert/blob/master/predicting_movie_reviews_with_bert_on_tf_hub.ipynb).
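For reference, here is a minimal sketch of wiring the module into a TF 1.x graph. The module handle and the `tokens` signature follow the pattern used in `run_classifier_with_tfhub.py`, but treat the exact handle and output names as assumptions and consult that script for the authoritative version:

```python
import tensorflow as tf
import tensorflow_hub as hub

# Assumed handle for the uncased BERT-Base module on TF Hub.
BERT_MODEL_HUB = "https://tfhub.dev/google/bert_uncased_L-12_H-768_A-12/1"

def bert_features(input_ids, input_mask, segment_ids):
    """Returns (pooled_output, sequence_output) tensors from the hub module."""
    bert_module = hub.Module(BERT_MODEL_HUB, trainable=True)
    outputs = bert_module(
        inputs=dict(input_ids=input_ids,       # [batch, seq_len] int32
                    input_mask=input_mask,     # [batch, seq_len] int32
                    segment_ids=segment_ids),  # [batch, seq_len] int32
        signature="tokens",
        as_dict=True)
    # "pooled_output": [batch, hidden] summary of the [CLS] token;
    # "sequence_output": [batch, seq_len, hidden] per-token features.
    return outputs["pooled_output"], outputs["sequence_output"]
```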
**\*\*\*\*\* New November 23rd, 2018: Un-normalized multilingual model + Thai +
Mongolian \*\*\*\*\***
We uploaded a new multilingual model which does *not* perform any normalization
on the input (no lower casing, accent stripping, or Unicode normalization), and
additionally includes Thai and Mongolian.
**It is recommended to use this version for developing multilingual models,
especially on languages with non-Latin alphabets.**
This does not require any code changes, and can be downloaded here:
* **[`BERT-Base, Multilingual Cased`](https://storage.googleapis.com/bert_models/2018_11_23/multi_cased_L-12_H-768_A-12.zip)**:
104 languages, 12-layer, 768-hidden, 12-heads, 110M parameters
**\*\*\*\*\* New November 15th, 2018: SOTA SQuAD 2.0 System \*\*\*\*\***
We released code changes to reproduce our 83% F1 SQuAD 2.0 system, which is
currently 1st place on the leaderboard by 3%. See the SQuAD 2.0 section of the
README for details.
**\*\*\*\*\* New November 5th, 2018: Third-party PyTorch and Chainer versions of
BERT available \*\*\*\*\***
NLP researchers from HuggingFace made a
[PyTorch version of BERT available](https://github.com/huggingface/pytorch-pretrained-BERT)
which is compatible with our pre-trained checkpoints and is able to reproduce
our results. Sosuke Kobayashi also made a
[Chainer version of BERT available](https://github.com/soskek/bert-chainer)
(Thanks!) We were not involved in the creation or maintenance of the PyTorch
implementation so please direct any questions towards the authors of that
repository.
**\*\*\*\*\* New November 3rd, 2018: Multilingual and Chinese models available
\*\*\*\*\***
We have made two new BERT models available:
* **[`BERT-Base, Multilingual`](https://storage.googleapis.com/bert_models/2018_11_03/multilingual_L-12_H-768_A-12.zip)
(Not recommended, use `Multilingual Cased` instead)**: 102 languages,
12-layer, 768-hidden, 12-heads, 110M parameters
* **[`BERT-Base, Chinese`](https://storage.googleapis.com/bert_models/2018_11_03/chinese_L-12_H-768_A-12.zip)**:
Chinese Simplified and Traditional, 12-layer, 768-hidden, 12-heads, 110M
parameters
We use character-based tokenization for Chinese, and WordPiece tokenization for
all other languages. Both models should work out-of-the-box without any code
changes. We did update the implementation of `BasicTokenizer` in
`tokenization.py` to support Chinese character tokenization, so please update if
you forked it. However, we did not change the tokenization API.
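If you call the tokenizer directly, the entry point is still `tokenization.FullTokenizer`; `BasicTokenizer` now splits CJK characters before WordPiece is applied, so no call-site changes are needed. A hedged usage sketch (the vocab path is a placeholder for wherever you unpacked a model):

```python
import tokenization  # tokenization.py from this repository

# Placeholder path; point vocab_file at the vocab.txt of whichever model you
# downloaded, and set do_lower_case to match that model (False for Cased models).
tokenizer = tokenization.FullTokenizer(
    vocab_file="multi_cased_L-12_H-768_A-12/vocab.txt",
    do_lower_case=False)

# Chinese characters are split individually by BasicTokenizer and then run
# through WordPiece; text in other languages goes straight to WordPiece.
print(tokenizer.tokenize("BERT 支持中文"))
```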
For more, see the
[Multilingual README](https://github.com/google-research/bert/blob/master/multilingual.md).
**\*\*\*\*\* End new information \*\*\*\*\***
## Introduction
**BERT**, or **B**idirectional **E**ncoder **R**epresentations from
**T**ransformers, is a new method of pre-training language representations which
obtains state-of-the-art results on a wide array of Natural Language Processing
(NLP) tasks.
Our academic paper which describes BERT in detail and provides full results on a
number of tasks can be found here:
[https://arxiv.org/abs/1810.04805](https://arxiv.org/abs/1810.04805).
To give a few numbers, here are the results on the
[SQuAD v1.1](https://rajpurkar.github.io/SQuAD-explorer/) question answering
task:
SQuAD v1.1 Leaderboard (Oct 8th 2018) | Test EM | Test F1
------------------------------------- | :------: | :------:
1st Place Ensemble - BERT | **87.4** | **93.2**
2nd Place Ensemble - nlnet | 86.0 | 91.7
1st Place Single Model - BERT | **85.1** | **91.8**
2nd Place Single Model - nlnet | 83.5 | 90.1
And several natural language inference tasks:
System | MultiNLI | Question NLI | SWAG
----------------------- | :------: | :----------: | :------:
BERT | **86.7** | **91.1** | **86.3**
OpenAI GPT (Prev. SOTA) | 82.2 | 88.1 | 75.0
Plus many other tasks.
Moreover, these results were all obtained with almost no task-specific neural
network architecture design.
If you already know what BERT is and you just want to get started, you can
[download the pre-trained models](#pre-trained-models) and
[run a state-of-the-art fine-tuning](#fine-tuning-with-bert) in only a few
minutes.
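To make the "almost no task-specific architecture design" point concrete: for classification, fine-tuning adds a single projection layer on top of BERT's pooled `[CLS]` output, roughly as `run_classifier.py` does. The sketch below is a simplified outline under that assumption, not the script itself:

```python
import tensorflow as tf
import modeling  # modeling.py from this repository

def classifier_logits(bert_config, is_training, input_ids, input_mask,
                      segment_ids, num_labels):
    """Minimal classification head on top of a pre-trained BERT encoder."""
    model = modeling.BertModel(
        config=bert_config,
        is_training=is_training,
        input_ids=input_ids,
        input_mask=input_mask,
        token_type_ids=segment_ids)

    # [batch_size, hidden_size] representation derived from the [CLS] token.
    pooled = model.get_pooled_output()
    if is_training:
        pooled = tf.nn.dropout(pooled, keep_prob=0.9)

    # The only new, task-specific parameters: one dense layer to the label space.
    return tf.layers.dense(
        pooled, num_labels,
        kernel_initializer=tf.truncated_normal_initializer(stddev=0.02))
```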
## What is BERT?
BERT is a method of pre-training language representations, meaning that we train
a general-purpose "language understanding" model on a large text corpus (like
Wikipedia), and then use that model for downstream NLP tasks that we care about
(like question answering). BERT outperforms previous methods because it is the
first *unsupervised*, *deeply bidirectional* system for pre-training NLP.
*Unsupervised* means that BERT was trained using only a plain text corpus, which
is important because an enormous amount of plain text data is publicly available
on the web in many languages.
Pre-trained representations can also either be *context-free* or *contextual*,
and contextual representations can further be *unidirectional* or
*bidirectional*. Context-free models such as
[word2vec](https://www.tensorflow.org/tutorials/representation/word2vec) or
[GloVe](https://nlp.stanford.edu/projects/glove/) generate a single "word
embedding" representation for each word in the vocabulary, so `bank` would have
the same representation in `bank deposit` and `river bank`. Contextual models
instead generate a representation of each word that is based on the other words
in the sentence.