---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
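For the sentence-similarity use case itself, a short follow-up using the `util.cos_sim` helper that ships with sentence-transformers (continuing from the snippet above):

```python
from sentence_transformers import util

# Pairwise cosine similarities between all embeddings (a 2x2 tensor here);
# the diagonal is 1.0 since each sentence is identical to itself
cosine_scores = util.cos_sim(embeddings, embeddings)
print(cosine_scores)
```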
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch


# Mean pooling: take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)


# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling (mean pooling in this case)
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```
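For downstream similarity or semantic search, pooled embeddings are typically L2-normalized so that cosine similarity reduces to a plain dot product. A minimal sketch continuing from the code above (the normalization step is an assumption for illustration; it is not part of this model's stored pipeline):

```python
import torch.nn.functional as F

# Normalize each embedding to unit length; cosine similarity is then a matrix product
normalized = F.normalize(sentence_embeddings, p=2, dim=1)
cosine_scores = normalized @ normalized.T
print(cosine_scores)
```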
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
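Because the Pooling module is configured for mean pooling (`pooling_mode_mean_tokens: True`) with no normalization, the two usage paths above should produce matching vectors. A minimal sanity check, assuming both snippets were run in the same session:

```python
import torch

# `embeddings` comes from the Sentence-Transformers snippet (a NumPy array),
# `sentence_embeddings` from the manual Transformers snippet (a torch tensor)
assert torch.allclose(torch.from_numpy(embeddings), sentence_embeddings, atol=1e-5)
```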
## Citing & Authors
<!--- Describe where people can find more information -->
BERT (Bidirectional Encoder Representations from Transformers) is a Transformer-based bidirectional encoder representation model introduced by Google in 2018. It attracted enormous attention in natural language processing (NLP) and has driven much of the field's progress since. By pre-training on large amounts of unlabeled text in a self-supervised fashion, BERT learns rich linguistic and semantic knowledge that transfers well to a wide range of NLP tasks.

The core characteristic of BERT is its bidirectionality: unlike traditional RNN- or LSTM-based models, it uses the Transformer architecture, which lets the model attend to left and right context simultaneously and thus interpret text more accurately.

Google released BERT as open source, making this powerful tool freely available for NLP research. By fine-tuning BERT, researchers have achieved significant gains on tasks such as text classification, named entity recognition, question answering, and sentiment analysis. BERT has also inspired a family of improved models built on its architecture, such as RoBERTa and ALBERT.
## Package Contents
The distributed archive `sbert.zip` contains 12 files:
- `1_Pooling/config.json` (196 B)
- `config_sentence_transformers.json` (129 B)
- `tokenizer.json` (429 KB)
- `pytorch_model.bin` (390.19 MB)
- `sentence_bert_config.json` (56 B)
- `config.json` (862 B)
- `tokenizer_config.json` (356 B)
- `modules.json` (242 B)
- `special_tokens_map.json` (132 B)
- `entry_embs.bin` (73 KB)
- `README.md` (3 KB)
- `vocab.txt` (107 KB)