# seqeval
seqeval is a Python framework for sequence labeling evaluation.
seqeval can evaluate the performance of chunking tasks such as named-entity recognition, part-of-speech tagging, and semantic role labeling.
seqeval is well-tested against the Perl script [conlleval](https://www.clips.uantwerpen.be/conll2002/ner/bin/conlleval.txt),
which can be used to measure the performance of a system that has processed the CoNLL-2000 shared task data.
## Supported features
seqeval supports the following schemes:
- IOB1
- IOB2
- IOE1
- IOE2
- IOBES (only in strict mode)
- BILOU (only in strict mode)

and the following metrics:
| metric | description |
|---|---|
| `accuracy_score(y_true, y_pred)` | Compute the accuracy. |
| `precision_score(y_true, y_pred)` | Compute the precision. |
| `recall_score(y_true, y_pred)` | Compute the recall. |
| `f1_score(y_true, y_pred)` | Compute the F1 score, also known as the balanced F-score or F-measure. |
| `classification_report(y_true, y_pred, digits=2)` | Build a text report showing the main classification metrics. `digits` is the number of digits used when formatting floating-point output values (default `2`). |
## Usage
seqeval supports two evaluation modes. You can pass one of the following modes to each metric:
- default
- strict
The default mode is compatible with [conlleval](https://www.clips.uantwerpen.be/conll2002/ner/bin/conlleval.txt). If you want to use the default mode, you don't need to specify it:
```python
>>> from seqeval.metrics import accuracy_score
>>> from seqeval.metrics import classification_report
>>> from seqeval.metrics import f1_score
>>> y_true = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
>>> y_pred = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
>>> f1_score(y_true, y_pred)
0.5
>>> print(classification_report(y_true, y_pred))
              precision    recall  f1-score   support

        MISC       0.00      0.00      0.00         1
         PER       1.00      1.00      1.00         1

   micro avg       0.50      0.50      0.50         2
   macro avg       0.50      0.50      0.50         2
weighted avg       0.50      0.50      0.50         2
```
In strict mode, the inputs are evaluated according to the specified scheme. This behavior differs from the default mode, which is designed to simulate conlleval. To use strict mode, specify both the `mode='strict'` and `scheme` arguments:
```python
>>> from seqeval.scheme import IOB2
>>> print(classification_report(y_true, y_pred, mode='strict', scheme=IOB2))
              precision    recall  f1-score   support

        MISC       0.00      0.00      0.00         1
         PER       1.00      1.00      1.00         1

   micro avg       0.50      0.50      0.50         2
   macro avg       0.50      0.50      0.50         2
weighted avg       0.50      0.50      0.50         2
```
A minimal example illustrating the difference between the default and strict modes:
```python
>>> from seqeval.metrics import classification_report
>>> from seqeval.scheme import IOB2
>>> y_true = [['B-NP', 'I-NP', 'O']]
>>> y_pred = [['I-NP', 'I-NP', 'O']]
>>> print(classification_report(y_true, y_pred))
              precision    recall  f1-score   support

          NP       1.00      1.00      1.00         1

   micro avg       1.00      1.00      1.00         1
   macro avg       1.00      1.00      1.00         1
weighted avg       1.00      1.00      1.00         1
>>> print(classification_report(y_true, y_pred, mode='strict', scheme=IOB2))
              precision    recall  f1-score   support

          NP       0.00      0.00      0.00         1

   micro avg       0.00      0.00      0.00         1
   macro avg       0.00      0.00      0.00         1
weighted avg       0.00      0.00      0.00         1
```
## Installation
To install seqeval, simply run:
```bash
pip install seqeval
```
## License
[MIT](https://github.com/chakki-works/seqeval/blob/master/LICENSE)
## Citation
```tex
@misc{seqeval,
  title={{seqeval}: A Python framework for sequence labeling evaluation},
  url={https://github.com/chakki-works/seqeval},
  note={Software available from https://github.com/chakki-works/seqeval},
  author={Hiroki Nakayama},
  year={2018},
}
```