<div align="center"><img src="assets/logo.png" width="350"></div>
<img src="assets/demo.png" >
## Introduction
YOLOX is an anchor-free version of YOLO, with a simpler design but better performance! It aims to bridge the gap between research and industrial communities.
For more details, please refer to our [report on Arxiv](https://arxiv.org/abs/2107.08430).
<img src="assets/git_fig.png" width="1000" >
## Updates!!
* 【2021/07/28】 We fixed a fatal [memory leak](https://github.com/Megvii-BaseDetection/YOLOX/issues/103).
* 【2021/07/26】 We now support [MegEngine](https://github.com/Megvii-BaseDetection/YOLOX/tree/main/demo/MegEngine) deployment.
* 【2021/07/20】 We have released our technical report on [Arxiv](https://arxiv.org/abs/2107.08430).
## Coming soon
- [ ] YOLOX-P6 and larger models.
- [ ] Objects365 pretraining.
- [ ] Transformer modules.
- [ ] More features as needed.
## Benchmark
#### Standard Models.
|Model |size |mAP<sup>test<br>0.5:0.95 | Speed V100<br>(ms) | Params<br>(M) |FLOPs<br>(G)| weights |
| ------ |:---: | :---: |:---: |:---: | :---: | :----: |
|[YOLOX-s](./exps/default/yolox_s.py) |640 |39.6 |9.8 |9.0 | 26.8 | [onedrive](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/EW62gmO2vnNNs5npxjzunVwB9p307qqygaCkXdTO88BLUg?e=NMTQYw)/[github](https://github.com/Megvii-BaseDetection/storage/releases/download/0.0.1/yolox_s.pth) |
|[YOLOX-m](./exps/default/yolox_m.py) |640 |46.4 |12.3 |25.3 |73.8| [onedrive](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/ERMTP7VFqrVBrXKMU7Vl4TcBQs0SUeCT7kvc-JdIbej4tQ?e=1MDo9y)/[github](https://github.com/Megvii-BaseDetection/storage/releases/download/0.0.1/yolox_m.pth) |
|[YOLOX-l](./exps/default/yolox_l.py) |640 |50.0 |14.5 |54.2| 155.6 | [onedrive](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/EWA8w_IEOzBKvuueBqfaZh0BeoG5sVzR-XYbOJO4YlOkRw?e=wHWOBE)/[github](https://github.com/Megvii-BaseDetection/storage/releases/download/0.0.1/yolox_l.pth) |
|[YOLOX-x](./exps/default/yolox_x.py) |640 |**51.2** | 17.3 |99.1 |281.9 | [onedrive](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/EdgVPHBziOVBtGAXHfeHI5kBza0q9yyueMGdT0wXZfI1rQ?e=tABO5u)/[github](https://github.com/Megvii-BaseDetection/storage/releases/download/0.0.1/yolox_x.pth) |
|[YOLOX-Darknet53](./exps/default/yolov3.py) |640 | 47.4 | 11.1 |63.7 | 185.3 | [onedrive](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/EZ-MV1r_fMFPkPrNjvbJEMoBLOLAnXH-XKEB77w8LhXL6Q?e=mf6wOc)/[github](https://github.com/Megvii-BaseDetection/storage/releases/download/0.0.1/yolox_darknet53.pth) |
#### Light Models.
|Model |size |mAP<sup>val<br>0.5:0.95 | Params<br>(M) |FLOPs<br>(G)| weights |
| ------ |:---: | :---: |:---: |:---: | :---: |
|[YOLOX-Nano](./exps/default/nano.py) |416 |25.3 | 0.91 |1.08 | [onedrive](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/EdcREey-krhLtdtSnxolxiUBjWMy6EFdiaO9bdOwZ5ygCQ?e=yQpdds)/[github](https://github.com/Megvii-BaseDetection/storage/releases/download/0.0.1/yolox_nano.pth) |
|[YOLOX-Tiny](./exps/default/yolox_tiny.py) |416 |32.8 | 5.06 |6.45 | [onedrive](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/EbZuinX5X1dJmNy8nqSRegABWspKw3QpXxuO82YSoFN1oQ?e=Q7V7XE)/[github](https://github.com/Megvii-BaseDetection/storage/releases/download/0.0.1/yolox_tiny_32dot8.pth) |
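The mAP columns above follow the COCO convention: AP averaged over ten IoU thresholds from 0.50 to 0.95 in steps of 0.05. A minimal sketch of that averaging, using made-up per-threshold AP values (not real YOLOX numbers):

```python
import numpy as np

# COCO-style mAP@0.5:0.95: average AP over IoU thresholds 0.50, 0.55, ..., 0.95
thresholds = np.arange(0.50, 1.00, 0.05)             # ten thresholds
ap_per_iou = np.linspace(0.6, 0.2, len(thresholds))  # toy APs: high at loose IoU, low at strict
map_50_95 = ap_per_iou.mean()                        # the single number reported in the table
```

AP@0.5 alone rewards loose localization; averaging up to 0.95 rewards tightly fitted boxes as well.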
## Quick Start
<details>
<summary>Installation</summary>
Step1. Install YOLOX.
```shell
git clone git@github.com:Megvii-BaseDetection/YOLOX.git
cd YOLOX
pip3 install -U pip && pip3 install -r requirements.txt
pip3 install -v -e . # or python3 setup.py develop
```
Step2. Install [apex](https://github.com/NVIDIA/apex).
```shell
# Skip this step if you don't plan to train a model.
git clone https://github.com/NVIDIA/apex
cd apex
pip3 install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./
```
Step3. Install [pycocotools](https://github.com/cocodataset/cocoapi).
```shell
pip3 install cython; pip3 install 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'
```
</details>
<details>
<summary>Demo</summary>
Step1. Download a pretrained model from the benchmark table.
Step2. Use either -n or -f to specify your detector's config. For example:
```shell
python tools/demo.py image -n yolox-s -c /path/to/your/yolox_s.pth --path assets/dog.jpg --conf 0.25 --nms 0.45 --tsize 640 --save_result --device [cpu/gpu]
```
or
```shell
python tools/demo.py image -f exps/default/yolox_s.py -c /path/to/your/yolox_s.pth --path assets/dog.jpg --conf 0.25 --nms 0.45 --tsize 640 --save_result --device [cpu/gpu]
```
Demo for video:
```shell
python tools/demo.py video -n yolox-s -c /path/to/your/yolox_s.pth --path /path/to/your/video --conf 0.25 --nms 0.45 --tsize 640 --save_result --device [cpu/gpu]
```
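The `--conf` and `--nms` flags above set the confidence threshold and the IoU threshold for non-maximum suppression. As a rough illustration of what NMS does (a minimal NumPy sketch of the idea, not the YOLOX implementation), suppression keeps the highest-scoring box and drops boxes that overlap it beyond the threshold:

```python
import numpy as np

def nms(boxes, scores, iou_thr=0.45):
    """Greedy NMS. boxes: (N, 4) as x1, y1, x2, y2; returns kept indices."""
    order = scores.argsort()[::-1]          # process boxes best-score first
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        # Intersection of box i with all remaining boxes
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thr]        # suppress boxes overlapping box i too much
    return keep
```

Lowering `--nms` suppresses more aggressively; raising `--conf` discards low-confidence detections before NMS even runs.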
</details>
<details>
<summary>Reproduce our results on COCO</summary>
Step1. Prepare COCO dataset
```shell
cd <YOLOX_HOME>
ln -s /path/to/your/COCO ./datasets/COCO
```
Step2. Reproduce our results on COCO by specifying -n:
```shell
python tools/train.py -n yolox-s -d 8 -b 64 --fp16 -o
yolox-m
yolox-l
yolox-x
```
* -d: number of GPU devices
* -b: total batch size; the recommended value is num-gpu * 8
* --fp16: mixed precision training
**Multi Machine Training**
We also support multi-node training. Just add the following args:
* --num\_machines: num of your total training nodes
* --machine\_rank: specify the rank of each node
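For example, a hypothetical two-node launch (the same command runs on each node, changing only `--machine_rank`; only the flags listed above are used, and the global batch size is shared across nodes):

```shell
# Hypothetical launch on node 0 of 2, with 8 GPUs per node.
# Run the same command with --machine_rank 1 on the second node.
python tools/train.py -n yolox-s -d 8 -b 128 --fp16 -o --num_machines 2 --machine_rank 0
```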
When using -f, the above commands are equivalent to:
```shell
python tools/train.py -f exps/default/yolox_s.py -d 8 -b 64 --fp16 -o
                         exps/default/yolox_m.py
                         exps/default/yolox_l.py
                         exps/default/yolox_x.py
```
</details>
<details>
<summary>Evaluation</summary>
We support batch testing for fast evaluation:
```shell
python tools/eval.py -n yolox-s -c yolox_s.pth -b 64 -d 8 --conf 0.001 [--fp16] [--fuse]
yolox-m
yolox-l
yolox-x
```
* --fuse: fuse conv and bn layers
* -d: number of GPUs used for evaluation. By default, all available GPUs are used.
* -b: total batch size across all GPUs
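The `--fuse` option folds each BatchNorm layer into the preceding convolution, which saves one layer's worth of work at inference time without changing the output. A minimal NumPy sketch of the folding algebra, shown for a 1x1 convolution so it reduces to a per-channel linear map (illustrative names, not YOLOX internals):

```python
import numpy as np

rng = np.random.default_rng(0)
cin, cout = 4, 3
w = rng.standard_normal((cout, cin))   # 1x1 conv weight
b = rng.standard_normal(cout)          # conv bias
gamma = rng.standard_normal(cout)      # BN scale
beta = rng.standard_normal(cout)       # BN shift
mean = rng.standard_normal(cout)       # BN running mean
var = rng.random(cout) + 0.1           # BN running variance
eps = 1e-5

# Fold BN into the conv: w' = w * gamma/sqrt(var+eps), b' = (b - mean)*scale + beta
scale = gamma / np.sqrt(var + eps)
w_fused = w * scale[:, None]
b_fused = (b - mean) * scale + beta

x = rng.standard_normal(cin)
y_ref = (w @ x + b - mean) * scale + beta   # conv followed by BN
y_fused = w_fused @ x + b_fused             # single fused conv
assert np.allclose(y_ref, y_fused)
```

The same rewrite applies per output channel of a k x k convolution, which is why fusion is exact rather than approximate.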
To reproduce the speed test, use the following command:
```shell
python tools/eval.py -n yolox-s -c yolox_s.pth -b 1 -d 1 --conf 0.001 --fp16 --fuse
yolox-m
yolox-l
yolox-x
```
</details>
<details open>
<summary>Tutorials</summary>
* [Training on custom data](docs/train_custom_data.md).
</details>
## Deployment
1. [MegEngine in C++ and Python](./demo/MegEngine)
2. [ONNX export and an ONNXRuntime demo](./demo/ONNXRuntime)
3. [TensorRT in C++ and Python](./demo/TensorRT)
4. [ncnn in C++ and Java](./demo/ncnn)
5. [OpenVINO in C++ and Python](./demo/OpenVINO)
## Third-party resources
* The ncnn android app with video support: [ncnn-android-yolox](https://github.com/FeiGeChuanShu/ncnn-android-yolox) from [FeiGeChuanShu](https://github.com/FeiGeChuanShu)
* YOLOX with Tengine support: [Tengine](https://github.com/OAID/Tengine/blob/tengine-lite/examples/tm_yolox.cpp) from [BUG1989](https://github.com/BUG1989)
* YOLOX + ROS2 Foxy: [YOLOX-ROS](https://github.com/Ar-Ray-code/YOLOX-ROS) from [Ar-Ray](https://github.com/Ar-Ray-code)
* YOLOX Deploy DeepStream: [YOLOX-deepstream](https://github.com/nanmi/YOLOX-deepstream) from [nanmi](https://github.com/nanmi)
* YOLOX ONNXRuntime C++ Demo: [lite.ai](https://github.com/DefTruth/lite.ai/blob/main/ort/cv/yolox.cpp) from [DefTruth](https://github.com/DefTruth)
* Converting darknet or yolov5 datasets to COCO format for YOLOX: [YOLO2COCO](https://github.com/RapidAI/YOLO2COCO) from [Daniel](https://github.com/znsoftm)
## Cite YOLOX
If you use YOLOX in your research, please cite our [report on Arxiv](https://arxiv.org/abs/2107.08430).