# Yet Another EfficientDet Pytorch
A PyTorch re-implementation of the official [EfficientDet](https://github.com/google/automl/tree/master/efficientdet) with SOTA performance in real time. Original paper: <https://arxiv.org/abs/1911.09070>
## Performance
### Pretrained weights and benchmark
The performance is very close to the paper's; it is still SOTA.
The speed/FPS test includes post-processing time, with no JIT or reduced-precision tricks.
| coefficient | pth_download | GPU Mem(MB) | FPS | Extreme FPS (Batchsize 32) | mAP 0.5:0.95(this repo) | mAP 0.5:0.95(paper) |
| :-----: | :-----: | :------: | :------: | :------: | :-----: | :-----: |
| D0 | [efficientdet-d0.pth](https://github.com/zylo117/Yet-Another-Efficient-Pytorch/releases/download/1.0/efficientdet-d0.pth) | 1049 | 36.20 | 163.14 | 33.1 | 33.8
| D1 | [efficientdet-d1.pth](https://github.com/zylo117/Yet-Another-Efficient-Pytorch/releases/download/1.0/efficientdet-d1.pth) | 1159 | 29.69 | 63.08 | 38.8 | 39.6
| D2 | [efficientdet-d2.pth](https://github.com/zylo117/Yet-Another-Efficient-Pytorch/releases/download/1.0/efficientdet-d2.pth) | 1321 | 26.50 | 40.99 | 42.1 | 43.0
| D3 | [efficientdet-d3.pth](https://github.com/zylo117/Yet-Another-Efficient-Pytorch/releases/download/1.0/efficientdet-d3.pth) | 1647 | 22.73 | - | 45.6 | 45.8
| D4 | [efficientdet-d4.pth](https://github.com/zylo117/Yet-Another-Efficient-Pytorch/releases/download/1.0/efficientdet-d4.pth) | 1903 | 14.75 | - | 48.8 | 49.4
| D5 | [efficientdet-d5.pth](https://github.com/zylo117/Yet-Another-Efficient-Pytorch/releases/download/1.0/efficientdet-d5.pth) | 2255 | 7.11 | - | 50.2 | 50.7
| D6 | [efficientdet-d6.pth](https://github.com/zylo117/Yet-Another-Efficient-Pytorch/releases/download/1.0/efficientdet-d6.pth) | 2985 | 5.30 | - | 50.7 | 51.7
| D7 | [efficientdet-d7.pth](https://github.com/zylo117/Yet-Another-Efficient-Pytorch/releases/download/1.0/efficientdet-d7.pth) | 3819 | 3.73 | - | 51.2 | 52.2
## Speed Test
This pure-PyTorch implementation is up to 2 times faster than the official TensorFlow version, without any tricks.
Recorded on 2020-04-26,
official git version: <https://github.com/google/automl/commit/006668f2af1744de0357ca3d400527feaa73c122>
| coefficient | FPS(this repo, tested on RTX2080Ti) | FPS(official, tested on T4) | Ratio |
| :------: | :------: | :------: | :-----: |
| D0 | 36.20 | 42.1 | 0.86X |
| D1 | 29.69 | 27.7 | 1.07X |
| D2 | 26.50 | 19.7 | 1.35X |
| D3 | 22.73 | 11.8 | 1.93X |
| D4 | 14.75 | 7.1 | 2.08X |
| D5 | 7.11 | 3.6 | 1.98X |
| D6 | 5.30 | 2.6 | 2.03X |
| D7 | 3.73 | - | - |
Test method (this repo):
Run this test on a 2080Ti, Ubuntu 19.10 x64.
1. Prepare an image tensor with the same content, of PyTorch shape (1, 3, 512, 512).
2. Warm everything up by running inference once.
3. Run 10 times with batchsize 1 and calculate the average time, including post-processing and visualization, to make the test more practical.
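The steps above can be sketched as follows. This is a minimal illustration, not the repo's actual test script: `benchmark_fps`, the CPU default, and the omission of post-processing/visualization are all simplifications.

```python
import time

import torch


def benchmark_fps(model, runs=10, size=512, device='cpu'):
    """Rough FPS measurement following the three steps above.

    `model` and `device` are placeholders: plug in an EfficientDet model
    and 'cuda' to approximate the repo's setting. Unlike the repo's
    numbers, this sketch times only the forward pass, not
    post-processing or visualization.
    """
    model = model.to(device).eval()
    # step 1: a fixed input tensor of shape (1, 3, size, size)
    x = torch.randn(1, 3, size, size, device=device)
    with torch.no_grad():
        model(x)  # step 2: warm up by inferring once
        if device == 'cuda':
            torch.cuda.synchronize()
        start = time.time()
        for _ in range(runs):  # step 3: timed runs with batchsize 1
            model(x)
        if device == 'cuda':
            torch.cuda.synchronize()
    return runs / (time.time() - start)
```

The explicit `torch.cuda.synchronize()` calls matter on GPU: CUDA kernels launch asynchronously, so timing without synchronizing would measure launch overhead rather than actual inference time.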
___
## Update Log
[2020-05-11] add boolean string conversion to make sure head_only works
[2020-05-10] replace nms with batched_nms to further improve mAP by 0.5~0.7, thanks [Laughing-q](https://github.com/Laughing-q).
[2020-05-04] fix coco category id mismatch bug, but it shouldn't affect training on custom datasets.
[2020-04-14] fixed loss function bug. please pull the latest code.
[2020-04-14] for those who need help or can't get a good result after several epochs, check out this [tutorial](tutorial/train_shape.ipynb). You can run it on colab with GPU support.
[2020-04-10] wrap the loss function within the training model, so that memory usage is balanced when training with multiple gpus, enabling bigger batch sizes.
[2020-04-10] add D7 (D6 with larger input size and larger anchor scale) support and test its mAP
[2020-04-09] allow custom anchor scales and ratios
[2020-04-08] add D6 support and test its mAP
[2020-04-08] add training script and its doc; update eval script and simple inference script.
[2020-04-07] tested D0-D5 mAP, result seems nice, details can be found [here](benchmark/coco_eval_result)
[2020-04-07] fix anchor strategies.
[2020-04-06] adapt anchor strategies.
[2020-04-05] create this repository.
## Demo
```bash
# install requirements
pip install pycocotools numpy opencv-python tqdm tensorboard tensorboardX pyyaml webcolors
pip install torch==1.4.0
pip install torchvision==0.5.0

# run the simple inference script
python efficientdet_test.py
```
## Training
Training EfficientDet is a painful and time-consuming task. You shouldn't expect to get a good result within a day or two. Please be patient.
Check out this [tutorial](tutorial/train_shape.ipynb) if you are new to this. You can run it on colab with GPU support.
### 1. Prepare your dataset
```
# your dataset structure should be like this
datasets/
    -your_project_name/
        -train_set_name/
            -*.jpg
        -val_set_name/
            -*.jpg
        -annotations
            -instances_{train_set_name}.json
            -instances_{val_set_name}.json

# for example, coco2017
datasets/
    -coco2017/
        -train2017/
            -000000000001.jpg
            -000000000002.jpg
            -000000000003.jpg
        -val2017/
            -000000000004.jpg
            -000000000005.jpg
            -000000000006.jpg
        -annotations
            -instances_train2017.json
            -instances_val2017.json
```
### 2. Manually set project-specific parameters
```yaml
# create a yml file {your_project_name}.yml under the 'projects' folder
# modify it following 'coco.yml'
# for example
project_name: coco
train_set: train2017
val_set: val2017
num_gpus: 4  # 0 means using cpu, 1-N means using gpus

# mean and std in RGB order; this part should remain unchanged
# as long as your dataset is similar to coco.
mean: [0.485, 0.456, 0.406]
std: [0.229, 0.224, 0.225]

# these are the coco anchors, change them if necessary
anchors_scales: '[2 ** 0, 2 ** (1.0 / 3.0), 2 ** (2.0 / 3.0)]'
anchors_ratios: '[(1.0, 1.0), (1.4, 0.7), (0.7, 1.4)]'

# all object labels from your dataset, in the order used by your annotations.
# each label's index must correspond to your dataset's category_id.
# category_id is one-indexed:
# for example, the index of 'car' here is 2, while its category_id is 3
obj_list: ['person', 'bicycle', 'car', ...]
```
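Note that `anchors_scales` and `anchors_ratios` are quoted: they hold Python expressions, not plain YAML lists. A config loader can recover the numeric values by evaluating the strings, as in this sketch (using `eval` on a trusted config; whether the repo parses them exactly this way is an assumption):

```python
# Quoted strings from the yml above, holding Python expressions.
anchors_scales = '[2 ** 0, 2 ** (1.0 / 3.0), 2 ** (2.0 / 3.0)]'
anchors_ratios = '[(1.0, 1.0), (1.4, 0.7), (0.7, 1.4)]'

# eval turns them into numbers; only do this on config files you trust.
scales = eval(anchors_scales)  # three scales per pyramid level
ratios = eval(anchors_ratios)  # (width, height) multipliers per anchor
```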
### 3.a. Train on coco from scratch (not necessary)

```bash
# train efficientdet-d0 on coco from scratch
# with batchsize 64
# This takes time and requires changing
# hyperparameters every few hours.
# If you have months to kill, go for it.
# It's not like anyone is going to achieve
# a better score than the one in the paper.
# The first few epochs will be rather unstable;
# that's quite normal when you train from scratch.
python train.py -c 0 --batch_size 64 --optim sgd --lr 8e-2
```
### 3.b. Train a custom dataset from scratch
```bash
# train efficientdet-d1 on a custom dataset
# with batchsize 8 and learning rate 1e-5
python train.py -c 1 -p your_project_name --batch_size 8 --lr 1e-5
```
### 3.c. Train a custom dataset with pretrained weights (Highly Recommended)
```bash
# train efficientdet-d2 on a custom dataset with pretrained weights
# with batchsize 8 and learning rate 1e-5 for 10 epochs
python train.py -c 2 -p your_project_name --batch_size 8 --lr 1e-5 --num_epochs 10 \
 --load_weights /path/to/your/weights/efficientdet-d2.pth

# with coco-pretrained weights, you can even freeze the backbone and train heads only
# to speed up training and help convergence.
python train.py -c 2 -p your_project_name --batch_size 8 --lr 1e-5 --num_epochs 10 \
 --load_weights /path/to/your/weights/efficientdet-d2.pth \
 --head_only True
```
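Freezing the backbone with `--head_only True` roughly corresponds to the following sketch. The module names here are illustrative assumptions, not the repo's actual attribute names.

```python
import torch.nn as nn


def freeze_all_but_heads(model: nn.Module,
                         head_prefixes=('classifier', 'regressor')):
    """Disable gradients for everything except the prediction heads.

    Sketch of what a head-only mode typically does; `head_prefixes`
    names are hypothetical, not the repo's actual modules.
    """
    for name, param in model.named_parameters():
        # str.startswith accepts a tuple of prefixes
        param.requires_grad = name.startswith(head_prefixes)
```

Frozen parameters receive no gradients, so backpropagation stops at the heads; this is why head-only training is both faster and easier to converge when the backbone is already coco-pretrained.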
### 4. Early stopping a training session
```bash
# while training, press Ctrl+c, and the program will catch
# the KeyboardInterrupt, save a checkpoint, and exit
```