# yolov5 - CoreML Tools
The scripts in this repo can be used to export a YOLOv5 model to CoreML and benchmark it.
## Dependencies
* python 3.8.x
* poetry
Other dependencies are installed by poetry automatically with:
```console
$ poetry install
```
**Note**: This assumes that you've cloned the yolov5 repo to `../yolov5` relative to this project and that you have added a `setup.py` file, so poetry installs the dependencies of YOLOv5. See [Issue #2525](https://github.com/ultralytics/yolov5/issues/2525#issuecomment-821525523).
It's recommended to use the versions as specified: CoreML Tools only works with certain PyTorch versions, which in turn are only available for certain Python versions. You might want to consider using pyenv:
```console
$ pyenv install 3.8.6
$ pyenv global 3.8.6
$ poetry install
```
## Export to CoreML
We use the unified conversion API of [coremltools 4.0](https://coremltools.readme.io/docs) to convert a YOLOv5 model to CoreML. Additionally, we add an export layer so the model integrates well with [Apple's Vision Framework](https://developer.apple.com/documentation/vision).
### Limitations
For optimal integration in Swift you need at least `iOS 13`, as the export layer with NMS (non-maximum suppression) is otherwise not supported. It is still possible to use `iOS 12` if you implement NMS manually in Swift.
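If you do target `iOS 12` and have to implement NMS yourself, the algorithm is straightforward. As a language-agnostic reference, here is a minimal greedy NMS sketch in plain Python; the 0.45 IoU threshold is an illustrative default, not a value taken from the exported model:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, iou_threshold=0.45):
    """Greedy non-maximum suppression; returns indices of the kept boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        # keep a box only if it does not overlap a higher-scoring kept box
        if all(iou(boxes[i], boxes[j]) < iou_threshold for j in keep):
            keep.append(i)
    return keep
```

In Swift you would run the same loop over the decoded model outputs per class.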
Experience has shown that one needs to be very careful with the versions of the libraries used, in particular the PyTorch version. It's recommended not to change the versions of the libraries used. If for some reason you need a newer PyTorch version, check the [GitHub issues of coremltools](https://github.com/apple/coremltools/issues) for open issues regarding the PyTorch version you want to use; in the past, the most recent versions were often not compatible.
#### YOLOv5 Version
Exporting always requires the original model source code, unless you already have a TorchScript model and can therefore skip the tracing step in the script.
**Note**: It has a huge impact on performance whether the model runs on the Neural Engine or on the CPU / GPU (or switches between them) on your device. Unfortunately, there is no documentation on which model layers can and cannot run on the Neural Engine ([some info here](https://github.com/hollance/neural-engine)). With YOLOv5 versions 2, 3 and 4 there were problems with the SPP layers with kernel sizes bigger than 7, so we replaced them and retrained the model. On a recent device, YOLOv5s should take around 20 ms per detection.
See [Issue #2526](https://github.com/ultralytics/yolov5/issues/2526#issuecomment-823059344).
### Usage
First, some values in the script should be changed according to your needs. In `src/coreml_export/main.py` you'll find some global variables, which are specific to the concrete model you use:
* `classLabels` -> The list of labels your model recognizes. All pretrained YOLOv5 models were trained on the [COCO dataset](https://cocodataset.org/#home) and therefore recognize 80 labels.
* `anchors` -> These depend on the model version you use (s, m, l, x); you will find them in the corresponding `yolo<Version>.yml` file in the `yolov5` repository (use the files from the correct yolov5 version!).
* `reverseModel` -> Some models have their strides and anchors in reversed order. This variable exists for convenience to quickly reverse the order of both.
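To illustrate what these variables look like, here is a minimal sketch. The anchor and stride values below are the ones commonly listed for the small model (`yolov5s`) and are only an example; always copy them from the `.yml` file matching your model and version:

```python
# Strides and anchors of the three detection heads (P3/8, P4/16, P5/32).
# Example values for yolov5s; take yours from the matching model .yml file.
strides = [8, 16, 32]
anchors = [
    [10, 13, 16, 30, 33, 23],       # P3/8
    [30, 61, 62, 45, 59, 119],      # P4/16
    [116, 90, 156, 198, 373, 326],  # P5/32
]

reverseModel = True  # set if your model stores strides/anchors in reversed order

if reverseModel:
    strides = strides[::-1]
    anchors = anchors[::-1]
```

If `reverseModel` is set wrongly, the exported model decodes boxes with the wrong anchor/stride pairing, which shows up as wildly misplaced detections in testing.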
To run the script use the command:
```console
$ poetry run coreml-export
```
Run it with `-h` to get a list of optional arguments to customize model input & output paths / names and other things.
## Helper Export Scripts
### Testing
There is a simple script to test the exported model with some sample images in `src/coreml_export/test.py`. You should check whether the predictions are similar to those of the original PyTorch model. Be aware that the predictions will be slightly different, though. If there are huge differences, this might be a hint that you need to set `reverseModel` accordingly.
To run the script use the command:
```console
$ poetry run coreml-test
```
### Debugging / Fixing Issues
Most important is that the model runs fully on the Neural Engine. There is no official documentation, but take a look at [Everything we actually know about the Apple Neural Engine (ANE)](https://github.com/hollance/neural-engine).
In `src/coreml_export/snippets.py` you might find a few helpful snippets to temporarily (!) change layers, parameters or other parts of the model and test how this influences performance.
## CoreML Metrics
This makes heavy use of the [Object Detection Metrics library](https://github.com/rafaelpadilla/Object-Detection-Metrics) developed by @rafaelpadilla under the MIT License.
The library is included in the `objectDetectionMetrics` subfolder with some small adjustments.
The metrics script in `src/coreml_metrics/main.py` can be used to benchmark a CoreML model. It calculates a precision x recall curve for every label and every folder of images.
See [here for a detailed explanation](https://github.com/rafaelpadilla/Object-Detection-Metrics#important-definitions).
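The idea behind the precision x recall curve can be sketched in a few lines: detections are sorted by confidence, each is marked as a true or false positive (a match against ground truth by IoU, which is omitted here for brevity), and cumulative precision/recall pairs are collected. This is a simplified illustration of what the metrics library computes, not its actual code:

```python
def precision_recall(detections, num_ground_truths):
    """detections: list of (confidence, is_true_positive) tuples.
    Returns the (precision, recall) points of the PR curve."""
    tp = fp = 0
    curve = []
    # walk detections from most to least confident
    for _, is_tp in sorted(detections, key=lambda d: d[0], reverse=True):
        if is_tp:
            tp += 1
        else:
            fp += 1
        curve.append((tp / (tp + fp), tp / num_ground_truths))
    return curve
```

The average precision (AP) per label is then derived from the area under this curve, as described in the link above.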
### Usage
First, you need some images with ground truth data to benchmark the model. The images can be in a nested folder structure to allow benchmarking categories of images. It's just important that your ground truth data exactly mirrors the folder structure of the images. Example structure:
```
- data
- images
- sharp_images
- black_white_images
- image1.jpg
- image2.jpg
- colored_images
- image3.jpg
- unsharp_images
- image4.jpg
- image5.jpg
- labels
- sharp_images
- black_white_images
- image1.txt
- image2.txt
- colored_images
- image3.txt
- unsharp_images
- image4.txt
- image5.txt
```
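Assuming the `.txt` ground truth files use YOLO-style lines (one `class x_center y_center width height` line per object, with coordinates normalized to the image size — check the metrics library's expected format for your setup, as this is an assumption), a line can be converted to absolute pixel corners like this:

```python
def parse_label_line(line, img_width, img_height):
    """Convert a 'class x_center y_center w h' line (normalized coordinates,
    hypothetical format) to (class_id, x1, y1, x2, y2) in absolute pixels."""
    cls, xc, yc, w, h = line.split()
    xc, yc = float(xc) * img_width, float(yc) * img_height
    w, h = float(w) * img_width, float(h) * img_height
    return int(cls), xc - w / 2, yc - h / 2, xc + w / 2, yc + h / 2
```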
The script will then output detections for every image and one graph per label for each folder, so there will be one graph for all `black_white_images`, one for all `colored_images`, one for all `sharp_images`, and so on.
Furthermore, some values in the script should be changed according to your needs. In `src/coreml_metrics/main.py` you'll find some global variables, which are specific to the concrete model you use:
* `classLabels` -> The list of labels your model recognizes. All pretrained YOLOv5 models were trained on the [COCO dataset](https://cocodataset.org/#home) and therefore recognize 80 labels.
To run the script use the command:
```console
$ poetry run coreml-metrics
```