# Yolo v4 and Yolo v3 Tiny
Yolo v4 & Yolo v3 Tiny using TensorFlow 2.x
This TensorFlow adaptation of release 4 of the famous deep network Yolo is based on the original Yolo source code in C++, which you can find at https://github.com/pjreddie/darknet and https://github.com/AlexeyAB/darknet, plus the WIKI https://github.com/AlexeyAB/darknet/wiki
The method used to adapt this deep network is based on the method used by *Jason Brownlee* for the previous release v3, presented here: https://machinelearningmastery.com/how-to-perform-object-detection-with-yolov3-in-keras/
However, I have made several changes to account for the new features added in release 4 of Yolo.
All the steps are included in the jupyter notebook **YoloV4_tf.ipynb**
In addition, I have defined the **loss function** so you can train the model as described later. The corresponding steps are included in the jupyter notebook **YoloV4_Train_tf.ipynb**
The release numbers are:
- TensorFlow version: 2.1.0
- Keras version: 2.2.4-tf
# The steps to use Yolo-V4 with TensorFlow 2.x are the following
## 1. Build the TensorFlow model
The model is composed of 161 layers.
Most of them are *Conv2D*; there are also 3 *MaxPool2D* and one *UpSampling2D*.
In addition there are a few shortcuts and some concatenations.
Two activation methods are used: *LeakyReLU* with alpha=0.1 and *Mish* with a threshold of 20.0. I have defined Mish as a custom object, since Mish is not included in the core TF release yet.
The specific Yolo output layers *yolo_139*, *yolo_150* and *yolo_161* are not defined in my TensorFlow model because they handle customized processing. So I have defined no activation for these layers, and instead I have built the corresponding processing in a specific Python function run after the model prediction.
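For reference, here is a minimal sketch of how such a Mish custom object could look; the class name and the way the 20.0 threshold is applied are assumptions, not the notebook's exact code:

```python
import tensorflow as tf
from tensorflow.keras.layers import Layer

class Mish(Layer):
    """Mish activation: x * tanh(softplus(x))."""
    def __init__(self, threshold=20.0, **kwargs):
        super().__init__(**kwargs)
        self.threshold = threshold

    def call(self, inputs):
        # Numerically stable softplus, as in the Darknet reference:
        # ~x for large inputs, ~exp(x) for very negative inputs.
        sp = tf.where(inputs > self.threshold, inputs,
                      tf.where(inputs < -self.threshold, tf.exp(inputs),
                               tf.math.softplus(inputs)))
        return inputs * tf.math.tanh(sp)

    def get_config(self):
        config = super().get_config()
        config.update({"threshold": self.threshold})
        return config
```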
## 2. Get and compute the weights
The Yolo weights have been retrieved from https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v3_optimal/yolov4.weights.
The file contains the kernel weights but also the biases and the Batch Normalisation parameters scale, mean and var.
Instead of using Batch Normalisation layers in the model, I have directly normalized the weights and biases with the values of scale, mean and var:
- bias = bias - scale * mean / np.sqrt(var + 0.00001)
- weights = weights * scale / np.sqrt(var + 0.00001)
I have kept the Batch Normalisation layers in the model for training purposes. By default, the corresponding parameters have no effect (weight = 1 and bias = 0), but they are updated if you train the model.
As these parameters are stored in the Caffe format, I have applied several transformations to map them to the TF requirements.
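A minimal sketch of this folding, assuming Keras Conv2D kernels of shape (kh, kw, in, out) and per-channel BN parameters (the function name is mine, not the notebook's):

```python
import numpy as np

def fold_batchnorm(kernel, bias, scale, mean, var, eps=1e-5):
    """Fold the Batch Normalisation parameters into the convolution
    weights and biases, following the two formulas above."""
    std = np.sqrt(var + eps)
    folded_bias = bias - scale * mean / std
    # scale/std has shape (out_channels,) and broadcasts over the kernel's last axis.
    folded_kernel = kernel * (scale / std)
    return folded_kernel, folded_bias
```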
## 3. Save the model
The model is saved in an h5 file after building it and computing the weights.
## 4. Load the model
The model previously saved is loaded from the h5 file and then ready to be used.
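A sketch of the save/load round trip; the file name is an assumption, and Mish must be passed as a custom object when loading:

```python
from tensorflow.keras.models import load_model

# Save after building the model and computing the weights.
model.save("yolov4.h5")

# Reload it later; the Mish custom object must be supplied explicitly.
model = load_model("yolov4.h5", custom_objects={"Mish": Mish})
```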
## 5. Pre-processing
During the pre-processing, the 80 labels and the image to predict are loaded.
The labels are in the file *coco_classes.txt*.
The image is resized to the Yolo input size 608x608 using interpolation='bilinear'.
As usual, the pixel values are divided by 255.
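A minimal sketch of this pre-processing, assuming a single image file path (the helper name is mine):

```python
import numpy as np
import tensorflow as tf

def preprocess_image(path, net_size=608):
    """Load an image, resize it to the Yolo input size with bilinear
    interpolation and scale the pixel values to [0, 1]."""
    img = tf.keras.preprocessing.image.load_img(path)
    img = tf.keras.preprocessing.image.img_to_array(img)
    orig_h, orig_w = img.shape[:2]
    img = tf.image.resize(img, (net_size, net_size), method="bilinear")
    return np.expand_dims(img.numpy() / 255.0, axis=0), orig_w, orig_h  # (1, 608, 608, 3)
```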
## 6. Run the model
The model is run with the resized image as input, with shape (1, 608, 608, 3).
The model provides 3 output layers, 139, 150 and 161, with shapes (1, 76, 76, 255), (1, 38, 38, 255) and (1, 19, 19, 255) respectively.
The number of channels is 255 = (bx, by, bh, bw, pc + 80 classes) * 3 anchor boxes, where *(bx, by, bh, bw)* defines the position and size of the box, and *pc* is the probability of finding an object in the box.
Three anchor boxes are defined per Yolo output layer (see the sketch after this list):
- output layer 139 (76,76,255): (12, 16), (19, 36), (40, 28)
- output layer 150 (38,38,255): (36, 75), (76, 55), (72, 146)
- output layer 161 (19,19,255): (142, 110), (192, 243), (459, 401)
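Putting the anchors and the prediction call together, a sketch could look like this (the variable names are assumptions):

```python
# Anchor boxes per output layer, as listed above (widths and heights in pixels).
anchors = [
    [(12, 16), (19, 36), (40, 28)],       # output layer 139, grid 76x76
    [(36, 75), (76, 55), (72, 146)],      # output layer 150, grid 38x38
    [(142, 110), (192, 243), (459, 401)], # output layer 161, grid 19x19
]

# image has shape (1, 608, 608, 3); the three raw outputs still need the
# Yolo decoding step described in the next section.
yhat = model.predict(image)
for out, layer_anchors in zip(yhat, anchors):
    print(out.shape, layer_anchors)
```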
## 7. Compute the Yolo layers
As explained before, the 3 final Yolo layers are computed outside the TF model by the Python function *decode_netout*.
The steps of this function are the following (a sketch follows the list):
- apply the sigmoid activation on everything except bh and bw
- scale bx and by using the factor *scales_x_y* (1.2, 1.1, 1.05) defined for each Yolo layer:
  - (bx, by) = (bx, by) * scales_x_y - 0.5 * (scales_x_y - 1.0)
- get the box parameters for predictions with *pc* > 0.25:
  - x = (col + x) / grid_w (= 76, 38 or 19)
  - y = (row + y) / grid_h (= 76, 38 or 19)
  - w = anchors_w * exp(w) / network width (= 608)
  - h = anchors_h * exp(h) / network height (= 608)
  - classes = classes * pc
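A condensed sketch of these steps for one output layer; the real notebook uses a function called *decode_netout* with more bookkeeping, so the names and details below are assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_layer(netout, layer_anchors, scale_x_y, obj_thresh=0.25, net_size=608):
    """Decode one raw output of shape (grid_h, grid_w, 255) into boxes."""
    grid_h, grid_w = netout.shape[:2]
    netout = netout.reshape((grid_h, grid_w, 3, -1))  # 3 anchors, 85 values each
    boxes = []
    for row in range(grid_h):
        for col in range(grid_w):
            for b in range(3):
                pc = sigmoid(netout[row, col, b, 4])
                if pc <= obj_thresh:
                    continue
                x, y, w, h = netout[row, col, b, :4]
                # sigmoid + scales_x_y correction on the centre coordinates
                x = sigmoid(x) * scale_x_y - 0.5 * (scale_x_y - 1.0)
                y = sigmoid(y) * scale_x_y - 0.5 * (scale_x_y - 1.0)
                x = (col + x) / grid_w
                y = (row + y) / grid_h
                # bw and bh keep their raw values and go through exp()
                w = layer_anchors[b][0] * np.exp(w) / net_size
                h = layer_anchors[b][1] * np.exp(h) / net_size
                classes = sigmoid(netout[row, col, b, 5:]) * pc
                boxes.append((x, y, w, h, pc, classes))
    return boxes
```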
## 8. Correct the boxes according to the initial size of the image
## 9. Suppress the non-maximal boxes
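The notebook implements this step itself; purely as an illustration of the idea, TensorFlow's built-in non-max suppression could be used like this (the box layout and thresholds are assumptions):

```python
import tensorflow as tf

def non_max_suppress(boxes, scores, iou_threshold=0.5, max_boxes=50):
    """Keep only the highest-scoring box among heavily overlapping ones.
    boxes: float array of [y1, x1, y2, x2]; scores: objectness * class score."""
    keep = tf.image.non_max_suppression(
        boxes, scores, max_output_size=max_boxes, iou_threshold=iou_threshold)
    return keep.numpy()
```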
## 10. Get the details of the detected objects for a threshold > 0.6
## 11. Draw the result
# The steps to train Yolo-V4 with TensorFlow 2.x are the following
## 1. Build the TensorFlow model
The model is composed of 161 layers.
Most of them are *Conv2D*; there are also 3 *MaxPool2D* and one *UpSampling2D*.
In addition there are a few shortcuts and some concatenations.
Two activation methods are used: *LeakyReLU* with alpha=0.1 and *Mish* with a threshold of 20.0. I have defined Mish as a custom object, since Mish is not included in the core TF release yet.
The specific Yolo output layers *yolo_139*, *yolo_150* and *yolo_161* are not defined in my TensorFlow model because they handle customized processing. So I have defined no activation for these layers, and instead I have built the corresponding processing in a specific Python function run after the model prediction.
## 2. Get and compute the weights (you can skip this part if you want to train an empty model)
The Yolo weights have been retrieved from https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v3_optimal/yolov4.weights.
The file contains the kernel weights but also the biases and the Batch Normalisation parameters scale, mean and var.
Instead of using Batch Normalisation layers in the model, I have directly normalized the weights and biases with the values of scale, mean and var:
- bias = bias - scale * mean / np.sqrt(var + 0.00001)
- weights = weights * scale / np.sqrt(var + 0.00001)
I have kept the Batch Normalisation layers in the model for training purposes. By default, the corresponding parameters have no effect (weight = 1 and bias = 0), but they are updated if you train the model.
As these parameters are stored in the Caffe format, I have applied several transformations to map them to the TF requirements.
## 3. Save the model
The model is saved in an h5 file after building it and computing the weights.
## 4. Load the model
The model previously saved is loaded from the h5 file.
## 5. Freeze the backbone
You need to define up to which layer you want to freeze the model. To freeze the Yolo v4 backbone, set fine_tune_at = "convn_136" (a sketch follows).
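A sketch of the freezing loop, assuming layers are frozen up to and including the cut point named above:

```python
# Freeze the backbone: everything up to and including fine_tune_at stays
# untrainable, while the detection head remains trainable.
fine_tune_at = "convn_136"
trainable = False
for layer in model.layers:
    layer.trainable = trainable
    if layer.name == fine_tune_at:
        trainable = True
```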
## 6. Get the Pascal VOC dataset
I have used the Pascal VOC dataset to train the model.
The dataset is available here: https://pjreddie.com/projects/pascal-voc-dataset-mirror/ and provides the images and the corresponding annotations in XML format.
## 7. Build the labels files for VOC train dataset
One label file is created per image and per box (3 boxes are defined in Yolo v4).
The label file contains the position and size of the box, the probability of finding an object in the box and the class id of the object.
This file contains one line per object in the image (see the sketch below).
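As an illustration of building such lines from the Pascal VOC annotations (the exact layout of the notebook's label files is not reproduced here, so treat the format below as an assumption):

```python
import xml.etree.ElementTree as ET

def voc_to_label_lines(xml_path, class_names):
    """Read a Pascal VOC annotation and emit one line per object:
    class id, relative box centre and size, and an objectness of 1.0."""
    root = ET.parse(xml_path).getroot()
    width = float(root.find("size/width").text)
    height = float(root.find("size/height").text)
    lines = []
    for obj in root.iter("object"):
        cls = class_names.index(obj.find("name").text)
        box = obj.find("bndbox")
        xmin, ymin = float(box.find("xmin").text), float(box.find("ymin").text)
        xmax, ymax = float(box.find("xmax").text), float(box.find("ymax").text)
        x, y = (xmin + xmax) / 2 / width, (ymin + ymax) / 2 / height
        w, h = (xmax - xmin) / width, (ymax - ymin) / height
        lines.append(f"{cls} {x:.6f} {y:.6f} {w:.6f} {h:.6f} 1.0")
    return lines
```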
## 8. Build the labels files for VOC validate dataset
Same as above, but for the dataset used to validate the training.
## 9. Compute the data for training
Training data are created from the previously created label files and the images.
You can define how much data you want to use for training.
## 10. Compute the data for validation
Same as above, but for the data used to validate the training.
## 11. Choose the optimizer
Several optimizers are available in TensorFlow: SGD, RMSprop, Adam, etc. (see the sketch below).
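For example, compiling with Adam could look like this; the learning rate is an assumption and *yolo_loss* stands for the custom loss defined in the notebook (name assumed):

```python
from tensorflow.keras.optimizers import Adam

optimizer = Adam(learning_rate=1e-4)               # assumed learning rate
model.compile(optimizer=optimizer, loss=yolo_loss)  # yolo_loss: the notebook's custom loss
```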
## 12. Fit the model including validation data
Fit the model using all the TensorFlow features you want (a sketch follows).
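A sketch of such a training call; the data variables, batch size, epochs and callbacks are assumptions based on the steps above:

```python
import tensorflow as tf

# Optional callback that keeps the best weights seen on the validation set.
checkpoint = tf.keras.callbacks.ModelCheckpoint(
    "yolov4_train.h5", monitor="val_loss", save_best_only=True)

history = model.fit(
    x_train, y_train,                    # data built in steps 9 and 7
    validation_data=(x_val, y_val),      # data built in steps 10 and 8
    batch_size=8,
    epochs=50,
    callbacks=[checkpoint])
```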