# PPE Detection using YOLOv3 and Deep SORT
## Introduction
In industry, especially the manufacturing industry, Personal Protective Equipment (PPE) such as helmets (hard hats), safety harnesses, and goggles plays a very important role in ensuring the safety of workers. However, many accidents still occur due to the negligence of workers as well as their supervisors. Supervisors can make mistakes because such monitoring tasks are monotonous, and they may not be able to monitor consistently. This project aims to utilize existing CCTV camera infrastructure to help supervisors monitor workers effectively by providing them with real-time alerts.
## Functioning
* Input is taken from CCTV cameras.
* YOLO is used to detect persons with proper PPE and those without it.
* Deep_SORT assigns a unique ID to each detected person and tracks them through consecutive frames of the video.
* An alert is raised if a person is found without proper PPE for more than a set duration, say 5 seconds (a rough sketch of this logic follows below).
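As a rough illustration of that alert logic, here is a minimal, hypothetical sketch (not the repository's actual implementation) of a per-track timer keyed on the IDs that Deep_SORT assigns:
```python
import time

ALERT_AFTER_SECONDS = 5  # how long a violation must persist before alerting

first_seen_without_ppe = {}  # track_id -> timestamp of first violating frame
alerted = set()              # track IDs that have already triggered an alert

def update_alerts(tracks):
    """tracks: iterable of (track_id, has_ppe) pairs for the current frame."""
    now = time.time()
    for track_id, has_ppe in tracks:
        if has_ppe:
            # Person is compliant again: reset their timer and alert state.
            first_seen_without_ppe.pop(track_id, None)
            alerted.discard(track_id)
        else:
            start = first_seen_without_ppe.setdefault(track_id, now)
            if now - start > ALERT_AFTER_SECONDS and track_id not in alerted:
                alerted.add(track_id)
                print(f"ALERT: person {track_id} without PPE for over {ALERT_AFTER_SECONDS}s")
```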
![img1](https://github.com/AnshulSood11/PPE-Detection-YOLO-Deep_SORT/blob/master/ppe-demo-images/img1.png)
It detects persons without helmets and displays the counts of persons with and without helmets. It posts a notification in the message box for each camera, and there is also a global message box where alerts from all cameras are displayed.
![img2](https://github.com/AnshulSood11/PPE-Detection-YOLO-Deep_SORT/blob/master/ppe-demo-images/img2.png)
It also detects that the same person it had warned about earlier has now put on a helmet, and notifies about that as well.
![img3](https://github.com/AnshulSood11/PPE-Detection-YOLO-Deep_SORT/blob/master/ppe-demo-images/img3.png)
Please note that this is still a work in progress, and new ideas and contributions are welcome.
* Currently, the model is trained to detect helmets (hard hats) only. I have plans to train the model for other PPE items as well.
* Currently, only USB cameras are supported. Support for other cameras needs to be added.
* The tracker needs to be made more robust.
* Integrate a service (via mobile app or SMS) to send real-time notifications to supervisors present on the field.
## Quick Start
Using a conda environment is recommended. Follow these steps to get the code running:
1. First, download the zip file.
2. Download the following files into the project directory:
[mars-small128.pb](https://1drv.ms/u/c/024d7625f12b47b2/QbJHK_Eldk0ggAL4AQAAAAAA8rfjUd8TxK6_-Q)
[full_yolo3_helmet_and_person.h5](https://1drv.ms/u/c/024d7625f12b47b2/QbJHK_Eldk0ggAL3AQAAAAAAExKZFaGcssUM5Q)
3. Run the following command to create a conda environment:
```bash
conda env create -f environment.yml
```
Alternatively,
```bash
conda create --name helmet-detection --file requirements.txt
```
4. Activate the conda environment:
```bash
conda activate helmet-detection
```
5. To run the code with the GUI:
```bash
python predict_gui.py -c config.json -n <number of cameras>
```
Note that the GUI supports only up to 2 cameras.
To run the code without the GUI:
```bash
python predict.py -c config.json -n <number of cameras>
```
Here you can use any number of cameras.
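For example, assuming the cameras enumerate as consecutive device indices:
```bash
# GUI mode with two cameras
python predict_gui.py -c config.json -n 2

# Headless mode with four cameras
python predict.py -c config.json -n 4
```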
## Training the model
### 1. Data preparation
**Data Collection**
The dataset, containing images of people wearing helmets and people without helmets, was collected mostly via Google search. Some images show people applauding; those were collected from the Stanford 40 Actions dataset. Download images for training from [train_image_folder](https://drive.google.com/drive/folders/1b5ocFK8Z_plni0JL4gVhs3383V7Q9EYH?usp=sharing).
**Annotations**
Annotation of each image was done in Pascal VOC format using [LabelImg](https://github.com/tzutalin/labelImg), an awesome lightweight annotation tool for object detection. Download annotations from [train_annot_folder](https://drive.google.com/drive/folders/1u_s_kxq0x_fqtqgJn9nKC92ikrThMDru?usp=sharing).
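For reference, each Pascal VOC annotation is a per-image XML file. A minimal sketch of reading the labelled boxes with the standard library (the tag names below are the standard VOC ones):
```python
import xml.etree.ElementTree as ET

def read_voc_boxes(xml_path):
    """Return a list of (label, xmin, ymin, xmax, ymax) from a VOC XML file."""
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.findall("object"):
        label = obj.find("name").text
        bb = obj.find("bndbox")
        xmin, ymin, xmax, ymax = (int(float(bb.find(tag).text))
                                  for tag in ("xmin", "ymin", "xmax", "ymax"))
        boxes.append((label, xmin, ymin, xmax, ymax))
    return boxes
```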
**Organize the dataset into 4 folders:**
* train_image_folder <= the folder that contains the train images.
* train_annot_folder <= the folder that contains the train annotations in VOC format.
* valid_image_folder <= the folder that contains the validation images.
* valid_annot_folder <= the folder that contains the validation annotations in VOC format.
There is a one-to-one correspondence by file name between images and annotations. If the validation set is empty, the training set will be automatically split into a training set and a validation set using a ratio of 0.8. Since the pairing is by file name, a quick sanity check can catch missing files before training, as sketched below.
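A minimal sketch of such a check, using the folder names listed above:
```python
from pathlib import Path

def check_pairing(image_dir, annot_dir):
    """Warn about images lacking a same-named .xml annotation, and vice versa."""
    images = {p.stem for p in Path(image_dir).iterdir()
              if p.suffix.lower() in {".jpg", ".jpeg", ".png"}}
    annots = {p.stem for p in Path(annot_dir).glob("*.xml")}
    for stem in sorted(images - annots):
        print(f"Missing annotation for image: {stem}")
    for stem in sorted(annots - images):
        print(f"Missing image for annotation: {stem}")

check_pairing("train_image_folder", "train_annot_folder")
```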
### 2. Edit the configuration file
The configuration file is a JSON file, which looks like this:
```json
{
"model" : {
"min_input_size": 288,
"max_input_size": 448,
"anchors": [33,34, 52,218, 55,67, 92,306, 96,88, 118,158, 153,347, 209,182, 266,359],
"labels": ["helmet","person with helmet","person without helmet"]
},
"train": {
"train_image_folder": "train_image_folder/",
"train_annot_folder": "train_annot_folder/",
"cache_name": "helmet_train.pkl",
"train_times": 8,
"batch_size": 8,
"learning_rate": 1e-4,
"nb_epochs": 100,
"warmup_epochs": 3,
"ignore_thresh": 0.5,
"gpus": "0,1",
"grid_scales": [1,1,1],
"obj_scale": 5,
"noobj_scale": 1,
"xywh_scale": 1,
"tensorboard_dir": "logs",
"saved_weights_name": "full_yolo3_helmet_and_person.h5",
"debug": true
},
"valid": {
"valid_image_folder": "",
"valid_annot_folder": "",
"cache_name": "",
"valid_times": 1
}
}
```
The `model` section defines the model to construct as well as other parameters such as the input image size and the list of anchors. The `labels` setting lists the labels to be trained on. Only images containing at least one of the listed labels are fed to the network; the rest are simply ignored. In this way, a dog detector can easily be trained on the VOC or COCO dataset by setting `labels` to `['dog']`.
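A minimal sketch of what that filtering amounts to (hypothetical and simplified; the repository's data generator handles this internally):
```python
def keep_image(annotated_objects, wanted_labels):
    """Keep an image only if at least one annotated object has a wanted label."""
    return any(obj["name"] in wanted_labels for obj in annotated_objects)

# With labels set to ["dog"], an image annotated only with cats is skipped:
print(keep_image([{"name": "cat"}], ["dog"]))                    # False -> ignored
print(keep_image([{"name": "dog"}, {"name": "cat"}], ["dog"]))   # True  -> used
```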
Download pretrained weights for the backend at:
[backend.h5](https://1drv.ms/u/c/024d7625f12b47b2/QbJHK_Eldk0ggAL5AQAAAAAA1JJB2XEu27RBmw)
**These weights must be placed in the root folder of the repository. They are pretrained weights for the backend only and will be loaded during model creation. The code does not work without these weights.**
### 3. Generate anchors for your dataset (optional)
```bash
python gen_anchors.py -c config.json
```
Copy the generated anchors printed on the terminal to the `anchors` setting in `config.json`.
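If you prefer not to paste the values by hand, the `anchors` entry can also be updated programmatically. A small sketch using only the standard library (the values below are just the defaults shown in the config above):
```python
import json

# Replace with the flat width,height list printed by gen_anchors.py
generated_anchors = [33, 34, 52, 218, 55, 67, 92, 306, 96, 88,
                     118, 158, 153, 347, 209, 182, 266, 359]

with open("config.json") as f:
    config = json.load(f)

config["model"]["anchors"] = generated_anchors  # flat list of (width, height) pairs
with open("config.json", "w") as f:
    json.dump(config, f, indent=4)
```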
### 4. Start the training process
```bash
python train.py -c config.json
```
By the end of this process, the code will write the weights of the best model to the file best_weights.h5 (or whatever name is specified in the `saved_weights_name` setting in config.json). The training process stops when the loss on the validation set has not improved for 3 consecutive epochs.
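That stopping rule matches the standard Keras callbacks. As a minimal sketch of the equivalent setup (the repository's `callbacks.py` may differ in its details):
```python
from keras.callbacks import EarlyStopping, ModelCheckpoint

callbacks = [
    # Save only the weights that achieve the best validation loss so far.
    ModelCheckpoint("best_weights.h5", monitor="val_loss",
                    save_best_only=True, save_weights_only=True),
    # Stop training when validation loss has not improved for 3 consecutive epochs.
    EarlyStopping(monitor="val_loss", patience=3),
]
# These would then be passed to model.fit(..., callbacks=callbacks)
```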
### 5. Perform detection using trained weights on a live webcam feed
To run the code with the GUI:
```bash
python predict_gui.py -c config.json -n <number of cameras>
```
Note that the GUI supports only up to 2 cameras.
To run the code without the GUI:
```bash
python predict.py -c config.json -n <number of cameras>
```
Here you can use any number of cameras.
## Acknowledgements
* [rekon/keras-yolo2](https://github.com/rekon/keras-yolo2) for training data.
* [experiencor/keras-yolo3](https://github.com/experiencor/keras-yolo3) for YOLO v3 implementation.
* [nwojke/deep_sort](https://github.com/nwojke/deep_sort) for Deep_SORT implementation.