# car_moto_tracking_Jetson_Nano_Yolov8_TensorRT
Detect, track and count cars and motorcycles using yolov8 and TensorRT on Jetson Nano
To deploy this work on a Jetson Nano, you should do it in two steps:
## 1- On your PC:
### Install the required packages:
```
python -m pip install -r requirements.txt
```
### Create a folder for the dataset:
```
mkdir datasets
cd datasets
mkdir Train
mkdir Validation
```
### Copy your dataset:
Copy your train dataset (images + `.txt` label files) into `datasets/Train`:
```
cp -r path_to_your_train_dataset/. ${root}/train/datasets/Train
```
Copy your validation dataset (images + `.txt` label files) into `datasets/Validation`:
```
cp -r path_to_your_validation_dataset/. ${root}/train/datasets/Validation
```
If your dataset contains classes other than car and motorcycle, add them to `data.yaml`.
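For reference, a minimal `data.yaml` in the usual Ultralytics layout might look like the following sketch (`truck` is a hypothetical added class, and this repo's actual file may differ in its field layout):

```yaml
# Hypothetical data.yaml sketch following the Ultralytics convention
train: datasets/Train
val: datasets/Validation

nc: 3            # number of classes
names:
  0: car
  1: motorcycle
  2: truck       # example of an added class
```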
### Train the yolov8n model:
```
python main.py
```
When training finishes, your custom yolov8n model is saved to:
```
${root}/train/runs/detect/train/weights/best.pt
```
### Push your custom model to GitHub:
```
git add runs/detect/train/weights/best.onnx
git commit -am "Add the trained yolov8n model"
git push
```
### Remark:
If you want to use an already pretrained model, export it to ONNX first and push that instead:
```
python export_onnx.py --weights path_to_your_pretrained_model
git add path_to_the_onnx_model
git commit -am "Add the trained yolov8n model"
git push
```
## 2- On the Jetson Nano:
### 2-1- Clone this repo on the Jetson Nano:
```
git clone "URL to your own repository"
cd "repository_name"
export root=${PWD}
```
### 2-2- Export the engine from the ONNX model:
```
/usr/src/tensorrt/bin/trtexec \
  --onnx=train/runs/detect/train/weights/best.onnx \
  --saveEngine=best.engine
```
This command produces an engine file named `best.engine`.
### 2-3- For Detection:
```
cd ${root}/detect
mkdir build
cd build
cmake ..
make
cd ${root}
```
#### 2-3-1- Launch Detection:
For video:
```
${root}/detect/build/yolov8_detect ${root}/best.engine video ${root}/src/test.mp4 1 show
```
#### Description of all arguments
- 1st argument: path to the compiled binary (built by `make`)
- 2nd argument: path to the engine file
- 3rd argument: `video` to run on a saved video
- 4th argument: path to the video file
- 5th argument: frame interval; if the inference capacity of the Jetson is more than 30 fps, put 1, otherwise put 2, 3, 4, ... depending on the inference capacity of the Jetson
- 6th argument: `show` to display the output or `save` to write it to disk
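The frame-interval argument can be read as a frame-skip factor: with value N, only every Nth frame goes through inference, so a slower Jetson still keeps pace with the video. A minimal Python sketch of that idea (an interpretation of the argument's description; the repo's C++ source is authoritative):

```python
def frames_to_process(total_frames: int, interval: int) -> list:
    """Return the frame indices that would actually be run through
    inference when only every `interval`-th frame is processed."""
    return [i for i in range(total_frames) if i % interval == 0]

# With interval 1 every frame is processed; with interval 2 only half are.
print(frames_to_process(8, 1))  # [0, 1, 2, 3, 4, 5, 6, 7]
print(frames_to_process(8, 2))  # [0, 2, 4, 6]
```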
For camera:
```
${root}/detect/build/yolov8_detect ${root}/best.engine camera 1 show
```
#### Description of all arguments
- 1st argument: path to the compiled binary (built by `make`)
- 2nd argument: path to the engine file
- 3rd argument: `camera` to use the embedded camera
- 4th argument: frame interval; if the inference capacity of the Jetson is more than 30 fps, put 1, otherwise put 2, 3, 4, ... depending on the inference capacity of the Jetson
- 5th argument: `show` to display the output or `save` to write it to disk
### 2-4- For Tracking and Counting:
```
cd ${root}/track_count
mkdir build
cd build
cmake ..
make
cd ${root}
```
#### 2-4-1- Launch Tracking and Counting:
To count in only one direction, pass 1 as the 7th argument; for two-direction counting, pass 2.
Before the processed video is displayed, the first frame of the video is shown. Click on this frame to set the position of the counting line(s): click twice for one-direction counting, and four times for two-direction counting.
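Counting against a user-drawn line typically reduces to a side-of-line test on each track's centroid: when the sign of a cross product flips between consecutive frames, the object has crossed. A hedged Python sketch of that idea (an illustration of the general technique, not the repo's actual C++ implementation):

```python
def side_of_line(p, a, b):
    """Sign of the cross product (b-a) x (p-a):
    > 0 on one side of the line a-b, < 0 on the other, 0 exactly on it."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def crossed(prev, curr, a, b):
    """True when a centroid moved from one side of line a-b to the other."""
    s1, s2 = side_of_line(prev, a, b), side_of_line(curr, a, b)
    return s1 != 0 and s2 != 0 and (s1 > 0) != (s2 > 0)

# Line clicked at (0, 5) -> (10, 5); a centroid moving downward crosses it.
line_a, line_b = (0, 5), (10, 5)
print(crossed((4, 3), (4, 7), line_a, line_b))  # True
print(crossed((4, 3), (4, 4), line_a, line_b))  # False
```

For one-direction counting, only crossings with a particular sign change (e.g. negative to positive) would be counted; for two directions, both sign changes count separately.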
For video:
```
${root}/track_count/build/yolov8_track_count ${root}/best.engine video ${root}/src/test.mp4 1 show
```
#### Description of all arguments
- 1st argument: path to the compiled binary (built by `make`)
- 2nd argument: path to the engine file
- 3rd argument: `video` to run on a saved video
- 4th argument: path to the video file
- 5th argument: frame interval; if the inference capacity of the Jetson is more than 30 fps, put 1, otherwise put 2, 3, 4, ... depending on the inference capacity of the Jetson
- 6th argument: `show` to display the output or `save` to write it to disk
For camera:
```
${root}/track_count/build/yolov8_track_count ${root}/best.engine camera 1 show
```
#### Description of all arguments
- 1st argument: path to the compiled binary (built by `make`)
- 2nd argument: path to the engine file
- 3rd argument: `camera` to use the embedded camera
- 4th argument: frame interval; if the inference capacity of the Jetson is more than 30 fps, put 1, otherwise put 2, 3, 4, ... depending on the inference capacity of the Jetson
- 5th argument: `show` to display the output or `save` to write it to disk
#### Remark:
If you access the Jetson over SSH, you cannot see the first frame of the video to draw the line. In that case, follow these steps:
1. Add the `ssh` argument to the command.
For video:
```
${root}/track_count/build/yolov8_track_count ${root}/best.engine video ${root}/src/test.mp4 1 show ssh
```
For camera:
```
${root}/track_count/build/yolov8_track_count ${root}/best.engine camera 1 show ssh
```
2. The first frame of the video is saved to disk.
3. Copy this frame to your PC via SSH (in the PC's terminal):
```
scp jetson_name@jetson_server:path_to_frame_in_jetson/frame_for_line.jpg ${root}/utils/frame_for_line.jpg
```
4. On your PC, run the Python script `draw_line.py` to draw the line and get the points:
```
python3 ${root}/utils/draw_line.py
```
5. Enter the point values on the Jetson in the Jetson's terminal.
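The values handed to the Jetson in the last step are just the pixel coordinates of the clicked points. As an illustration of their shape, here is a hypothetical parser (`parse_points` is not part of this repo): two points define one counting line, four points define two.

```python
def parse_points(raw: str):
    """Parse whitespace-separated 'x y' pairs as printed by a draw-line tool.
    Expects 2 points (one counting line) or 4 points (two counting lines)."""
    vals = [int(v) for v in raw.split()]
    if len(vals) not in (4, 8):
        raise ValueError("expected 2 or 4 points (4 or 8 integers)")
    return [(vals[i], vals[i + 1]) for i in range(0, len(vals), 2)]

# One counting line from (120, 300) to (540, 300):
print(parse_points("120 300 540 300"))  # [(120, 300), (540, 300)]
```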