## Yolact-tensorflow2: YOLACT instance segmentation implemented in TensorFlow 2
---
## Required environment
`tensorflow-gpu==2.2.0`
## Training steps
### a. Train the shapes dataset
1. Data set preparation
Download the shapes dataset from GitHub (see the **File download** section), extract it, and put the images and their corresponding json files into the datasets/before folder under the root directory.
2. Data set processing
Open coco_annotation.py; its default parameters are set up for the shapes dataset. Running it directly generates the image files and label files under the datasets/coco folder and splits the data into training and test sets.
3. Start network training
The default parameters in train.py are set up for the shapes dataset and point to the dataset folder in the root directory, so you can start training by running train.py directly.
4. Training result prediction
Predicting with the trained weights requires two files: yolact.py and predict.py.
First, modify model_path and classes_path in yolact.py; both parameters must be changed.
**model_path points to the trained weight file in the logs folder, and classes_path points to the txt file listing the detection classes.**
Once the changes are complete, run predict.py and enter an image path to run detection.
### b. Train your own dataset
1. Data set preparation
**This repository uses the labelme tool for annotation. Annotation produces image files and json files; place both in the datasets/before folder. See the shapes dataset for the exact format.**
When labeling, note that different instances of the same class must be distinguished with an _ suffix.
For example, to train the network to detect **sinter and coke**, two sinter instances in the same image are labeled as:
```python
sinter_1
sinter_2
```
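The naming convention can be sketched with a small helper that recovers the class name from an instance label (a hypothetical illustration; the actual parsing lives in coco_annotation.py):

```python
def label_to_class(label):
    """Strip the trailing instance index from a labelme label,
    e.g. 'sinter_2' -> 'sinter'. Labels without a numeric
    suffix are returned unchanged."""
    base, sep, suffix = label.rpartition("_")
    if sep and suffix.isdigit():
        return base
    return label

print(label_to_class("sinter_1"))  # sinter
print(label_to_class("coke"))      # coke
```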
2. Data set processing
Modify the parameters in coco_annotation.py. For a first training run you only need to modify classes_path, which points to the txt file listing the detection classes.
When training on your own dataset, create your own cls_classes.txt listing the classes you want to distinguish, one per line.
For example, the model_data/cls_classes.txt file might contain:
```python
cat
dog
...
```
Set classes_path in coco_annotation.py to your cls_classes.txt, then run coco_annotation.py.
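A minimal reader for such a classes txt might look like this (a sketch; the repository's own utilities likely provide an equivalent helper, so treat the function name as an assumption):

```python
import os
import tempfile

def get_classes(classes_path):
    """Read one class name per line from a classes txt, skipping blank lines."""
    with open(classes_path, encoding="utf-8") as f:
        class_names = [line.strip() for line in f if line.strip()]
    return class_names, len(class_names)

# demo with a temporary file standing in for model_data/cls_classes.txt
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("cat\ndog\n")
    tmp_path = f.name
names, num_classes = get_classes(tmp_path)
os.remove(tmp_path)
print(names, num_classes)  # ['cat', 'dog'] 2
```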
3. Start network training
**There are many training parameters, all in train.py; read the comments carefully after downloading the repository. The most important one is still classes_path in train.py.**
**classes_path points to the txt file listing the detection classes, the same txt used by coco_annotation.py. It must be modified when training on your own dataset!**
After modifying classes_path, start training by running train.py. After several epochs, the weights are saved in the logs folder.
4. Training result prediction
Predicting with the trained weights requires two files: yolact.py and predict.py.
First, modify model_path and classes_path in yolact.py; both parameters must be changed.
**model_path points to the trained weight file in the logs folder, and classes_path points to the txt file listing the detection classes.**
Once the changes are complete, run predict.py and enter an image path to run detection.
### c. Train the COCO dataset
1. Data set preparation
COCO training set: http://images.cocodataset.org/zips/train2017.zip
COCO validation set: http://images.cocodataset.org/zips/val2017.zip
Labels for the COCO training and validation sets: http://images.cocodataset.org/annotations/annotations_trainval2017.zip
2. Start network training
After extracting the training set, the validation set, and their labels, open the train.py file and change classes_path to point to model_data/coco_classes.txt.
Change train_image_path to the training image folder, train_annotation_path to the training label file, val_image_path to the validation image folder, and val_annotation_path to the validation label file.
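Collected together, the edits above might look like this (the variable names come from the steps above; the paths are assumptions based on extracting the three COCO archives under datasets/coco, so adjust them to your layout):

```python
# Hypothetical train.py settings for COCO; adjust the paths to wherever you
# extracted train2017.zip, val2017.zip, and annotations_trainval2017.zip.
classes_path          = "model_data/coco_classes.txt"
train_image_path      = "datasets/coco/train2017"
train_annotation_path = "datasets/coco/annotations/instances_train2017.json"
val_image_path        = "datasets/coco/val2017"
val_annotation_path   = "datasets/coco/annotations/instances_val2017.json"
```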
3. Training result prediction
Predicting with the trained weights requires two files: yolact.py and predict.py.
First, modify model_path and classes_path in yolact.py; both parameters must be changed.
**model_path points to the trained weight file in the logs folder, and classes_path points to the txt file listing the detection classes.**
Once the changes are complete, run predict.py and enter an image path to run detection.
## Prediction steps
Run predict.py and enter an image path, for example:
```python
img/street.jpg
```
The settings in predict.py also support FPS testing and video detection.
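A typical mode switch in such a predict.py looks roughly like this (a hypothetical sketch; the variable names and accepted values in your copy may differ):

```python
# Hypothetical predict.py settings controlling the detection mode.
mode = "predict"        # "predict": single images, "video": video stream, "fps": speed test
video_path = 0          # 0 = webcam; or a path to a video file when mode == "video"
test_interval = 100     # number of forward passes averaged for the FPS test
```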
## Evaluation steps
### a. Evaluate your own dataset
1. This repository uses the coco format for evaluation.
2. If coco_annotation.py was run before training, the code has already split the dataset into training, validation, and test sets.
3. To change the proportion of the test set, modify trainval_percent in coco_annotation.py. trainval_percent is the ratio of (training set + validation set) to test set; by default, (training set + validation set) : test set = 9 : 1. train_percent is the ratio of training set to validation set within (training set + validation set); by default, training set : validation set = 9 : 1.
4. Modify model_path and classes_path in yolact.py. **model_path points to the trained weight file in the logs folder; classes_path points to the txt file listing the detection classes.**
5. Open eval.py and modify classes_path. It points to the txt file listing the detection classes, the same txt used for training; it must be modified when evaluating your own dataset. Run eval.py to get the evaluation results.
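The default ratios in step 3 can be worked through numerically; a sketch assuming 100 annotated images:

```python
# Split arithmetic with the default coco_annotation.py ratios.
num_images       = 100
trainval_percent = 0.9   # (train + val) : test = 9 : 1
train_percent    = 0.9   # train : val inside (train + val) = 9 : 1

num_trainval = int(num_images * trainval_percent)   # 90
num_test     = num_images - num_trainval            # 10
num_train    = int(num_trainval * train_percent)    # 81
num_val      = num_trainval - num_train             # 9
print(num_train, num_val, num_test)  # 81 9 10
```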
### b. Evaluate the COCO dataset
1. Download the COCO dataset.
2. Modify model_path and classes_path in yolact.py. **model_path points to the weights trained on the COCO dataset, in the logs folder; classes_path points to model_data/coco_classes.txt.**
3. Open eval.py and set classes_path to model_data/coco_classes.txt. Change Image_dir to the folder of images to evaluate and Json_path to their label file. Run eval.py to get the evaluation results.