# High-resolution networks (HRNets) for object detection
## Introduction
```
@inproceedings{SunXLW19,
title={Deep High-Resolution Representation Learning for Human Pose Estimation},
author={Ke Sun and Bin Xiao and Dong Liu and Jingdong Wang},
booktitle={CVPR},
year={2019}
}
@article{SunZJCXLMWLW19,
title={High-Resolution Representations for Labeling Pixels and Regions},
author={Ke Sun and Yang Zhao and Borui Jiang and Tianheng Cheng and Bin Xiao
and Dong Liu and Yadong Mu and Xinggang Wang and Wenyu Liu and Jingdong Wang},
journal = {CoRR},
volume = {abs/1904.04514},
year={2019}
}
```
## Results and Models
### Faster R-CNN
| Backbone | Style | Lr schd | box AP | Download |
| :-------------: | :-----: | :-----: | :----: | :-----------------: |
| HRNetV2p-W18 | pytorch | 1x | 36.1 | [model](https://open-mmlab.s3.ap-northeast-2.amazonaws.com/mmdetection/models/hrnet/faster_rcnn_hrnetv2p_w18_1x_20190522-e368c387.pth) |
| HRNetV2p-W18 | pytorch | 2x | 38.3 | [model](https://open-mmlab.s3.ap-northeast-2.amazonaws.com/mmdetection/models/hrnet/faster_rcnn_hrnetv2p_w18_2x_20190810-9c8615d5.pth) |
| HRNetV2p-W32 | pytorch | 1x | 39.5 | [model](https://open-mmlab.s3.ap-northeast-2.amazonaws.com/mmdetection/models/hrnet/faster_rcnn_hrnetv2p_w32_1x_20190522-d22f1fef.pth) |
| HRNetV2p-W32 | pytorch | 2x | 40.6 | [model](https://open-mmlab.s3.ap-northeast-2.amazonaws.com/mmdetection/models/hrnet/faster_rcnn_hrnetv2p_w32_2x_20190810-24e8912a.pth) |
| HRNetV2p-W48 | pytorch | 1x | 40.9 | [model](https://open-mmlab.s3.ap-northeast-2.amazonaws.com/mmdetection/models/hrnet/faster_rcnn_hrnetv2p_w48_1x_20190820-5c6d0903.pth) |
| HRNetV2p-W48 | pytorch | 2x | 41.5 | [model](https://open-mmlab.s3.ap-northeast-2.amazonaws.com/mmdetection/models/hrnet/faster_rcnn_hrnetv2p_w48_2x_20190820-79fb8bfc.pth) |
### Mask R-CNN
| Backbone | Style | Lr schd | box AP | mask AP | Download |
| :-------------: | :-----: | :-----: | :----: | :----: | :-----------------: |
| HRNetV2p-W18 | pytorch | 1x | 37.3 | 34.2 | [model](https://open-mmlab.s3.ap-northeast-2.amazonaws.com/mmdetection/models/hrnet/mask_rcnn_hrnetv2p_w18_1x_20190522-c8ad459f.pth) |
| HRNetV2p-W18 | pytorch | 2x | 39.2 | 35.7 | [model](https://open-mmlab.s3.ap-northeast-2.amazonaws.com/mmdetection/models/hrnet/mask_rcnn_hrnetv2p_w18_2x_20190810-1e4747eb.pth) |
| HRNetV2p-W32 | pytorch | 1x | 40.7 | 36.8 | [model](https://open-mmlab.s3.ap-northeast-2.amazonaws.com/mmdetection/models/hrnet/mask_rcnn_hrnetv2p_w32_1x_20190522-374aaa00.pth) |
| HRNetV2p-W32 | pytorch | 2x | 41.7 | 37.5 | [model](https://open-mmlab.s3.ap-northeast-2.amazonaws.com/mmdetection/models/hrnet/mask_rcnn_hrnetv2p_w32_2x_20190810-773eca75.pth) |
| HRNetV2p-W48 | pytorch | 1x | 42.4 | 38.1 | [model](https://open-mmlab.s3.ap-northeast-2.amazonaws.com/mmdetection/models/hrnet/mask_rcnn_hrnetv2p_w48_1x_20190820-0923d1ad.pth) |
| HRNetV2p-W48 | pytorch | 2x | 42.9 | 38.3 | [model](https://open-mmlab.s3.ap-northeast-2.amazonaws.com/mmdetection/models/hrnet/mask_rcnn_hrnetv2p_w48_2x_20190820-70df51b2.pth) |
### Cascade R-CNN
| Backbone | Style | Lr schd | box AP | Download |
| :-------------: | :-----: | :-----: | :----: | :-----------------: |
| HRNetV2p-W18 | pytorch | 20e | 41.2 | [model](https://open-mmlab.s3.ap-northeast-2.amazonaws.com/mmdetection/models/hrnet/cascade_rcnn_hrnetv2p_w18_20e_20190810-132012d0.pth)|
| HRNetV2p-W32 | pytorch | 20e | 43.7 | [model](https://open-mmlab.s3.ap-northeast-2.amazonaws.com/mmdetection/models/hrnet/cascade_rcnn_hrnetv2p_w32_20e_20190522-55bec4ee.pth)|
| HRNetV2p-W48 | pytorch | 20e | 44.6 | [model](https://open-mmlab.s3.ap-northeast-2.amazonaws.com/mmdetection/models/hrnet/cascade_rcnn_hrnetv2p_w48_20e_20190810-f40ed8e1.pth)|
### Cascade Mask R-CNN
| Backbone | Style | Lr schd | box AP | mask AP | Download |
| :-------------: | :-----: | :-----: | :----: | :----: | :-----------------: |
| HRNetV2p-W18 | pytorch | 20e | 41.9 | 36.4 | [model](https://open-mmlab.s3.ap-northeast-2.amazonaws.com/mmdetection/models/hrnet/cascade_mask_rcnn_hrnetv2p_w18_20e_20190810-054fb7bf.pth) |
| HRNetV2p-W32 | pytorch | 20e | 44.5 | 38.5 | [model](https://open-mmlab.s3.ap-northeast-2.amazonaws.com/mmdetection/models/hrnet/cascade_mask_rcnn_hrnetv2p_w32_20e_20190810-76f61cd0.pth) |
| HRNetV2p-W48 | pytorch | 20e | 46.0 | 39.5 | [model](https://open-mmlab.s3.ap-northeast-2.amazonaws.com/mmdetection/models/hrnet/cascade_mask_rcnn_hrnetv2p_w48_20e_20190810-d04a1415.pth) |
### Hybrid Task Cascade (HTC)
| Backbone | Style | Lr schd | box AP | mask AP | Download |
| :-------------: | :-----: | :-----: | :----: | :----: | :-----------------: |
| HRNetV2p-W18 | pytorch | 20e | 43.1 | 37.9 | [model](https://open-mmlab.s3.ap-northeast-2.amazonaws.com/mmdetection/models/hrnet/htc_hrnetv2p_w18_20e_20190810-d70072af.pth) |
| HRNetV2p-W32 | pytorch | 20e | 45.3 | 39.6 | [model](https://open-mmlab.s3.ap-northeast-2.amazonaws.com/mmdetection/models/hrnet/htc_hrnetv2p_w32_20e_20190810-82f9ef5a.pth) |
| HRNetV2p-W48 | pytorch | 20e | 46.8 | 40.7 | [model](https://open-mmlab.s3.ap-northeast-2.amazonaws.com/mmdetection/models/hrnet/htc_hrnetv2p_w48_20e_20190810-f6d2c3fd.pth) |
| HRNetV2p-W48 | pytorch | 28e | 47.0 | 41.0 | [model](https://open-mmlab.s3.ap-northeast-2.amazonaws.com/mmdetection/models/hrnet/htc_hrnetv2p_w48_28e_20190810-a4274b38.pth) |
| X-101-64x4d-FPN | pytorch | 28e | 46.8 | 40.7 | [model](https://open-mmlab.s3.ap-northeast-2.amazonaws.com/mmdetection/models/hrnet/htc_x101_64x4d_28e_20190810-d7c19dc0.pth) |
### FCOS
| Backbone | Style | GN | MS train | Lr schd | box AP | Download |
|:---------:|:-------:|:-------:|:--------:|:-------:|:------:|:--------:|
|HRNetV2p-W18| pytorch | Y | N | 1x | 35.2 | [model](https://open-mmlab.s3.ap-northeast-2.amazonaws.com/mmdetection/models/hrnet/fcos_hrnetv2p_w18_1x_20190810-87a17998.pth) |
|HRNetV2p-W18| pytorch | Y | N | 2x | 38.2 | [model](https://open-mmlab.s3.ap-northeast-2.amazonaws.com/mmdetection/models/hrnet/fcos_hrnetv2p_w18_2x_20190810-dfd60a7b.pth) |
|HRNetV2p-W32| pytorch | Y | N | 1x | 37.7 | [model](https://open-mmlab.s3.ap-northeast-2.amazonaws.com/mmdetection/models/hrnet/fcos_hrnetv2p_w32_1x_20190810-62014622.pth) |
|HRNetV2p-W32| pytorch | Y | N | 2x | 40.3 | [model](https://open-mmlab.s3.ap-northeast-2.amazonaws.com/mmdetection/models/hrnet/fcos_hrnetv2p_w32_2x_20190810-8e987ec1.pth) |
|HRNetV2p-W18| pytorch | Y | Y | 2x | 38.1 | [model](https://open-mmlab.s3.ap-northeast-2.amazonaws.com/mmdetection/models/hrnet/fcos_hrnetv2p_w18_mstrain_2x_20190810-eb846b2c.pth) |
|HRNetV2p-W32| pytorch | Y | Y | 2x | 41.4 | [model](https://open-mmlab.s3.ap-northeast-2.amazonaws.com/mmdetection/models/hrnet/fcos_hrnetv2p_w32_mstrain_2x_20190810-96127bf8.pth) |
|HRNetV2p-W48| pytorch | Y | Y | 2x | 42.9 | [model](https://open-mmlab.s3.ap-northeast-2.amazonaws.com/mmdetection/models/hrnet/fcos_hrnetv2p_w48_mstrain_2x_20190810-f7dc8801.pth) |
**Note:**
- The `28e` schedule for HTC decays the learning rate at epochs 24 and 27, for a total of 28 epochs.
- HRNetV2 ImageNet pretrained models are in [HRNets for Image Classification](https://github.com/HRNet/HRNet-Image-Classification).
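The `1x`/`2x`/`20e`/`28e` schedules above correspond to step learning-rate decay configs. As a minimal sketch, the `28e` HTC schedule described in the note might be expressed as an mmdetection-1.x-style config fragment like the following (the exact field names and warmup values are assumptions; check them against your mmdetection version):

```python
# Hypothetical mmdetection-1.x-style config fragment for the `28e` schedule:
# the learning rate is multiplied by 0.1 at epochs 24 and 27, and training
# runs for 28 epochs in total, matching the note above.
lr_config = dict(
    policy='step',        # step decay
    warmup='linear',      # linear warmup at the start of training (assumed)
    warmup_iters=500,     # assumed warmup length
    warmup_ratio=1.0 / 3, # assumed warmup start ratio
    step=[24, 27])        # decay points from the `28e` note
total_epochs = 28
```

The shorter `20e` schedule used by the Cascade variants follows the same pattern with earlier decay points and a smaller `total_epochs`.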