# RegNet
> [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678)
<!-- [BACKBONE] -->
## Abstract
In this work, we present a new network design paradigm. Our goal is to help advance the understanding of network design and discover design principles that generalize across settings. Instead of focusing on designing individual network instances, we design network design spaces that parametrize populations of networks. The overall process is analogous to classic manual design of networks, but elevated to the design space level. Using our methodology we explore the structure aspect of network design and arrive at a low-dimensional design space consisting of simple, regular networks that we call RegNet. The core insight of the RegNet parametrization is surprisingly simple: widths and depths of good networks can be explained by a quantized linear function. We analyze the RegNet design space and arrive at interesting findings that do not match the current practice of network design. The RegNet design space provides simple and fast networks that work well across a wide range of flop regimes. Under comparable training settings and flops, the RegNet models outperform the popular EfficientNet models while being up to 5x faster on GPUs.
<div align=center>
<img src="https://user-images.githubusercontent.com/40661020/143971942-da50f719-61e9-43bd-9468-0dbfbe80284e.png"/>
</div>
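The "quantized linear function" mentioned in the abstract can be sketched in a few lines: a linear ramp of per-block widths is snapped onto a geometric grid of powers of `wm` and rounded to multiples of 8. The parameter values below are illustrative (close to the RegNetX-200MF setting in pycls), not a definitive reproduction of the paper's code.

```python
import math

def regnet_widths(w0, wa, wm, depth, divisor=8):
    """Per-block widths from the quantized linear rule of the paper:
    u_j = w0 + wa * j, quantized onto the geometric grid w0 * wm**s."""
    widths = []
    for j in range(depth):
        u = w0 + wa * j                      # continuous linear width
        s = round(math.log(u / w0, wm))      # nearest power of wm
        w = w0 * wm ** s                     # snap to the geometric grid
        widths.append(int(round(w / divisor)) * divisor)  # multiple of 8
    return widths

# Illustrative parameters in the ballpark of RegNetX-200MF.
print(regnet_widths(w0=24, wa=36.44, wm=2.49, depth=13))
```

Consecutive blocks that land on the same grid value share a stage, which is why a handful of parameters determines both widths and depths.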
## Introduction
We implement RegNetX and RegNetY models in detection systems and provide their first results with Mask R-CNN, Faster R-CNN, and RetinaNet.
The pre-trained models are converted from the [model zoo of pycls](https://github.com/facebookresearch/pycls/blob/master/MODEL_ZOO.md).
## Usage
Using a RegNet model involves two steps:
1. Convert the model to the ResNet style supported by MMDetection
2. Modify the backbone and neck in the config accordingly
### Convert model
Our model zoo already provides models ranging from 400MF to 12GF.
For more general usage, we also provide the script `regnet2mmdet.py` in the tools directory, which converts the keys of models pretrained by [pycls](https://github.com/facebookresearch/pycls/) to
the ResNet-style checkpoints used in MMDetection.
```bash
python -u tools/model_converters/regnet2mmdet.py ${PRETRAIN_PATH} ${STORE_PATH}
```
This script converts a model from `PRETRAIN_PATH` and stores the converted model in `STORE_PATH`.
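Conceptually, the conversion renames pycls checkpoint keys to the ResNet-style names that MMDetection expects. The sketch below is a simplified illustration of that kind of renaming, not the actual logic of `regnet2mmdet.py`; the sample keys and patterns are illustrative assumptions.

```python
import re

def convert_key(pycls_key):
    """Illustrative renaming of a pycls RegNet checkpoint key to a
    ResNet-style key (the real regnet2mmdet.py handles more cases)."""
    key = pycls_key.replace('stem.conv', 'conv1')
    key = key.replace('stem.bn', 'bn1')
    # Map stage prefixes s1..s4 to layer1..layer4.
    key = re.sub(r'^s(\d)\.', lambda m: f'layer{m.group(1)}.', key)
    return key

# Hypothetical sample keys, for illustration only.
for k in ['stem.conv.weight', 'stem.bn.running_mean', 's1.b1.f.a.weight']:
    print(k, '->', convert_key(k))
```

The full state dict would be converted with a comprehension such as `{convert_key(k): v for k, v in state_dict.items()}` after loading the checkpoint.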
### Modify config
Users can modify the `depth` of the backbone and the corresponding keys in `arch` in the config according to the [pycls model zoo](https://github.com/facebookresearch/pycls/blob/master/MODEL_ZOO.md).
The `in_channels` parameter of the FPN can be found in Figures 15 & 16 of the paper (`wi` in the legend).
This directory already provides configs, with their performance, for RegNetX models from the 800MF to 12GF level.
For other pre-trained or self-implemented RegNet models, users are responsible for checking these parameters themselves.
**Note**: Although Figs. 15 & 16 also provide `w0`, `wa`, `wm`, `group_w`, and `bot_mul` for `arch`, these values are quantized and thus inaccurate; using them can produce a backbone whose keys do not match those in the pre-trained model.
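Taking RegNetX-3.2GF as an example, the backbone/neck portion of a config looks roughly like the following. The `arch` name and the stage widths `[96, 192, 432, 1008]` follow the configs shipped in this directory; treat this as a sketch and verify the values against your own checkpoint.

```python
# Sketch of the backbone/neck part of an MMDetection config for
# RegNetX-3.2GF. Stage widths are the w_i of the four stages.
model = dict(
    backbone=dict(
        type='RegNet',
        arch='regnetx_3.2gf',
        out_indices=(0, 1, 2, 3),
        frozen_stages=1,
        norm_cfg=dict(type='BN', requires_grad=True),
        norm_eval=True,
        style='pytorch',
        init_cfg=dict(
            type='Pretrained', checkpoint='open-mmlab://regnetx_3.2gf')),
    neck=dict(
        type='FPN',
        in_channels=[96, 192, 432, 1008],  # stage widths w_i
        out_channels=256,
        num_outs=5))
```

For a different RegNet variant, change `arch` and replace `in_channels` with that variant's stage widths.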
## Results and Models
### Mask R-CNN
| Backbone | Style | Lr schd | Mem (GB) | Inf time (fps) | box AP | mask AP | Config | Download |
| :----------------------------------------------------------------------------------: | :-----: | :-----: | :------: | :------------: | :----: | :-----: | :--------------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: |
| [R-50-FPN](../mask_rcnn/mask_rcnn_r50_fpn_1x_coco.py) | pytorch | 1x | 4.4 | 12.0 | 38.2 | 34.7 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/mask_rcnn/mask_rcnn_r50_fpn_1x_coco.py) | [model](https://download.openmmlab.com/mmdetection/v2.0/mask_rcnn/mask_rcnn_r50_fpn_1x_coco/mask_rcnn_r50_fpn_1x_coco_20200205-d4b0c5d6.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/mask_rcnn/mask_rcnn_r50_fpn_1x_coco/mask_rcnn_r50_fpn_1x_coco_20200205_050542.log.json) |
| [RegNetX-3.2GF-FPN](./mask_rcnn_regnetx-3.2GF_fpn_1x_coco.py) | pytorch | 1x | 5.0 | | 40.3 | 36.6 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/regnet/mask_rcnn_regnetx-3.2GF_fpn_1x_coco.py) | [model](https://download.openmmlab.com/mmdetection/v2.0/regnet/mask_rcnn_regnetx-3.2GF_fpn_1x_coco/mask_rcnn_regnetx-3.2GF_fpn_1x_coco_20200520_163141-2a9d1814.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/regnet/mask_rcnn_regnetx-3.2GF_fpn_1x_coco/mask_rcnn_regnetx-3.2GF_fpn_1x_coco_20200520_163141.log.json) |
| [RegNetX-4.0GF-FPN](./mask_rcnn_regnetx-4GF_fpn_1x_coco.py) | pytorch | 1x | 5.5 | | 41.5 | 37.4 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/regnet/mask_rcnn_regnetx-4GF_fpn_1x_coco.py) | [model](https://download.openmmlab.com/mmdetection/v2.0/regnet/mask_rcnn_regnetx-4GF_fpn_1x_coco/mask_rcnn_regnetx-4GF_fpn_1x_coco_20200517_180217-32e9c92d.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/regnet/mask_rcnn_regnetx-4GF_fpn_1x_coco/mask_rcnn_regnetx-4GF_fpn_1x_coco_20200517_180217.log.json) |
| [R-101-FPN](../mask_rcnn/mask_rcnn_r101_fpn_1x_coco.py) | pytorch | 1x | 6.4 | 10.3 | 40.0 | 36.1 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/mask_rcnn/mask_rcnn_r101_fpn_1x_coco.py) | [model](https://download.openmmlab.com/mmdetection/v2.0/mask_rcnn/mask_rcnn_r101_fpn_1x_coco/mask_rcnn_r101_fpn_1x_coco_20200204-1efe0ed5.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/mask_rcnn/mask_rcnn_r101_fpn_1x_coco/mask_rcnn_r101_fpn_1x_coco_20200204_144809.log.json) |
| [RegNetX-6.4GF-FPN](./mask_rcnn_regnetx-6.4GF_fpn_1x_coco.py) | pytorch | 1x | 6.1 | | 41.0 | 37.1 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/regnet/mask_rcnn_regnetx-6.4GF_fpn_1x_coco.py) | [model](https://download.openmmlab.com/mmdetection/v2.0/regnet/mask_rcnn_regnetx-6.4GF_fpn_1x_coco/mask_rcnn_regnetx-6.4GF_fpn_1x_coco_20200517_180439-3a7aae83.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/regnet/mask_rcnn_regnetx-6.4GF_fpn_1x_coco/mask_rcnn_regnetx-6.4GF_fpn_1x_coco_20200517_180439.log.json) |