# Deep Residual Learning for Image Recognition
<!-- {ResNet} -->
<!-- [ALGORITHM] -->
## Abstract
<!-- [ABSTRACT] -->
Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers.
The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
<!-- [IMAGE] -->
<div align=center>
<img src="https://user-images.githubusercontent.com/26739999/142574068-60cfdeea-c4ec-4c49-abb2-5dc2facafc3b.png" width="40%"/>
</div>
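The residual reformulation described in the abstract can be illustrated with a minimal sketch in plain Python (not the paper's actual convolutional layers): instead of learning a desired mapping `h(x)` directly, each block learns the residual `f(x) = h(x) - x`, and a shortcut connection adds the input back, so the block computes `f(x) + x`.

```python
def residual_block(x, f):
    """Residual unit: the block learns f (the residual function) and an
    identity shortcut adds the input back, so the output is f(x) + x
    rather than a directly learned mapping h(x)."""
    return [fi + xi for fi, xi in zip(f(x), x)]

# If the residual f is driven to zero, the block reduces to the identity
# mapping -- this is why extra layers in a very deep stack can default
# to "do nothing" instead of degrading the solution.
zero_residual = lambda x: [0.0] * len(x)
x = [1.0, -2.0, 3.0]
assert residual_block(x, zero_residual) == x
```

This is only a toy scalar version of the idea; in the actual networks `f` is a stack of convolution, batch-norm, and ReLU layers, and the addition is element-wise over feature maps.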
## Citation
```latex
@inproceedings{he2016deep,
  title={Deep residual learning for image recognition},
  author={He, Kaiming and Zhang, Xiangyu and Ren, Shaoqing and Sun, Jian},
  booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition},
  pages={770--778},
  year={2016}
}
```
## Results and models
### CIFAR-10

| Model | Params(M) | Flops(G) | Top-1 (%) | Top-5 (%) | Config | Download |
|:---------------------:|:---------:|:--------:|:---------:|:---------:|:---------:|:--------:|
| ResNet-18-b16x8 | 11.17 | 0.56 | 94.82 | 99.87 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/resnet/resnet18_8xb16_cifar10.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnet/resnet18_b16x8_cifar10_20210528-bd6371c8.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnet/resnet18_b16x8_cifar10_20210528-bd6371c8.log.json) |
| ResNet-34-b16x8 | 21.28 | 1.16 | 95.34 | 99.87 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/resnet/resnet34_8xb16_cifar10.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnet/resnet34_b16x8_cifar10_20210528-a8aa36a6.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnet/resnet34_b16x8_cifar10_20210528-a8aa36a6.log.json) |
| ResNet-50-b16x8 | 23.52 | 1.31 | 95.55 | 99.91 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/resnet/resnet50_8xb16_cifar10.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnet/resnet50_b16x8_cifar10_20210528-f54bfad9.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnet/resnet50_b16x8_cifar10_20210528-f54bfad9.log.json) |
| ResNet-101-b16x8 | 42.51 | 2.52 | 95.58 | 99.87 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/resnet/resnet101_8xb16_cifar10.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnet/resnet101_b16x8_cifar10_20210528-2d29e936.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnet/resnet101_b16x8_cifar10_20210528-2d29e936.log.json) |
| ResNet-152-b16x8 | 58.16 | 3.74 | 95.76 | 99.89 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/resnet/resnet152_8xb16_cifar10.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnet/resnet152_b16x8_cifar10_20210528-3e8e9178.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnet/resnet152_b16x8_cifar10_20210528-3e8e9178.log.json) |
### CIFAR-100

| Model | Params(M) | Flops(G) | Top-1 (%) | Top-5 (%) | Config | Download |
|:---------------------:|:---------:|:--------:|:---------:|:---------:|:---------:|:--------:|
| ResNet-50-b16x8 | 23.71 | 1.31 | 79.90 | 95.19 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/resnet/resnet50_8xb16_cifar100.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnet/resnet50_b16x8_cifar100_20210528-67b58a1b.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnet/resnet50_b16x8_cifar100_20210528-67b58a1b.log.json) |
### ImageNet-1k
| Model | Params(M) | Flops(G) | Top-1 (%) | Top-5 (%) | Config | Download |
|:---------------------:|:---------:|:--------:|:---------:|:---------:|:---------:|:--------:|
| ResNet-18 | 11.69 | 1.82 | 69.90 | 89.43 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/resnet/resnet18_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnet/resnet18_8xb32_in1k_20210831-fbbb1da6.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnet/resnet18_8xb32_in1k_20210831-fbbb1da6.log.json) |
| ResNet-34 | 21.8 | 3.68 | 73.62 | 91.59 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/resnet/resnet34_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnet/resnet34_8xb32_in1k_20210831-f257d4e6.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnet/resnet34_8xb32_in1k_20210831-f257d4e6.log.json) |
| ResNet-50 | 25.56 | 4.12 | 76.55 | 93.06 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/resnet/resnet50_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnet/resnet50_8xb32_in1k_20210831-ea4938fc.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnet/resnet50_8xb32_in1k_20210831-ea4938fc.log.json) |
| ResNet-101 | 44.55 | 7.85 | 77.97 | 94.06 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/resnet/resnet101_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnet/resnet101_8xb32_in1k_20210831-539c63f8.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnet/resnet101_8xb32_in1k_20210831-539c63f8.log.json) |
| ResNet-152 | 60.19 | 11.58 | 78.48 | 94.13 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/resnet/resnet152_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnet/resnet152_8xb32_in1k_20210901-4d7582fa.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnet/resnet152_8xb32_in1k_20210901-4d7582fa.log.json) |
| ResNetV1D-50 | 25.58 | 4.36 | 77.54 | 93.57 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/resnet/resnetv1d50_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnet/resnetv1d50_b32x8_imagenet_20210531-db14775a.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnet/resnetv1d50_b32x8_imagenet_20210531-db14775a.log.json) |
| ResNetV1D-101 | 44.57 | 8.09 | 78.93 | 94.48 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/resnet/resnetv1d101_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/resnet/resnetv1d101_b32x8_imagenet_20210531-6e13bcd3.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/resnet/resnetv1d101_b32x8_imagenet_20210531-6e13bcd3.log.json) |
| ResNetV1D-152 | 60.21 | 11.82 | 79.41 | 94.70 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/resnet/resnetv1d152_8xb32_in1k.py) | [model](https://dow
## How to use

MMClassification is an open-source image classification toolbox based on PyTorch that makes it easy to train, test, and deploy models. Classifying images with an MMClassification model involves several steps.

First, prepare the dataset. It should contain two top-level folders, `train` and `val`, holding the training and validation images respectively. Each class gets a subfolder named after it: with 10 classes, both `train` and `val` contain 10 subfolders, one per class.

Next, configure the training parameters. Choose a suitable configuration file from the `configs` folder, or create a new one. The configuration covers the network architecture, data augmentation, the optimizer, and related settings, as well as other training options such as the device type (GPU/CPU), the number of epochs, and the path for saved files.

With the dataset prepared and the parameters configured, training can begin. During training the model learns to extract features from the images and to assign each image to its class. After training, the validation set can be used to evaluate the model's performance.
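The folder layout described above can be sketched with a small helper. Note that `check_layout` is a hypothetical function written here for illustration; it is not part of MMClassification.

```python
import os
import tempfile

def check_layout(root, classes):
    """Return True if root has the train/val layout described above:
    each split contains one subfolder per class, named after the class."""
    for split in ("train", "val"):
        for cls in classes:
            if not os.path.isdir(os.path.join(root, split, cls)):
                return False
    return True

# Build a toy dataset tree with two classes and verify it.
root = tempfile.mkdtemp()
classes = ["cat", "dog"]
for split in ("train", "val"):
    for cls in classes:
        os.makedirs(os.path.join(root, split, cls))

assert check_layout(root, classes)
```

In practice the images themselves go inside each class subfolder, and the dataset root is then referenced from the config file's data settings.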