# FCOS: Fully Convolutional One-Stage Object Detection
This project hosts the code for implementing the FCOS algorithm for object detection, as presented in our paper:
FCOS: Fully Convolutional One-Stage Object Detection;
Zhi Tian, Chunhua Shen, Hao Chen, and Tong He;
In: Proc. Int. Conf. Computer Vision (ICCV), 2019.
arXiv preprint arXiv:1904.01355
The full paper is available at: [https://arxiv.org/abs/1904.01355](https://arxiv.org/abs/1904.01355).
## Highlights
- **Totally anchor-free:** FCOS completely avoids the complicated computation related to anchor boxes and all of their hyper-parameters (see the target-assignment sketch after this list).
- **Better performance:** The very simple one-stage detector achieves much better performance (38.7 vs. 36.8 in AP with ResNet-50) than Faster R-CNN. Check out more models and experimental results [here](#models).
- **Faster training and testing:** With the same hardware and a ResNet-50-FPN backbone, FCOS requires fewer training hours (6.5h vs. 8.8h) than Faster R-CNN, and its inference is 12ms faster per image (44ms vs. 56ms).
- **State-of-the-art performance:** Our best model based on ResNeXt-64x4d-101 and deformable convolutions achieves **49.0%** in AP on COCO test-dev (with multi-scale testing).
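To make the anchor-free design concrete, the following is a minimal sketch (illustrative only, not the repository's implementation; the function name and tensor layout are chosen for this example) of the per-location targets described in the paper: every location that falls inside a ground-truth box regresses its distances (l, t, r, b) to the four box sides, and a center-ness score down-weights locations far from the box center.
```
# Illustrative sketch of FCOS-style target assignment (not the repo's exact code).
import torch

def fcos_targets(locations, gt_box):
    """locations: (N, 2) tensor of (x, y) points on the input image.
    gt_box: (4,) tensor (x0, y0, x1, y1) of one ground-truth box."""
    x, y = locations[:, 0], locations[:, 1]
    l = x - gt_box[0]  # distance to the left side
    t = y - gt_box[1]  # distance to the top side
    r = gt_box[2] - x  # distance to the right side
    b = gt_box[3] - y  # distance to the bottom side
    reg_targets = torch.stack([l, t, r, b], dim=1)
    # A location is a positive sample only if it falls inside the box.
    is_positive = reg_targets.min(dim=1).values > 0
    # Center-ness from the paper; only meaningful for positive locations.
    lr, tb = torch.stack([l, r], dim=1), torch.stack([t, b], dim=1)
    centerness = torch.sqrt(
        (lr.min(dim=1).values / lr.max(dim=1).values)
        * (tb.min(dim=1).values / tb.max(dim=1).values)
    )
    return reg_targets, centerness, is_positive
```
In the full detector these targets are computed per FPN level, with each level handling a range of box sizes; the sketch omits that for brevity.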
## Updates
- Script for exporting [ONNX models](https://github.com/tianzhi0549/FCOS/tree/master/onnx). (21/11/2019)
- New NMS (see [#165](https://github.com/tianzhi0549/FCOS/pull/165)) speeds up ResNe(x)t based models by up to 30% and MobileNet based models by 40%, with exactly the same performance. Check out [here](#models). (12/10/2019)
- New models with much improved performance are released. The best model achieves **49%** in AP on COCO test-dev with multi-scale testing. (11/09/2019)
- FCOS with VoVNet backbones is available at [VoVNet-FCOS](https://github.com/vov-net/VoVNet-FCOS). (08/08/2019)
- A trick of using a small central region of the BBox for training improves AP by nearly 1 point [as shown here](https://github.com/yqyao/FCOS_PLUS). (23/07/2019)
- FCOS with HRNet backbones is available at [HRNet-FCOS](https://github.com/HRNet/HRNet-FCOS). (03/07/2019)
- FCOS with AutoML searched FPN (R50, R101, ResNeXt101 and MobileNetV2 backbones) is available at [NAS-FCOS](https://github.com/Lausannen/NAS-FCOS). (30/06/2019)
- FCOS has been implemented in [mmdetection](https://github.com/open-mmlab/mmdetection). Many thanks to [@yhcao6](https://github.com/yhcao6) and [@hellock](https://github.com/hellock). (17/05/2019)
## Required hardware
We train with 8 Nvidia V100 GPUs, but because FCOS is memory-efficient, a fully fledged ResNet-50-FPN based FCOS can also be trained on 4 1080Ti GPUs.
## Installation
#### Testing-only installation
If you only want to use FCOS as an object detector in your own project, you can install it with pip. To do so, run:
```
pip install torch # install pytorch if you do not have it
pip install git+https://github.com/tianzhi0549/FCOS.git
# run this command line for a demo
fcos https://github.com/tianzhi0549/FCOS/raw/master/demo/images/COCO_val2014_000000000885.jpg
```
Please check out [here](fcos/bin/fcos) for the interface usage.
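If you prefer to call FCOS from Python code rather than the command line, the flow is roughly as follows. Note that this is only a sketch: the class and argument names below are assumptions made for illustration, not a documented API; the interface linked above is authoritative.
```
# Illustrative only: the names below are assumptions, not a documented API.
# See fcos/bin/fcos (linked above) for the interface actually shipped with the package.
import cv2
from fcos import FCOS  # assumed import path of the pip-installed package

detector = FCOS(
    model_name="FCOS_imprv_R_50_FPN_1x",  # hypothetical identifier of a released model
    nms_thresh=0.6,                       # hypothetical NMS threshold argument
    cpu_only=True,                        # hypothetical flag; set False when a GPU is available
)

image = cv2.imread("COCO_val2014_000000000885.jpg")  # BGR image, as used by the demo
detections = detector.detect(image)  # assumed to return boxes, labels and scores
print(detections)
```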
#### For a complete installation
This FCOS implementation is based on [maskrcnn-benchmark](https://github.com/facebookresearch/maskrcnn-benchmark), so the installation is the same as for the original maskrcnn-benchmark.
Please check [INSTALL.md](INSTALL.md) for installation instructions.
You may also want to see the original [README.md](MASKRCNN_README.md) of maskrcnn-benchmark.
## A quick demo
Once the installation is done, you can follow the steps below to run a quick demo.
```
# assume that you are under the root directory of this project,
# and you have activated your virtual environment if needed.
wget https://cloudstor.aarnet.edu.au/plus/s/ZSAqNJB96hA71Yf/download -O FCOS_imprv_R_50_FPN_1x.pth
python demo/fcos_demo.py
```
## Inference
The following command runs inference on the COCO minival split:
```
python tools/test_net.py \
    --config-file configs/fcos/fcos_imprv_R_50_FPN_1x.yaml \
    MODEL.WEIGHT FCOS_imprv_R_50_FPN_1x.pth \
    TEST.IMS_PER_BATCH 4
```
Please note that:
1) If your model's name is different, please replace `FCOS_imprv_R_50_FPN_1x.pth` with your own.
2) If you encounter an out-of-memory error, please try to reduce `TEST.IMS_PER_BATCH` to 1.
3) If you want to evaluate a different model, please change `--config-file` to its config file (in [configs/fcos](configs/fcos)) and `MODEL.WEIGHT` to its weights file.
4) Multi-GPU inference is available; please refer to [#78](https://github.com/tianzhi0549/FCOS/issues/78#issuecomment-526990989) (a sketch of the launch command is shown after this list).
5) We improved post-processing efficiency by using multi-label NMS (see [#165](https://github.com/tianzhi0549/FCOS/pull/165)), which saves 18ms on average. The inference times in the tables below have been updated accordingly.
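For item 4, multi-GPU inference in maskrcnn-benchmark-based code bases is typically launched through `torch.distributed.launch`. The command below is only a sketch (the GPU count and batch size are placeholders); [#78](https://github.com/tianzhi0549/FCOS/issues/78#issuecomment-526990989) remains the authoritative reference.
```
python -m torch.distributed.launch --nproc_per_node=8 \
    tools/test_net.py \
    --config-file configs/fcos/fcos_imprv_R_50_FPN_1x.yaml \
    MODEL.WEIGHT FCOS_imprv_R_50_FPN_1x.pth \
    TEST.IMS_PER_BATCH 8
```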
## Models
For your convenience, we provide the following trained models (more models are coming soon).
**ResNe(x)ts:**
*All ResNe(x)t based models are trained with 16 images in a mini-batch and frozen batch normalization (i.e., consistent with models in [maskrcnn_benchmark](https://github.com/facebookresearch/maskrcnn-benchmark)).*
Model | Multi-scale training | Testing time / im | AP (minival) | Link
--- |:---:|:---:|:---:|:---:
FCOS_imprv_R_50_FPN_1x | No | 44ms | 38.7 | [download](https://cloudstor.aarnet.edu.au/plus/s/ZSAqNJB96hA71Yf/download)
FCOS_imprv_dcnv2_R_50_FPN_1x | No | 54ms | 42.3 | [download](https://cloudstor.aarnet.edu.au/plus/s/plKgHuykjiilzWr/download)
FCOS_imprv_R_101_FPN_2x | Yes | 57ms | 43.0 | [download](https://cloudstor.aarnet.edu.au/plus/s/hTeMuRa4pwtCemq/download)
FCOS_imprv_dcnv2_R_101_FPN_2x | Yes | 73ms | 45.6 | [download](https://cloudstor.aarnet.edu.au/plus/s/xq2Ll7s0hpaQycO/download)
FCOS_imprv_X_101_32x8d_FPN_2x | Yes | 110ms | 44.0 | [download](https://cloudstor.aarnet.edu.au/plus/s/WZ0i7RZW5BRpJu6/download)
FCOS_imprv_dcnv2_X_101_32x8d_FPN_2x | Yes | 143ms | 46.4 | [download](https://cloudstor.aarnet.edu.au/plus/s/08UK0OP67TogLCU/download)
FCOS_imprv_X_101_64x4d_FPN_2x | Yes | 112ms | 44.7 | [download](https://cloudstor.aarnet.edu.au/plus/s/rKOJtwvJwcKVOz8/download)
FCOS_imprv_dcnv2_X_101_64x4d_FPN_2x | Yes | 144ms | 46.6 | [download](https://cloudstor.aarnet.edu.au/plus/s/jdtVmG7MlugEXB7/download)
*Note that `imprv` denotes the `improvements` in Table 3 of our paper. These almost cost-free changes improve the performance by ~1.5% in total, so we highly recommend using them. The following are the original models presented in our initial paper.*
Model | Multi-scale training | Testing time / im | AP (minival) | AP (test-dev) | Link
--- |:---:|:---:|:---:|:---:|:---:
FCOS_R_50_FPN_1x | No | 45ms | 37.1 | 37.4 | [download](https://cloudstor.aarnet.edu.au/plus/s/dDeDPBLEAt19Xrl/download)
FCOS_R_101_FPN_2x | Yes | 59ms | 41.4 | 41.5 | [download](https://cloudstor.aarnet.edu.au/plus/s/vjL3L0AW7vnhRTo/download)
FCOS_X_101_32x8d_FPN_2x | Yes | 110ms | 42.5 | 42.7 | [download](https://cloudstor.aarnet.edu.au/plus/s/U5myBfGF7MviZ97/download)
FCOS_X_101_64x4d_FPN_2x | Yes | 113ms | 43.0 | 43.2 | [download](https://cloudstor.aarnet.edu.au/plus/s/wpwoCi4S8iajFi9/download)
**MobileNets:**
*Batch normalization statistics are updated (not frozen) for MobileNet-based models. If you want to use SyncBN, please install PyTorch 1.1 or later.*
Model | Training batch size | Multi-scale training | Testing time / im | AP (minival) | Link
--- |:---:|:---:|:---:|:---:|:---:
FCOS_syncbn_bs32_c128_MNV2_FPN_1x | 32 | No | 26ms | 30.9 | [download](https://cloudstor.aarnet.edu.au/plus/s/3GKwaxZhDSOlCZ0/download)
FCOS_syncbn_bs32_MNV2_FPN_1x | 32 | No | 33ms | 33.1 | [download](https://cloudstor.aarnet.edu.au/plus/s/OpJtCJLj104i2Yc/download)
FCOS_bn_bs16_MNV2_FPN_1x | 16 | No | 44ms | 31.0 | [download](https://cloudstor.aarnet.edu.au/plus/s/B6BrLAiAEAYQkcy/download)
[1] *1x and 2x mean the model is trained for 90K and 180K iterations, respectively.*