![Swin Transformer](./images/swin_transformer.png)
## Swin Transformer - PyTorch
Implementation of the [Swin Transformer](https://arxiv.org/pdf/2103.14030.pdf) architecture. From the paper's abstract: "This paper presents a new vision Transformer, called Swin Transformer, that capably serves as a general-purpose backbone for computer vision. Challenges in adapting Transformer from language to vision arise from differences between the two domains, such as large variations in the scale of visual entities and the high resolution of pixels in images compared to words in text. To address these differences, we propose a hierarchical Transformer whose representation is computed with shifted windows. The shifted windowing scheme brings greater efficiency by limiting self-attention computation to non-overlapping local windows while also allowing for cross-window connection. This hierarchical architecture has the flexibility to model at various scales and has linear computational complexity with respect to image size. These qualities of Swin Transformer make it compatible with a broad range of vision tasks, including image classification (86.4 top-1 accuracy on ImageNet-1K) and dense prediction tasks such as object detection (58.7 box AP and 51.1 mask AP on COCO test-dev) and semantic segmentation (53.5 mIoU on ADE20K val). Its performance surpasses the previous state-of-the-art by a large margin of +2.7 box AP and +2.6 mask AP on COCO, and +3.2 mIoU on ADE20K, demonstrating the potential of Transformer-based models as vision backbones."
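The key idea behind the linear complexity is that self-attention is computed only inside non-overlapping M×M windows rather than over the full feature map. A minimal sketch of that window partitioning (an illustrative standalone function, not this repository's implementation) looks like:

```python
import torch

def window_partition(x, window_size):
    """Split a (B, H, W, C) feature map into non-overlapping windows.

    Returns (num_windows * B, window_size * window_size, C), so attention
    over the second dimension only ever mixes tokens within one window.
    A shifted window layer would first cyclically shift the map, e.g.
    torch.roll(x, shifts=(-window_size // 2, -window_size // 2), dims=(1, 2)).
    """
    B, H, W, C = x.shape
    x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
    # Group the two window-grid axes together, then flatten each window.
    windows = x.permute(0, 1, 3, 2, 4, 5).reshape(-1, window_size * window_size, C)
    return windows

# Stage-1 resolution for a 224x224 input with a 4x patch embedding:
x = torch.randn(1, 56, 56, 96)
w = window_partition(x, 7)
print(w.shape)  # torch.Size([64, 49, 96]) -- 8x8 windows of 49 tokens each
```

Because each window holds a fixed number of tokens (M² = 49 here), attention cost grows linearly with the number of windows, i.e. linearly with image area.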
### Install
```bash
$ pip install -r requirements.txt
```
### Usage
```python
import torch
from swin_transformer_pytorch import SwinTransformer
net = SwinTransformer(
    hidden_dim=96,
    layers=(2, 2, 6, 2),
    heads=(3, 6, 12, 24),
    channels=3,
    num_classes=3,
    head_dim=32,
    window_size=7,
    downscaling_factors=(4, 2, 2, 2),
    relative_pos_embedding=True
)
dummy_x = torch.randn(1, 3, 224, 224)
logits = net(dummy_x)  # (1, 3)
print(net)
print(logits)
```
### Parameters
- `hidden_dim`: int.
Hidden dimension of the architecture, denoted C in the original paper.
- `layers`: 4-tuple of ints divisible by 2.
Number of layers in each stage. Each entry must be even because a regular and a shifted SwinBlock are always applied as a pair.
- `heads`: 4-tuple of ints.
Number of attention heads in each stage.
- `channels`: int.
Number of channels of the input.
- `num_classes`: int.
Number of classes in the output.
- `head_dim`: int.
Dimension of each attention head.
- `window_size`: int.
Window size to use. Make sure that after each downscaling stage the feature-map dimensions are still divisible by the window size.
- `downscaling_factors`: 4-tuple of ints.
Downscaling factor for each stage. Make sure the image dimensions are large enough for the given downscaling factors.
- `relative_pos_embedding`: bool.
Whether to use learnable relative position embeddings of size (2M−1)×(2M−1) or full positional embeddings of size M²×M², where M is the window size.
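The `window_size` and `downscaling_factors` constraints above can be checked before building the model. A quick sanity-check sketch (a hypothetical helper, not part of this package):

```python
def check_config(image_size, downscaling_factors, window_size):
    """Verify that the feature map stays divisible at every stage.

    After each stage the spatial size shrinks by that stage's factor;
    it must remain divisible both by the next factor and by window_size.
    Hypothetical helper for illustration only.
    """
    size = image_size
    for i, factor in enumerate(downscaling_factors):
        if size % factor != 0:
            raise ValueError(f"stage {i}: size {size} not divisible by factor {factor}")
        size //= factor
        if size % window_size != 0:
            raise ValueError(f"stage {i}: size {size} not divisible by window_size {window_size}")
    return True

# Default config from the usage example: 224 -> 56 -> 28 -> 14 -> 7,
# and every intermediate size is divisible by window_size=7.
check_config(224, (4, 2, 2, 2), 7)
```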