# QPyTorch
[![Downloads](https://pepy.tech/badge/qtorch)](https://pepy.tech/project/qtorch) [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
#### News:
- Updated to version 0.3.0:
  - Subnormal numbers are now supported (#43). Thanks @danielholanda for his contribution!
- Updated to version 0.2.0:
  - **Bug fixed**: previously, in our floating point quantization, numbers closer to 0 than the smallest
    representable positive number were always rounded to that smallest representable positive number. We now round
    to 0 or to the smallest representable number, whichever is nearer.
  - **Different behavior**: to be consistent with PyTorch [Issue #17443](https://github.com/pytorch/pytorch/pull/17443),
    we now round to nearest even.
  - We migrated to PyTorch 1.5.0, whose C++ API has several changes.
    This new version is not backward-compatible with older PyTorch releases.
  - *Note*: if you are using CUDA 10.1, please install CUDA 10.1 Update 1 (or a later version). A bug in
    the first release of CUDA 10.1 leads to compilation errors.
  - *Note*: previous users, please remove the cache in the PyTorch extension directory.
    For example, run `rm -rf /tmp/torch_extensions/quant_cuda /tmp/torch_extensions/quant_cpu` if
    you use the default directory for PyTorch extensions.
# Overview
QPyTorch is a low-precision arithmetic simulation package in
PyTorch. It is designed to support research on low-precision machine
learning, especially research on low-precision training.
A more comprehensive write-up can be found [here](https://arxiv.org/abs/1910.04540).
Notably, QPyTorch supports quantizing different numbers in the training process
with customized low-precision formats. This eases the process of investigating
different precision settings and developing new deep learning architectures. More
concretely, QPyTorch implements fused kernels for quantization and integrates
smoothly with existing PyTorch kernels (e.g. matrix multiplication, convolution).
Recent research can be reimplemented easily with QPyTorch. We offer an
example replication of [WAGE](https://arxiv.org/abs/1802.04680) in a downstream
repo [WAGE](https://github.com/Tiiiger/QPyTorch/blob/master/examples/WAGE). We also provide a list
of working examples under [Examples](#examples).
*Note*: QPyTorch relies on PyTorch functions for the underlying computation,
such as matrix multiplication. This means that the actual computation is done in
single precision. Therefore, QPyTorch is not intended to be used to study the
numerical behavior of different **accumulation** strategies.
*Note*: QPyTorch, as of now, has a different rounding mode from PyTorch: QPyTorch rounds away from zero, while
PyTorch rounds to nearest even. This creates a discrepancy between a PyTorch half-precision tensor
and QPyTorch's simulation of half-precision numbers.
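The difference between the two tie-breaking rules can be illustrated with a small, self-contained sketch (plain Python, not QPyTorch's actual kernels):

```python
import math

def fixed_point_round(x, frac_bits, mode="nearest_even"):
    """Round x onto a grid with spacing 2**-frac_bits.

    Tie-breaking for values exactly halfway between grid points:
    - "nearest_even": round half to even (PyTorch's convention)
    - "away_from_zero": round half away from zero
    """
    scaled = x * 2 ** frac_bits
    if mode == "nearest_even":
        q = round(scaled)  # Python's round() already rounds half to even
    else:
        q = int(math.copysign(math.floor(abs(scaled) + 0.5), scaled))
    return q / 2 ** frac_bits

# 0.25 is exactly halfway between 0.0 and 0.5 on a 1-fractional-bit grid:
print(fixed_point_round(0.25, 1, "nearest_even"))    # 0.0
print(fixed_point_round(0.25, 1, "away_from_zero"))  # 0.5
```

Only values that land exactly on a tie differ between the two modes, which is why the discrepancy with PyTorch's half-precision tensors is subtle but real.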
If you find this repo useful, please cite:
```bibtex
@misc{zhang2019qpytorch,
title={QPyTorch: A Low-Precision Arithmetic Simulation Framework},
author={Tianyi Zhang and Zhiqiu Lin and Guandao Yang and Christopher De Sa},
year={2019},
eprint={1910.04540},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
## Installation
Requirements:
- Python >= 3.6
- PyTorch >= 1.5.0
- GCC >= 4.9 (on Linux)
- CUDA >= 10.1 (on Linux)
Install the other requirements with:
```bash
pip install -r requirements.txt
```
Install QPyTorch through pip:
```bash
pip install qtorch
```
For more details about compiler requirements,
please refer to [PyTorch extension tutorial](https://pytorch.org/tutorials/advanced/cpp_extension.html).
## Documentation
See our [readthedocs](https://qpytorch.readthedocs.io/en/latest/) page.
## Tutorials
- [An overview of QPyTorch's features](https://github.com/Tiiiger/QPyTorch/blob/master/examples/tutorial/Functionality_Overview.ipynb)
- [CIFAR-10 Low-Precision Training Tutorial](https://github.com/Tiiiger/QPyTorch/blob/master/examples/tutorial/CIFAR10_Low_Precision_Training_Example.ipynb)
## Examples
- Low-Precision VGGs and ResNets using fixed point, block floating point on CIFAR and ImageNet. [lp_train](https://github.com/Tiiiger/QPyTorch/blob/master/examples/lp_train)
- Reproduction of WAGE in QPyTorch. [WAGE](https://github.com/Tiiiger/QPyTorch/blob/master/examples/WAGE)
- Implementation (simulation) of 8-bit Floating Point Training in QPyTorch. [IBM8](https://github.com/Tiiiger/QPyTorch/blob/master/examples/IBM8)
## Team
* [Tianyi Zhang](https://scholar.google.com/citations?user=OI0HSa0AAAAJ&hl=en)
* Zhiqiu Lin
* [Guandao Yang](http://www.guandaoyang.com/)
* [Christopher De Sa](http://www.cs.cornell.edu/~cdesa/)
## Other Contributors
* [Daniel Holanda Noronha](https://www.linkedin.com/in/danielholandanoronha/)