<div align="center">
[![logo](https://raw.githubusercontent.com/tinygrad/tinygrad/master/docs/logo.png)](https://tinygrad.org)
tinygrad: For something between [PyTorch](https://github.com/pytorch/pytorch) and [karpathy/micrograd](https://github.com/karpathy/micrograd). Maintained by [tiny corp](https://tinygrad.org).
<h3>
[Homepage](https://github.com/tinygrad/tinygrad) | [Documentation](/docs) | [Examples](/examples) | [Showcase](/docs/showcase.md) | [Discord](https://discord.gg/ZjZadyC7PK)
</h3>
[![GitHub Repo stars](https://img.shields.io/github/stars/tinygrad/tinygrad)](https://github.com/tinygrad/tinygrad/stargazers)
[![Unit Tests](https://github.com/tinygrad/tinygrad/actions/workflows/test.yml/badge.svg)](https://github.com/tinygrad/tinygrad/actions/workflows/test.yml)
[![Discord](https://img.shields.io/discord/1068976834382925865)](https://discord.gg/ZjZadyC7PK)
</div>
---
This may not be the best deep learning framework, but it is a deep learning framework.
Due to its extreme simplicity, it aims to be the easiest framework to add new accelerators to, with support for both inference and training. If XLA is CISC, tinygrad is RISC.
tinygrad is still alpha software, but we [raised some money](https://geohot.github.io/blog/jekyll/update/2023/05/24/the-tiny-corp-raised-5M.html) to make it good. Someday, we will tape out chips.
## Features
### LLaMA and Stable Diffusion
tinygrad can run [LLaMA](/docs/showcase.md#llama) and [Stable Diffusion](/docs/showcase.md#stable-diffusion)!
### Laziness
Try a matmul. See how, despite the style, it is fused into one kernel with the power of laziness.
```sh
DEBUG=3 python3 -c "from tinygrad.tensor import Tensor;
N = 1024; a, b = Tensor.rand(N, N), Tensor.rand(N, N);
c = (a.reshape(N, 1, N) * b.permute(1,0).reshape(1, N, N)).sum(axis=2);
print((c.numpy() - (a.numpy() @ b.numpy())).mean())"
```
And we can change `DEBUG` to `4` to see the generated code.
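The one-liner above computes a matmul without ever calling `matmul`: it broadcasts `a` against the transposed `b` and reduces over the shared axis, and laziness lets tinygrad fuse the whole expression into a single kernel. The same identity, written in plain NumPy purely to illustrate what the expression computes (this is not tinygrad code):

```python
import numpy as np

N = 4
a, b = np.random.rand(N, N), np.random.rand(N, N)

# (N, 1, N) * (1, N, N) broadcasts to (N, N, N); summing over the last
# axis contracts the shared dimension -- exactly a matrix multiply.
c = (a.reshape(N, 1, N) * b.T.reshape(1, N, N)).sum(axis=2)

print(np.abs(c - a @ b).max())  # ~0, up to floating point error
```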
### Neural networks
As it turns out, 90% of what you need for neural networks is a decent autograd/tensor library.
Throw in an optimizer, a data loader, and some compute, and you have all you need.
#### Neural network example (from test/models/test_mnist.py)
```py
from tinygrad.tensor import Tensor
import tinygrad.nn.optim as optim

class TinyBobNet:
  def __init__(self):
    self.l1 = Tensor.uniform(784, 128)
    self.l2 = Tensor.uniform(128, 10)

  def forward(self, x):
    return x.dot(self.l1).relu().dot(self.l2).log_softmax()

model = TinyBobNet()
opt = optim.SGD([model.l1, model.l2], lr=0.001)

# ... complete data loader here

out = model.forward(x)
loss = out.mul(y).mean()
opt.zero_grad()
loss.backward()
opt.step()
```
## Accelerators
tinygrad already supports numerous accelerators, including:
- [x] [CPU](tinygrad/runtime/ops_cpu.py)
- [x] [GPU (OpenCL)](tinygrad/runtime/ops_gpu.py)
- [x] [C Code (Clang)](tinygrad/runtime/ops_clang.py)
- [x] [LLVM](tinygrad/runtime/ops_llvm.py)
- [x] [METAL](tinygrad/runtime/ops_metal.py)
- [x] [CUDA](tinygrad/runtime/ops_cuda.py)
- [x] [Triton](extra/accel/triton/ops_triton.py)
- [x] [PyTorch](tinygrad/runtime/ops_torch.py)
- [x] [HIP](tinygrad/runtime/ops_hip.py)
- [x] [WebGPU](tinygrad/runtime/ops_webgpu.py)
And it is easy to add more! Your accelerator of choice only needs to support a total of 26 (optionally 27) low-level ops.
More information can be found in the [documentation for adding new accelerators](/docs/adding_new_accelerators.md).
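Those low-level ops fall into a handful of families: elementwise unary and binary ops, reductions, and movement ops that only reshuffle how data is viewed. The sketch below is an illustrative grouping with hypothetical member lists; the authoritative op set lives in tinygrad's ops module and the documentation linked above.

```python
from enum import Enum, auto

# Illustrative only -- names and grouping approximate tinygrad's op families.
class UnaryOps(Enum):    # elementwise, one input
  EXP2 = auto(); LOG2 = auto(); SIN = auto(); SQRT = auto()

class BinaryOps(Enum):   # elementwise, two inputs
  ADD = auto(); SUB = auto(); MUL = auto(); DIV = auto(); MAX = auto()

class ReduceOps(Enum):   # collapse one or more axes
  SUM = auto(); MAX = auto()

class MovementOps(Enum): # change the view of the data, no compute
  RESHAPE = auto(); PERMUTE = auto(); EXPAND = auto(); PAD = auto(); SHRINK = auto()
```

A backend that can execute each family over its buffers gets the whole frontend (autograd, laziness, kernel fusion) for free.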
## Installation
The current recommended way to install tinygrad is from source.
### From source
```sh
git clone https://github.com/tinygrad/tinygrad.git
cd tinygrad
python3 -m pip install -e .
```
Don't forget the `.` at the end!
## Documentation
Documentation along with a quick start guide can be found in the [docs/](/docs) directory.
### Quick example comparing to PyTorch
```py
from tinygrad.tensor import Tensor
x = Tensor.eye(3, requires_grad=True)
y = Tensor([[2.0,0,-2.0]], requires_grad=True)
z = y.matmul(x).sum()
z.backward()
print(x.grad.numpy()) # dz/dx
print(y.grad.numpy()) # dz/dy
```
The same thing but in PyTorch:
```py
import torch
x = torch.eye(3, requires_grad=True)
y = torch.tensor([[2.0,0,-2.0]], requires_grad=True)
z = y.matmul(x).sum()
z.backward()
print(x.grad.numpy()) # dz/dx
print(y.grad.numpy()) # dz/dy
```
## Contributing
There has been a lot of interest in tinygrad lately. Here are some basic guidelines for contributing:
- Bug fixes are the best and always welcome! Like [this one](https://github.com/tinygrad/tinygrad/pull/421/files).
- If you don't understand the code you are changing, don't change it!
- All code golf PRs will be closed, but [conceptual cleanups](https://github.com/tinygrad/tinygrad/pull/372/files) are great.
- Features are welcome. Though if you are adding a feature, you need to include tests.
- Improving test coverage is great, with reliable non-brittle tests.
Additional guidelines can be found in [CONTRIBUTING.md](/CONTRIBUTING.md).
### Running tests
For more examples of how to run the full test suite, please refer to the [CI workflow](.github/workflows/test.yml).
Some examples:
```sh
python3 -m pip install -e '.[testing]'
python3 -m pytest
python3 -m pytest -v -k TestTrain
python3 ./test/models/test_train.py TestTrain.test_efficientnet
```