# Jittor: a Just-in-time (JIT) deep learning framework
[Quickstart](#quickstart) | [Install](#install) | [Tutorial](#tutorial) | [Chinese](./README.cn.md)
Jittor is a high-performance deep learning framework based on JIT compiling and meta-operators. The whole framework and the meta-operators are compiled just-in-time. A powerful op compiler and tuner are integrated into Jittor, allowing it to generate high-performance code specialized for your model. Jittor also contains a wealth of high-performance model libraries, covering image recognition, detection, segmentation, generation, differentiable rendering, geometric learning, reinforcement learning, and more.
The front-end language is Python. The front-end uses module design and dynamic graph execution, the most popular interface design for deep learning frameworks. The back-end is implemented in high-performance languages such as CUDA and C++.
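To illustrate the meta-operator idea, a complex operation such as matrix multiplication can be decomposed into simpler primitives: broadcast, elementwise multiply, and reduce. A minimal NumPy sketch of that decomposition (the function name `matmul_via_meta_ops` is ours, for illustration only):

```python
import numpy as np

def matmul_via_meta_ops(a, b):
    # (n, m) x (m, k): broadcast both operands to (n, m, k),
    # multiply elementwise, then reduce (sum) over the shared m axis.
    n, m = a.shape
    m2, k = b.shape
    assert m == m2
    a3 = a[:, :, None]   # (n, m, 1), broadcasts to (n, m, k)
    b3 = b[None, :, :]   # (1, m, k), broadcasts to (n, m, k)
    return (a3 * b3).sum(axis=1)

a = np.random.rand(3, 4)
b = np.random.rand(4, 5)
assert np.allclose(matmul_via_meta_ops(a, b), a @ b)
```

Jittor's op compiler fuses and optimizes such meta-operator compositions at runtime instead of executing them eagerly as NumPy does.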
Related Links:
* [Jittor Website](https://cg.cs.tsinghua.edu.cn/jittor/)
* [Jittor Tutorials](https://cg.cs.tsinghua.edu.cn/jittor/tutorial/)
* [Jittor Models](https://cg.cs.tsinghua.edu.cn/jittor/resources/)
* [Jittor Documents](https://cg.cs.tsinghua.edu.cn/jittor/assets/docs/index.html)
* [Github](https://github.com/jittor/jittor), [Gitee](https://gitee.com/jittor/jittor)
The following example shows how to build a two-layer neural network step by step and train it from scratch in a few lines of Python code.
```python
import jittor as jt
from jittor import Module
from jittor import nn
import numpy as np
class Model(Module):
    def __init__(self):
        self.layer1 = nn.Linear(1, 10)
        self.relu = nn.Relu()
        self.layer2 = nn.Linear(10, 1)
    def execute(self, x):
        x = self.layer1(x)
        x = self.relu(x)
        x = self.layer2(x)
        return x

def get_data(n): # generate random data for training test
    for i in range(n):
        x = np.random.rand(batch_size, 1)
        y = x*x
        yield jt.float32(x), jt.float32(y)

learning_rate = 0.1
batch_size = 50
n = 1000

model = Model()
optim = nn.SGD(model.parameters(), learning_rate)

for i,(x,y) in enumerate(get_data(n)):
    pred_y = model(x)
    dy = pred_y - y
    loss = dy * dy
    loss_mean = loss.mean()
    optim.step(loss_mean)
    print(f"step {i}, loss = {loss_mean.data.sum()}")
```
## Contents
* [Quickstart](#quickstart)
* [Install](#install)
* [Tutorial](#tutorial)
* [Contributing](#contributing)
* [The Team](#theteam)
* [License](#license)
## Quickstart
We provide some Jupyter notebooks to help you get started with Jittor quickly.
- [Example: Model definition and training][1]
- [Basics: Op, Var][2]
- [Meta-operator: Implement your own convolution with Meta-operator][3]
## Install
Jittor environment requirements:
* System: **Linux** (e.g. Ubuntu/CentOS/Arch) or **Windows Subsystem for Linux**
* Python version >= 3.7
* CPU compiler (require at least one of the following)
* g++ (>=5.4.0)
* clang (>=8.0)
* GPU compiler (optional)
* nvcc (>=10.0 for g++ or >=10.2 for clang)
* GPU library: cudnn-dev (recommend tar file installation, [reference link](https://docs.nvidia.com/deeplearning/cudnn/install-guide/index.html#installlinux-tar))
Note: Currently Jittor runs on Windows through WSL. For WSL installation instructions, please refer to the [Microsoft official website](https://docs.microsoft.com/en-us/windows/wsl/install-win10). WSL does not yet support CUDA.
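Before installing, you can check whether your toolchain meets the version requirements above. A quick sanity check (assuming the tools are on your `PATH`; missing optional tools simply print nothing):

```shell
# Check toolchain versions against Jittor's requirements
python3 --version                         # needs >= 3.7
g++ --version 2>/dev/null | head -n1      # needs >= 5.4.0
nvcc --version 2>/dev/null | tail -n1     # optional (GPU), needs >= 10.0
```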
Jittor offers three installation methods: Docker, pip, and manual.
## Docker Install
We provide a Docker installation method to save you from configuring the environment. The Docker installation method is as follows:
```bash
# CPU only(Linux)
docker run -it --network host jittor/jittor
# CPU and CUDA(Linux)
docker run -it --network host --gpus all jittor/jittor-cuda
# CPU only(Mac and Windows)
docker run -it -p 8888:8888 jittor/jittor
```
## Pip install
```bash
sudo apt install python3.7-dev libomp-dev
python3.7 -m pip install jittor
# or install from github(latest version)
# python3.7 -m pip install git+https://github.com/Jittor/jittor.git
python3.7 -m jittor.test.test_example
```
## Manual install
We will show how to install Jittor on Ubuntu 16.04 step by step; other Linux distributions may use similar commands.
### Step 1: Choose your back-end compiler
```bash
# g++
sudo apt install g++ build-essential libomp-dev
# OR clang++-8
wget -O - https://raw.githubusercontent.com/Jittor/jittor/master/script/install_llvm.sh > /tmp/llvm.sh
bash /tmp/llvm.sh 8
```
### Step 2: Install Python and python-dev
Jittor needs Python version >= 3.7.
```bash
sudo apt install python3.7 python3.7-dev
```
### Step 3: Run Jittor
The whole framework is compiled just-in-time. Let's install Jittor via pip.
```bash
git clone https://github.com/Jittor/jittor.git
sudo pip3.7 install ./jittor
export cc_path="clang++-8"
# if other compiler is used, change cc_path
# export cc_path="g++"
# export cc_path="icc"
# run a simple test
python3.7 -m jittor.test.test_example
```
If the test passes, your Jittor is ready to use.
### Optional Step 4: Enable CUDA
Using CUDA in Jittor is very simple: just set the environment variable `nvcc_path`.
```bash
# replace this var with your nvcc location
export nvcc_path="/usr/local/cuda/bin/nvcc"
# run a simple cuda test
python3.7 -m jittor.test.test_cuda
```
If the test passes, you can use Jittor with CUDA by setting the `use_cuda` flag.
```python
import jittor as jt
jt.flags.use_cuda = 1
```
### Optional Step 5: Test Resnet18 training
To check the integrity of Jittor, you can run a ResNet18 training test. Note: this test requires 6GB of GPU RAM.
```bash
python3.7 -m jittor.test.test_resnet
```
If these tests fail, please report the bug to us, and feel free to contribute ^_^
## Tutorial
In the tutorial section, we will briefly explain the basic concept of Jittor.
To train your model with Jittor, there are only two main concepts you need to know:
* Var: the basic data type of Jittor
* Operations: Jittor's operations are similar to NumPy's
### Var
First, let's get started with Var, the basic data type of Jittor. Computation in Jittor is asynchronous for optimization purposes. If you want to access the data, `Var.data` can be used for synchronous data access.
```python
import jittor as jt
a = jt.float32([1,2,3])
print (a)
print (a.data)
# Output: float32[3,]
# Output: [ 1. 2. 3.]
```
And we can give the variable a name.
```python
a.name('a')
print(a.name())
# Output: a
```
### Operations
Jittor's operations are similar to NumPy's. Let's try some. We create Vars `a` and `b` via the operation `jt.float32` and multiply them. Printing these variables shows they have the same shape and dtype.
```python
import jittor as jt
a = jt.float32([1,2,3])
b = jt.float32([4,5,6])
c = a*b
print(a,b,c)
print(type(a), type(b), type(c))
# Output: float32[3,] float32[3,] float32[3,]
# Output: <class 'jittor_core.Var'> <class 'jittor_core.Var'> <class 'jittor_core.Var'>
```
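For comparison, here is the same elementwise computation in NumPy, whose semantics Jittor's operators mirror (NumPy evaluates eagerly, while Jittor builds an asynchronous graph):

```python
import numpy as np

a = np.float32([1, 2, 3])
b = np.float32([4, 5, 6])
c = a * b  # elementwise multiply, same as the Jittor example above
print(c)         # [ 4. 10. 18.]
print(c.dtype)   # float32
```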
Besides that, all operators of the form `jt.xxx(Var, ...)` have an alias `Var.xxx(...)`. For example:
```python
c.max() # alias of jt.max(c)
c.add(a) # alias of jt.add(c, a)
c.min(keepdims=True) # alias of jt.min(c, keepdims=True)
```
If you want to know all the operations Jittor supports, try `help(jt.ops)`. All operations found in `jt.ops.xxx` can also be used via the alias `jt.xxx`.
```python
help(jt.ops)
# Output:
# abs(x: core.Var) -> core.Var
# add(x: core.Var, y: core.Var) -> core.Var
# array(data: array) -> core.Var
# binary(x: core.Var, y: core.Var, op: str) -> core.Var
# ......
```
### More
If you want to know more about Jittor, please check out the notebooks below:
* Quickstart
- [Example: Model definition and training][1]
- [Basics: Op, Var][2]
- [Meta-operator: Implement your own convolution with Meta-operator][3]
* Advanced
- [Custom Op: write your operator with C++ and CUDA and JIT compile it][4]
- [Profiler: Profiling your model][5]