<!-- this link magically rendered as video, unfortunately not in docs -->
<!--
<a href='http://arogozhnikov.github.io/images/einops/einops_video.mp4' >
<div align="center">
<img src="http://arogozhnikov.github.io/images/einops/einops_video.gif" alt="einops package examples" />
<br>
<small><a href='http://arogozhnikov.github.io/images/einops/einops_video.mp4'>This video in high quality (mp4)</a></small>
<br><br>
</div>
</a>
-->
https://user-images.githubusercontent.com/6318811/177030658-66f0eb5d-e136-44d8-99c9-86ae298ead5b.mp4
# einops
[![Run tests](https://github.com/arogozhnikov/einops/actions/workflows/run_tests.yml/badge.svg)](https://github.com/arogozhnikov/einops/actions/workflows/run_tests.yml)
[![PyPI version](https://badge.fury.io/py/einops.svg)](https://badge.fury.io/py/einops)
[![Documentation](https://img.shields.io/badge/documentation-link-blue.svg)](https://einops.rocks/)
![Supported python versions](https://raw.githubusercontent.com/arogozhnikov/einops/master/docs/resources/python_badge.svg)
Flexible and powerful tensor operations for readable and reliable code.
Supports numpy, pytorch, tensorflow, jax, and [others](#supported-frameworks).
## Recent updates:
- einsum is now a part of einops
- [Einops paper](https://openreview.net/pdf?id=oapKSVM2bcj) was accepted for oral presentation at ICLR 2022 (yes, it's worth reading)
- flax and oneflow backends added
- torch.jit.script is supported for pytorch layers
- powerful EinMix added to einops. [Einmix tutorial notebook](https://github.com/arogozhnikov/einops/blob/master/docs/3-einmix-layer.ipynb)
<!--<div align="center">
<img src="http://arogozhnikov.github.io/images/einops/einops_logo_350x350.png"
alt="einops package logo" width="250" height="250" />
<br><br>
</div> -->
## Tweets
> In case you need convincing arguments for setting aside time to learn about einsum and einops...
[Tim Rocktäschel, FAIR](https://twitter.com/_rockt/status/1230818967205425152)
> Writing better code with PyTorch and einops 👌
[Andrej Karpathy, AI at Tesla](https://twitter.com/karpathy/status/1290826075916779520)
> Slowly but surely, einops is seeping in to every nook and cranny of my code. If you find yourself shuffling around bazillion dimensional tensors, this might change your life
[Nasim Rahaman, MILA (Montreal)](https://twitter.com/nasim_rahaman/status/1216022614755463169)
[More testimonials](https://einops.rocks/pages/testimonials/)
## Recordings of the talk at ICLR 2022
<a href='https://iclr.cc/virtual/2022/oral/6603'>
<img width="922" alt="Screen Shot 2022-07-03 at 1 00 15 AM" src="https://user-images.githubusercontent.com/6318811/177030789-89d349bf-ef75-4af5-a71f-609896d1c8d9.png">
</a>
Watch [a 15-minute talk](https://iclr.cc/virtual/2022/oral/6603) that focuses on the main problems of standard tensor manipulation methods, and on how einops improves this process.
## Contents
- [Installation](#Installation)
- [Documentation](https://einops.rocks/)
- [Tutorial](#Tutorials)
- [API micro-reference](#API)
- [Why use einops](#Why-using-einops-notation)
- [Supported frameworks](#Supported-frameworks)
- [Contributing](#Contributing)
- [Repository](https://github.com/arogozhnikov/einops) and [discussions](https://github.com/arogozhnikov/einops/discussions)
## Installation <a name="Installation"></a>
Plain and simple:
```bash
pip install einops
```
<!--
`einops` has no mandatory dependencies (code examples also require jupyter, pillow + backends).
To obtain the latest github version
```bash
pip install https://github.com/arogozhnikov/einops/archive/master.zip
```
-->
## Tutorials <a name="Tutorials"></a>
Tutorials are the most convenient way to see `einops` in action:
- part 1: [einops fundamentals](https://github.com/arogozhnikov/einops/blob/master/docs/1-einops-basics.ipynb)
- part 2: [einops for deep learning](https://github.com/arogozhnikov/einops/blob/master/docs/2-einops-for-deep-learning.ipynb)
- part 3: [improve pytorch code with einops](https://arogozhnikov.github.io/einops/pytorch-examples.html)
## API <a name="API"></a>
`einops` has a minimalistic yet powerful API.
Three operations are provided (the [einops tutorial](https://github.com/arogozhnikov/einops/blob/master/docs/)
shows that these cover stacking, reshape, transposition, squeeze/unsqueeze, repeat, tile, concatenate, view, and numerous reductions):
```python
from einops import rearrange, reduce, repeat
# rearrange elements according to the pattern
output_tensor = rearrange(input_tensor, 't b c -> b c t')
# combine rearrangement and reduction
output_tensor = reduce(input_tensor, 'b c (h h2) (w w2) -> b h w c', 'mean', h2=2, w2=2)
# copy along a new axis
output_tensor = repeat(input_tensor, 'h w -> h w c', c=3)
```
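For a sense of how these behave, here is a minimal numpy sketch (the axis sizes below are made up for illustration):

```python
import numpy as np
from einops import rearrange, reduce, repeat

x = np.random.rand(10, 32, 100)        # (time, batch, channels)
rearrange(x, 't b c -> b c t').shape   # (32, 100, 10)

images = np.random.rand(16, 3, 64, 64)  # (batch, channels, height, width)
# 2x2 mean-pooling that also moves channels last
reduce(images, 'b c (h h2) (w w2) -> b h w c', 'mean', h2=2, w2=2).shape  # (16, 32, 32, 3)

gray = np.random.rand(64, 64)            # (height, width)
repeat(gray, 'h w -> h w c', c=3).shape  # (64, 64, 3)
```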
And two corresponding layers (`einops` keeps a separate version for each framework) with the same API:
```python
from einops.layers.chainer import Rearrange, Reduce
from einops.layers.gluon import Rearrange, Reduce
from einops.layers.keras import Rearrange, Reduce
from einops.layers.torch import Rearrange, Reduce
from einops.layers.tensorflow import Rearrange, Reduce
```
Layers behave similarly to operations and have the same parameters
(with the exception of the first argument, which is passed during the call):
```python
layer = Rearrange(pattern, **axes_lengths)
layer = Reduce(pattern, reduction, **axes_lengths)
# apply created layer to a tensor / variable
x = layer(x)
```
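For instance, a minimal pytorch sketch (the shapes are illustrative; other frameworks work the same way):

```python
import torch
from einops.layers.torch import Rearrange, Reduce

flatten = Rearrange('b c h w -> b (c h w)')
pool = Reduce('b c (h h2) (w w2) -> b h w c', 'mean', h2=2, w2=2)

x = torch.randn(16, 3, 64, 64)
flatten(x).shape  # torch.Size([16, 12288])
pool(x).shape     # torch.Size([16, 32, 32, 3])
```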
Example of using layers within a model:
```python
# example given for pytorch, but code in other frameworks is almost identical
from torch.nn import Sequential, Conv2d, MaxPool2d, Linear, ReLU
from einops.layers.torch import Rearrange
model = Sequential(
Conv2d(3, 6, kernel_size=5),
MaxPool2d(kernel_size=2),
Conv2d(6, 16, kernel_size=5),
MaxPool2d(kernel_size=2),
# flattening
Rearrange('b c h w -> b (c h w)'),
Linear(16*5*5, 120),
ReLU(),
Linear(120, 10),
)
```
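As a quick sanity check, the model above can be run on a dummy batch (a 32x32 input resolution is assumed here; that is what makes the flattened size `16*5*5`):

```python
import torch

x = torch.randn(8, 3, 32, 32)   # batch of 8 RGB 32x32 images
# 32 -> conv5 -> 28 -> pool2 -> 14 -> conv5 -> 10 -> pool2 -> 5, so 16*5*5 features
assert model(x).shape == (8, 10)
```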
<!---
Additionally two auxiliary functions provided
```python
from einops import asnumpy, parse_shape
# einops.asnumpy converts tensors of imperative frameworks to numpy
numpy_tensor = asnumpy(input_tensor)
# einops.parse_shape gives a shape of axes of interest
parse_shape(input_tensor, 'batch _ h w') # e.g {'batch': 64, 'h': 128, 'w': 160}
```
-->
## Naming <a name="Naming"></a>
`einops` stands for Einstein-Inspired Notation for operations
(though "Einstein operations" is more attractive and easier to remember).
The notation was loosely inspired by Einstein summation (in particular by the `numpy.einsum` operation).
## Why use `einops` notation?! <a name="Why-using-einops-notation"></a>
### Semantic information (being verbose in expectations)
```python
y = x.view(x.shape[0], -1)
y = rearrange(x, 'b c h w -> b (c h w)')
```
While these two lines are doing the same job in *some* context,
the second one provides information about the input and output.
In other words, `einops` focuses on interface: *what is the input and output*, not *how* the output is computed.
The next operation looks similar:
```python
y = rearrange(x, 'time c h w -> time (c h w)')
```
but it gives the reader a hint:
this is not an independent batch of images we are processing,
but rather a sequence (video).
Semantic information makes the code easier to read and maintain.
### Convenient checks
Reconsider the same example:
```python
y = x.view(x.shape[0], -1) # x: (batch, 256, 19, 19)
y = rearrange(x, 'b c h w -> b (c h w)')
```
The second line checks that the input has four dimensions,
but you can also specify particular dimensions.
That's better than just writing comments about shapes, since, as we know,
[comments don't work and don't prevent mistakes](https://medium.freecodecamp.org/code-comments-the-good-the-bad-and-the-ugly-be9cc65fbf83):
```python
y = x.view(x.shape[0], -1) # x: (batch, 256, 19, 19)
y = rearrange(x, 'b c h w -> b (c h w)', c=256, h=19, w=19)
```
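If the input contradicts the pattern or the declared sizes, `einops` fails immediately with a descriptive error instead of silently reshaping. A minimal sketch of such a failure:

```python
import numpy as np
from einops import rearrange

x = np.zeros([64, 128, 19, 19])
# c=256 contradicts the actual axis size of 128, so this raises an EinopsError
rearrange(x, 'b c h w -> b (c h w)', c=256, h=19, w=19)
```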
### Result is strictly determined
Below we have at least two ways to define