# FastAI XLA Extensions Library
> The FastAI XLA Extensions library package allows your fastai/Pytorch models to run on TPUs using the Pytorch-XLA library.
## Install
`pip install git+https://github.com/butchland/fastai_xla_extensions`
## How to use
### Configure TPU Environment Access
The Pytorch XLA package requires an environment that supports TPUs (Kaggle kernels, GCP, or Colab).
Pytorch XLA also nominally supports GPUs, so please see the [Pytorch XLA site for more instructions](https://pytorch.org/xla/release/1.7/index.html).
If running on Colab, make sure the Runtime Type is set to TPU.
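To confirm that a TPU is actually attached before installing anything, a quick environment check helps. This is a minimal sketch assuming a Colab runtime, where the `COLAB_TPU_ADDR` environment variable is set only when the Runtime Type is TPU:
```
#colab
import os
# COLAB_TPU_ADDR is set by Colab only when a TPU runtime is attached
if os.environ.get('COLAB_TPU_ADDR'):
    print('TPU runtime detected at', os.environ['COLAB_TPU_ADDR'])
else:
    print('No TPU detected - set the Runtime Type to TPU and restart')
```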
## Install fastai
Use the latest fastai and fastcore versions
```
#hide_output
#colab
!pip install -Uqq fastcore
!pip install -Uqq fastai
```
## Install Pytorch XLA
This is the official way to install Pytorch-XLA 1.7, as per the [instructions here](https://colab.research.google.com/github/pytorch/xla/blob/master/contrib/colab/getting-started.ipynb#scrollTo=CHzziBW5AoZH).
```
#hide_output
#colab
!pip install -Uqq cloud-tpu-client==0.10 https://storage.googleapis.com/tpu-pytorch/wheels/torch_xla-1.7-cp36-cp36m-linux_x86_64.whl
```
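As an optional sanity check (not part of the original install steps), you can confirm that the torch and torch_xla versions in the runtime line up, since the 1.7 wheel above expects a matching Pytorch 1.7:
```
#colab
import torch
import torch_xla
# the torch_xla 1.7 wheel is built against Pytorch 1.7, so the versions should match
print('torch:', torch.__version__)
print('torch_xla:', torch_xla.__version__)
```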
## Check if XLA is available
### Import the libraries
Import the fastai and fastai_xla_extensions libraries
```
#colab
#hide_output
import fastai_xla_extensions.core
```
```
from fastai.vision.all import *
```
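If the TPU environment is configured correctly, you can also confirm that an XLA device is visible using the standard `torch_xla.core.xla_model` API (an illustrative check, not part of the original notebook):
```
#colab
import torch_xla.core.xla_model as xm
# returns the default XLA device, e.g. xla:1, when the TPU is reachable
tpu_device = xm.xla_device()
print('XLA device:', tpu_device)
```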
### Example
Build an MNIST classifier, adapted from the fastai course [Lesson 4 notebook](https://github.com/fastai/course-v4/blob/master/nbs/04_mnist_basics.ipynb)
Load the MNIST dataset
```
path = untar_data(URLs.MNIST_TINY)
```
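MNIST_TINY is laid out as `train` and `valid` folders whose subfolders name the classes, which is why `GrandparentSplitter` and `parent_label` are used in the DataBlock below. A quick look at the layout (illustrative; output omitted):
```
path.ls()            # expect to see train and valid folders, among other items
(path/'train').ls()  # class subfolders, e.g. 3 and 7 for MNIST_TINY
```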
Create Fastai DataBlock
```
datablock = DataBlock(
    blocks=(ImageBlock, CategoryBlock),
    get_items=get_image_files,
    splitter=GrandparentSplitter(),
    get_y=parent_label,
    item_tfms=Resize(28),
    batch_tfms=aug_transforms(do_flip=False, min_scale=0.8)
)
```
```
#colab
datablock.summary(path)
```

    Setting-up type transforms pipelines
    Collecting items from /root/.fastai/data/mnist_tiny
    Found 1428 items
    2 datasets of sizes 709,699
    Setting up Pipeline: PILBase.create
    Setting up Pipeline: parent_label -> Categorize -- {'vocab': None, 'sort': True, 'add_na': False}
    Building one sample
    Pipeline: PILBase.create
    starting from
    /root/.fastai/data/mnist_tiny/train/7/7770.png
    applying PILBase.create gives
    PILImage mode=RGB size=28x28
    Pipeline: parent_label -> Categorize -- {'vocab': None, 'sort': True, 'add_na': False}
    starting from
    /root/.fastai/data/mnist_tiny/train/7/7770.png
    applying parent_label gives
    7
    applying Categorize -- {'vocab': None, 'sort': True, 'add_na': False} gives
    TensorCategory(1)
    Final sample: (PILImage mode=RGB size=28x28, TensorCategory(1))
    Collecting items from /root/.fastai/data/mnist_tiny
    Found 1428 items
    2 datasets of sizes 709,699
    Setting up Pipeline: PILBase.create
    Setting up Pipeline: parent_label -> Categorize -- {'vocab': None, 'sort': True, 'add_na': False}
    Setting up after_item: Pipeline: Resize -- {'size': (28, 28), 'method': 'crop', 'pad_mode': 'reflection', 'resamples': (2, 0), 'p': 1.0} -> ToTensor
    Setting up before_batch: Pipeline:
    Setting up after_batch: Pipeline: IntToFloatTensor -- {'div': 255.0, 'div_mask': 1} -> Warp -- {'magnitude': 0.2, 'p': 1.0, 'draw_x': None, 'draw_y': None, 'size': None, 'mode': 'bilinear', 'pad_mode': 'reflection', 'batch': False, 'align_corners': True, 'mode_mask': 'nearest'} -> RandomResizedCropGPU -- {'size': None, 'min_scale': 0.8, 'ratio': (1, 1), 'mode': 'bilinear', 'valid_scale': 1.0, 'p': 1.0} -> Brightness -- {'max_lighting': 0.2, 'p': 1.0, 'draw': None, 'batch': False}
    Building one batch
    Applying item_tfms to the first sample:
    Pipeline: Resize -- {'size': (28, 28), 'method': 'crop', 'pad_mode': 'reflection', 'resamples': (2, 0), 'p': 1.0} -> ToTensor
    starting from
    (PILImage mode=RGB size=28x28, TensorCategory(1))
    applying Resize -- {'size': (28, 28), 'method': 'crop', 'pad_mode': 'reflection', 'resamples': (2, 0), 'p': 1.0} gives
    (PILImage mode=RGB size=28x28, TensorCategory(1))
    applying ToTensor gives
    (TensorImage of size 3x28x28, TensorCategory(1))
    Adding the next 3 samples
    No before_batch transform to apply
    Collating items in a batch
    Applying batch_tfms to the batch built
    Pipeline: IntToFloatTensor -- {'div': 255.0, 'div_mask': 1} -> Warp -- {'magnitude': 0.2, 'p': 1.0, 'draw_x': None, 'draw_y': None, 'size': None, 'mode': 'bilinear', 'pad_mode': 'reflection', 'batch': False, 'align_corners': True, 'mode_mask': 'nearest'} -> RandomResizedCropGPU -- {'size': None, 'min_scale': 0.8, 'ratio': (1, 1), 'mode': 'bilinear', 'valid_scale': 1.0, 'p': 1.0} -> Brightness -- {'max_lighting': 0.2, 'p': 1.0, 'draw': None, 'batch': False}
    starting from
    (TensorImage of size 4x3x28x28, TensorCategory([1, 1, 1, 1]))
    applying IntToFloatTensor -- {'div': 255.0, 'div_mask': 1} gives
    (TensorImage of size 4x3x28x28, TensorCategory([1, 1, 1, 1]))
    applying Warp -- {'magnitude': 0.2, 'p': 1.0, 'draw_x': None, 'draw_y': None, 'size': None, 'mode': 'bilinear', 'pad_mode': 'reflection', 'batch': False, 'align_corners': True, 'mode_mask': 'nearest'} gives
    (TensorImage of size 4x3x28x28, TensorCategory([1, 1, 1, 1]))
    applying RandomResizedCropGPU -- {'size': None, 'min_scale': 0.8, 'ratio': (1, 1), 'mode': 'bilinear', 'valid_scale': 1.0, 'p': 1.0} gives
    (TensorImage of size 4x3x27x27, TensorCategory([1, 1, 1, 1]))
    applying Brightness -- {'max_lighting': 0.2, 'p': 1.0, 'draw': None, 'batch': False} gives
    (TensorImage of size 4x3x27x27, TensorCategory([1, 1, 1, 1]))
Create the dataloader
```
dls = datablock.dataloaders(path)
```
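Optionally, pull one batch from the dataloader to confirm the shapes the model will see (a small illustrative check; the fastai default batch size is 64):
```
#colab
xb, yb = dls.one_batch()
# roughly: torch.Size([64, 3, 28, 28]) and torch.Size([64])
print(xb.shape, yb.shape)
```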
```
#colab
dls.show_batch()
```
![png](docs/images/output_21_0.png)
Create a Fastai CNN Learner
```
learner = cnn_learner(dls, resnet18, metrics=accuracy)
```
```
#colab
learner.summary()
```

    Sequential (Input shape: ['64 x 3 x 28 x 28'])
    ================================================================
    Layer (type) Output Shape Param # Trainable
    ================================================================
    Conv2d 64 x 64 x 14 x 14 9,408 True
    ________________________________________________________________
    BatchNorm2d 64 x 64 x 14 x 14 128 True
    ________________________________________________________________
    ReLU 64 x 64 x 14 x 14 0 False
    ________________________________________________________________
    MaxPool2d 64 x 64 x 7 x 7 0 False
    ________________________________________________________________
    Conv2d 64 x 64 x 7 x 7 36,864 True
    ________________________________________________________________
    BatchNorm2d 64 x 64 x 7 x 7 128 True
    ________________________________________________________________
    ReLU 64 x 64 x 7 x 7 0 False
    ________________________________________________________________
    Conv2d 64 x 64 x 7 x 7 36,864 True
    ________________________________________________________________
    BatchNorm2d 64 x 64 x 7 x 7 128 True
    ________________________________________________________________
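From here, training uses the usual fastai calls; with `fastai_xla_extensions.core` imported, the library is intended to run the training loop on the TPU. A minimal sketch to finish the example (the epoch count and learning rate are illustrative, not taken from the original notebook):
```
#colab
# standard fastai fine-tuning: train the new head, then unfreeze and train the full model
learner.fine_tune(3, base_lr=2e-3)
# inspect a few predictions against the ground truth
learner.show_results()
```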