[pypi-image]: https://badge.fury.io/py/torch-sparse.svg
[pypi-url]: https://pypi.python.org/pypi/torch-sparse
[build-image]: https://travis-ci.org/rusty1s/pytorch_sparse.svg?branch=master
[build-url]: https://travis-ci.org/rusty1s/pytorch_sparse
[coverage-image]: https://codecov.io/gh/rusty1s/pytorch_sparse/branch/master/graph/badge.svg
[coverage-url]: https://codecov.io/github/rusty1s/pytorch_sparse?branch=master
# PyTorch Sparse
[![PyPI Version][pypi-image]][pypi-url]
[![Build Status][build-image]][build-url]
[![Code Coverage][coverage-image]][coverage-url]
--------------------------------------------------------------------------------
This package consists of a small extension library of optimized sparse matrix operations with autograd support.
It currently provides the following methods:
* **[Coalesce](#coalesce)**
* **[Transpose](#transpose)**
* **[Sparse Dense Matrix Multiplication](#sparse-dense-matrix-multiplication)**
* **[Sparse Sparse Matrix Multiplication](#sparse-sparse-matrix-multiplication)**
All included operations work on varying data types and are implemented both for CPU and GPU.
To avoid the hassle of creating [`torch.sparse_coo_tensor`](https://pytorch.org/docs/stable/torch.html?highlight=sparse_coo_tensor#torch.sparse_coo_tensor), this package defines operations on sparse tensors by simply passing `index` and `value` tensors as arguments ([with same shapes as defined in PyTorch](https://pytorch.org/docs/stable/sparse.html)).
Note that only `value` comes with autograd support, as `index` is discrete and therefore not differentiable.
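To make this layout concrete, here is a small plain-Python illustration (the dense reconstruction below is ours for illustration, not a package API): `index` stacks row and column coordinates into a `2 x nnz` tensor, and `value` holds one entry per coordinate.

```python
# COO convention used by torch-sparse: `index` holds (row, col) coordinates,
# `value` holds one entry per coordinate.
index = [[0, 1],   # row indices
         [1, 2]]   # column indices
value = [3.0, 4.0]

# Equivalent dense 2 x 3 matrix, built entry by entry:
dense = [[0.0] * 3 for _ in range(2)]
for r, c, v in zip(index[0], index[1], value):
    dense[r][c] = v
print(dense)  # [[0.0, 3.0, 0.0], [0.0, 0.0, 4.0]]
```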
## Installation
### Binaries
We provide pip wheels for all major OS/PyTorch/CUDA combinations, see [here](https://pytorch-geometric.com/whl).
#### PyTorch 1.8.0
To install the binaries for PyTorch 1.8.0, simply run
```
pip install torch-scatter torch-sparse -f https://pytorch-geometric.com/whl/torch-1.8.0+${CUDA}.html
```
where `${CUDA}` should be replaced by either `cpu`, `cu101`, `cu102`, or `cu111` depending on your PyTorch installation.
| | `cpu` | `cu101` | `cu102` | `cu111` |
|-------------|-------|---------|---------|---------|
| **Linux** | ✅ | ✅ | ✅ | ✅ |
| **Windows** | ✅ | ✅ | ✅ | ✅ |
| **macOS** | ✅ | | | |
#### PyTorch 1.7.0/1.7.1
To install the binaries for PyTorch 1.7.0 and 1.7.1, simply run
```
pip install torch-scatter torch-sparse -f https://pytorch-geometric.com/whl/torch-1.7.0+${CUDA}.html
```
where `${CUDA}` should be replaced by either `cpu`, `cu92`, `cu101`, `cu102`, or `cu110` depending on your PyTorch installation.
| | `cpu` | `cu92` | `cu101` | `cu102` | `cu110` |
|-------------|-------|--------|---------|---------|---------|
| **Linux** | ✅ | ✅ | ✅ | ✅ | ✅ |
| **Windows** | ✅ | ❌ | ✅ | ✅ | ✅ |
| **macOS** | ✅ | | | | |
**Note:** Binaries of older versions are also provided for PyTorch 1.4.0, PyTorch 1.5.0 and PyTorch 1.6.0 (following the same procedure).
### From source
Ensure that at least PyTorch 1.4.0 is installed and verify that `cuda/bin` and `cuda/include` are in your `$PATH` and `$CPATH` respectively, *e.g.*:
```
$ python -c "import torch; print(torch.__version__)"
>>> 1.4.0
$ echo $PATH
>>> /usr/local/cuda/bin:...
$ echo $CPATH
>>> /usr/local/cuda/include:...
```
If you want to additionally build `torch-sparse` with METIS support, *e.g.* for partitioning, please download and install the [METIS library](http://glaros.dtc.umn.edu/gkhome/metis/metis/download) by following the instructions in the `Install.txt` file.
Note that METIS needs to be installed with a 64-bit `IDXTYPEWIDTH` by changing `include/metis.h`.
Afterwards, set the environment variable `WITH_METIS=1`.
Then run:
```
pip install torch-scatter torch-sparse
```
When running in a docker container without NVIDIA driver, PyTorch needs to evaluate the compute capabilities and may fail.
In this case, ensure that the compute capabilities are set via `TORCH_CUDA_ARCH_LIST`, *e.g.*:
```
export TORCH_CUDA_ARCH_LIST="6.0 6.1 7.2+PTX 7.5+PTX"
```
## Functions
### Coalesce
```
torch_sparse.coalesce(index, value, m, n, op="add") -> (torch.LongTensor, torch.Tensor)
```
Row-wise sorts `index` and removes duplicate entries.
Duplicate entries are merged by scattering them together.
For scattering, any operation of [`torch_scatter`](https://github.com/rusty1s/pytorch_scatter) can be used.
#### Parameters
* **index** *(LongTensor)* - The index tensor of the sparse matrix.
* **value** *(Tensor)* - The value tensor of the sparse matrix.
* **m** *(int)* - The first dimension of the sparse matrix.
* **n** *(int)* - The second dimension of the sparse matrix.
* **op** *(string, optional)* - The scatter operation to use. (default: `"add"`)
#### Returns
* **index** *(LongTensor)* - The coalesced index tensor of the sparse matrix.
* **value** *(Tensor)* - The coalesced value tensor of the sparse matrix.
#### Example
```python
import torch
from torch_sparse import coalesce
index = torch.tensor([[1, 0, 1, 0, 2, 1],
[0, 1, 1, 1, 0, 0]])
value = torch.Tensor([[1, 2], [2, 3], [3, 4], [4, 5], [5, 6], [6, 7]])
index, value = coalesce(index, value, m=3, n=2)
```
```
print(index)
tensor([[0, 1, 1, 2],
[1, 0, 1, 0]])
print(value)
tensor([[6.0, 8.0],
[7.0, 9.0],
[3.0, 4.0],
[5.0, 6.0]])
```
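The `op` argument controls how duplicate entries are merged. As a plain-Python sketch of the semantics (the helper below is ours, not the package implementation, which dispatches to [`torch_scatter`](https://github.com/rusty1s/pytorch_scatter)), coalescing the example above with `op="max"` instead of `"add"` keeps the elementwise maximum of each duplicate group:

```python
# Plain-Python sketch of coalesce semantics: sort entries row-major and
# merge duplicate coordinates with a reduce op ("add" or "max" here).
def coalesce_sketch(index, value, op="add"):
    merged = {}
    for (r, c), v in zip(zip(*index), value):
        if (r, c) not in merged:
            merged[(r, c)] = list(v)
        else:
            old = merged[(r, c)]
            merged[(r, c)] = [o + x if op == "add" else max(o, x)
                              for o, x in zip(old, v)]
    coords = sorted(merged)  # row-major order, as coalesce produces
    rows, cols = [r for r, _ in coords], [c for _, c in coords]
    return [rows, cols], [merged[rc] for rc in coords]

index = [[1, 0, 1, 0, 2, 1], [0, 1, 1, 1, 0, 0]]
value = [[1, 2], [2, 3], [3, 4], [4, 5], [5, 6], [6, 7]]
idx, val = coalesce_sketch(index, value, op="max")
print(idx)  # [[0, 1, 1, 2], [1, 0, 1, 0]]
print(val)  # [[4, 5], [6, 7], [3, 4], [5, 6]]
```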
### Transpose
```
torch_sparse.transpose(index, value, m, n, coalesced=True) -> (torch.LongTensor, torch.Tensor)
```
Transposes dimensions 0 and 1 of a sparse matrix.
#### Parameters
* **index** *(LongTensor)* - The index tensor of the sparse matrix.
* **value** *(Tensor)* - The value tensor of the sparse matrix.
* **m** *(int)* - The first dimension of the sparse matrix.
* **n** *(int)* - The second dimension of the sparse matrix.
* **coalesced** *(bool, optional)* - If set to `False`, will not coalesce the output. (default: `True`)
#### Returns
* **index** *(LongTensor)* - The transposed index tensor of the sparse matrix.
* **value** *(Tensor)* - The transposed value tensor of the sparse matrix.
#### Example
```python
import torch
from torch_sparse import transpose
index = torch.tensor([[1, 0, 1, 0, 2, 1],
[0, 1, 1, 1, 0, 0]])
value = torch.Tensor([[1, 2], [2, 3], [3, 4], [4, 5], [5, 6], [6, 7]])
index, value = transpose(index, value, 3, 2)
```
```
print(index)
tensor([[0, 0, 1, 1],
[1, 2, 0, 1]])
print(value)
tensor([[7.0, 9.0],
[5.0, 6.0],
[6.0, 8.0],
[3.0, 4.0]])
```
### Sparse Dense Matrix Multiplication
```
torch_sparse.spmm(index, value, m, n, matrix) -> torch.Tensor
```
Matrix product of a sparse matrix with a dense matrix.
#### Parameters
* **index** *(LongTensor)* - The index tensor of the sparse matrix.
* **value** *(Tensor)* - The value tensor of the sparse matrix.
* **m** *(int)* - The first dimension of the sparse matrix.
* **n** *(int)* - The second dimension of the sparse matrix.
* **matrix** *(Tensor)* - The dense matrix.
#### Returns
* **out** *(Tensor)* - The dense output matrix.
#### Example
```python
import torch
from torch_sparse import spmm
index = torch.tensor([[0, 0, 1, 2, 2],
[0, 2, 1, 0, 1]])
value = torch.Tensor([1, 2, 4, 1, 3])
matrix = torch.Tensor([[1, 4], [2, 5], [3, 6]])
out = spmm(index, value, 3, 3, matrix)
```
```
print(out)
tensor([[7.0, 16.0],
[8.0, 20.0],
[7.0, 19.0]])
```
### Sparse Sparse Matrix Multiplication
```
torch_sparse.spspmm(indexA, valueA, indexB, valueB, m, k, n) -> (torch.LongTensor, torch.Tensor)
```
Matrix product of two sparse tensors.
Both input sparse matrices need to be **coalesced** (use the `coalesced` attribute to force).
#### Parameters
* **indexA** *(LongTensor)* - The index tensor of the first sparse matrix.
* **valueA** *(Tensor)* - The value tensor of the first sparse matrix.
* **indexB** *(LongTensor)* - The index tensor of the second sparse matrix.
* **valueB** *(Tensor)* - The value tensor of the second sparse matrix.
* **m** *(int)* - The first dimension of the first sparse matrix.
* **k** *(int)* - The second dimension of the first sparse matrix and the first dimension of the second sparse matrix.
* **n** *(int)* - The second dimension of the second sparse matrix.
#### Returns
* **index** *(LongTensor)* - The output index tensor of the sparse matrix.
* **value** *(Tensor)* - The output value tensor of the sparse matrix.
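The usage example for this section did not survive extraction. As a plain-Python sketch of what `spspmm(indexA, valueA, indexB, valueB, m, k, n)` computes (the helper names and input data below are ours, not part of the package), here is a dense reference for `C = A @ B` with both operands given in COO form:

```python
# Dense reference for spspmm semantics: C (m x n) = A (m x k) @ B (k x n),
# where each sparse matrix is given as COO (row, col) indices plus values.
def dense_from_coo(index, value, rows, cols):
    mat = [[0.0] * cols for _ in range(rows)]
    for (r, c), v in zip(zip(*index), value):
        mat[r][c] += v  # duplicates accumulate, as after coalesce(op="add")
    return mat

def matmul(a, b):
    rows, inner, cols = len(a), len(b), len(b[0])
    return [[sum(a[i][t] * b[t][j] for t in range(inner)) for j in range(cols)]
            for i in range(rows)]

indexA = [[0, 0, 1, 2, 2], [1, 2, 0, 1, 2]]
valueA = [1, 2, 3, 4, 5]
indexB = [[0, 2], [1, 0]]
valueB = [2, 4]

A = dense_from_coo(indexA, valueA, 3, 3)  # m=3, k=3
B = dense_from_coo(indexB, valueB, 3, 2)  # k=3, n=2
C = matmul(A, B)

# The nonzero entries of C correspond to the (index, value) pair spspmm returns.
nonzeros = {(i, j): C[i][j] for i in range(3) for j in range(2) if C[i][j] != 0}
print(nonzeros)  # {(0, 0): 8.0, (1, 1): 6.0, (2, 0): 20.0}
```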