# Eigen Tensors {#eigen_tensors}
Tensors are multidimensional arrays of elements. Elements are typically scalars,
but more complex types such as strings are also supported.
## Tensor Classes
You can manipulate a tensor with one of the following classes. They are all in
the namespace `Eigen`.
### Class Tensor<data_type, rank>
This is the class to use to create a tensor and allocate memory for it. The
class is templatized with the tensor datatype, such as float or int, and the
tensor rank. The rank is the number of dimensions, for example rank 2 is a
matrix.
Tensors of this class are resizable. For example, if you assign a tensor of a
different size to a Tensor, that tensor is resized to match its new value.
#### Constructor Tensor<data_type, rank>(size0, size1, ...)
Constructor for a Tensor. The constructor must be passed `rank` integers
indicating the sizes of the instance along each of the `rank` dimensions.
```cpp
// Create a tensor of rank 3 of sizes 2, 3, 4. This tensor owns
// memory to hold 24 floating point values (24 = 2 x 3 x 4).
Tensor<float, 3> t_3d(2, 3, 4);

// Resize t_3d by assigning a tensor of different sizes, but same rank.
t_3d = Tensor<float, 3>(3, 4, 3);
```
#### Constructor Tensor<data_type, rank>(size_array)
Constructor where the sizes for the constructor are specified as an array of
values instead of an explicit list of parameters. The array type to use is
`Eigen::array<Eigen::Index>`. The array can be constructed automatically
from an initializer list.
```cpp
// Create a tensor of strings of rank 2 with sizes 5, 7.
Tensor<string, 2> t_2d({5, 7});
```
### Class TensorFixedSize<data_type, Sizes<size0, size1, ...>>
Class to use for tensors of fixed size, where the sizes are known at compile
time. Fixed-size tensors can provide very fast computations because all their
dimensions are known to the compiler. Fixed-size tensors are not resizable.
If the total number of elements in a fixed-size tensor is small enough, the
tensor data is held on the stack and does not cause heap allocations.
```cpp
// Create a 4 x 3 tensor of floats.
TensorFixedSize<float, Sizes<4, 3>> t_4x3;
```
### Class TensorMap<Tensor<data_type, rank>>
This is the class to use to create a tensor on top of memory allocated and
owned by another part of your code. It lets you view any piece of allocated
memory as a Tensor. Instances of this class do not own the memory where the
data is stored, and are therefore not resizable.
#### Constructor TensorMap<Tensor<data_type, rank>>(data, size0, size1, ...)
Constructor for a TensorMap. The constructor must be passed a pointer to the
storage for the data, and `rank` sizes. The storage has to be large enough to
hold all the data.
```cpp
// Map a tensor of ints on top of stack-allocated storage.
int storage[128];  // 2 x 4 x 2 x 8 = 128
TensorMap<Tensor<int, 4>> t_4d(storage, 2, 4, 2, 8);

// The same storage can be viewed as a different tensor.
// You can also pass the sizes as an array.
TensorMap<Tensor<int, 2>> t_2d(storage, 16, 8);

// You can also map fixed-size tensors. Here we get a 1d view of
// the 2d fixed-size tensor.
TensorFixedSize<float, Sizes<4, 3>> t_4x3;
TensorMap<Tensor<float, 1>> t_12(t_4x3.data(), 12);
```
#### Class TensorRef
See Assigning to a TensorRef below.
## Accessing Tensor Elements
#### <data_type> tensor(index0, index1...)
Return the element at position `(index0, index1...)` in tensor
`tensor`. You must pass as many parameters as the rank of `tensor`.
The expression can be used as an l-value to set the value of the element at the
specified position. The value returned is of the datatype of the tensor.
```cpp
// Set the value of the element at position (0, 1, 0);
Tensor<float, 3> t_3d(2, 3, 4);
t_3d(0, 1, 0) = 12.0f;

// Initialize all elements to random values.
for (int i = 0; i < 2; ++i) {
  for (int j = 0; j < 3; ++j) {
    for (int k = 0; k < 4; ++k) {
      t_3d(i, j, k) = ...some random value...;
    }
  }
}

// Print elements of a tensor.
for (int i = 0; i < 2; ++i) {
  LOG(INFO) << t_3d(i, 0, 0);
}
```
## TensorLayout
The tensor library supports 2 layouts: `ColMajor` (the default) and
`RowMajor`. Only the default column major layout is currently fully
supported, and it is therefore not recommended to attempt to use the row major
layout at the moment.
The layout of a tensor is optionally specified as part of its type. If not
specified explicitly column major is assumed.
```cpp
Tensor<float, 3, ColMajor> col_major;  // equivalent to Tensor<float, 3>
TensorMap<Tensor<float, 3, RowMajor> > row_major(data, ...);
```
All the arguments to an expression must use the same layout. Attempting to mix
different layouts will result in a compilation error.
It is possible to change the layout of a tensor or an expression using the
`swap_layout()` method. Note that this will also reverse the order of the
dimensions.
```cpp
Tensor<float, 2, ColMajor> col_major(2, 4);
Tensor<float, 2, RowMajor> row_major(2, 4);

Tensor<float, 2> col_major_result = col_major;  // ok, layouts match
Tensor<float, 2> col_major_result = row_major;  // will not compile

// Simple layout swap
col_major_result = row_major.swap_layout();
eigen_assert(col_major_result.dimension(0) == 4);
eigen_assert(col_major_result.dimension(1) == 2);

// Swap the layout and preserve the order of the dimensions
array<int, 2> shuffle(1, 0);
col_major_result = row_major.swap_layout().shuffle(shuffle);
eigen_assert(col_major_result.dimension(0) == 2);
eigen_assert(col_major_result.dimension(1) == 4);
```
## Tensor Operations
The Eigen Tensor library provides a vast library of operations on Tensors:
numerical operations such as addition and multiplication, geometry operations
such as slicing and shuffling, etc. These operations are available as methods
of the Tensor classes, and in some cases as operator overloads. For example
the following code computes the elementwise addition of two tensors:
```cpp
Tensor<float, 3> t1(2, 3, 4);
...set some values in t1...
Tensor<float, 3> t2(2, 3, 4);
...set some values in t2...

// Set t3 to the element wise sum of t1 and t2
Tensor<float, 3> t3 = t1 + t2;
```
While the code above looks easy enough, it is important to understand that the
expression `t1 + t2` is not actually adding the values of the tensors. The
expression instead constructs a "tensor operator" object of the class
`TensorCwiseBinaryOp<scalar_sum>`, which has references to the tensors
`t1` and `t2`. This is a small C++ object that knows how to add
`t1` and `t2`. It is only when the value of the expression is assigned
to the tensor `t3` that the addition is actually performed. Technically,
this happens through the overloading of `operator=()` in the Tensor class.
This mechanism for computing tensor expressions allows for lazy evaluation and
optimizations which are what make the tensor library very fast.
Of course, the tensor operators do nest, and the expression `t1 + t2 * 0.3f`
is actually represented with the (approximate) tree of operators:
```cpp
TensorCwiseBinaryOp<scalar_sum>(t1, TensorCwiseUnaryOp<scalar_mul>(t2, 0.3f))
```
### Tensor Operations and C++ "auto"
Because Tensor operations create tensor operators, the C++ `auto` keyword
does not have its intuitive meaning. Consider these 2 lines of code:
```cpp
Tensor<float, 3> t3 = t1 + t2;
auto t4 = t1 + t2;
```
In the first line we allocate the tensor `t3` and it will contain the
result of the addition of `t1` and `t2`. In the second line, `t4`
is actually the tree of tensor operators that will compute the addition of
`t1` and `t2`. In fact, `t4` is *not* a tensor and you cannot get
the values of its elements:
```cpp
Tensor<float, 3> t3 = t1 + t2;
cout << t3(0, 0, 0);  // OK prints the value of t1(0, 0, 0) + t2(0, 0, 0)
```