# Eigen Tensors {#eigen_tensors}
Tensors are multidimensional arrays of elements. Elements are typically scalars,
but more complex types such as strings are also supported.
[TOC]
## Tensor Classes
You can manipulate a tensor with one of the following classes. They are all in
the namespace `::Eigen`.
### Class `Tensor<data_type, rank>`
This is the class to use to create a tensor and allocate memory for it. The
class is templatized with the tensor datatype, such as float or int, and the
tensor rank. The rank is the number of dimensions, for example rank 2 is a
matrix.
Tensors of this class are resizable. For example, if you assign a tensor of a
different size to a Tensor, that tensor is resized to match its new value.
#### Constructor `Tensor<data_type, rank>(size0, size1, ...)`
Constructor for a Tensor. The constructor must be passed `rank` integers
indicating the sizes of the instance along each of the `rank`
dimensions.

    // Create a tensor of rank 3 of sizes 2, 3, 4.  This tensor owns
    // memory to hold 24 floating point values (24 = 2 x 3 x 4).
    Tensor<float, 3> t_3d(2, 3, 4);

    // Resize t_3d by assigning a tensor of different sizes, but same rank.
    t_3d = Tensor<float, 3>(3, 4, 3);

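A tensor can also be resized in place with its `resize()` method; as with the
assignment above, the previously stored values are not preserved. A minimal
sketch:

    // Resize t_3d in place.  The old contents are lost.
    t_3d.resize(4, 3, 2);
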
#### Constructor `Tensor<data_type, rank>(size_array)`
Constructor where the sizes for the constructor are specified as an array of
values instead of an explicit list of parameters. The array type to use is
`Eigen::array<Eigen::Index>`. The array can be constructed automatically
from an initializer list.

    // Create a tensor of strings of rank 2 with sizes 5, 7.
    Tensor<string, 2> t_2d({5, 7});

### Class `TensorFixedSize<data_type, Sizes<size0, size1, ...>>`
Class to use for tensors of fixed size, where the size is known at compile
time. Fixed-size tensors can provide very fast computations because all their
dimensions are known to the compiler. Fixed-size tensors are not resizable.
If the total number of elements in a fixed-size tensor is small enough, the
tensor data is held on the stack and does not incur any heap allocation.

    // Create a 4 x 3 tensor of floats.
    TensorFixedSize<float, Sizes<4, 3>> t_4x3;

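Fixed-size tensors support the same element access and bulk initializers as
dynamic tensors. A short usage sketch, using `setConstant()` from the common
tensor base class:

    // Fill the fixed-size tensor with a constant and read an element back.
    t_4x3.setConstant(1.0f);
    float v = t_4x3(0, 2);  // v == 1.0f
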
### Class `TensorMap<Tensor<data_type, rank>>`
This is the class to use to create a tensor on top of memory allocated and
owned by another part of your code. It lets you view any piece of allocated
memory as a Tensor. Instances of this class do not own the memory where the
data are stored.
A TensorMap is not resizable because it does not own the memory where its data
are stored.
#### Constructor `TensorMap<Tensor<data_type, rank>>(data, size0, size1, ...)`
Constructor for a TensorMap. The constructor must be passed a pointer to the
storage for the data, and `rank` size attributes. The storage has to be
large enough to hold all the data.

    // Map a tensor of ints on top of stack-allocated storage.
    int storage[128];  // 2 x 4 x 2 x 8 = 128
    TensorMap<Tensor<int, 4>> t_4d(storage, 2, 4, 2, 8);

    // The same storage can be viewed as a different tensor.
    // You can also pass the sizes as an array.
    TensorMap<Tensor<int, 2>> t_2d(storage, 16, 8);

    // You can also map fixed-size tensors.  Here we get a 1d view of
    // the 2d fixed-size tensor.
    TensorFixedSize<float, Sizes<4, 3>> t_4x3;
    TensorMap<Tensor<float, 1>> t_12(t_4x3.data(), 12);

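Because a TensorMap does not own its storage, writes through the map are
visible in the mapped buffer and vice versa. A minimal sketch:

    // The map and the array share the same memory.
    float data[6] = {0, 1, 2, 3, 4, 5};
    TensorMap<Tensor<float, 2>> m(data, 2, 3);
    m(0, 0) = 42.0f;  // data[0] is now 42.0f
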
### Class `TensorRef`
See Assigning to a TensorRef below.
## Accessing Tensor Elements
#### `<data_type> tensor(index0, index1...)`
Return the element at position `(index0, index1...)` in tensor
`tensor`. You must pass as many parameters as the rank of `tensor`.
The expression can be used as an l-value to set the value of the element at the
specified position. The value returned is of the datatype of the tensor.

    // Set the value of the element at position (0, 1, 0);
    Tensor<float, 3> t_3d(2, 3, 4);
    t_3d(0, 1, 0) = 12.0f;

    // Initialize all elements to random values.
    for (int i = 0; i < 2; ++i) {
      for (int j = 0; j < 3; ++j) {
        for (int k = 0; k < 4; ++k) {
          t_3d(i, j, k) = ...some random value...;
        }
      }
    }

    // Print elements of a tensor.
    for (int i = 0; i < 2; ++i) {
      LOG(INFO) << t_3d(i, 0, 0);
    }

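The loop above is pseudocode. If actual random values are wanted, the tensor
classes also provide a `setRandom()` method that fills every element in a
single call, as in this one-line sketch:

    t_3d.setRandom();  // fills all 24 elements with random values
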
## TensorLayout
The tensor library supports two layouts: `ColMajor` (the default) and
`RowMajor`. Only the default column major layout is currently fully
supported, and it is therefore not recommended to attempt to use the row major
layout at the moment.
The layout of a tensor is optionally specified as part of its type. If not
specified explicitly column major is assumed.

    Tensor<float, 3, ColMajor> col_major;  // equivalent to Tensor<float, 3>
    TensorMap<Tensor<float, 3, RowMajor> > row_major(data, ...);

All the arguments to an expression must use the same layout. Attempting to mix
different layouts will result in a compilation error.
It is possible to change the layout of a tensor or an expression using the
`swap_layout()` method. Note that this will also reverse the order of the
dimensions.

    Tensor<float, 2, ColMajor> col_major(2, 4);
    Tensor<float, 2, RowMajor> row_major(2, 4);

    Tensor<float, 2> col_major_result = col_major;  // ok, layouts match
    Tensor<float, 2> col_major_result = row_major;  // will not compile

    // Simple layout swap
    col_major_result = row_major.swap_layout();
    eigen_assert(col_major_result.dimension(0) == 4);
    eigen_assert(col_major_result.dimension(1) == 2);

    // Swap the layout and preserve the order of the dimensions
    array<int, 2> shuffle(1, 0);
    col_major_result = row_major.swap_layout().shuffle(shuffle);
    eigen_assert(col_major_result.dimension(0) == 2);
    eigen_assert(col_major_result.dimension(1) == 4);

## Tensor Operations
The Eigen Tensor library provides a vast library of operations on Tensors:
numerical operations such as addition and multiplication, geometry operations
such as slicing and shuffling, etc. These operations are available as methods
of the Tensor classes, and in some cases as operator overloads. For example
the following code computes the elementwise addition of two tensors:

    Tensor<float, 3> t1(2, 3, 4);
    ...set some values in t1...
    Tensor<float, 3> t2(2, 3, 4);
    ...set some values in t2...

    // Set t3 to the element wise sum of t1 and t2
    Tensor<float, 3> t3 = t1 + t2;

While the code above looks easy enough, it is important to understand that the
expression `t1 + t2` is not actually adding the values of the tensors. The
expression instead constructs a "tensor operator" object of the class
`TensorCwiseBinaryOp<scalar_sum>`, which has references to the tensors
`t1` and `t2`. This is a small C++ object that knows how to add
`t1` and `t2`. It is only when the value of the expression is assigned
to the tensor `t3` that the addition is actually performed. Technically,
this happens through the overloading of `operator=()` in the Tensor class.
This mechanism for computing tensor expressions allows for lazy evaluation and
optimizations which are what make the tensor library very fast.
Of course, the tensor operators do nest, and the expression `t1 + t2 * 0.3f`
is actually represented with the (approximate) tree of operators:

    TensorCwiseBinaryOp<scalar_sum>(t1, TensorCwiseUnaryOp<scalar_mul>(t2, 0.3f))

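The nested tree is still evaluated in a single pass, triggered by the
assignment to a Tensor. A minimal sketch:

    // No arithmetic happens until this assignment evaluates the whole tree.
    Tensor<float, 3> t5 = t1 + t2 * 0.3f;
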
### Tensor Operations and C++ "auto"
Because Tensor operations create tensor operators, the C++ `auto` keyword
does not have its intuitive meaning. Consider these two lines of code:

    Tensor<float, 3> t3 = t1 + t2;
    auto t4 = t1 + t2;

In the first line we allocate the tensor `t3` and it will contain the
result of the addition of `t1` and `t2`. In the second line, `t4`
is actually the tree of tensor operators that will compute the addition of
`t1` and `t2`. In fact, `t4` is *not* a tensor and you cannot get
the values of its elements:

    Tensor<float, 3> t3 = t1 + t2;
    cout << t3(0, 0, 0);  // OK prints the value of t1(0, 0, 0) + t2(0, 0, 0)

    auto t4 = t1 + t2;
    cout << t4(0, 0, 0);  // Compilation error!