# Eigen Tensors
Tensors are multidimensional arrays of elements. Elements are typically scalars,
but more complex types such as strings are also supported.
[TOC]
## Tensor Classes
You can manipulate a tensor with one of the following classes. They are all in
the namespace ```::Eigen```.
### Class Tensor<data_type, rank>
This is the class to use to create a tensor and allocate memory for it. The
class is templatized with the tensor datatype, such as float or int, and the
tensor rank. The rank is the number of dimensions; for example, a tensor of
rank 2 is a matrix.
Tensors of this class are resizable. For example, if you assign a tensor of a
different size to a Tensor, the Tensor is resized to match its new value.
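Tensors can also be resized explicitly through the ```resize()``` method; a
minimal sketch (note that resizing discards the previous contents):

    // Explicitly resize a tensor; the old element values are not preserved.
    Tensor<float, 2> t_2d(3, 4);
    t_2d.resize(6, 2);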
#### Constructor Tensor<data_type, rank>(size0, size1, ...)
Constructor for a Tensor. The constructor must be passed ```rank``` integers
indicating the sizes of the instance along each of the ```rank```
dimensions.
    // Create a tensor of rank 3 of sizes 2, 3, 4. This tensor owns
    // memory to hold 24 floating point values (24 = 2 x 3 x 4).
    Tensor<float, 3> t_3d(2, 3, 4);

    // Resize t_3d by assigning a tensor of different sizes, but same rank.
    t_3d = Tensor<float, 3>(3, 4, 3);
#### Constructor Tensor<data_type, rank>(size_array)
Constructor where the sizes are specified as an array of
values instead of an explicit list of parameters. The array type to use is
```Eigen::array<Eigen::Index>```. The array can be constructed automatically
from an initializer list.
    // Create a tensor of strings of rank 2 with sizes 5, 7.
    Tensor<string, 2> t_2d({5, 7});
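For illustration, the sizes can also be passed through an explicitly
constructed array; a minimal sketch:

    // Build the size array explicitly and pass it to the constructor.
    Eigen::array<Eigen::Index, 3> sizes = {2, 3, 4};
    Tensor<float, 3> t_3d(sizes);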
### Class TensorFixedSize<data_type, Sizes<size0, size1, ...>>
Class to use for tensors of fixed size, where the size is known at compile
time. Fixed-size tensors can provide very fast computations because all their
dimensions are known to the compiler. Fixed-size tensors are not resizable.
If the total number of elements in a fixed-size tensor is small enough, the
tensor data is held on the stack and does not incur heap allocation or deallocation.
    // Create a 4 x 3 tensor of floats.
    TensorFixedSize<float, Sizes<4, 3>> t_4x3;
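Apart from sizing, a fixed-size tensor is used like any other tensor; a brief
sketch:

    // Element access and expressions work the same way as for Tensor.
    TensorFixedSize<float, Sizes<4, 3>> t_4x3;
    t_4x3.setZero();
    t_4x3(0, 2) = 7.0f;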
### Class TensorMap<Tensor<data_type, rank>>
This is the class to use to create a tensor on top of memory allocated and
owned by another part of your code. It allows you to view any piece of allocated
memory as a Tensor. Instances of this class do not own the memory where the
data are stored.
A TensorMap is not resizable because it does not own the memory where its data
are stored.
#### Constructor TensorMap<Tensor<data_type, rank>>(data, size0, size1, ...)
Constructor for a TensorMap. The constructor must be passed a pointer to the
storage for the data, and ```rank``` integers giving the sizes along each of the
dimensions. The storage has to be
large enough to hold all the data.
    // Map a tensor of ints on top of stack-allocated storage.
    int storage[128];  // 2 x 4 x 2 x 8 = 128
    TensorMap<Tensor<int, 4>> t_4d(storage, 2, 4, 2, 8);

    // The same storage can be viewed as a different tensor.
    // You can also pass the sizes as an array.
    TensorMap<Tensor<int, 2>> t_2d(storage, 16, 8);

    // You can also map fixed-size tensors. Here we get a 1d view of
    // the 2d fixed-size tensor.
    TensorFixedSize<float, Sizes<4, 3>> t_4x3;
    TensorMap<Tensor<float, 1>> t_12(t_4x3.data(), 12);
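Because a TensorMap does not own its storage, writes through the map update the
mapped memory directly; a minimal sketch:

    // Writing through the map modifies the original array.
    float data[6] = {0, 1, 2, 3, 4, 5};
    TensorMap<Tensor<float, 2>> m_2d(data, 2, 3);
    m_2d(1, 2) = 42.0f;  // with the default column-major layout this is data[5]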
### Class TensorRef
See Assigning to a TensorRef below.
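As a preview (the details are in that section), a TensorRef wraps a tensor
expression and lets you read individual coefficients without materializing the
whole result; a sketch, assuming ```t1``` and ```t2``` are float tensors of
rank 3 with matching sizes:

    // Wrap an expression; coefficients are computed on the fly when accessed.
    TensorRef<Tensor<float, 3>> ref = ((t1 + t2) * 0.2f).exp();
    float v = ref(0, 1, 0);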
## Accessing Tensor Elements
#### <data_type> tensor(index0, index1...)
Return the element at position ```(index0, index1...)``` in tensor
```tensor```. You must pass as many parameters as the rank of ```tensor```.
The expression can be used as an l-value to set the value of the element at the
specified position. The value returned is of the datatype of the tensor.
    // Set the value of the element at position (0, 1, 0).
    Tensor<float, 3> t_3d(2, 3, 4);
    t_3d(0, 1, 0) = 12.0f;

    // Initialize all elements to random values.
    for (int i = 0; i < 2; ++i) {
      for (int j = 0; j < 3; ++j) {
        for (int k = 0; k < 4; ++k) {
          t_3d(i, j, k) = ...some random value...;
        }
      }
    }

    // Print elements of a tensor.
    for (int i = 0; i < 2; ++i) {
      LOG(INFO) << t_3d(i, 0, 0);
    }
## TensorLayout
The tensor library supports two layouts: ```ColMajor``` (the default) and
```RowMajor```. Only the default column-major layout is currently fully
supported, and it is therefore not recommended to attempt to use the row-major
layout at the moment.
The layout of a tensor is optionally specified as part of its type. If not
specified explicitly, column major is assumed.
    Tensor<float, 3, ColMajor> col_major;  // equivalent to Tensor<float, 3>
    TensorMap<Tensor<float, 3, RowMajor> > row_major(data, ...);
All the arguments to an expression must use the same layout. Attempting to mix
different layouts will result in a compilation error.
It is possible to change the layout of a tensor or an expression using the
```swap_layout()``` method. Note that this will also reverse the order of the
dimensions.
    Tensor<float, 2, ColMajor> col_major(2, 4);
    Tensor<float, 2, RowMajor> row_major(2, 4);

    Tensor<float, 2> col_major_result = col_major;  // ok, layouts match
    Tensor<float, 2> col_major_result = row_major;  // will not compile

    // Simple layout swap
    col_major_result = row_major.swap_layout();
    eigen_assert(col_major_result.dimension(0) == 4);
    eigen_assert(col_major_result.dimension(1) == 2);

    // Swap the layout and preserve the order of the dimensions
    array<int, 2> shuffle{1, 0};
    col_major_result = row_major.swap_layout().shuffle(shuffle);
    eigen_assert(col_major_result.dimension(0) == 2);
    eigen_assert(col_major_result.dimension(1) == 4);
## Tensor Operations
The Eigen Tensor library provides a vast set of operations on Tensors:
numerical operations such as addition and multiplication, geometry operations
such as slicing and shuffling, etc. These operations are available as methods
of the Tensor classes, and in some cases as operator overloads. For example,
the following code computes the elementwise addition of two tensors:
    Tensor<float, 3> t1(2, 3, 4);
    ...set some values in t1...
    Tensor<float, 3> t2(2, 3, 4);
    ...set some values in t2...

    // Set t3 to the element wise sum of t1 and t2
    Tensor<float, 3> t3 = t1 + t2;
While the code above looks easy enough, it is important to understand that the
expression ```t1 + t2``` is not actually adding the values of the tensors. The
expression instead constructs a "tensor operator" object of the class
```TensorCwiseBinaryOp<scalar_sum>```, which has references to the tensors
```t1``` and ```t2```. This is a small C++ object that knows how to add
```t1``` and ```t2```. It is only when the value of the expression is assigned
to the tensor ```t3``` that the addition is actually performed. Technically,
this happens through the overloading of ```operator=()``` in the Tensor class.
This mechanism for computing tensor expressions allows for lazy evaluation and
optimizations which, combined, make the tensor library very fast.
Of course, the tensor operators do nest, and the expression ```t1 + t2 *
0.3f``` is actually represented with the (approximate) tree of operators:
    TensorCwiseBinaryOp<scalar_sum>(t1, TensorCwiseUnaryOp<scalar_mul>(t2, 0.3f))
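To make the lazy evaluation concrete, here is a small sketch: building the
expression does no arithmetic, and the work happens when the expression is
assigned to a Tensor.

    Tensor<float, 3> result(2, 3, 4);
    // The operator tree for "t1 + t2 * 0.3f" is built on the right-hand side,
    // and it is evaluated element by element during the assignment.
    result = t1 + t2 * 0.3f;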
### Tensor Operations and C++ "auto"
Because Tensor operations create tensor operators, the C++ ```auto``` keyword
does not have its intuitive meaning. Consider these 2 lines of code:
    Tensor<float, 3> t3 = t1 + t2;
    auto t4 = t1 + t2;
In the first line we allocate the tensor ```t3``` and it will contain the
result of the addition of ```t1``` and ```t2```. In the second line, ```t4```
is actually the tree of tensor operators that will compute the addition of
```t1``` and ```t2```. In fact, ```t4``` is *not* a tensor and you cannot get
the values of its elements:
    Tensor<float, 3> t3 = t1 + t2;
    auto t4 = t1 + t2;

    std::cout << t3(0, 0, 0);  // OK prints the value of t1(0, 0, 0) + t2(0, 0, 0)
    std::cout << t4(0, 0, 0);  // Compilation error!
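Assigning the expression to a real Tensor is what forces the evaluation; a
short sketch:

    // Evaluate the expression held by t4 by assigning it to a Tensor.
    Tensor<float, 3> t5 = t4;
    std::cout << t5(0, 0, 0);  // fine: t5 holds actual values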