# TensorFlow-Slim
TF-Slim is a lightweight library for defining, training and evaluating models in
TensorFlow. It enables defining complex networks quickly and concisely while
keeping a model's architecture transparent and its hyperparameters explicit.
[TOC]
## Teaser
As a demonstration of the simplicity of using TF-Slim, compare the simplicity of
the code necessary for defining the entire [VGG](http://www.robots.ox.ac.uk/~vgg/research/very_deep/) network using TF-Slim to
the lengthy and verbose nature of defining just the first three layers (out of
16) using native tensorflow:
```python{.good}
# VGG16 in TF-Slim.
def vgg16(inputs):
  with slim.arg_scope([slim.ops.conv2d, slim.ops.fc], stddev=0.01, weight_decay=0.0005):
    net = slim.ops.repeat_op(2, inputs, slim.ops.conv2d, 64, [3, 3], scope='conv1')
    net = slim.ops.max_pool(net, [2, 2], scope='pool1')
    net = slim.ops.repeat_op(2, net, slim.ops.conv2d, 128, [3, 3], scope='conv2')
    net = slim.ops.max_pool(net, [2, 2], scope='pool2')
    net = slim.ops.repeat_op(3, net, slim.ops.conv2d, 256, [3, 3], scope='conv3')
    net = slim.ops.max_pool(net, [2, 2], scope='pool3')
    net = slim.ops.repeat_op(3, net, slim.ops.conv2d, 512, [3, 3], scope='conv4')
    net = slim.ops.max_pool(net, [2, 2], scope='pool4')
    net = slim.ops.repeat_op(3, net, slim.ops.conv2d, 512, [3, 3], scope='conv5')
    net = slim.ops.max_pool(net, [2, 2], scope='pool5')
    net = slim.ops.flatten(net, scope='flatten5')
    net = slim.ops.fc(net, 4096, scope='fc6')
    net = slim.ops.dropout(net, 0.5, scope='dropout6')
    net = slim.ops.fc(net, 4096, scope='fc7')
    net = slim.ops.dropout(net, 0.5, scope='dropout7')
    net = slim.ops.fc(net, 1000, activation=None, scope='fc8')
  return net
```
```python{.bad}
# Layers 1-3 (out of 16) of VGG16 in native tensorflow.
def vgg16(inputs):
  with tf.name_scope('conv1_1') as scope:
    kernel = tf.Variable(tf.truncated_normal([3, 3, 3, 64], dtype=tf.float32, stddev=1e-1), name='weights')
    conv = tf.nn.conv2d(inputs, kernel, [1, 1, 1, 1], padding='SAME')
    biases = tf.Variable(tf.constant(0.0, shape=[64], dtype=tf.float32), trainable=True, name='biases')
    bias = tf.nn.bias_add(conv, biases)
    conv1 = tf.nn.relu(bias, name=scope)
  with tf.name_scope('conv1_2') as scope:
    kernel = tf.Variable(tf.truncated_normal([3, 3, 64, 64], dtype=tf.float32, stddev=1e-1), name='weights')
    conv = tf.nn.conv2d(conv1, kernel, [1, 1, 1, 1], padding='SAME')
    biases = tf.Variable(tf.constant(0.0, shape=[64], dtype=tf.float32), trainable=True, name='biases')
    bias = tf.nn.bias_add(conv, biases)
    conv1 = tf.nn.relu(bias, name=scope)
  with tf.name_scope('pool1'):
    pool1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID', name='pool1')
```
## Why TF-Slim?
TF-Slim offers several advantages over native TensorFlow:
* It allows models to be defined much more compactly by eliminating boilerplate
  code. This is accomplished through the use of [argument scoping](./scopes.py)
  and numerous high-level [operations](./ops.py). These tools increase
  readability and maintainability, reduce the likelihood of errors from
  copy-and-pasting hyperparameter values, and simplify hyperparameter tuning.
* It makes developing models simple by providing commonly used [loss functions](./losses.py).
* It provides a concise [definition](./inception_model.py) of the [Inception v3](http://arxiv.org/abs/1512.00567)
  network architecture, ready to be used out-of-the-box or subsumed into new models.
Additionally, TF-Slim was designed with several principles in mind:
* The various modules of TF-Slim (scopes, variables, ops, losses) are
  independent. This flexibility allows users to pick and choose components of
  TF-Slim completely à la carte.
* TF-Slim is written in a functional programming style. That makes it
  super-lightweight and usable right alongside any of TensorFlow's native
  operations.
* It makes re-using network architectures easy, allowing users to build new
  networks on top of existing ones and to fine-tune pre-trained models on new
  tasks.
## What are the various components of TF-Slim?
TF-Slim is composed of several parts which were designed to exist independently.
These include:
* [scopes.py](./scopes.py): provides a new scope named `arg_scope` that allows
a user to define default arguments for specific operations within that
scope.
* [variables.py](./variables.py): provides convenience wrappers for variable
creation and manipulation.
* [ops.py](./ops.py): provides high level operations for building models using
tensorflow.
* [losses.py](./losses.py): contains commonly used loss functions.
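The semantics of `arg_scope` can be illustrated with a small, framework-independent sketch. The names below (`arg_scope`, `add_arg_scope`, the toy `conv2d`) are illustrative stand-ins, not TF-Slim's actual implementation: a context manager pushes default keyword arguments onto a stack, and decorated functions merge those defaults with their explicit arguments, with explicit arguments winning.

```python
import contextlib
import functools

_ARG_STACK = [{}]  # stack of {function: default-kwargs} frames


@contextlib.contextmanager
def arg_scope(funcs, **defaults):
    """Push default kwargs for the given functions; pop them on exit."""
    frame = dict(_ARG_STACK[-1])
    for f in funcs:
        merged = dict(frame.get(f, {}))
        merged.update(defaults)
        frame[f] = merged
    _ARG_STACK.append(frame)
    try:
        yield
    finally:
        _ARG_STACK.pop()


def add_arg_scope(func):
    """Make func pick up defaults from the enclosing arg_scope."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        merged = dict(_ARG_STACK[-1].get(wrapper, {}))
        merged.update(kwargs)  # explicit kwargs override scope defaults
        return func(*args, **merged)
    return wrapper


@add_arg_scope
def conv2d(inputs, num_outputs, stddev=1.0, weight_decay=0.0):
    # Toy layer: just report which arguments it ended up with.
    return (inputs, num_outputs, stddev, weight_decay)


with arg_scope([conv2d], stddev=0.01, weight_decay=0.0005):
    first = conv2d('x', 64)               # picks up the scope defaults
    second = conv2d('x', 64, stddev=0.1)  # explicit value overrides the scope
```

Outside the `with` block, `conv2d` falls back to its own declared defaults, which is why nesting scopes composes cleanly.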
## Defining Models
Models can be succinctly defined using TF-Slim by combining its variables,
operations and scopes. Each of these elements is described below.
### Variables
Creating [`Variables`](https://www.tensorflow.org/how_tos/variables/index.html)
in native tensorflow requires either a predefined value or an initialization
mechanism (random, normally distributed). Furthermore, if a variable needs to be
created on a specific device, such as a GPU, the specification must be [made
explicit](https://www.tensorflow.org/how_tos/using_gpu/index.html). To alleviate
the code required for variable creation, TF-Slim provides a set of thin wrapper
functions in [variables.py](./variables.py) which allow callers to easily define
variables.
For example, to create a `weight` variable, initialize it using a truncated
normal distribution, regularize it with an `l2_loss` and place it on the `CPU`,
one need only declare the following:
```python
weights = variables.variable('weights',
                             shape=[10, 10, 3, 3],
                             initializer=tf.truncated_normal_initializer(stddev=0.1),
                             regularizer=lambda t: losses.l2_loss(t, weight=0.05),
                             device='/cpu:0')
```
In addition to the functionality provided by `tf.Variable`, `slim.variables`
keeps track of the variables created by `slim.ops` to define a model, which
allows one to distinguish variables that belong to the model versus other
variables.
```python
# Get all the variables defined by the model.
model_variables = slim.variables.get_variables()
# Get all the variables with the same given name, i.e. 'weights', 'biases'.
weights = slim.variables.get_variables_by_name('weights')
biases = slim.variables.get_variables_by_name('biases')
# Get all the variables in VARIABLES_TO_RESTORE collection.
variables_to_restore = tf.get_collection(slim.variables.VARIABLES_TO_RESTORE)
```
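The bookkeeping behind `get_variables` follows TensorFlow's named-collection pattern: every variable created through the library is also appended to one or more lists keyed by a collection name. A minimal plain-Python sketch of that idea (the names `variable`, `add_to_collection`, and the `restore` flag here are illustrative, not TF-Slim's internals):

```python
_COLLECTIONS = {}  # collection name -> list of registered variables


def add_to_collection(name, value):
    """Append value to the named collection, creating it if needed."""
    _COLLECTIONS.setdefault(name, []).append(value)


def get_collection(name):
    """Return a copy of the named collection (empty list if absent)."""
    return list(_COLLECTIONS.get(name, []))


def variable(name, shape, restore=True):
    """Create a toy 'variable' and register it in the model collections."""
    var = {'name': name, 'shape': shape}
    add_to_collection('model_variables', var)
    if restore:
        add_to_collection('variables_to_restore', var)
    return var


variable('weights', [10, 10, 3, 3])
variable('step', [], restore=False)  # tracked, but excluded from restore
model_vars = get_collection('model_variables')         # both variables
restore_vars = get_collection('variables_to_restore')  # only 'weights'
```

Because creation and registration happen in one place, code that saves, restores, or regularizes a model can query the collections instead of threading variable references through every function.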
### Operations (Layers)
While the set of TensorFlow operations is quite extensive, builders of neural
networks typically think of models in terms of "layers". A layer, such as a
Convolutional Layer, a Fully Connected Layer or a BatchNorm Layer, is more
abstract than a single TensorFlow operation and typically involves several such
operations. For example, a Convolutional Layer in a neural network is built
using several steps:
1. Creating the weight variables.
2. Creating the bias variables.
3. Convolving the weights with the input from the previous layer.
4. Adding the biases to the result of the convolution.
In python code this can be rather laborious:
```python
input = ...
with tf.name_scope('conv1_1') as scope:
  kernel = tf.Variable(tf.truncated_normal([3, 3, 64, 128], dtype=tf.float32,
                                           stddev=1e-1), name='weights')
  conv = tf.nn.conv2d(input, kernel, [1, 1, 1, 1], padding='SAME')
  biases = tf.Variable(tf.constant(0.0, shape=[128], dtype=tf.float32),
                       trainable=True, name='biases')
  bias = tf.nn.bias_add(conv, biases)
  conv1 = tf.nn.relu(bias, name=scope)
```