1 Intro
Building blocks: tf.Graph and tf.Session
Simple inputs: constant, placeholder, feeding, Datasets
Layers, feature columns, and their initialization
Training with a loss and an optimizer
- Abstract
Use tf.Graph and tf.Session
Use high-level components (datasets, layers, and feature_columns)
Build your own training loop
- Core walkthrough
A graph consists of two kinds of objects: tf.Operation (the nodes) and tf.Tensor (the edges)
- TensorBoard
tf.summary.FileWriter('.')
https://www.tensorflow.org/guide/summaries_and_tensorboard
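A minimal sketch of dumping the graph for TensorBoard (assumes TF 1.x; writing to the current directory '.' is just an example):
import tensorflow as tf

a = tf.constant(3.0, dtype=tf.float32)
b = tf.constant(4.0)
total = a + b
# Serialize the default graph to a TensorBoard event file in '.'.
writer = tf.summary.FileWriter('.')
writer.add_graph(tf.get_default_graph())
writer.flush()
# Then inspect it with: tensorboard --logdir .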
- Session
sess = tf.Session()
sess.run({'ab':(a, b), 'total':total})
tf.random_uniform produces a new value on every Session.run call, but a tensor has only one value within a single run, so all uses of it in that run are consistent; see the sketch below.
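A sketch adapted from the guide's example (uses the sess created above): vec differs between runs but is shared within one run:
vec = tf.random_uniform(shape=(3,))
out1 = vec + 1
out2 = vec + 2
print(sess.run(vec))           # some random vector
print(sess.run(vec))           # a different random vector
print(sess.run((out1, out2)))  # both outputs are based on the same vec value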
- Feeding
x = tf.placeholder(tf.float32)
y = tf.placeholder(tf.float32)
z = x + y
print(sess.run(z, feed_dict={x: 3, y: 4.5}))
print(sess.run(z, feed_dict={x: [1, 3], y: [2, 4]}))
- Datasets
slices = tf.data.Dataset.from_tensor_slices(my_data)
next_item = slices.make_one_shot_iterator().get_next()
The key point is the iterator mechanism: run next_item repeatedly until the Dataset is exhausted, as in the sketch below.
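A runnable sketch of the iterator pattern (the data values are just an example):
import tensorflow as tf

my_data = [
    [0, 1],
    [2, 3],
    [4, 5],
    [6, 7],
]
slices = tf.data.Dataset.from_tensor_slices(my_data)
next_item = slices.make_one_shot_iterator().get_next()

sess = tf.Session()
while True:
  try:
    print(sess.run(next_item))  # one row per run
  except tf.errors.OutOfRangeError:
    break  # the one-shot iterator is exhausted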
- Layers
There are two main styles: the object style tf.layers.Dense(units=1), where you create the layer and then call it on an input, and the functional shortcut tf.layers.dense(x, units=1).
At run time the layer's weights must be initialized with init = tf.global_variables_initializer() followed by sess.run(init); see the sketch below.
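A small sketch contrasting the two styles (the placeholder shape and feed values are examples):
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 3])

# Object style: build the layer once, then apply it (the object can be reused).
linear_model = tf.layers.Dense(units=1)
y1 = linear_model(x)

# Functional shortcut: builds and applies the layer in one call.
y2 = tf.layers.dense(x, units=1)

sess = tf.Session()
sess.run(tf.global_variables_initializer())  # initializes both layers' weights
print(sess.run(y1, feed_dict={x: [[1, 2, 3], [4, 5, 6]]}))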
- Feature columns
Applied to a dict of features with tf.feature_column.input_layer.
Two kinds are used here: numeric_column and indicator_column (the latter wraps a categorical column).
Some feature columns have internal state and, like layers, must be initialized. Categorical columns use lookup tables internally (tf.contrib.lookup), which additionally require tf.tables_initializer; a sketch follows.
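A sketch adapted from the guide, with an example numeric column and a categorical column wrapped in an indicator_column; note the two initializers:
import tensorflow as tf

features = {
    'sales': [[5], [10], [8], [9]],
    'department': ['sports', 'sports', 'gardening', 'gardening'],
}
department_column = tf.feature_column.categorical_column_with_vocabulary_list(
    'department', ['sports', 'gardening'])
columns = [
    tf.feature_column.numeric_column('sales'),
    tf.feature_column.indicator_column(department_column),
]
inputs = tf.feature_column.input_layer(features, columns)

sess = tf.Session()
# Categorical columns build lookup tables, so tables_initializer is needed too.
sess.run((tf.global_variables_initializer(), tf.tables_initializer()))
print(sess.run(inputs))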
- Training
Define a loss
Define an optimizer
Combine them into a train op:
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)
Complete code:
import tensorflow as tf

x = tf.constant([[1], [2], [3], [4]], dtype=tf.float32)
y_true = tf.constant([[0], [-1], [-2], [-3]], dtype=tf.float32)

linear_model = tf.layers.Dense(units=1)
y_pred = linear_model(x)

loss = tf.losses.mean_squared_error(labels=y_true, predictions=y_pred)
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)

init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
for i in range(100):
  _, loss_value = sess.run((train, loss))
  print(loss_value)  # loss should decrease over the 100 steps

print(sess.run(y_pred))
2 Tensors
A tf.Tensor has two properties: a data type and a shape.
There are four special kinds of tensors: tf.Variable, tf.constant, tf.placeholder, tf.SparseTensor.
Rank and shape:
- Rank
Rank is the number of dimensions.
Get the rank: r = tf.rank(my_image)
- Shape
Shape gives the number of elements along each dimension.
Using the shape: zeros = tf.zeros(my_matrix.shape[1]) makes a zero vector sized to the number of columns of my_matrix.
Changing the shape:
rank_three_tensor = tf.ones([3, 4, 5])
matrix = tf.reshape(rank_three_tensor, [6, 10])  # reshape existing content into a 6x10 matrix
matrixB = tf.reshape(matrix, [3, -1])  # 3x20 matrix; -1 tells reshape to infer this dimension
matrixAlt = tf.reshape(matrixB, [4, 3, -1])  # 4x3x5 tensor
- Data types
tf.string, tf.int32, tf.float32; the latter two are the defaults used when converting Python integers and floats.
- Printing Tensors
Evaluating tensors: Tensor.eval() returns the value (requires a default session).
Printing tensors: tf.Print is an identity op that prints the given tensors as a side effect when evaluated.
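A small sketch of both (TF 1.x; the message text is an example):
import tensorflow as tf

t = tf.constant([1.0, 2.0])
# tf.Print returns its first argument unchanged and prints [t] when t is evaluated.
t = tf.Print(t, [t], message="value of t: ")
with tf.Session() as sess:
  print(t.eval())  # eval() works because the with-block sets a default session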
3 Variables
Creating variables
Collections and device placement
Using and initializing variables
Sharing variables
- Creating a variable
my_variable = tf.get_variable("my_variable", [1, 2, 3])
A default initializer is used (tf.glorot_uniform_initializer); you can also pass your own via the initializer argument, as sketched below.
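Sketches from the guide of passing an explicit initializer, or a constant tensor (in which case the shape is inferred and must not be given):
my_int_variable = tf.get_variable(
    "my_int_variable", [1, 2, 3], dtype=tf.int32,
    initializer=tf.zeros_initializer)
# Initializing from a tensor: do not specify the shape.
other_variable = tf.get_variable(
    "other_variable", dtype=tf.int32,
    initializer=tf.constant([23, 42]))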
- Variable collections
Collections are named lists of tensors, so that related variables can be accessed in one place.
By default every variable is placed in two collections:
- tf.GraphKeys.GLOBAL_VARIABLES --- variables that can be shared across multiple devices
- tf.GraphKeys.TRAINABLE_VARIABLES --- variables for which TensorFlow will calculate gradients
When creating a variable you can choose whether it is trainable and which collections it belongs to; a sketch follows.
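A sketch of both knobs, plus a custom collection (the names are examples):
# Placed only in LOCAL_VARIABLES: not trainable, not saved by default.
my_local = tf.get_variable(
    "my_local", shape=(), collections=[tf.GraphKeys.LOCAL_VARIABLES])
# Kept out of gradient computation via trainable=False.
my_non_trainable = tf.get_variable(
    "my_non_trainable", shape=(), trainable=False)
# Custom collections need no registration; just add and read back:
tf.add_to_collection("my_collection_name", my_local)
print(tf.get_collection("my_collection_name"))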
- Device placement
Variables can be pinned to particular devices with tf.device, as sketched below.
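A sketch (the device string is an example; in distributed settings tf.train.replica_device_setter handles placement automatically):
with tf.device("/device:GPU:1"):
  v = tf.get_variable("v", [1])  # v lives on the second GPU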
- Initializing
High-level APIs such as Estimator and Keras initialize variables automatically; with the low-level API you must initialize them explicitly.
session.run(tf.global_variables_initializer())
# Now all variables in the GLOBAL_VARIABLES collection are initialized.
session.run(my_variable.initializer)  # initialize a single variable yourself
print(session.run(tf.report_uninitialized_variables()))  # which variables still need it
If a variable's initial value depends on another variable, use initialized_value() so the ordering is safe:
v = tf.get_variable("v", shape=(), initializer=tf.zeros_initializer())
w = tf.get_variable("w", initializer=v.initialized_value() + 1)
- Using
A variable is used like any other tf.Tensor.
Assign with v.assign(...) or v.assign_add(1); these return ops that must be run to take effect.
Use tf.Variable.read_value to read the value current at that point in the graph, as in the sketch below.
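A sketch from the guide: force the read to happen after the assignment with a control dependency:
v = tf.get_variable("v", shape=(), initializer=tf.zeros_initializer())
assignment = v.assign_add(1)
with tf.control_dependencies([assignment]):
  w = v.read_value()  # w is guaranteed to reflect the += 1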
- Sharing
Explicit sharing: pass the tf.Variable object around directly.
Implicit sharing: use tf.variable_scope("model", reuse=True); the reuse flag controls whether tf.get_variable returns an existing variable or creates a new one.
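A minimal sketch of implicit sharing (linear is a hypothetical helper; the guide uses a conv layer):
def linear(x):
  # tf.get_variable either creates "w" or, under reuse, returns the existing one.
  w = tf.get_variable("w", shape=(), initializer=tf.ones_initializer())
  return x * w

with tf.variable_scope("model"):
  y1 = linear(tf.constant(1.0))
with tf.variable_scope("model", reuse=True):
  y2 = linear(tf.constant(2.0))  # reuses the same variable "model/w"
# Alternatively, call scope.reuse_variables() inside a single scope.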
4 Graphs and Sessions
- Why and what is tf.Graph
Why: parallelism, distributed execution, compilation, portability.
What: a tf.Graph contains the graph structure (tf.Operation and tf.Tensor objects) and graph collections (general metadata).
- Building a tf.Graph