1. How to adjust the learning rate
Keras provides two ways to adapt the learning rate, both implemented as callbacks.
LearningRateScheduler
keras.callbacks.LearningRateScheduler(schedule)
This callback is a learning-rate scheduler.
Arguments
- schedule: a function that takes the epoch index (an integer, counting from 0) as its argument and returns a new learning rate (a float). A usage sketch follows below.
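For example, a minimal sketch (the initial rate and the decay schedule here are assumptions, not from the original post) that halves the learning rate every 10 epochs:

```python
from keras.callbacks import LearningRateScheduler

def step_decay(epoch):
    # Assumed schedule: start at 0.01 and halve the rate every 10 epochs.
    return 0.01 * (0.5 ** (epoch // 10))

lr_scheduler = LearningRateScheduler(step_decay)
# Pass it to fit(), e.g.:
# model.fit(X_train, Y_train, epochs=50, callbacks=[lr_scheduler])
```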
ReduceLROnPlateau
keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=10, verbose=0, mode='auto', epsilon=0.0001, cooldown=0, min_lr=0)
Reduce the learning rate when a monitored metric has stopped improving.
Models often benefit from cutting the learning rate by a factor of 2 to 10 once learning stagnates. This callback monitors a quantity, and if no improvement in model performance is seen for patience epochs, the learning rate is reduced; see the sketch after the argument list below.
Arguments
- monitor: the quantity to be monitored.
- factor: the factor by which the learning rate is reduced each time, i.e. lr = lr * factor.
- patience: number of epochs with no improvement after which the reduction is triggered.
- mode: one of 'auto', 'min', 'max'. In min mode the reduction is triggered when the monitored quantity stops decreasing; in max mode, when it stops increasing.
- epsilon: threshold used to decide whether the monitored quantity has entered a "plateau".
- cooldown: number of epochs to wait after a reduction before normal operation resumes.
- min_lr: lower bound on the learning rate.
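A minimal sketch (the factor, patience, and the commented-out model/data names are assumptions, not from the original post):

```python
from keras.callbacks import ReduceLROnPlateau

# Halve the learning rate if val_loss has not improved for 5 epochs,
# but never go below 1e-6.
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.5,
                              patience=5, min_lr=1e-6)
# model.fit(X_train, Y_train, validation_data=(X_val, Y_val),
#           epochs=50, callbacks=[reduce_lr])
```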
2. How to use TensorBoard when training with train_on_batch
Because train_on_batch bypasses the callbacks mechanism of fit(), the TensorBoard callback has to be created and driven by hand. Run the following code:
```python
import numpy as np
import tensorflow as tf
from keras.callbacks import TensorBoard
from keras.layers import Input, Dense
from keras.models import Model

def write_log(callback, names, logs, batch_no):
    # Write each named scalar to the TensorBoard log at step batch_no
    # (tf.Summary is the TensorFlow 1.x summary API).
    for name, value in zip(names, logs):
        summary = tf.Summary()
        summary_value = summary.value.add()
        summary_value.simple_value = value
        summary_value.tag = name
        callback.writer.add_summary(summary, batch_no)
        callback.writer.flush()

# A minimal model: one Dense layer mapping 3 inputs to 1 output.
net_in = Input(shape=(3,))
net_out = Dense(1)(net_in)
model = Model(net_in, net_out)
model.compile(loss='mse', optimizer='sgd', metrics=['mae'])

# Create the TensorBoard callback by hand and attach it to the model.
log_path = './graph'
callback = TensorBoard(log_path)
callback.set_model(model)

train_names = ['train_loss', 'train_mae']
val_names = ['val_loss', 'val_mae']
for batch_no in range(100):
    X_train, Y_train = np.random.rand(32, 3), np.random.rand(32, 1)
    logs = model.train_on_batch(X_train, Y_train)
    write_log(callback, train_names, logs, batch_no)
    if batch_no % 10 == 0:
        X_val, Y_val = np.random.rand(32, 3), np.random.rand(32, 1)
        # Evaluate (rather than train) on the validation batch.
        logs = model.test_on_batch(X_val, Y_val)
        write_log(callback, val_names, logs, batch_no // 10)
```
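The write_log helper above relies on the TensorFlow 1.x tf.Summary protobuf API. On TensorFlow 2.x, where that API was removed, a rough equivalent (a sketch under that assumption, not from the original post) is to write scalars directly with tf.summary:

```python
# Sketch for TensorFlow 2.x: log scalars from train_on_batch
# with tf.summary instead of the removed tf.Summary protobuf.
import tensorflow as tf

writer = tf.summary.create_file_writer('./graph')

def write_log_v2(names, logs, step):
    with writer.as_default():
        for name, value in zip(names, logs):
            tf.summary.scalar(name, value, step=step)
        writer.flush()
```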