When fitting a tf.keras model you can pass in a variety of callback functions. This post introduces several commonly used ones.
EarlyStopping
- monitor='val_loss': the quantity to monitor; once it meets the stopping criteria below, training stops early
- min_delta=0: the minimum change that counts as an improvement; a change smaller than this is treated as no improvement
- patience=0: the number of epochs with no improvement after which training is stopped
- verbose=0: verbosity level
- mode='auto': the monitored quantity may be something like accuracy (which should increase) or loss (which should decrease), so the direction of "improvement" is not fixed; 'auto' is usually fine
- baseline=None: baseline value for the monitored quantity; training stops if the model shows no improvement over the baseline
- restore_best_weights=False: whether to restore the model weights from the epoch with the best value of the monitored quantity
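A minimal sketch of typical usage (the threshold and patience values here are illustrative choices, not recommendations):

```python
import tensorflow as tf

# Stop when val_loss has not improved by at least 0.001 for 5 epochs,
# then roll the model back to its best weights.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",
    min_delta=0.001,
    patience=5,
    mode="auto",
    restore_best_weights=True,
)

# Passed to fit() like any other callback (model/data are placeholders):
# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           epochs=100, callbacks=[early_stop])
```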
LearningRateScheduler
LearningRateScheduler adjusts the learning rate on a schedule and takes two parameters:
- schedule: a function that takes the epoch index and returns the learning rate to use for that epoch
- verbose: verbosity level
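A sketch of the schedule function described above, using a hypothetical step decay (the base rate and decay factor are made-up values):

```python
import tensorflow as tf

# The schedule is an ordinary function of the epoch index.
# Here: halve the learning rate every 10 epochs.
def schedule(epoch, lr=None):
    return 0.1 * (0.5 ** (epoch // 10))

lr_cb = tf.keras.callbacks.LearningRateScheduler(schedule, verbose=1)
```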
ReduceLROnPlateau
- monitor: the quantity to monitor
- factor: factor by which the learning rate will be reduced on each decay: new_lr = lr * factor
- patience: the number of epochs with no improvement after which the learning rate is reduced
- verbose: int. 0: quiet, 1: update messages
- mode: one of {auto, min, max}. In min mode, lr is reduced when the monitored quantity has stopped decreasing; in max mode, when it has stopped increasing; in auto mode, the direction is inferred from the name of the monitored quantity
- min_delta: threshold for measuring whether the monitored quantity has improved
- cooldown: number of epochs to wait after a reduction before resuming normal operation, i.e. before checking whether to reduce again
- min_lr: lower bound on the learning rate
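A minimal sketch combining the parameters above (the specific factor, patience, cooldown, and floor are illustrative):

```python
import tensorflow as tf

# Reduce the lr by 10x if val_loss stalls for 3 epochs, then wait
# 2 epochs (cooldown) before monitoring again; never go below 1e-6.
reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(
    monitor="val_loss",
    factor=0.1,
    patience=3,
    cooldown=2,
    min_lr=1e-6,
)
```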
LambdaCallback
It accepts the following six parameters, each of which is an anonymous function:
on_epoch_begin: called at the beginning of every epoch.
on_epoch_end: called at the end of every epoch.
on_batch_begin: called at the beginning of every batch.
on_batch_end: called at the end of every batch.
on_train_begin: called at the beginning of model training.
on_train_end: called at the end of model training.
The anonymous functions take the following arguments:
- on_epoch_begin and on_epoch_end expect two positional arguments: epoch, logs
- on_batch_begin and on_batch_end expect two positional arguments: batch, logs
- on_train_begin and on_train_end expect one positional argument: logs
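A small sketch of the epoch-end hook; the `history` list and the choice to record the loss are illustrative:

```python
import tensorflow as tf

# Record (epoch, loss) at the end of every epoch with a lambda.
history = []
log_cb = tf.keras.callbacks.LambdaCallback(
    on_epoch_end=lambda epoch, logs: history.append(
        (epoch, (logs or {}).get("loss"))
    )
)

# The hooks can be invoked directly to see the signature in action;
# during fit() Keras calls them for you with the current logs dict.
log_cb.on_epoch_end(0, {"loss": 0.42})
```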
Custom callbacks
To write a custom callback, simply subclass tf.keras.callbacks.Callback and override the hooks you need; this lets you implement arbitrarily complex callback logic.
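A minimal sketch of such a subclass; the class name and the best-loss-tracking behavior are made up for illustration:

```python
import tensorflow as tf

# A custom callback that tracks the best (lowest) loss seen so far.
class BestLossTracker(tf.keras.callbacks.Callback):
    def __init__(self):
        super().__init__()
        self.best = float("inf")

    def on_epoch_end(self, epoch, logs=None):
        loss = (logs or {}).get("loss")
        if loss is not None and loss < self.best:
            self.best = loss

tracker = BestLossTracker()
# Hooks can be exercised directly; during fit() Keras calls them for you.
tracker.on_epoch_end(0, {"loss": 0.9})
tracker.on_epoch_end(1, {"loss": 0.4})
```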