Introduction
A Caffe network typically consists of a data layer, the intermediate network layers (what we usually think of as the network architecture), and a loss layer. In Caffe, data propagates between Layers in the form of Blobs.
Every Layer implements three basic operations: Setup, Forward, and Backward (a minimal Python sketch follows the list below).
Setup: initialize the layer and its connections once, when the model is initialized;
Forward: take the input from the bottom blobs, compute, and send the output to the top blobs;
Backward: given the gradient with respect to the top output, compute the gradient with respect to the input and propagate it to the bottom blobs.
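As a preview, here is a minimal sketch of that interface as seen from a Python layer (a hypothetical identity layer, not part of this post's model; working layers follow later):

import caffe

class IdentityLayer(caffe.Layer):
    def setup(self, bottom, top):
        # one-time checks and wiring when the net is initialized
        pass

    def reshape(self, bottom, top):
        # size the top blob to match the bottom blob
        top[0].reshape(*bottom[0].data.shape)

    def forward(self, bottom, top):
        # read from bottom, write the result to top
        top[0].data[...] = bottom[0].data

    def backward(self, top, propagate_down, bottom):
        # turn the gradient w.r.t. top into the gradient w.r.t. bottom
        if propagate_down[0]:
            bottom[0].diff[...] = top[0].diff

(Python layers also implement reshape, which Caffe calls to size the output blobs.)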
The parameters of every layer are documented in https://github.com/BVLC/caffe/blob/master/src/caffe/proto/caffe.proto
For installing Caffe, see: https://www.jianshu.com/p/992351958ed4
The walkthrough below uses a facial-landmark detection model as the example. Create the model definition file landmark.prototxt.
First, give the model a name:
name: "ONet"
I. Defining the network structure
1. The data layer
Caffe has seven built-in data layer types (Data, MemoryData, HDF5Data, HDF5Output, ImageData, WindowData, DummyData), and beyond these you can also define your own. Here we use a custom data layer.
Define the data layer in landmark.prototxt:
layer {
  name: "data"
  type: "Python"
  top: "data"
  top: "label"
  python_param {
    module: "my_data_layer"
    layer: "ImageDataLayer"
    param_str: "{'source': '../data/preproc/data/112/landmark_aug.txt', 'batch_size': 384, 'shuffle': True, 'size': (112, 112)}"
  }
  include: { phase: TRAIN }
}
layer {
  name: "data"
  type: "Python"
  top: "data"
  top: "label"
  python_param {
    module: "my_data_layer"
    layer: "ImageDataLayer"
    param_str: "{'source': '../data/preproc/data/112/landmark_aug.txt', 'batch_size': 384, 'shuffle': False, 'size': (112, 112)}"
  }
  include: { phase: TEST }
}
name
The layer's name; any string will do.
type: "Python"
Using Python layers requires building Caffe with Python layer support: uncomment the line WITH_PYTHON_LAYER := 1 in Makefile.config before compiling.
top
The layer's output.
bottom
The layer's input; a data layer has none.
A layer may have multiple tops and bottoms.
Note: a data layer should have at least one top named data (see https://zhuanlan.zhihu.com/p/34606014).
module
The Python module containing the custom layer. It must be importable when Caffe runs (e.g., located in the working directory or on PYTHONPATH); do not add the .py suffix.
layer
The name of the class inside that Python module.
param_str
A string used to pass user-defined parameters into the layer.
include: {phase: TRAIN}
Marks the layer as active only during training.
include: {phase: TEST}
Marks the layer as active only during testing.
The corresponding Python code, in a file named my_data_layer.py:
import caffe
import cv2
import numpy as np
import random
import sys

class ImageDataLayer(caffe.Layer):
    def setup(self, bottom, top):
        self.top_names = ['data', 'label']
        # param_str comes straight from the prototxt; eval turns it into a dict
        params = eval(self.param_str)
        self.batch_size = params['batch_size']
        self.shuffle = params['shuffle']
        self.batch_loader = BatchLoader(params)
        height, width = params['size']
        top[0].reshape(self.batch_size, 3, height, width)
        top[1].reshape(self.batch_size, 127*2)

    def forward(self, bottom, top):
        batch = self.batch_loader.next()
        imgs = []
        labels = []
        if self.shuffle:
            random.shuffle(batch)
        for it in batch:
            # each line is: image_path x1 y1 ... x127 y127; images are
            # expected to already be height x width (no resizing here)
            items = it.split()
            img = cv2.imread(items[0])
            if img is None:
                print("cv2.imread {} is None, exit.".format(items[0]))
                sys.exit(1)
            img = np.transpose(img, (2, 0, 1))  # (height, width, 3) -> (3, height, width)
            label = list(map(float, items[1:]))
            imgs.append(img)
            labels.append(label)
        # write through data[...]; rebinding top[i].data would not feed the net
        top[0].data[...] = np.array(imgs)
        top[1].data[...] = np.array(labels)

    def reshape(self, bottom, top):
        pass

    def backward(self, top, propagate_down, bottom):
        pass

class BatchLoader(object):
    def __init__(self, params):
        self.source = params['source']
        self.batch_size = params['batch_size']
        self.size = params['size']
        self.isshuffle = params['shuffle']
        self.datalist = open(self.source, "r").read().splitlines()
        # build the generator once; calling a generator function on every
        # forward pass would restart it each time and never yield a batch
        self.generator = self.batches()

    def batches(self):
        # loop over the list forever; the last batch of each epoch is padded
        # with random earlier samples so every batch has batch_size items
        while True:
            len_data = len(self.datalist)
            for i in range(0, len_data, self.batch_size):
                pad = i + self.batch_size - len_data
                pad = pad if pad > 0 else 0
                yield self.datalist[i:i+self.batch_size] + random.sample(self.datalist[0:i], pad)

    def next(self):
        return next(self.generator)
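A quick standalone sanity check of BatchLoader (a sketch run outside Caffe; the params dict mirrors the param_str above and assumes landmark_aug.txt exists with at least batch_size lines):

from my_data_layer import BatchLoader

params = {'source': '../data/preproc/data/112/landmark_aug.txt',
          'batch_size': 4, 'shuffle': True, 'size': (112, 112)}
loader = BatchLoader(params)
batch = loader.next()
print(len(batch))           # -> 4
print(batch[0].split()[0])  # the first field of each line is the image path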
2. The intermediate layers
Define the first convolution layer:
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 32
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
The first lr_mult: 1
sets the learning rate of the weights to 1 × base_lr (base_lr is defined in solver.prototxt);
the second lr_mult: 2
sets the learning rate of the bias to 2 × base_lr.
num_output
The number of output channels.
kernel_size
The kernel size; width and height both equal kernel_size. If they differ, set kernel_h and kernel_w separately.
stride
The stride.
weight_filler type
How the weights are initialized.
bias_filler type
How the bias is initialized; with type constant the default value is 0.
pad
The padding, 0 by default. Setting it to 2 pads 2 pixels on each side.
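For reference, the Convolution layer computes its output spatial size with floor rounding; a small helper (illustrative, not part of the model) to check the shapes:

# out = floor((in + 2*pad - kernel) / stride) + 1  (Caffe Convolution)
def conv_out(in_size, kernel, stride=1, pad=0):
    return (in_size + 2 * pad - kernel) // stride + 1

print(conv_out(112, 3))  # conv1 above on a 112x112 input -> 110x110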
The pooling layer:
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
    pad: 1
  }
}
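Note that Pooling, unlike Convolution, rounds up (ceil) when computing its output size:

import math

# out = ceil((in + 2*pad - kernel) / stride) + 1  (Caffe Pooling)
def pool_out(in_size, kernel, stride, pad=0):
    return int(math.ceil((in_size + 2.0 * pad - kernel) / stride)) + 1

print(pool_out(110, 3, 2, 1))  # pool1 above on a 110x110 input -> 56x56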
The PReLU layer (top and bottom name the same blob, so the activation is computed in place):
layer {
  name: "prelu1"
  type: "PReLU"
  bottom: "pool1"
  top: "pool1"
}
Flatten implemented with a Reshape layer (dim: 0 copies that dimension from the bottom blob, dim: -1 infers it from the remaining elements); Caffe also has a dedicated Flatten layer, shown after this block:
layer {
  name: "flatten"
  type: "Reshape"
  bottom: "prelu4"
  top: "flatten"
  reshape_param {
    shape {
      dim: 0
      dim: -1
    }
  }
}
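The same operation with the dedicated Flatten layer:

layer {
  name: "flatten"
  type: "Flatten"
  bottom: "prelu4"
  top: "flatten"
}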
The fully connected layer. num_output is 254 = 127 landmark points × 2 coordinates, matching the label blob produced by the data layer:
layer {
  name: "landmark_pred"
  type: "InnerProduct"
  bottom: "prelu5"
  top: "landmark_pred"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 254
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
The custom loss layer:
layer {
  name: "landmark_loss"
  type: "Python"
  top: "landmark_loss"
  bottom: "landmark_pred"
  bottom: "label"
  python_param {
    module: "wing_loss_layer"
    layer: "WingLossLayer"
    param_str: "{'w': 1.0, 'eplison': 0.2}"
  }
  # set loss weight so Caffe knows this is a loss layer.
  # since PythonLayer inherits directly from Layer, this isn't automatically
  # known to Caffe
  loss_weight: 1
}
The loss_weight parameter is required; without it Caffe does not treat the layer's output as a loss, and training will not converge.
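For reference, wing loss (Feng et al., CVPR 2018) on each element of the error x = prediction - label is:

wing(x) = w * ln(1 + |x|/eplison)   if |x| < w
wing(x) = |x| - C                   otherwise, where C = w - w * ln(1 + w/eplison)

i.e. logarithmic (steeper) for small errors and linear for large ones; the forward pass below implements exactly this, averaged over the batch. (The code spells epsilon as eplison, matching the param_str above.)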
The Python code for the custom wing loss, in wing_loss_layer.py (matching the module name in the prototxt):
import caffe
import numpy as np

class WingLossLayer(caffe.Layer):
    def setup(self, bottom, top):
        if len(bottom) != 2:
            raise Exception("Need two bottoms for WingLossLayer")
        params = eval(self.param_str)
        self.w = params['w']
        self.eplison = params['eplison']

    def reshape(self, bottom, top):
        if bottom[0].count != bottom[1].count:
            raise Exception("Inputs must have the same dimension")
        self.diff = np.zeros_like(bottom[0].data, dtype=np.float32)
        top[0].reshape(1)

    def forward(self, bottom, top):
        # note: bottom[0] and bottom[1] may need reshaping here; with an LMDB
        # data source this may be unnecessary
        self.diff = bottom[0].data - bottom[1].data
        idx = np.abs(self.diff) < self.w
        idx1 = np.abs(self.diff) >= self.w
        # wing(x) = w*ln(1 + |x|/eplison) for |x| < w, else |x| - C,
        # with C = w - w*ln(1 + w/eplison); averaged over the batch
        top[0].data[...] = (
            np.sum(self.w * np.log(1.0/self.eplison * np.abs(self.diff[idx]) + 1.)) +
            np.sum(np.abs(self.diff[idx1]) - (self.w - self.w * np.log(1.0 + self.w/self.eplison)))
        ) / bottom[0].num

    def backward(self, top, propagate_down, bottom):
        idx0 = (0. < self.diff) & (self.diff < self.w)
        idx1 = (-self.w < self.diff) & (self.diff < 0.)
        idx2 = self.diff >= self.w
        idx3 = self.diff <= -self.w
        for i in range(0, 2):
            if not propagate_down[i]:
                continue
            # diff = pred - label, so the gradient w.r.t. the label (i == 1)
            # is the negative of the gradient w.r.t. the prediction (i == 0)
            sign = 1 if i == 0 else -1
            # d/dx [w*ln(1 + |x|/eplison)] = w / (eplison + |x|), signed by x
            bottom[i].diff[idx0] = sign * 1.0 * self.w / (self.eplison + np.abs(self.diff[idx0])) / bottom[i].num
            bottom[i].diff[idx1] = sign * (-1.0) * self.w / (self.eplison + np.abs(self.diff[idx1])) / bottom[i].num
            bottom[i].diff[idx2] = sign * 1.0 / bottom[i].num
            bottom[i].diff[idx3] = sign * (-1.0) / bottom[i].num
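A standalone finite-difference check of the analytic gradient used above (a sketch, independent of Caffe; w and eplison match the param_str):

import numpy as np

w, eps = 1.0, 0.2
C = w - w * np.log(1.0 + w / eps)

def wing(x):
    # element-wise wing loss, matching the forward pass above
    return np.where(np.abs(x) < w, w * np.log(1.0 + np.abs(x) / eps), np.abs(x) - C)

x = 0.3                                          # a point in the nonlinear region
num = (wing(x + 1e-6) - wing(x - 1e-6)) / 2e-6   # numerical derivative
ana = w / (eps + abs(x))                         # analytic derivative for 0 < x < w
print(num, ana)                                  # both should print ~2.0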
3. Caffe's BN layers
In Caffe, batch normalization is split across a BatchNorm layer and a Scale layer: BatchNorm normalizes each channel (subtracting the mini-batch mean and dividing by the standard deviation), while Scale applies the learnable per-channel scale and bias. For example:
layer {
  name: "conv1/bn"
  type: "BatchNorm"
  bottom: "conv1"
  top: "conv1/bn"
  batch_norm_param {
    moving_average_fraction: 0.997
    eps: 1e-3
  }
}
layer {
  name: "conv1/scale"
  type: "Scale"
  bottom: "conv1/bn"
  top: "conv1/scale"
  scale_param {
    bias_term: true
  }
}
[References]
Understanding the mean, variance, and moving-average-fraction parameters of Caffe's BatchNorm layer
The BatchNorm layer in Caffe
Caffe's BatchNorm implementation
A brief look at Batch Normalization and its Caffe implementation
4. Defining a depthwise convolution layer
[References]
https://mc.ai/depthwise-separable-convolution%E2%80%8A-%E2%80%8Ain-caffe-framework/
How to get depthwise separable convolution in Caffe? In the Caffe framework, a normal Convolution layer acts as a depthwise convolution when the number of groups is set equal to the number of input channels; see the sketch after these links.
https://github.com/shicai/MobileNet-Caffe/blob/master/mobilenet_deploy.prototxt
https://github.com/farmingyard/caffe-mobilenet/blob/master/mobilenet_1by2_deploy.prototxt
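For instance, a sketch in the style of the MobileNet prototxts linked above (the layer names here are illustrative, not from this post's model): a 3×3 depthwise convolution over the 32 channels produced by conv1 sets both num_output and group to 32, giving one filter per input channel:

layer {
  name: "conv2/dw"
  type: "Convolution"
  bottom: "conv1"
  top: "conv2/dw"
  convolution_param {
    num_output: 32
    group: 32        # group == number of input channels -> depthwise
    kernel_size: 3
    stride: 1
    pad: 1
    weight_filler { type: "xavier" }
  }
}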
II. Defining the solver parameters
Put the optimization settings in a file named solver.prototxt:
net: "landmark.prototxt"
test_iter: 100
test_interval: 500
base_lr: 0.0001
momentum: 0.9
momentum2: 0.999
type: "Adam"
lr_policy: "fixed"
display: 100
max_iter: 30000
snapshot: 5000
snapshot_prefix: "../../checkpoint/caffe"
solver_mode: GPU
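A few notes on these fields: type: "Adam" selects the Adam solver, which uses momentum and momentum2 as its β1 and β2; lr_policy: "fixed" keeps the learning rate at base_lr for the whole run; every test_interval training iterations Caffe runs test_iter batches through the TEST-phase data layer (here 100 × 384 samples per test pass); and weights plus solver state are snapshotted every 5000 iterations under the prefix ../../checkpoint/caffe.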
III. Training
Run: caffe.bin train --solver=solver.prototxt -gpu 0 (the -gpu flag takes a device ID, or all to train on every available GPU).
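To resume an interrupted run, also pass the saved solver state, e.g. caffe.bin train --solver=solver.prototxt --snapshot=../../checkpoint/caffe_iter_5000.solverstate (the file name follows the snapshot_prefix above).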
IV. Deployment
Deployment uses a dedicated model definition: essentially the training definition with the data layer and loss layer removed (as a rule), and an Input layer added at the top. Its shape is (batch, channels, height, width):
layer {
  name: "data"
  type: "Input"
  top: "data"
  input_param { shape: { dim: 1 dim: 3 dim: 112 dim: 112 } }
}
1. Deployment with pycaffe, i.e., running inference through the Python API:
import caffe
import numpy as np
import cv2
import random
import os

caffe.set_mode_cpu()

class Inference():
    def __init__(self, deploy_proto, model):
        self.net = caffe.Net(deploy_proto, model, caffe.TEST)
        self.transformer = caffe.io.Transformer({'data': self.net.blobs['data'].data.shape})
        # HWC -> CHW; cv2 images are already BGR, hence no channel swap here
        self.transformer.set_transpose('data', (2,0,1))
        #self.transformer.set_mean('data', np.array([127.5,127.5,127.5]))
        #self.transformer.set_raw_scale('data', 1/128.0)
        #self.transformer.set_channel_swap('data', (2,1,0))

    def forward(self, img):
        self.net.blobs['data'].data[...] = self.transformer.preprocess('data', img)
        out = self.net.forward()
        landmarks = self.net.blobs['landmark_pred'].data[0]
        return landmarks

# usage
if __name__ == "__main__":
    model_path = "onet1_deploy.prototxt"
    weight_path = "../../checkpoint/caffe-onet1/onet_iter_50000.caffemodel"
    net = Inference(model_path, weight_path)
    path = "test.jpg"  # placeholder: path to an input image
    img_orig = cv2.imread(path)
    img = np.asarray(img_orig).astype(np.float32)
    img = (img - 127.5) / 128.0  # normalize to roughly [-1, 1]
    landmarks = net.forward(img)
[References]
Caffe for Python official tutorial (translated)
http://www.voidcn.com/article/p-pgjwtpri-st.html
Caffe study (6): classifying with a trained model from Python (Ubuntu)
Caffe study notes (7): predicting with a trained model (MNIST)
Caffe study series (20): classifying with a trained caffemodel
When the methods of a Caffe Python layer are invoked
References
http://caffecn.cn/?/page/tutorial
http://manutdzou.github.io/2016/05/15/Caffe-Document.html
Adding a Python data layer (ImageData) to Caffe
Adding a Python layer in Caffe
[Adding C++ layers to Caffe] Custom layers and custom loss layers in C++: https://github.com/JunrQ/caffe-layer (includes a wing loss layer, depthwise conv layer, coord2heatmap layer, and heatmap loss layer)
[Custom loss] Writing custom caffe-python-layer losses
Defining a custom Caffe loss layer in Python
https://github.com/BVLC/caffe/blob/master/examples/pycaffe/layers/pyloss.py
[Wing loss]: https://github.com/DaChaoXc/caffe-layer-code/blob/master/wingLoss.py
Common Caffe solvers and their parameters
Caffe's special layers
The meaning of each parameter in a Caffe solver file
Caffe solver parameters explained
Caffe study notes 3: Loss and combining multiple losses
Windows Caffe study notes (4): building your own network and training/testing on MNIST
https://github.com/RiweiChen/DeepFace/blob/master/FaceAlignment/try1_2/train_val.prototxt
What do base_lr, weight_decay, lr_mult, and decay_mult mean in Caffe?
Getting started with Caffe (1): parsing layer parameter configuration
Caffe for dummies (3): activation layers and their parameters
https://gist.github.com/jyegerlehner/b2f073aa8e213f0a9167
Study notes on the official Caffe tutorial
Generating prototxt network files from code: the full Caffe training workflow