Building CNN networks in PyTorch
In an earlier chapter, following the tutorial on the official PyTorch site, we already built and visualized the LeNet-5 network. This article continues exploring how to build CNN networks and walks through four different approaches.
Development / experiment environment
- Ubuntu 18.04
- PyTorch 1.0.0
- Anaconda3, Python 3.6
- PyCharm
PyTorch API overview
The torch.nn.Module class
torch.nn.Module is the base class for all neural networks. To build a network, you therefore subclass torch.nn.Module.
- Basic pattern for building a neural network:
import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))
- Main methods of the torch.nn.Module class
Method 1:
add_module(name, module)
Purpose: adds a child module to the current module.
Parameters:
name ------- name of the child module
module ------- the child module
Method 2:
apply(fn)
Purpose: applies fn to every child module (and to the module itself).
fn(sub_module)
Method 3:
forward(*input)
Purpose: the forward computation of the network. Every subclass must implement this method.
Method 4:
parameters()
Purpose: returns an iterator over the module's parameters.
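To make these methods concrete, here is a minimal sketch (my own addition, not from the original article); the init_weights helper and the layer name 'fc' are purely illustrative:

import torch.nn as nn

# Illustrative sketch of add_module / apply / parameters
net = nn.Sequential()
net.add_module('fc', nn.Linear(4, 2))     # add_module: register a child module under a name

def init_weights(m):                      # this is the fn(sub_module) passed to apply()
    if isinstance(m, nn.Linear):
        nn.init.xavier_uniform_(m.weight)

net.apply(init_weights)                   # apply: run fn on every submodule

for p in net.parameters():                # parameters: iterator over all learnable parameters
    print(p.shape)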
The torch.nn.Sequential class
torch.nn.Sequential is an ordered container: modules are added in the order in which they are passed to the constructor.
Alternatively, an OrderedDict of named modules can be passed in directly.
Example:
# Example of using Sequential
model = nn.Sequential(
    nn.Conv2d(1, 20, 5),
    nn.ReLU(),
    nn.Conv2d(20, 64, 5),
    nn.ReLU()
)

# Example of using Sequential with OrderedDict
from collections import OrderedDict
model = nn.Sequential(OrderedDict([
    ('conv1', nn.Conv2d(1, 20, 5)),
    ('relu1', nn.ReLU()),
    ('conv2', nn.Conv2d(20, 64, 5)),
    ('relu2', nn.ReLU())
]))
Experiments: different ways to build a neural network
Method 1
- The plain, direct style
class Net1(nn.Module):
    def __init__(self):
        super(Net1, self).__init__()
        self.conv1 = nn.Conv2d(in_channels=3,
                               out_channels=32,
                               kernel_size=3,
                               stride=1,
                               padding=1)
        # 32 * 3 * 3: see the size check after this listing
        self.fc1 = nn.Linear(in_features=32 * 3 * 3,
                             out_features=128)
        self.fc2 = nn.Linear(in_features=128,
                             out_features=10)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)
        x = x.view(x.size(0), -1)
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        return x

print("CNN model_1:")
model_1 = Net1()
print(model_1)
(screenshot: printed structure of Net1)
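The 32 * 3 * 3 in fc1 comes from the input size used in the test at the end of the article: a 6x6 input keeps its 6x6 spatial size after the padded convolution and shrinks to 3x3 after the 2x2 max-pool, giving 32 * 3 * 3 = 288 flattened features. A minimal sketch (my own addition) that probes this size with a dummy tensor:

import torch
import torch.nn as nn
import torch.nn.functional as F

# Probe the flattened feature size with a dummy 6x6 input (the size used in the test below)
conv1 = nn.Conv2d(3, 32, kernel_size=3, stride=1, padding=1)
dummy = torch.randn(1, 3, 6, 6)
feat = F.max_pool2d(F.relu(conv1(dummy)), 2)
print(feat.shape)                 # torch.Size([1, 32, 3, 3])
print(feat.view(1, -1).size(1))   # 288 == 32 * 3 * 3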
Method 2
- Using torch.nn.Sequential
class Net2(nn.Module):
    def __init__(self):
        super(Net2, self).__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels=3,
                      out_channels=32,
                      kernel_size=3,
                      stride=1,
                      padding=1),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2)
        )
        self.fc = nn.Sequential(
            nn.Linear(in_features=32 * 3 * 3,
                      out_features=128),
            nn.ReLU(),
            nn.Linear(in_features=128,
                      out_features=10)
        )

    def forward(self, x):
        conv_out = self.conv(x)
        res = conv_out.view(conv_out.size(0), -1)
        out = self.fc(res)
        return out

print('CNN model_2:')
print(Net2())
(screenshot: printed structure of Net2)
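As a side note (my own addition), the children of an nn.Sequential can also be accessed by position, which is handy for inspecting individual layers:

# Sequential children can be indexed by position
model_2 = Net2()
print(model_2.conv[0])   # the Conv2d layer
print(model_2.fc[0])     # the first Linear layer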
Method 3
- Using torch.nn.Sequential and passing in an OrderedDict
Feature: each layer can be given a name
'''
Using the OrderedDict form
'''
class Net4(nn.Module):
    def __init__(self):
        super(Net4, self).__init__()
        self.conv = nn.Sequential(
            OrderedDict(
                [
                    ('conv1', nn.Conv2d(in_channels=3,
                                        out_channels=32,
                                        kernel_size=3,
                                        stride=1,
                                        padding=1)),
                    ('relu1', nn.ReLU()),
                    ('pool1', nn.MaxPool2d(kernel_size=2))
                ]
            )
        )
        self.fc = nn.Sequential(
            OrderedDict(
                [
                    ('fc1', nn.Linear(in_features=32 * 3 * 3,
                                      out_features=128)),
                    ('relu2', nn.ReLU()),
                    ('fc2', nn.Linear(in_features=128,
                                      out_features=10))
                ]
            )
        )

    def forward(self, x):
        conv_out = self.conv(x)
        res = conv_out.view(conv_out.size(0), -1)
        out = self.fc(res)
        return out

print('CNN model_4:')
print(Net4())
(screenshot: printed structure of Net4)
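Because the layers are named, each one can also be fetched by name as an attribute, or listed with named_children(); a short sketch (my own addition):

# Named layers defined via OrderedDict are accessible by name
model_4 = Net4()
print(model_4.conv.conv1)                 # the Conv2d layer registered as 'conv1'
for name, layer in model_4.fc.named_children():
    print(name, '->', layer.__class__.__name__)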
Method 4
- Using the add_module method
add_module registers one child module at a time.
This approach also lets each layer be given a name.
'''
Adding layers via add_module()
'''
class Net3(nn.Module):
    def __init__(self):
        super(Net3, self).__init__()
        self.conv = nn.Sequential()
        self.conv.add_module(name='conv1',
                             module=nn.Conv2d(in_channels=3,
                                              out_channels=32,
                                              kernel_size=3,   # configured like the other three models
                                              stride=1,
                                              padding=1))
        self.conv.add_module(name='relu1', module=nn.ReLU())
        self.conv.add_module(name='pool1', module=nn.MaxPool2d(kernel_size=2))
        self.fc = nn.Sequential()
        self.fc.add_module('fc1', module=nn.Linear(in_features=32 * 3 * 3,
                                                   out_features=128))
        self.fc.add_module('relu2', module=nn.ReLU())
        self.fc.add_module('fc2', module=nn.Linear(in_features=128,
                                                   out_features=10))

    def forward(self, x):
        conv_out = self.conv(x)
        res = conv_out.view(conv_out.size(0), -1)
        out = self.fc(res)   # feed the flattened features, not the raw input
        return out

print('CNN model_3:')
print(Net3())
(screenshot: printed structure of Net3)
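All four classes are meant to describe the same architecture, so their parameter counts should agree. A quick hedged check (my own addition, assuming the layers really are configured identically across the four classes):

# Compare total parameter counts of the four equivalent models
for cls in (Net1, Net2, Net3, Net4):
    n_params = sum(p.numel() for p in cls().parameters())
    print(cls.__name__, n_params)
# All four should print the same number if the layers match.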
Testing the network
Pass in a tensor of size 1x3x6x6:
import torch

x = torch.randn(1, 3, 6, 6)
model = Net4()
out = model(x)
print(out)
Output:
The printed result for this test sample is a 1x10 tensor of scores, one value per class.
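A small follow-up sketch (my own addition) showing how a predicted class could be read off from this output:

# Interpret the 1x10 output as class scores
print(out.shape)              # torch.Size([1, 10])
pred = out.argmax(dim=1)      # index of the highest-scoring class
print(pred)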
End
References:
https://pytorch.org/docs/stable/nn.html#torch.nn.Parameter
https://www.cnblogs.com/denny402/p/7593301.html