Convolutional neural networks (CNNs) are widely used in image processing. The figure below shows a typical application in which a CNN extracts features from an image and then classifies it:
![](https://img.haomeiwen.com/i25693627/d5a77a91883e3a6f.png)
The main building blocks of a CNN are the input layer, convolutional layers, activation layers, pooling layers, and fully connected layers; each is introduced in turn below.
Input layer:
The input layer is the input to the whole network; in image processing it is usually the pixel matrix of an image, whose width, height, and depth describe how the input neurons are arranged. For example, if the input image is 32 × 32 × 3 (RGB), the input layer also has 32 × 32 × 3 neurons. Starting from the input layer, the network transforms the 3-D tensor of one layer into the 3-D tensor of the next, all the way up to the fully connected layers.
![](https://img.haomeiwen.com/i25693627/af30493013083dca.png)
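As a minimal sketch of such an input (assuming PyTorch, which stores images channel-first as [batch, depth, height, width] rather than width × height × depth):

```python
import torch

# A batch of 4 RGB images, 32 x 32 pixels each.
# PyTorch uses channel-first layout: [batch, depth(=3), height(=32), width(=32)].
images = torch.randn(4, 3, 32, 32)
print(images.shape)  # torch.Size([4, 3, 32, 32])
```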
Convolutional layer:
Each node in a convolutional layer takes as input only a small patch of the previous layer, commonly 3×3 or 5×5. The convolutional layer analyses each small patch in greater depth to obtain features at a higher level of abstraction. Every convolutional layer consists of several convolution units (kernels), whose parameters are optimized by back-propagation. The purpose of the convolution operation is to extract different features of the input: the first convolutional layer may only extract low-level features such as edges, lines, and corners, while deeper layers build more complex features from these low-level ones, layer by layer.
The figure below shows a convolution performed with a 3×3 kernel:
![](https://img.haomeiwen.com/i25693627/b324bb209dca3f74.png)
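To make the shapes concrete, here is a small sketch of a 3×3 convolution using PyTorch's nn.Conv2d (the kernel weights are random here, not the ones shown in the figure):

```python
import torch
import torch.nn as nn

# One 3x3 kernel sliding over a single-channel 5x5 input with stride 1.
conv = nn.Conv2d(in_channels=1, out_channels=1, kernel_size=3, stride=1, bias=False)
x = torch.randn(1, 1, 5, 5)   # [batch, channel, height, width]
y = conv(x)
print(y.shape)                # torch.Size([1, 1, 3, 3]); each side shrinks to 5 - 3 + 1 = 3
```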
Parameter sharing: in a convolutional model, the same kernel is applied across the entire input to produce the output. With parameter sharing, only one parameter set is needed, instead of learning a separate set of parameters for each position. Because the kernel can be far smaller than the input, the number of parameters to learn is greatly reduced; each convolutional layer can also use several kernels to produce multiple feature maps, giving the model strong feature-extraction and representation power. A small parameter-counting sketch follows below.
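The effect of parameter sharing can be seen by simply counting parameters; the sizes below are arbitrary examples chosen for illustration:

```python
import torch.nn as nn

# A 3x3 convolution from 3 input channels to 16 feature maps.
conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3)
print(sum(p.numel() for p in conv.parameters()))  # 3*3*3*16 + 16 = 448, independent of input size

# A fully connected layer on a flattened 32x32x3 input needs far more parameters,
# and the count grows with the input resolution.
fc = nn.Linear(3 * 32 * 32, 16)
print(sum(p.numel() for p in fc.parameters()))    # 3*32*32*16 + 16 = 49168
```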
Activation layer:
The activation layer simply applies an activation function. For more on activation functions, see this post: 激活函数、线性变换 - 简书 (jianshu.com)
Pooling layer:
A pooling layer does not change the depth of the 3-D tensor, but it shrinks its spatial size. Pooling can be thought of as turning a high-resolution image into a lower-resolution one; it further reduces the number of nodes in the final fully connected layers and therefore the number of parameters in the whole network.
The forward pass of a pooling layer is also computed with a filter-like structure, but instead of the weighted sum used by a convolutional layer, it uses the simpler maximum or average, giving max pooling and average pooling respectively. The figure below shows a 2×2 max pooling layer with stride 2:
![](https://img.haomeiwen.com/i25693627/40c143e54653a482.png)
If the input to the pooling layer is not a multiple of two in size, the edges are usually zero-padded up to a multiple of two before pooling.
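A minimal sketch of the 2×2, stride-2 max pooling described above (the input values are made up for illustration):

```python
import torch
import torch.nn as nn

x = torch.tensor([[[[1., 3., 2., 4.],
                    [5., 6., 7., 8.],
                    [3., 2., 1., 0.],
                    [1., 2., 3., 4.]]]])  # [batch=1, depth=1, height=4, width=4]
pool = nn.MaxPool2d(kernel_size=2, stride=2)
print(pool(x))  # each 2x2 block is replaced by its maximum; depth stays 1, size becomes 2x2
# tensor([[[[6., 8.],
#           [3., 4.]]]])
```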
Fully connected layer:
After several convolutional and pooling layers, a CNN usually ends with one or two fully connected layers that produce the final classification. The convolution and pooling stages can be viewed as an automatic image-feature extractor; by this point the image information has been abstracted into features with much higher information content.
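For instance, if the convolution and pooling stages end with a 7×7×64 feature map (an arbitrary size chosen only for illustration), the classification head could be sketched as:

```python
import torch
import torch.nn as nn

features = torch.randn(8, 64, 7, 7)         # [batch, depth, height, width] after conv/pool
flat = features.view(features.size(0), -1)  # flatten to [batch, 64*7*7]
head = nn.Sequential(
    nn.Linear(64 * 7 * 7, 128),
    nn.ReLU(),
    nn.Linear(128, 10),                     # e.g. 10 output classes
)
print(head(flat).shape)                     # torch.Size([8, 10])
```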
2-D convolution for natural language processing tasks:
2-D convolution over word embeddings is the key step when a CNN is used to classify text. Each row of the embedding layer represents one word as a vector. To extract classification-relevant features at the word (or character) level, the kernel must cover an entire word vector, i.e. the width of the 2-D convolution kernel should equal the dimensionality of the word vectors:
![](https://img.haomeiwen.com/i25693627/8a87743c3ce39516.png)
![](https://img.haomeiwen.com/i25693627/d6960a19e690fb04.png)
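A shape-only sketch of this idea (sentence length, embedding size, and kernel height are arbitrary here; the TextCNN code further below follows the same pattern):

```python
import torch
import torch.nn as nn

seq_len, emb_dim = 7, 50                 # 7 words, 50-dimensional word vectors
x = torch.randn(1, 1, seq_len, emb_dim)  # [batch, channel=1, words, embedding_dim]

# The kernel width equals the embedding dimension, so each step covers whole word vectors.
conv = nn.Conv2d(1, 100, kernel_size=(3, emb_dim))  # 100 feature maps, window of 3 words
out = conv(x)
print(out.shape)  # torch.Size([1, 100, 5, 1]): one value per 3-word window per feature map
```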
CNN-based sentence representations:
Y. Kim. "Convolutional Neural Networks for Sentence Classification". In: arXiv preprint arXiv:1408.5882 (2014).
![](https://img.haomeiwen.com/i25693627/d21d42e4e398ddd3.png)
N. Kalchbrenner, E. Grefenstette, and P. Blunsom. "A Convolutional Neural Network for Modelling Sentences". In: Proceedings of ACL 2014.
![](https://img.haomeiwen.com/i25693627/65a15d2fe1ad768c.png)
TextCNN implementation, taken from a blog post:
'''
code by Tae Hwan Jung(Jeff Jung) @graykode, modify by wmathor
'''
import torch
import numpy as np
import torch.nn as nn
import torch.optim as optim
import torch.utils.data as Data
import torch.nn.functional as F
dtype = torch.FloatTensor
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# 3-word sentences (sequence_length = 3)
sentences = ["i love you", "he loves me", "she likes baseball", "i hate you", "sorry for that", "this is awful"]
labels = [1, 1, 1, 0, 0, 0] # 1 is good, 0 is not good.
# TextCNN parameters
embedding_size = 2
sequence_length = len(sentences[0].split()) # every sentence contains sequence_length(=3) words
num_classes = 2 # 0 or 1
batch_size = 3
word_list = " ".join(sentences).split()
vocab = list(set(word_list))
word2idx = {w: i for i, w in enumerate(vocab)}
vocab_size = len(vocab)
def make_data(sentences, labels):
    inputs = []
    for sen in sentences:
        inputs.append([word2idx[n] for n in sen.split()])
    targets = []
    for out in labels:
        targets.append(out) # class indices, as expected by torch's CrossEntropyLoss
    return inputs, targets
input_batch, target_batch = make_data(sentences, labels)
input_batch, target_batch = torch.LongTensor(input_batch), torch.LongTensor(target_batch)
dataset = Data.TensorDataset(input_batch, target_batch)
loader = Data.DataLoader(dataset, batch_size, True)
class TextCNN(nn.Module):
    def __init__(self):
        super(TextCNN, self).__init__()
        self.W = nn.Embedding(vocab_size, embedding_size)
        output_channel = 3
        self.conv = nn.Sequential(
            # conv : [input_channel(=1), output_channel, (filter_height, filter_width), stride=1]
            nn.Conv2d(1, output_channel, (2, embedding_size)),
            nn.ReLU(),
            # pool : (filter_height, filter_width)
            nn.MaxPool2d((2, 1)),
        )
        # fc
        self.fc = nn.Linear(output_channel, num_classes)

    def forward(self, X):
        '''
        X: [batch_size, sequence_length]
        '''
        batch_size = X.shape[0]
        embedding_X = self.W(X) # [batch_size, sequence_length, embedding_size]
        embedding_X = embedding_X.unsqueeze(1) # add channel(=1): [batch, channel(=1), sequence_length, embedding_size]
        conved = self.conv(embedding_X) # [batch_size, output_channel, 1, 1]
        flatten = conved.view(batch_size, -1) # [batch_size, output_channel*1*1]
        output = self.fc(flatten)
        return output
model = TextCNN().to(device)
criterion = nn.CrossEntropyLoss().to(device)
optimizer = optim.Adam(model.parameters(), lr=1e-3)
# Training
for epoch in range(5000):
    for batch_x, batch_y in loader:
        batch_x, batch_y = batch_x.to(device), batch_y.to(device)
        pred = model(batch_x)
        loss = criterion(pred, batch_y)
        if (epoch + 1) % 1000 == 0:
            print('Epoch:', '%04d' % (epoch + 1), 'loss =', '{:.6f}'.format(loss.item()))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
# Test
test_text = 'i hate me'
tests = [[word2idx[n] for n in test_text.split()]]
test_batch = torch.LongTensor(tests).to(device)
# Predict
model = model.eval()
predict = model(test_batch).data.max(1, keepdim=True)[1]
if predict[0][0] == 0:
    print(test_text, "is Bad Mean...")
else:
    print(test_text, "is Good Mean!!")