
Deep Learning - 2

Author: 恰似一碗咸鱼粥 | Published 2020-02-19 14:37

    1. Convolutional Neural Network Basics

    Topics covered: convolutional layers, pooling layers, and the meaning of several parameters: padding, stride, input channels, and output channels.

    Two-Dimensional Convolutional Layers

    Two-dimensional convolutional layers are commonly used to process image data.

    Two-Dimensional Cross-Correlation

    The input is a two-dimensional array together with a two-dimensional kernel, and the output is also a two-dimensional array. The kernel slides over the input matrix; at each position, the elements under the kernel are multiplied elementwise with the kernel and summed.
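
    A minimal sketch of this operation in PyTorch (the function name corr2d follows the d2l convention; the example is illustrative, not from the original post):

    import torch

    def corr2d(X, K):
        # slide kernel K over input X; at each position take the
        # elementwise product with the window and sum it up
        h, w = K.shape
        Y = torch.zeros(X.shape[0] - h + 1, X.shape[1] - w + 1)
        for i in range(Y.shape[0]):
            for j in range(Y.shape[1]):
                Y[i, j] = (X[i:i + h, j:j + w] * K).sum()
        return Y

    X = torch.arange(9, dtype=torch.float32).reshape(3, 3)
    K = torch.tensor([[0.0, 1.0], [2.0, 3.0]])
    print(corr2d(X, K))  # tensor([[19., 25.], [37., 43.]])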


    Two-Dimensional Convolutional Layers

    A two-dimensional convolutional layer performs cross-correlation between the input and the kernel and adds a scalar bias, so the layer's parameters are the kernel and the scalar bias.

    Feature map: the two-dimensional output array, representing the features of the input in the spatial dimensions (feature map).
    Receptive field: every element of the output is computed from some region of the input feature map; that region is the element's receptive field. Deeper layers have larger receptive fields and represent more abstract features.

    Padding and Stride

    Padding means adding extra elements (usually zeros) on both sides of the input's height and width.
    Stride: the number of rows and columns the kernel moves per step as it slides over the array. With stride s_h on the height and s_w on the width, input height and width n_h and n_w, total padding p_h and p_w, and kernel height and width k_h and k_w, the output shape is:

    \lfloor (n_h - k_h + p_h + s_h) / s_h \rfloor \times \lfloor (n_w - k_w + p_w + s_w) / s_w \rfloor
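
    A quick check of this formula with nn.Conv2d (the numbers are illustrative):

    import torch
    from torch import nn

    # n_h = n_w = 8, k_h = k_w = 3, total padding p_h = p_w = 2 (padding=1 per side),
    # strides s_h = 3, s_w = 4  ->  floor((8-3+2+3)/3) x floor((8-3+2+4)/4) = 3 x 2
    conv = nn.Conv2d(1, 1, kernel_size=3, padding=1, stride=(3, 4))
    X = torch.rand(1, 1, 8, 8)
    print(conv(X).shape)  # torch.Size([1, 1, 3, 2])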

    Multiple Input and Output Channels

    So far the convolution inputs and outputs have been two-dimensional arrays, but real data usually has three RGB channels, so we call this dimension of size 3 the channel dimension.


    For multiple input and output channels: if the numbers of input and output channels are c_i and c_o respectively, the kernel has shape c_o \times c_i \times k_h \times k_w.

    1*1 Convolution

    Consider the cross-correlation computation of a 1*1 kernel with 3 input channels and 2 output channels. The input and output have the same height and width, and the kernel has four dimensions (c_o \times c_i \times 1 \times 1). A 1*1 convolutional layer is equivalent to a fully connected layer applied across channels at every spatial position; its advantages are a small number of parameters and the ability to recombine channel information.
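
    A quick check of this equivalence (illustrative, not from the original post): a 1*1 convolution is a fully connected layer applied over channels at every pixel.

    import torch

    def conv_1x1_as_matmul(X, W):
        # X: (c_i, h, w) input; W: (c_o, c_i) weights of a 1x1 kernel
        c_i, h, w = X.shape
        c_o = W.shape[0]
        Y = torch.matmul(W, X.view(c_i, h * w))  # fully connected over channels
        return Y.view(c_o, h, w)

    X = torch.rand(3, 4, 4)  # 3 input channels
    W = torch.rand(2, 3)     # 2 output channels
    Y1 = conv_1x1_as_matmul(X, W)
    Y2 = torch.nn.functional.conv2d(X.unsqueeze(0), W.view(2, 3, 1, 1)).squeeze(0)
    print(torch.allclose(Y1, Y2))  # True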

    Pooling

    Two-Dimensional Pooling Layers

    Pooling layers alleviate the convolutional layers' excessive sensitivity to position. Like a convolutional layer, a pooling layer slides a fixed window over the input and computes the maximum or the average of the elements in the window, giving max-pooling layers and average-pooling layers respectively.



    The difference from a convolutional layer is that, on multi-channel data, a pooling layer pools each channel separately and does not sum the results, so the number of output channels equals the number of input channels.
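
    A small illustration (not in the original): pooling is applied per channel, so the channel count is preserved while height and width shrink.

    import torch
    from torch import nn

    X = torch.arange(32, dtype=torch.float32).reshape(1, 2, 4, 4)  # 2 channels
    pool = nn.MaxPool2d(kernel_size=2, stride=2)
    print(pool(X).shape)  # torch.Size([1, 2, 2, 2]) -- still 2 channels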

    2. LeNet

    The LeNet model consists of a convolutional block and a fully connected block:



    The convolutional block consists of convolutional layers and average-pooling layers: the convolutional layers pick out edge features such as lines and local parts of objects, while the pooling layers reduce the convolutional layers' sensitivity to position.
    The convolutional block contains two such convolutional layers, each using a 5*5 kernel with a sigmoid activation on its output; the first convolutional layer has 6 output channels and the second has 16.
    The three fully connected layers have 120, 84, and 10 outputs respectively; the final 10 is the number of classes, and the class with the largest output (argmax) is the prediction.

    PyTorch Implementation

    import torch
    import torch.nn as nn
    import torch.optim as optim
    import time
    
    class Flatten(torch.nn.Module):
        def forward(self,x):
            return x.view(x.shape[0],-1)
    class Reshape(torch.nn.Module):
        def forward(self,x):
            return x.view(-1, 1, 28, 28)  # batch, channel, height, width
    
    net=torch.nn.Sequential(
        Reshape(),
        nn.Conv2d(in_channels=1,out_channels=6,kernel_size=5,padding=2),
        nn.Sigmoid(), 
        nn.AvgPool2d(kernel_size=2,stride=2),
        nn.Conv2d(in_channels=6,out_channels=16,kernel_size=5),
        nn.Sigmoid(), 
        nn.AvgPool2d(kernel_size=2,stride=2),
        Flatten(),
        nn.Linear(in_features=16*5*5, out_features=120),
        nn.Sigmoid(),
        nn.Linear(120, 84),
        nn.Sigmoid(),
        nn.Linear(84, 10)
    )
    # print the shape of each layer's output
    X = torch.randn(size=(1,1,28,28), dtype = torch.float32)
    for layer in net:
        X = layer(X)
        print(layer.__class__.__name__,'output shape: \t',X.shape)
    

    Output:

    Reshape output shape:    torch.Size([1, 1, 28, 28])
    Conv2d output shape:     torch.Size([1, 6, 28, 28])
    Sigmoid output shape:    torch.Size([1, 6, 28, 28])
    AvgPool2d output shape:      torch.Size([1, 6, 14, 14])
    Conv2d output shape:     torch.Size([1, 16, 10, 10])
    Sigmoid output shape:    torch.Size([1, 16, 10, 10])
    AvgPool2d output shape:      torch.Size([1, 16, 5, 5])
    Flatten output shape:    torch.Size([1, 400])
    Linear output shape:     torch.Size([1, 120])
    Sigmoid output shape:    torch.Size([1, 120])
    Linear output shape:     torch.Size([1, 84])
    Sigmoid output shape:    torch.Size([1, 84])
    Linear output shape:     torch.Size([1, 10])
    

    The changes in the number of channels and in the height/width can be traced in the output above.


    Classifying MNIST with LeNet

    Loading the Data

    import torchvision
    import torchvision.transforms as transforms
    
    batch_size = 256
    num_workers = 4
    train_data = torchvision.datasets.MNIST(
        './mnist', train=True, transform=torchvision.transforms.ToTensor(), download=True
    )
    test_data = torchvision.datasets.MNIST(
        './mnist', train=False, transform=torchvision.transforms.ToTensor()
    )
    train_iter = torch.utils.data.DataLoader(train_data, batch_size=batch_size, shuffle=True, num_workers=num_workers)
    test_iter = torch.utils.data.DataLoader(test_data, batch_size=batch_size, shuffle=False, num_workers=num_workers)
    
    num_inputs=784   # 28*28, 784 features in total
    num_outputs=10   # 10 classes
    

    Defining the usual training components:

    loss = torch.nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(net.parameters(), lr=0.1)
    def evaluate_accuracy(data_iter, net):
        acc_sum, n = 0.0, 0
        with torch.no_grad():  # no gradient tracking needed for evaluation
            for X, y in data_iter:
                acc_sum += (net(X).argmax(dim=1) == y).float().sum().item()
                n += y.shape[0]
        return acc_sum / n
    

    Initializing the Parameters

    def init_weights(m):
        if type(m) == nn.Linear or type(m) == nn.Conv2d:
            torch.nn.init.xavier_uniform_(m.weight)
    net.apply(init_weights)
    

    Training LeNet

    num_epochs = 10
    for epoch in range(num_epochs):
        train_l_sum, train_acc_sum, n = 0.0, 0.0, 0
        for X, y in train_iter:
            y_hat = net(X)
            l = loss(y_hat, y).sum()
            optimizer.zero_grad()  # clear accumulated gradients

            l.backward()  # backpropagate to compute the gradients
            optimizer.step()  # update the parameters
                
            train_l_sum += l.item()
            train_acc_sum += (y_hat.argmax(dim=1) == y).sum().item()
            n += y.shape[0]
        test_acc = evaluate_accuracy(test_iter, net)
        print('epoch %d, loss %.4f, train acc %.3f, test acc %.3f'
            % (epoch + 1, train_l_sum / n, train_acc_sum / n, test_acc))
    

    3. Advanced Convolutional Neural Networks

    AlexNet

    AlexNet consists of eight learned layers: five convolutional layers, two fully connected hidden layers, and one fully connected output layer. It uses ReLU as the activation function and dropout to control model complexity.


    PyTorch Implementation

    import time
    import torch
    from torch import nn, optim
    import torchvision
    import numpy as np
    import sys
    import os
    import torch.nn.functional as F
    
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    
    class AlexNet(nn.Module):
        def __init__(self):
            super(AlexNet, self).__init__()
            self.conv = nn.Sequential(
            nn.Conv2d(1, 96, 11, 4), # in_channels, out_channels, kernel_size, stride
                nn.ReLU(),
                nn.MaxPool2d(3, 2), # kernel_size, stride
            # a smaller kernel; padding=2 keeps input and output height/width equal; more output channels
                nn.Conv2d(96, 256, 5, 1, 2),
                nn.ReLU(),
                nn.MaxPool2d(3, 2),
            # three consecutive conv layers with even smaller kernels; the output channel count
            # keeps growing except in the last conv layer; no pooling after the first two
            # of these conv layers, so height and width are preserved
                nn.Conv2d(256, 384, 3, 1, 1),
                nn.ReLU(),
                nn.Conv2d(384, 384, 3, 1, 1),
                nn.ReLU(),
                nn.Conv2d(384, 256, 3, 1, 1),
                nn.ReLU(),
                nn.MaxPool2d(3, 2)
            )
        # the fully connected layers here have many times more outputs than LeNet's;
        # dropout layers mitigate overfitting
            self.fc = nn.Sequential(
                nn.Linear(256*5*5, 4096),
                nn.ReLU(),
                nn.Dropout(0.5),
            # trimmed for the CPU environment; on a GPU the following layer can be re-enabled
                #nn.Linear(4096, 4096),
                #nn.ReLU(),
                #nn.Dropout(0.5),
    
            # output layer; Fashion-MNIST is used here, so there are 10 classes rather than the paper's 1000
                nn.Linear(4096, 10),
            )
    
        def forward(self, img):
    
            feature = self.conv(img)
            output = self.fc(feature.view(img.shape[0], -1))
            return output
    net = AlexNet()
    print(net)
    

    Output

    AlexNet(
      (conv): Sequential(
        (0): Conv2d(1, 96, kernel_size=(11, 11), stride=(4, 4))
        (1): ReLU()
        (2): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
        (3): Conv2d(96, 256, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
        (4): ReLU()
        (5): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
        (6): Conv2d(256, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (7): ReLU()
        (8): Conv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (9): ReLU()
        (10): Conv2d(384, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (11): ReLU()
        (12): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
      )
      (fc): Sequential(
        (0): Linear(in_features=6400, out_features=4096, bias=True)
        (1): ReLU()
        (2): Dropout(p=0.5, inplace=False)
        (3): Linear(in_features=4096, out_features=10, bias=True)
      )
    )
    

    VGG (Networks Built from Repeated Blocks)

    A VGG block consists of several convolutional layers with padding 1 and 3*3 kernels, followed by a max-pooling layer with stride 2 and a 2*2 window. The convolutional layers keep the input and output height/width unchanged, while the pooling layer halves them.

    PyTorch Implementation

    def vgg_block(num_convs,in_channels,out_channels):
        blk=[]
        for i in range(num_convs):
            if i==0:
                blk.append(nn.Conv2d(in_channels,out_channels,kernel_size=3,padding=1))
            else:
                blk.append(nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1))
            blk.append(nn.ReLU())
        blk.append(nn.MaxPool2d(kernel_size=2, stride=2))  # halves height and width
        return nn.Sequential(*blk)
    
    def vgg(conv_arch, fc_features, fc_hidden_units=4096):
        net = nn.Sequential()
        # convolutional part
        for i, (num_convs, in_channels, out_channels) in enumerate(conv_arch):
            # each vgg_block halves the height and width
            net.add_module("vgg_block_" + str(i+1), vgg_block(num_convs, in_channels, out_channels))
        # fully connected part
        net.add_module("fc", nn.Sequential(Flatten(),
                                     nn.Linear(fc_features, fc_hidden_units),
                                     nn.ReLU(),
                                     nn.Dropout(0.5),
                                     nn.Linear(fc_hidden_units, fc_hidden_units),
                                     nn.ReLU(),
                                     nn.Dropout(0.5),
                                     nn.Linear(fc_hidden_units, 10)
                                    ))
        return net
    

    Testing:

    conv_arch = ((1, 1, 64), (1, 64, 128), (2, 128, 256), (2, 256, 512), (2, 512, 512))
    # after 5 vgg_blocks, height and width are halved 5 times: 224/32 = 7
    fc_features = 512 * 7 * 7 # c * w * h
    fc_hidden_units = 4096 # arbitrary
    
    net = vgg(conv_arch, fc_features, fc_hidden_units)
    X = torch.rand(1, 1, 224, 224)
    
    # named_children returns the immediate children and their names (named_modules would return all submodules, including nested ones)
    for name, blk in net.named_children(): 
        X = blk(X)
        print(name, 'output shape: ', X.shape)
    

    NiN (Network in Network)

    NiN chains together small blocks consisting of a convolutional layer followed by "fully connected" layers implemented as 1*1 convolutions. Its distinctive feature is that the number of output channels equals the number of classes, so a global average pooling layer can average each channel and feed the result directly into the classification output. The 1*1 convolutions rescale the number of channels and add nonlinearity.

    PyTorch Implementation

    def nin_block(in_channels, out_channels, kernel_size, stride, padding):
        blk = nn.Sequential(nn.Conv2d(in_channels, out_channels, kernel_size, stride, padding),
                            nn.ReLU(),
                            nn.Conv2d(out_channels, out_channels, kernel_size=1),
                            nn.ReLU(),
                            nn.Conv2d(out_channels, out_channels, kernel_size=1),
                            nn.ReLU())
        return blk
    
    class GlobalAvgPool2d(nn.Module):
        # global average pooling: set the pooling window to the input's height and width
        def __init__(self):
            super(GlobalAvgPool2d, self).__init__()
        def forward(self, x):
            return F.avg_pool2d(x, kernel_size=x.size()[2:])
    
    net = nn.Sequential(
        nin_block(1, 96, kernel_size=11, stride=4, padding=0),
        nn.MaxPool2d(kernel_size=3, stride=2),
        nin_block(96, 256, kernel_size=5, stride=1, padding=2),
        nn.MaxPool2d(kernel_size=3, stride=2),
        nin_block(256, 384, kernel_size=3, stride=1, padding=1),
        nn.MaxPool2d(kernel_size=3, stride=2), 
        nn.Dropout(0.5),
        # the number of label classes is 10
        nin_block(384, 10, kernel_size=3, stride=1, padding=1),
        GlobalAvgPool2d(), 
        # converts the four-dimensional output into two dimensions of shape (batch_size, 10)
        Flatten())
    

    GoogLeNet

    It is built from Inception blocks. Each block is a small sub-network with four parallel branches and uses 1*1 convolutional layers to reduce the number of channels and hence the complexity; the four branches are finally merged by concatenation along the channel dimension.



    The full model (implemented below) stacks five stages b1 to b5:


    PyTorch Implementation
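
    The Inception block used below is not defined in the original post; here is a sketch following the d2l-pytorch version (assumed):

    import torch
    from torch import nn
    import torch.nn.functional as F

    class Inception(nn.Module):
        # c1..c4 are the output channel counts of the four branches
        def __init__(self, in_c, c1, c2, c3, c4):
            super(Inception, self).__init__()
            self.p1_1 = nn.Conv2d(in_c, c1, kernel_size=1)                  # branch 1: 1x1 conv
            self.p2_1 = nn.Conv2d(in_c, c2[0], kernel_size=1)               # branch 2: 1x1 conv ...
            self.p2_2 = nn.Conv2d(c2[0], c2[1], kernel_size=3, padding=1)   # ... then 3x3 conv
            self.p3_1 = nn.Conv2d(in_c, c3[0], kernel_size=1)               # branch 3: 1x1 conv ...
            self.p3_2 = nn.Conv2d(c3[0], c3[1], kernel_size=5, padding=2)   # ... then 5x5 conv
            self.p4_1 = nn.MaxPool2d(kernel_size=3, stride=1, padding=1)    # branch 4: 3x3 max pool ...
            self.p4_2 = nn.Conv2d(in_c, c4, kernel_size=1)                  # ... then 1x1 conv

        def forward(self, x):
            p1 = F.relu(self.p1_1(x))
            p2 = F.relu(self.p2_2(F.relu(self.p2_1(x))))
            p3 = F.relu(self.p3_2(F.relu(self.p3_1(x))))
            p4 = F.relu(self.p4_2(self.p4_1(x)))
            return torch.cat((p1, p2, p3, p4), dim=1)  # concatenate along the channel dimension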

    b1 = nn.Sequential(nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3),
                       nn.ReLU(),
                       nn.MaxPool2d(kernel_size=3, stride=2, padding=1))
    
    b2 = nn.Sequential(nn.Conv2d(64, 64, kernel_size=1),
                       nn.Conv2d(64, 192, kernel_size=3, padding=1),
                       nn.MaxPool2d(kernel_size=3, stride=2, padding=1))
    
    b3 = nn.Sequential(Inception(192, 64, (96, 128), (16, 32), 32),
                       Inception(256, 128, (128, 192), (32, 96), 64),
                       nn.MaxPool2d(kernel_size=3, stride=2, padding=1))
    
    b4 = nn.Sequential(Inception(480, 192, (96, 208), (16, 48), 64),
                       Inception(512, 160, (112, 224), (24, 64), 64),
                       Inception(512, 128, (128, 256), (24, 64), 64),
                       Inception(512, 112, (144, 288), (32, 64), 64),
                       Inception(528, 256, (160, 320), (32, 128), 128),
                       nn.MaxPool2d(kernel_size=3, stride=2, padding=1))
    
    b5 = nn.Sequential(Inception(832, 256, (160, 320), (32, 128), 128),
                       Inception(832, 384, (192, 384), (48, 128), 128),
                       GlobalAvgPool2d())
    net = nn.Sequential(b1, b2, b3, b4, b5, Flatten(), nn.Linear(1024, 10))
    

    4. Advanced Recurrent Neural Networks

    Plain RNNs are prone to vanishing or exploding gradients, which motivates several improved architectures. The snippets below rely on helpers and data from the accompanying course environment (d2l, corpus_indices, vocab_size, device, and so on).


    GRU

    The reset gate helps capture short-term dependencies in a time series; the update gate helps capture long-term dependencies.
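
    The figures with the update equations did not survive; for reference, the standard GRU equations are:

    R_t = \sigma(X_t W_{xr} + H_{t-1} W_{hr} + b_r)
    Z_t = \sigma(X_t W_{xz} + H_{t-1} W_{hz} + b_z)
    \tilde{H}_t = \tanh(X_t W_{xh} + (R_t \odot H_{t-1}) W_{hh} + b_h)
    H_t = Z_t \odot H_{t-1} + (1 - Z_t) \odot \tilde{H}_t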


    num_hiddens=256
    num_epochs, num_steps, batch_size, lr, clipping_theta = 160, 35, 32, 1e2, 1e-2
    pred_period, pred_len, prefixes = 40, 50, ['分开', '不分开']
    
    lr = 1e-2 # note: the learning rate is adjusted here
    gru_layer = nn.GRU(input_size=vocab_size, hidden_size=num_hiddens)
    model = d2l.RNNModel(gru_layer, vocab_size).to(device)
    d2l.train_and_predict_rnn_pytorch(model, num_hiddens, vocab_size, device,
                                    corpus_indices, idx_to_char, char_to_idx,
                                    num_epochs, num_steps, lr, clipping_theta,
                                    batch_size, pred_period, pred_len, prefixes)
    

    LSTM

    Forget gate: controls how much of the previous time step's memory cell is kept
    Input gate: controls the contribution of the current time step's input
    Output gate: controls the flow from the memory cell to the hidden state
    Memory cell: a special kind of hidden state (standard equations below)
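
    The original equation figures are missing; for reference, the standard LSTM equations are:

    I_t = \sigma(X_t W_{xi} + H_{t-1} W_{hi} + b_i)
    F_t = \sigma(X_t W_{xf} + H_{t-1} W_{hf} + b_f)
    O_t = \sigma(X_t W_{xo} + H_{t-1} W_{ho} + b_o)
    \tilde{C}_t = \tanh(X_t W_{xc} + H_{t-1} W_{hc} + b_c)
    C_t = F_t \odot C_{t-1} + I_t \odot \tilde{C}_t
    H_t = O_t \odot \tanh(C_t)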


    num_hiddens=256
    num_epochs, num_steps, batch_size, lr, clipping_theta = 160, 35, 32, 1e2, 1e-2
    pred_period, pred_len, prefixes = 40, 50, ['分开', '不分开']
    
    lr = 1e-2 # note: the learning rate is adjusted here
    lstm_layer = nn.LSTM(input_size=vocab_size, hidden_size=num_hiddens)
    model = d2l.RNNModel(lstm_layer, vocab_size)
    d2l.train_and_predict_rnn_pytorch(model, num_hiddens, vocab_size, device,
                                    corpus_indices, idx_to_char, char_to_idx,
                                    num_epochs, num_steps, lr, clipping_theta,
                                    batch_size, pred_period, pred_len, prefixes)
    

    Deep Recurrent Neural Networks

    That is, several RNN layers stacked on top of one another.


    num_hiddens=256
    num_epochs, num_steps, batch_size, lr, clipping_theta = 160, 35, 32, 1e2, 1e-2
    pred_period, pred_len, prefixes = 40, 50, ['分开', '不分开']
    
    lr = 1e-2 # note: the learning rate is adjusted here

    lstm_layer = nn.LSTM(input_size=vocab_size, hidden_size=num_hiddens, num_layers=2)
    model = d2l.RNNModel(lstm_layer, vocab_size).to(device)
    d2l.train_and_predict_rnn_pytorch(model, num_hiddens, vocab_size, device,
                                    corpus_indices, idx_to_char, char_to_idx,
                                    num_epochs, num_steps, lr, clipping_theta,
                                    batch_size, pred_period, pred_len, prefixes)
    

    Bidirectional Recurrent Neural Networks

    num_hiddens=128
    num_epochs, num_steps, batch_size, lr, clipping_theta = 160, 35, 32, 1e-2, 1e-2
    pred_period, pred_len, prefixes = 40, 50, ['分开', '不分开']
    
    lr = 1e-2 # note: the learning rate is adjusted here
    
    gru_layer = nn.GRU(input_size=vocab_size, hidden_size=num_hiddens,bidirectional=True)
    model = d2l.RNNModel(gru_layer, vocab_size).to(device)
    d2l.train_and_predict_rnn_pytorch(model, num_hiddens, vocab_size, device,
                                    corpus_indices, idx_to_char, char_to_idx,
                                    num_epochs, num_steps, lr, clipping_theta,
                                    batch_size, pred_period, pred_len, prefixes)
    

    5. Machine Translation (MT)

    Its distinguishing feature is that the input is a sequence of words rather than a single word.

    Data Preprocessing

    First the data must be preprocessed: remove special characters, tokenize the text into lists of words, and then build a vocabulary.
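
    A sketch of the cleaning and tokenization step (illustrative; the original relies on d2l helpers, and preprocess_raw is a hypothetical name):

    def preprocess_raw(text):
        # normalize non-breaking spaces and lowercase everything
        text = text.replace('\u202f', ' ').replace('\xa0', ' ').lower()
        out = ''
        for i, char in enumerate(text):
            # put a space before punctuation so it becomes its own token
            if char in (',', '!', '.') and i > 0 and text[i - 1] != ' ':
                out += ' '
            out += char
        return out

    source = [line.split(' ') for line in preprocess_raw("go away !\ni try .").split('\n')]
    print(source)  # [['go', 'away', '!'], ['i', 'try', '.']]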

    def build_vocab(tokens):
        tokens = [token for line in tokens for token in line]
        return d2l.data.base.Vocab(tokens, min_freq=3, use_special_tokens=True)
    
    src_vocab = build_vocab(source)
    len(src_vocab)
    

    Encoder-Decoder

    That is, map the input to a hidden state, and then map that hidden state to the output.


    The Seq2Seq Model


    The structure in detail:
    Two RNNs are used. The encoder RNN compresses the input sequence into a fixed-length vector, i.e., it encodes the input; the decoder RNN then generates the target sequence from this semantic vector. The vector serves only as the decoder's initial state; the subsequent computations do not use it directly.


    PyTorch Implementation

    class Encoder(nn.Module):
        def __init__(self, **kwargs):
            super(Encoder, self).__init__(**kwargs)
    
        def forward(self, X, *args):
            raise NotImplementedError
    class Decoder(nn.Module):
        def __init__(self, **kwargs):
            super(Decoder, self).__init__(**kwargs)
    
        def init_state(self, enc_outputs, *args):
            raise NotImplementedError
    
        def forward(self, X, state):
            raise NotImplementedError
    
    class EncoderDecoder(nn.Module):
        def __init__(self, encoder, decoder, **kwargs):
            super(EncoderDecoder, self).__init__(**kwargs)
            self.encoder = encoder
            self.decoder = decoder
    
        def forward(self, enc_X, dec_X, *args):
            enc_outputs = self.encoder(enc_X, *args)
            dec_state = self.decoder.init_state(enc_outputs, *args)
            return self.decoder(dec_X, dec_state)
    

    The loss function:

    def SequenceMask(X, X_len,value=0):
        maxlen = X.size(1)
        mask = torch.arange(maxlen)[None, :].to(X_len.device) < X_len[:, None]   
        X[~mask]=value
        return X
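    # Quick sanity check (illustrative, not in the original):
    # SequenceMask(torch.ones(2, 4), torch.tensor([1, 3]))
    # -> tensor([[1., 0., 0., 0.],
    #            [1., 1., 1., 0.]])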
    class MaskedSoftmaxCELoss(nn.CrossEntropyLoss):
        # pred shape: (batch_size, seq_len, vocab_size)
        # label shape: (batch_size, seq_len)
        # valid_length shape: (batch_size, )
        def forward(self, pred, label, valid_length):
            # the sample weights shape should be (batch_size, seq_len)
            weights = torch.ones_like(label)
            weights = SequenceMask(weights, valid_length).float()
            self.reduction='none'
            output=super(MaskedSoftmaxCELoss, self).forward(pred.transpose(1,2), label)
            return (output*weights).mean(dim=1)
    loss = MaskedSoftmaxCELoss()
    
    class Seq2SeqEncoder(d2l.Encoder):
        def __init__(self, vocab_size, embed_size, num_hiddens, num_layers,
                     dropout=0, **kwargs):
            super(Seq2SeqEncoder, self).__init__(**kwargs)
            self.num_hiddens=num_hiddens
            self.num_layers=num_layers
            self.embedding = nn.Embedding(vocab_size, embed_size)
            self.rnn = nn.LSTM(embed_size,num_hiddens, num_layers, dropout=dropout)
       
        def begin_state(self, batch_size, device):
            return [torch.zeros(size=(self.num_layers, batch_size, self.num_hiddens),  device=device),
                    torch.zeros(size=(self.num_layers, batch_size, self.num_hiddens),  device=device)]
        def forward(self, X, *args):
            X = self.embedding(X) # X shape: (batch_size, seq_len, embed_size)
            X = X.transpose(0, 1)  # RNN needs first axes to be time
            # state = self.begin_state(X.shape[1], device=X.device)
            out, state = self.rnn(X)
            # The shape of out is (seq_len, batch_size, num_hiddens).
            # state contains the hidden state and the memory cell
            # of the last time step, the shape is (num_layers, batch_size, num_hiddens)
            return out, state
    
    class Seq2SeqDecoder(d2l.Decoder):
        def __init__(self, vocab_size, embed_size, num_hiddens, num_layers,
                     dropout=0, **kwargs):
            super(Seq2SeqDecoder, self).__init__(**kwargs)
            self.embedding = nn.Embedding(vocab_size, embed_size)
            self.rnn = nn.LSTM(embed_size,num_hiddens, num_layers, dropout=dropout)
            self.dense = nn.Linear(num_hiddens,vocab_size)
    
        def init_state(self, enc_outputs, *args):
            return enc_outputs[1]
    
        def forward(self, X, state):
            X = self.embedding(X).transpose(0, 1)
            out, state = self.rnn(X, state)
            # Make the batch to be the first dimension to simplify loss computation.
            out = self.dense(out).transpose(0, 1)
            return out, state
    embed_size, num_hiddens, num_layers, dropout = 32, 32, 2, 0.0
    batch_size, num_examples, max_len = 64, 1e3, 10
    lr, num_epochs, ctx = 0.005, 300, d2l.try_gpu()
    src_vocab, tgt_vocab, train_iter = d2l.load_data_nmt(
        batch_size, max_len,num_examples)
    encoder = Seq2SeqEncoder(
        len(src_vocab), embed_size, num_hiddens, num_layers, dropout)
    decoder = Seq2SeqDecoder(
        len(tgt_vocab), embed_size, num_hiddens, num_layers, dropout)
    model = d2l.EncoderDecoder(encoder, decoder)
    def train_ch7(model, data_iter, lr, num_epochs, device):  # Saved in d2l
        model.to(device)
        optimizer = optim.Adam(model.parameters(), lr=lr)
        loss = MaskedSoftmaxCELoss()
        tic = time.time()
        for epoch in range(1, num_epochs+1):
            l_sum, num_tokens_sum = 0.0, 0.0
            for batch in data_iter:
                optimizer.zero_grad()
                X, X_vlen, Y, Y_vlen = [x.to(device) for x in batch]
                Y_input, Y_label, Y_vlen = Y[:,:-1], Y[:,1:], Y_vlen-1
                
                Y_hat, _ = model(X, Y_input, X_vlen, Y_vlen)
                l = loss(Y_hat, Y_label, Y_vlen).sum()
                l.backward()
    
                with torch.no_grad():
                    d2l.grad_clipping_nn(model, 5, device)
                num_tokens = Y_vlen.sum().item()
                optimizer.step()
                l_sum += l.sum().item()
                num_tokens_sum += num_tokens
            if epoch % 50 == 0:
                print("epoch {0:4d},loss {1:.3f}, time {2:.1f} sec".format( 
                      epoch, (l_sum/num_tokens_sum), time.time()-tic))
                tic = time.time()
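
    Training is then launched with the following call (assumed from the accompanying course notebook; not shown in the original):

    train_ch7(model, train_iter, lr, num_epochs, ctx)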
    

    Beam Search

    Greedy search: at each time step, pick the single token with the highest conditional probability. This is cheap, but the resulting sequence need not be the most probable one overall.

    Beam search: at each time step, keep the k highest-scoring partial sequences (the beam), expand each of them with every candidate next token, and retain the best k of the resulting candidates.
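
    A toy sketch of beam search over a generic next-token scorer (all names are hypothetical):

    import torch

    def beam_search(step_log_probs, beam_size, max_len, bos, eos):
        # step_log_probs(seq) -> 1D tensor of log-probabilities for the next token
        beams = [([bos], 0.0)]  # (sequence, cumulative log-probability)
        for _ in range(max_len):
            candidates = []
            for seq, score in beams:
                if seq[-1] == eos:  # finished sequences are carried over unchanged
                    candidates.append((seq, score))
                    continue
                top = torch.topk(step_log_probs(seq), beam_size)
                for lp, tok in zip(top.values.tolist(), top.indices.tolist()):
                    candidates.append((seq + [tok], score + lp))
            # keep only the beam_size best partial sequences
            beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_size]
        return beams[0][0]

    # toy usage: a fixed next-token distribution over a 5-token vocabulary
    dist = torch.log(torch.tensor([0.1, 0.4, 0.3, 0.1, 0.1]))
    print(beam_search(lambda seq: dist, beam_size=2, max_len=3, bos=0, eos=4))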


    6. The Attention Mechanism

    Because RNNs suffer from vanishing gradients, translation quality degrades as the sequence length grows; the attention mechanism was introduced to address this.


    The Attention Framework

    Attention is a form of weighted pooling. Its input consists of a query and key-value pairs.
    The query is scored against every key, the scores are normalized into weights, and the output, a vector with the same dimension as the values, is the weighted sum of the values.
    Computation steps:
    Assume a function \alpha that measures the similarity between a key and the query, giving the attention scores a_i = \alpha(q, k_i). A softmax then yields the attention weights b_1, \ldots, b_n = \mathrm{softmax}(a_1, \ldots, a_n), and finally the values are combined as a weighted sum: o = \sum_i b_i v_i.

    Dot-Product Attention

    The score is usually divided by \sqrt{d}, i.e. the square root of the dimension, to reduce its dependence on dimensionality: \alpha(q, k) = \langle q, k \rangle / \sqrt{d}.
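
    A minimal batched sketch (illustrative):

    import math
    import torch

    def dot_product_attention(Q, K, V):
        # Q: (batch, n_q, d), K: (batch, n_k, d), V: (batch, n_k, d_v)
        d = Q.shape[-1]
        scores = torch.bmm(Q, K.transpose(1, 2)) / math.sqrt(d)  # scale by sqrt(d)
        weights = torch.softmax(scores, dim=-1)  # attention weights b_i
        return torch.bmm(weights, V)             # weighted sum of the values

    Q = torch.rand(2, 1, 8); K = torch.rand(2, 10, 8); V = torch.rand(2, 10, 4)
    print(dot_product_attention(Q, K, V).shape)  # torch.Size([2, 1, 4])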

    Multilayer Perceptron Attention

    In MLP attention, the query and key are both mapped to h-dimensional vectors by learned projections, and the score function is defined as (reconstructed; the original formula image is missing):

    \alpha(k, q) = v^{\top} \tanh(W_k k + W_q q)


    Seq2Seq with an Attention Mechanism

    At decoding time step t, the attention layer holds all the information the encoder has seen, i.e. the encoder output at every time step. The decoder's hidden state at step t is used as the query, the encoder's hidden states at all time steps serve as both keys and values, and the attention aggregation outputs a context vector.



    The Decoder

    An attention-based decoder takes three inputs: the current decoder input, the hidden state from the previous time step (used as the query), and the encoder outputs at all time steps (used as keys and values).


    7. Transformer

    CNNs are easy to parallelize, while RNNs are good at capturing dependencies across long sequences but hard to parallelize. The Transformer uses the attention mechanism to capture sequence dependencies while processing all tokens of a sequence in parallel.


    The Transformer Architecture

    Multi-Head Attention

    Self-attention:
    for each element of the sequence, the key, value, and query are exactly the same vector.



    A multi-head attention layer contains h parallel self-attention layers, each of which is called a head. The outputs of the h heads are concatenated and passed through a linear layer to produce the final output.
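
    PyTorch ships a ready-made module; a quick self-attention sketch with it (illustrative):

    import torch
    from torch import nn

    mha = nn.MultiheadAttention(embed_dim=32, num_heads=4)  # 4 parallel heads
    X = torch.rand(10, 2, 32)  # (seq_len, batch, embed_dim)
    # self-attention: the same tensor serves as query, key, and value
    out, weights = mha(X, X, X)
    print(out.shape)  # torch.Size([10, 2, 32])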


    Position-Wise Feed-Forward Network (FFN)

    It takes a three-dimensional tensor of shape (batch_size, seq_length, feature_size) and consists of two fully connected layers that are applied independently at every position of the sequence, so only the size of the last dimension changes.
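
    A sketch (the class name is hypothetical):

    import torch
    from torch import nn

    class PositionWiseFFN(nn.Module):
        def __init__(self, input_size, hidden_size, output_size):
            super(PositionWiseFFN, self).__init__()
            self.ffn_1 = nn.Linear(input_size, hidden_size)
            self.ffn_2 = nn.Linear(hidden_size, output_size)

        def forward(self, X):
            # nn.Linear acts on the last dimension, so every position
            # of the sequence is transformed independently
            return self.ffn_2(torch.relu(self.ffn_1(X)))

    ffn = PositionWiseFFN(4, 8, 4)
    print(ffn(torch.ones(2, 3, 4)).shape)  # torch.Size([2, 3, 4])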

    Add and Norm

    The add-and-norm layer combines a sub-layer's input with its output: a residual connection followed by layer normalization is placed after every multi-head attention layer and every FFN. It keeps the values flowing through the layers from growing too large, which speeds up training and helps generalization.
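
    A sketch (the class name is hypothetical):

    import torch
    from torch import nn

    class AddNorm(nn.Module):
        def __init__(self, embedding_size, dropout):
            super(AddNorm, self).__init__()
            self.dropout = nn.Dropout(dropout)
            self.norm = nn.LayerNorm(embedding_size)

        def forward(self, X, Y):
            # residual connection followed by layer normalization
            return self.norm(self.dropout(Y) + X)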

    Positional Encoding

    Positional encoding preserves the positions of the elements of the input sequence, retaining order information.
    Assume the input X is a sequence of length l with embedding dimension d, and let P be the positional encoding; the output is the sum X + P. With i indexing the position in the sequence and j indexing the dimension inside the embedding vector, the encoding can be computed as:

    P_{i, 2j} = \sin(i / 10000^{2j/d}), \quad P_{i, 2j+1} = \cos(i / 10000^{2j/d})
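
    A sketch of the encoding as a module (assuming an even embedding size; mirrors the d2l implementation):

    import numpy as np
    import torch
    from torch import nn

    class PositionalEncoding(nn.Module):
        def __init__(self, embedding_size, dropout, max_len=1000):
            super(PositionalEncoding, self).__init__()
            self.dropout = nn.Dropout(dropout)
            P = np.zeros((1, max_len, embedding_size))
            pos = np.arange(max_len).reshape(-1, 1) / np.power(
                10000, np.arange(0, embedding_size, 2) / embedding_size)
            P[:, :, 0::2] = np.sin(pos)  # even dimensions 2j
            P[:, :, 1::2] = np.cos(pos)  # odd dimensions 2j+1
            self.P = torch.tensor(P, dtype=torch.float32)

        def forward(self, X):
            # add the encodings for the first seq_len positions to X
            X = X + self.P[:, :X.shape[1], :].to(X.device)
            return self.dropout(X)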


    The Encoder

    Each encoder block contains one multi-head attention module, one position-wise FFN, and two Add and Norm layers.

    The Decoder

    The decoder block is similar to the encoder block, but contains one additional multi-head attention sub-module.

