
Handling variable-length sequences with PyTorch

Author: 井底蛙蛙呱呱呱 | Published 2020-02-11 12:08

    When using an RNN on sequence data (take sentences as an example), you often have to deal with sequences of different lengths. If you only feed one sample at a time, the RNN's parameter sharing means varying lengths cause no problem. But to process a batch of variable-length sequences, you usually have to pad them first. A typical batching pipeline looks like this:

    • 1. Build an index for every word in the sentences;
    • 2. Use pad_sequence to pad the variable-length sequences to a common length;
    • 3. Run the padded sequences through an embedding layer;
    • 4. "Compress" the embedded sequences with pack_padded_sequence and feed them to the RNN;
    • 5. After the RNN's forward pass, "decompress" (i.e. re-align) its output with pad_packed_sequence;
    • 6. Evaluate the model output / compute the loss (a compact end-to-end sketch of these steps follows this list).
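
    To make the pipeline concrete before walking through it step by step, here is a minimal end-to-end sketch. The toy batch and the dimensions (vocabulary size 10, embedding dim 2, hidden dim 5) are assumptions chosen purely for illustration:

    import torch
    from torch.nn import Embedding, LSTM
    from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence, pad_packed_sequence
    
    # a toy batch of three variable-length index sequences (index 0 is reserved for padding)
    batch = [torch.tensor([1, 2, 3]), torch.tensor([4, 5]), torch.tensor([6, 7, 8, 9])]
    lens = [len(s) for s in batch]
    
    padded = pad_sequence(batch, batch_first=True)                 # step 2: (batch, max_len)
    embedded = Embedding(10, 2)(padded)                            # step 3: (batch, max_len, emb_dim)
    packed = pack_padded_sequence(embedded, lens,
                                  batch_first=True, enforce_sorted=False)    # step 4
    out_packed, (ht, ct) = LSTM(input_size=2, hidden_size=5, batch_first=True)(packed)
    out, out_lens = pad_packed_sequence(out_packed, batch_first=True)        # step 5
    # step 6: use out (per-timestep states) or ht[-1] (last state per sequence) for the loss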

    This post first steps through code to understand what pad_sequence, pack_padded_sequence and pad_packed_sequence do, and then gives complete example code for training an LSTM model.

    1. Data preparation

    For simplicity, the data here are plain integers, which skips the step of converting characters to indices. One thing worth mentioning: the default padding index is 0, so 0 is kept reserved as the padding index.
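
    The snippets below do not pass padding_idx to Embedding, but if you want the padding token to map to a fixed all-zero vector that never receives gradient updates, you can declare it explicitly. A minimal sketch, using the same toy dimensions as this post:

    from torch.nn import Embedding
    
    # padding_idx=0 pins index 0 to a zero vector that is not updated during training
    embedding = Embedding(num_embeddings=10, embedding_dim=2, padding_idx=0)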

    import torch
    import torch.nn as nn
    from torch.nn import Embedding, LSTM
    from torch.utils.data import Dataset, DataLoader
    from torch.nn.utils.rnn import pad_sequence, \
        pack_padded_sequence, pad_packed_sequence
    
    
    class MinimalDataset(Dataset):
        def __init__(self, data):
            self.data = data
        
        def __getitem__(self, index):
            return self.data[index]
        
        def __len__(self):
            return len(self.data)
    
    
    
    DATA = [
        [1, 2, 3],
        [4, 5],
        [6, 7, 8, 9],
        [4, 6, 2, 9, 0]
    ]
    
    DATA = list(map(lambda x: torch.tensor(x), DATA))
    # vocabulary size, including the padding token 0
    NUM_WORDS = 10
    BATCH_SIZE = 3
    EMB_DIM = 2     # embedding dim (matches the printed outputs below)
    LSTM_DIM = 5    # hidden dim
    
    dataset = MinimalDataset(DATA)
    data_loader = DataLoader(dataset, 
                             batch_size=BATCH_SIZE,
                             shuffle=False,
                             collate_fn=lambda x: x)
    
    # print(next(iter(data_loader)))
    
    # iterate through the dataset:
    for i, batch in enumerate(data_loader):
        print(f'{i}, {batch}')
    
    # output:
    # 0, [tensor([1, 2, 3]), tensor([4, 5]), tensor([6, 7, 8, 9])]
    # 1, [tensor([4, 6, 2, 9, 0])]
    

    2. Padding the sequences with pad_sequence

    # this always gets you the first batch of the dataset:
    batch = next(iter(data_loader))
    padded = pad_sequence(batch, batch_first=True)
    print(f' [0] padded: \n{padded}\n')
    
    # output
     [0] padded: 
    tensor([[1, 2, 3, 0],
            [4, 5, 0, 0],
            [6, 7, 8, 9]])
    

    3. Embedding the padded sequences

    # need to store the sequence lengths explicitly if we want to later pack the sequence:
    lens = list(map(len, batch))
    
    embedding = Embedding(NUM_WORDS, EMB_DIM)
    pad_embed = embedding(padded)
    print(f'> pad_embed: \n{pad_embed}\n')
    
    # output:
    > pad_embed: 
    tensor([[[ 0.6698, -1.0585],
             [ 0.4706, -0.6251],
             [ 1.7170,  2.5883],
             [ 1.1310, -1.3275]],
    
            [[-1.7730,  0.5774],
             [-0.2044,  0.5833],
             [ 1.1310, -1.3275],
             [ 1.1310, -1.3275]],
    
            [[-1.1960,  0.5830],
             [ 0.3564,  0.1970],
             [-1.5276,  0.7346],
             [-0.0309,  0.0324]]], grad_fn=<EmbeddingBackward>)
    

    Looking at the output above, the last row of the first sub-tensor is identical to rows 3 and 4 of the second sub-tensor: these are exactly the vectors produced by embedding the padding value 0.
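
    A quick sanity check of that observation, reusing the tensors from above (just an illustration, not part of the original walkthrough):

    # the rows that came from padding token 0 all share the same embedding vector
    print(torch.equal(pad_embed[0, 3], pad_embed[1, 2]))   # True
    print(torch.equal(pad_embed[1, 2], pad_embed[1, 3]))   # True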

    At this point the tensor is rather wasteful, because it still stores embedding vectors for the padding positions, which the LSTM does not need. We can convert this padded embedding matrix into a dense form that keeps only the rows holding real values and drops the padding rows.

    4. Packing the embedding matrix and feeding it to the LSTM

    Note that at this point we need the length of each sequence in the batch, so that we know which rows of the embedding matrix (the padded ones) to drop. Fortunately we computed the length of every sample in the batch above, so we can now apply what amounts to a compression step to the padded embeddings.

    # pack it up to one sequence (where each element is EMB_DIM long)
    pad_embed_pack = pack_padded_sequence(pad_embed, lens, batch_first=True, enforce_sorted=False)
    print(f'> pad_embed_pack: \n{pad_embed_pack}\n')
    
    # output; note that it is now packed (compressed):
    > pad_embed_pack: 
    PackedSequence(data=tensor([[-1.1960,  0.5830],
            [ 0.6698, -1.0585],
            [-1.7730,  0.5774],
            [ 0.3564,  0.1970],
            [ 0.4706, -0.6251],
            [-0.2044,  0.5833],
            [-1.5276,  0.7346],
            [ 1.7170,  2.5883],
            [-0.0309,  0.0324]], grad_fn=<PackPaddedSequenceBackward>), batch_sizes=tensor([3, 3, 2, 1]), sorted_indices=tensor([2, 0, 1]), unsorted_indices=tensor([1, 2, 0]))
    
    # run that through the lstm
    lstm = LSTM(input_size=EMB_DIM, hidden_size=LSTM_DIM, batch_first=True)
    pad_embed_pack_lstm = lstm(pad_embed_pack)
    print(f'> pad_embed_pack_lstm: \n{pad_embed_pack_lstm}\n')
    
    # output:
    > pad_embed_pack_lstm: 
    (PackedSequence(data=tensor([[ 5.2500e-02,  1.8381e-01,  5.6819e-02,  7.2193e-02, -1.0846e-01],
            [-2.6204e-04,  8.1389e-02,  5.3563e-02,  6.8382e-02, -3.8394e-02],
            [ 8.5924e-02,  1.7067e-01,  7.0421e-02,  5.6211e-02, -1.3094e-01],
            [ 1.7779e-02,  2.6416e-01,  6.2469e-02,  1.0087e-01, -9.8012e-02],
            [-1.7043e-03,  1.6277e-01,  7.3700e-02,  1.0188e-01, -7.2268e-02],
            [ 5.0581e-02,  2.8680e-01,  7.4511e-02,  1.0124e-01, -1.2804e-01],
            [ 8.8691e-02,  2.7159e-01,  7.8367e-02,  1.7065e-01, -1.5527e-01],
            [-1.6189e-01,  1.4869e-01,  1.2640e-03,  7.8151e-02,  1.6798e-02],
            [ 5.4739e-02,  3.3157e-01,  9.0170e-02,  1.3350e-01, -1.5051e-01]],
           grad_fn=<CatBackward>), batch_sizes=tensor([3, 3, 2, 1]), sorted_indices=tensor([2, 0, 1]), unsorted_indices=tensor([1, 2, 0])), (tensor([[[-0.1619,  0.1487,  0.0013,  0.0782,  0.0168],
             [ 0.0506,  0.2868,  0.0745,  0.1012, -0.1280],
             [ 0.0547,  0.3316,  0.0902,  0.1335, -0.1505]]],
           grad_fn=<IndexSelectBackward>), tensor([[[-0.2660,  0.2240,  0.0034,  0.4918,  0.0314],
             [ 0.1091,  0.6439,  0.2184,  0.3290, -0.2769],
             [ 0.1244,  0.8386,  0.2650,  0.4865, -0.3290]]],
           grad_fn=<IndexSelectBackward>)))
    

    At first glance the LSTM output may look confusing. Let's first check the official documentation for how the LSTM output is defined:


    [Figure: the LSTM's outputs and its learnable parameters, from the official PyTorch documentation]

    Combined with our results above, it is not hard to understand. The LSTM output consists of three tensors:

    • The first is the output for every timestep of every sequence (output in the figure above). If you recall the LSTM architecture, every timestep can produce an output. This batch contains 9 non-padding tokens (plus 3 padding tokens), and since this output is packed it has 9 rows, one per real timestep (token) in the batch. Each row is simply the hidden state at that timestep, i.e. output already contains the hidden states.
    • The second is the hidden state (h_n in the figure above). For ease of understanding, think of it as having three rows, each of which is the hidden state at the last timestep of the corresponding sample in the batch;
    • The third is the cell state (c_n in the figure above), obtained analogously to h_n. A short shape check illustrating all three follows this list.
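
    To make the three outputs tangible, here is a small shape check using the variables from above (a sketch for illustration only):

    out_packed, (ht, ct) = pad_embed_pack_lstm
    print(out_packed.data.shape)   # torch.Size([9, 5]) -> 9 real timesteps, LSTM_DIM values each
    print(ht.shape)                # torch.Size([1, 3, 5]) -> last hidden state of each sequence
    print(ct.shape)                # torch.Size([1, 3, 5]) -> last cell state of each sequence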

    5. Unpacking (re-aligning) the LSTM output

    Next we unpack the LSTM output and see whether it matches what we described above:

    # unpack the results (we can do that because it remembers how we packed the sentences)
    # the [0] just takes the first element ("out") of the LSTM output (hidden states after each timestep)
    pad_embed_pack_lstm_pad = pad_packed_sequence(pad_embed_pack_lstm[0], batch_first=True)
    print(f'> pad_embed_pack_lstm_pad: \n{pad_embed_pack_lstm_pad}\n')
    
    # output:
    > pad_embed_pack_lstm_pad: 
    (tensor([[[-2.6204e-04,  8.1389e-02,  5.3563e-02,  6.8382e-02, -3.8394e-02],
             [-1.7043e-03,  1.6277e-01,  7.3700e-02,  1.0188e-01, -7.2268e-02],
             [-1.6189e-01,  1.4869e-01,  1.2640e-03,  7.8151e-02,  1.6798e-02],
             [ 0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00]],
    
            [[ 8.5924e-02,  1.7067e-01,  7.0421e-02,  5.6211e-02, -1.3094e-01],
             [ 5.0581e-02,  2.8680e-01,  7.4511e-02,  1.0124e-01, -1.2804e-01],
             [ 0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00],
             [ 0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00]],
    
            [[ 5.2500e-02,  1.8381e-01,  5.6819e-02,  7.2193e-02, -1.0846e-01],
             [ 1.7779e-02,  2.6416e-01,  6.2469e-02,  1.0087e-01, -9.8012e-02],
             [ 8.8691e-02,  2.7159e-01,  7.8367e-02,  1.7065e-01, -1.5527e-01],
             [ 5.4739e-02,  3.3157e-01,  9.0170e-02,  1.3350e-01, -1.5051e-01]]],
           grad_fn=<IndexSelectBackward>), tensor([3, 2, 4]))
    

    As you can see, after unpacking, the output is exactly what we expected.

    Often, however, all we need is the final hidden state, for example to compute the loss. It can be obtained directly as ht[-1].

    # however, usually, we would just be interested in the last hidden state of the lstm for each sequence,
    # i.e., the [last] lstm state after it has processed the sentence
    # for this, the last unpacking/padding is not necessary, as we can obtain this already by:
    seq, (ht, ct) = pad_embed_pack_lstm
    print(f'lstm last state without unpacking:\n{ht[-1]}')
    
    # output:
    lstm last state without unpacking:
    tensor([[-0.1619,  0.1487,  0.0013,  0.0782,  0.0168],
            [ 0.0506,  0.2868,  0.0745,  0.1012, -0.1280],
            [ 0.0547,  0.3316,  0.0902,  0.1335, -0.1505]],
           grad_fn=<SelectBackward>)
    
    
    # which is the same as
    outs, lens = pad_embed_pack_lstm_pad
    print(f'lstm last state after unpacking:\n'
          f'{torch.cat([outs[i, len - 1] for i, len in enumerate(lens)]).view((BATCH_SIZE, -1))}')
    # i.e. the last non-masked/padded/null state
    # so, you probably shouldn't unpack the sequence if you don't need to
    
    # output:
    lstm last state after unpacking:
    tensor([[-0.1619,  0.1487,  0.0013,  0.0782,  0.0168],
            [ 0.0506,  0.2868,  0.0745,  0.1012, -0.1280],
            [ 0.0547,  0.3316,  0.0902,  0.1335, -0.1505]], grad_fn=<ViewBackward>)
    

    6. Evaluating the model / computing the loss

    Now suppose the task is binary classification. We can use the hidden state directly as the features extracted by the model, feed it into a final fully connected layer, and classify based on its output:

    fc = nn.Linear(LSTM_DIM, 2)
    final_out = fc(ht[-1])
    # tags holds one class label per sequence in the batch,
    # e.g. tags = torch.tensor([0, 1, 0]) (it is not defined in the original snippet)
    loss = nn.CrossEntropyLoss()
    print(loss(final_out, tags))
    
    # output:
    tensor(0.6876, grad_fn=<NllLossBackward>)
    

    7. A complete example

    The walkthrough above already shows how to use PyTorch for a binary classification task over variable-length sequences (many-to-one: many inputs, one output). Below is a many-to-many example: a part-of-speech-style tagging task.

    import numpy as np
    import torch
    import torch.nn as nn
    from torch.nn import Embedding, LSTM
    from torch.utils.data import Dataset, DataLoader
    from torch.nn.utils.rnn import pad_sequence, \
        pack_padded_sequence, pad_packed_sequence
    
    ######################################################
    ######################################################
    # Data preparation
    # The sentences are lines from 《满江红》, the opening theme of the TV series 大宋提刑官; the tags are assigned at random
    
    sentences = ['千古悠悠', '有多少冤魂嗟叹', '空怅望人寰无限', '丛生哀怨',
                 '泣血蝇虫笑苍天', '孤帆叠影锁白链', '残月升骤起烈烈风', '尽吹散'] 
    
    tags = [list(np.random.choice(list('ABCDE'), len(sent))) for sent in sentences]
    
    sentences = [list(sent) for sent in sentences]
    
    vocab = list(set(i for sent in sentences for i in sent))
    vocab_size = len(vocab)
    
    word_to_idx = {word: idx for idx, word in enumerate(vocab, 1)}
    tag_to_idx = {'A':1, 'B':2, 'C':3, 'D':4, 'E':5}
    tag_size = len(tag_to_idx)
    
    X = []
    Y = []
    for sent, tag in zip(sentences, tags):
        sent_idx = []
        tag_idx = []
        for word, t in zip(sent, tag):
            sent_idx.append(word_to_idx[word])
            tag_idx.append(tag_to_idx[t])
        X.append(sent_idx)
        Y.append(tag_idx)
    
    DATA = [[x,y] for x, y in zip(X, Y)]
    DATA = list(map(lambda x: torch.tensor(x), DATA))
    
    print(' X:\n', X)
    print('\n Y:\n', Y)
    print('\n DATA:\n', DATA)
    
    ######################################################
    ######################################################
    # Wrap the data so it can later be loaded through a DataLoader
    
    class DataDemo(Dataset):
        def __init__(self, data):
            super(DataDemo, self).__init__()
            self.data = data
        
        def __getitem__(self, index):
            return self.data[index]
        
        def __len__(self):
            return len(self.data)
        
    
    def pad_collate(batch):
        xx, yy = zip(*batch)
        x_lens = [len(x) for x in xx]
        y_lens = [len(y) for y in yy]
        xx_pad = pad_sequence(xx, batch_first=True, padding_value=0)
        yy_pad = pad_sequence(yy, batch_first=True, padding_value=0)
        return xx_pad, torch.tensor(x_lens), \
            yy_pad, torch.tensor(y_lens)
    
    dataset = DataDemo(DATA)
    data_loader = DataLoader(dataset=dataset,
                             batch_size=3,
                             shuffle=True,
                             collate_fn=pad_collate,
                             drop_last=True)
    
    x, _, y, _ = next(iter(data_loader))
    print(' x:\n', x)
    print('\n y:\n', y)
    
    ####################################################
    ####################################################
    # Build the LSTM model
    class ModelDemo(nn.Module):
        def __init__(self, nb_layers=1, embedding_dim=3, 
                     hidden_dim=5, batch_size=3, 
                     vocab_size=vocab_size+1, tag_size=tag_size+1):
            super(ModelDemo, self).__init__()
            self.nb_layers = nb_layers
            self.embedding_dim = embedding_dim
            self.hidden_dim = hidden_dim
            self.batch_size = batch_size
            self.vocab_size = vocab_size
            
            self.embedding = Embedding(vocab_size, embedding_dim)
            self.lstm = LSTM(input_size=embedding_dim,
                             hidden_size=hidden_dim,
                             num_layers=nb_layers,
                             batch_first=True,
                             bias=True,
                             dropout=0,
                             bidirectional=False)
            self.hidden2tag = nn.Linear(hidden_dim, tag_size)
            
            # hidden and cell must be (re)initialized so that a batch does not start
            # from the previous batch's final hidden/cell state; here they stay all zeros
            self.hidden = self.init_hidden()
        
        def init_hidden(self):
            # there is no hidden state at the start, so initialize one;
            # its shape is (nb_layers, batch_size, hidden_dim)
            hidden = torch.zeros(self.nb_layers, 
                                 self.batch_size, 
                                 self.hidden_dim)
            cell = torch.zeros(self.nb_layers, 
                               self.batch_size,
                               self.hidden_dim)
            
            # random initialization could be used instead of zeros;
            # which works better has not been investigated here
            # hidden = torch.randn(self.nb_layers, 
            #                      self.batch_size, 
            #                      self.hidden_dim)
            # cell = torch.randn(self.nb_layers, 
            #                    self.batch_size,
            #                    self.hidden_dim)
            return (hidden, cell)
        
        def forward(self, x, x_lens):
            embeds = self.embedding(x)
            # packing
            embed_packed = pack_padded_sequence(embeds, x_lens, 
                                              batch_first=True,
                                              enforce_sorted=False)
            lstm_out, (hidden, cell) = self.lstm(embed_packed, self.hidden)
            # unpacking
            lstm_out, lens = pad_packed_sequence(lstm_out, batch_first=True)
            
            # for many-output tasks such as POS tagging, the per-timestep outputs (output) are used to compute the loss
            tag_score = self.hidden2tag(lstm_out)
            return tag_score
    
    ####################################################
    ####################################################
    # Train the model
    
    model = ModelDemo()
    optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss(reduction='sum', ignore_index=0)
    
    for epoch in range(100):
        for i, batch in enumerate(data_loader):
            x_pad, x_len, y_pad, y_len = batch
            
            # zero the gradients
            model.zero_grad()
            optimizer.zero_grad()
            tag_scores = model(x_pad, x_len)
            #print(tag_scores)
            
            batch_loss = 0
            for j in range(tag_scores.size(0)):
                # per-sample loss; padding positions are skipped via ignore_index=0
                loss = loss_fn(tag_scores[j], y_pad[j])
                batch_loss += loss
            batch_loss /= 3    # average over the 3 samples in the batch
            batch_loss.backward()
            
            optimizer.step()
            print(batch_loss)
    
    # output
    tensor(11.1638, grad_fn=<DivBackward0>)
    tensor(11.3081, grad_fn=<DivBackward0>)
    tensor(11.0003, grad_fn=<DivBackward0>)
    tensor(11.2022, grad_fn=<DivBackward0>)
    tensor(10.8549, grad_fn=<DivBackward0>)
    tensor(11.0947, grad_fn=<DivBackward0>)
    tensor(10.7103, grad_fn=<DivBackward0>)
    tensor(10.9861, grad_fn=<DivBackward0>)
    ... ...
    tensor(4.4008, grad_fn=<DivBackward0>)
    tensor(4.4095, grad_fn=<DivBackward0>)
    tensor(4.3886, grad_fn=<DivBackward0>)
    tensor(4.3933, grad_fn=<DivBackward0>)
    tensor(4.3764, grad_fn=<DivBackward0>)
    tensor(4.3776, grad_fn=<DivBackward0>)
    tensor(4.3643, grad_fn=<DivBackward0>)
    

    A key difference between many-to-many and many-to-one tasks: a many-to-many task generally computes the loss against the targets using the output at every timestep, whereas a many-to-one task generally computes the loss from the final hidden state. In many-to-one text classification, that hidden state can be viewed as a summary of the semantics of the whole input sentence, i.e. the semantic features the model has extracted, which are then used for classification.
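
    The following minimal, self-contained sketch contrasts the two loss computations side by side; the tensors and dimensions are made up solely for illustration:

    import torch
    import torch.nn as nn
    
    hidden_dim, num_tags, num_classes = 5, 6, 2        # assumed toy dimensions
    
    # many-to-many (e.g. tagging): per-timestep scores vs. per-timestep targets,
    # with padding positions (label 0) excluded from the loss
    tag_scores = torch.randn(3, 4, num_tags)            # (batch, max_len, num_tags)
    tag_targets = torch.tensor([[1, 2, 3, 0],           # 0 marks padded positions
                                [4, 5, 0, 0],
                                [1, 1, 2, 3]])
    seq_loss = nn.CrossEntropyLoss(ignore_index=0)(
        tag_scores.view(-1, num_tags), tag_targets.view(-1))
    
    # many-to-one (e.g. text classification): only the last hidden state of each sequence
    ht_last = torch.randn(3, hidden_dim)                # stands in for ht[-1] from the LSTM
    labels = torch.tensor([0, 1, 0])                    # one class label per sequence
    clf_loss = nn.CrossEntropyLoss()(nn.Linear(hidden_dim, num_classes)(ht_last), labels)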

    Also, if the batch size is 1, i.e. only one sample is fed at a time, pad/pack is not needed at all (a minimal sketch is given below); see also 序列模型和长短句记忆(LSTM)模型 (Sequence Models and LSTM Networks).
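
    For completeness, a minimal sketch of the batch-size-1 case (toy dimensions assumed), where the unpadded sequence is fed directly:

    import torch
    from torch.nn import Embedding, LSTM
    
    seq = torch.tensor([6, 7, 8, 9])                     # one sequence, no padding needed
    emb = Embedding(10, 2)(seq).unsqueeze(0)             # (1, seq_len, emb_dim)
    out, (ht, ct) = LSTM(input_size=2, hidden_size=5, batch_first=True)(emb)
    print(out.shape)                                     # torch.Size([1, 4, 5])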

    References:
    序列模型和长短句记忆(LSTM)模型 (Sequence Models and Long Short-Term Memory Networks, PyTorch tutorial)
    https://gist.github.com/MikulasZelinka/9fce4ed47ae74fca454e88a39f8d911a
    Pad pack sequences for Pytorch batch processing with DataLoader
    PyTorch official documentation
