Advanced Recurrent Neural Networks

Author: dawntown | Published 2020-02-19 18:51

Several variants of the recurrent neural network:

  1. Gated recurrent unit (GRU)
  2. Long short-term memory (LSTM)
  3. Deep recurrent neural networks
  4. Bidirectional recurrent neural networks

First, the d2l.RNNModel and d2l.train_and_predict_rnn_pytorch used below are implemented in d2l_jay9460.py.

1. GRU

Problem with plain RNNs: once the sequence grows long, the gradients that carry dependencies on earlier history tend to vanish or explode.
Gated recurrent neural networks: capture dependencies between time steps that are far apart in the sequence.

Cell structure of the RNN and the GRU (figures not reproduced here).

Notation for the GRU:
  • \odot: element-wise multiplication

  • W_{xz}, W_{xr}, W_{xh}\in \mathbb{R}^{n_{vocab}\times n_{hidden}}

  • W_{hz},W_{hr},W_{hh}\in\mathbb{R}^{n_{hidden}\times n_{hidden}}

  • b_z,b_r,b_h\in\mathbb{R}^{n_{hidden}}

  • W_{hq}\in\mathbb{R}^{n_{hidden}\times n_{vocab}},\space b_q\in\mathbb{R}^{n_{vocab}}

  • The reset gate helps capture short-term dependencies in the time series;

  • The update gate helps capture long-term dependencies in the time series.

  • Why? When the reset gate is close to 0, the previous hidden state is largely dropped from the candidate state, so the model can "restart" from recent inputs; when the update gate is close to 1, the old hidden state is copied forward almost unchanged across many time steps, which preserves long-range information. The equations below make this explicit.
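
For reference, the standard GRU updates that the code below implements (consistent with the weight shapes listed above):

\begin{aligned}
\boldsymbol{Z}_t &= \sigma(\boldsymbol{X}_t \boldsymbol{W}_{xz} + \boldsymbol{H}_{t-1} \boldsymbol{W}_{hz} + \boldsymbol{b}_z)\\
\boldsymbol{R}_t &= \sigma(\boldsymbol{X}_t \boldsymbol{W}_{xr} + \boldsymbol{H}_{t-1} \boldsymbol{W}_{hr} + \boldsymbol{b}_r)\\
\tilde{\boldsymbol{H}}_t &= \tanh(\boldsymbol{X}_t \boldsymbol{W}_{xh} + (\boldsymbol{R}_t \odot \boldsymbol{H}_{t-1}) \boldsymbol{W}_{hh} + \boldsymbol{b}_h)\\
\boldsymbol{H}_t &= \boldsymbol{Z}_t \odot \boldsymbol{H}_{t-1} + (1-\boldsymbol{Z}_t) \odot \tilde{\boldsymbol{H}}_t
\end{aligned}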

def gru(inputs, state, params):
    W_xz, W_hz, b_z, W_xr, W_hr, b_r, W_xh, W_hh, b_h, W_hq, b_q = params
    H, = state
    outputs = []
    for X in inputs:
        Z = torch.sigmoid(torch.matmul(X, W_xz) + torch.matmul(H, W_hz) + b_z)         # update gate
        R = torch.sigmoid(torch.matmul(X, W_xr) + torch.matmul(H, W_hr) + b_r)         # reset gate
        H_tilda = torch.tanh(torch.matmul(X, W_xh) + torch.matmul(R * H, W_hh) + b_h)  # candidate hidden state
        H = Z * H + (1 - Z) * H_tilda    # interpolate between the old state and the candidate
        Y = torch.matmul(H, W_hq) + b_q  # output layer
        outputs.append(Y)
    return outputs, (H,)
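
The params and state arguments above are not defined in this excerpt. A minimal sketch of how they could be set up in the same from-scratch style (the helper names get_params and init_gru_state are my own, and vocab_size is assumed to come from the dataset loaded elsewhere in the course code):

import torch
import torch.nn as nn

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
num_inputs, num_outputs = vocab_size, vocab_size  # one-hot inputs, per-character logits
num_hiddens = 256

def get_params():
    def _one(shape):
        # small random Gaussian initialization, tracked for gradients
        return nn.Parameter(torch.randn(shape, device=device) * 0.01, requires_grad=True)
    def _three():
        return (_one((num_inputs, num_hiddens)),
                _one((num_hiddens, num_hiddens)),
                nn.Parameter(torch.zeros(num_hiddens, device=device), requires_grad=True))
    W_xz, W_hz, b_z = _three()  # update gate parameters
    W_xr, W_hr, b_r = _three()  # reset gate parameters
    W_xh, W_hh, b_h = _three()  # candidate hidden state parameters
    W_hq = _one((num_hiddens, num_outputs))  # output layer weight
    b_q = nn.Parameter(torch.zeros(num_outputs, device=device), requires_grad=True)
    return [W_xz, W_hz, b_z, W_xr, W_hr, b_r, W_xh, W_hh, b_h, W_hq, b_q]

def init_gru_state(batch_size, num_hiddens, device):
    # the GRU carries a single hidden-state tensor
    return (torch.zeros((batch_size, num_hiddens), device=device),)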

Concise PyTorch implementation

num_hiddens=256
num_epochs, num_steps, batch_size, lr, clipping_theta = 160, 35, 32, 1e2, 1e-2
pred_period, pred_len, prefixes = 40, 50, ['分开', '不分开']

lr = 1e-2 # note: adjust the learning rate (much smaller than the 1e2 used above)
gru_layer = nn.GRU(input_size=vocab_size, hidden_size=num_hiddens)
model = d2l.RNNModel(gru_layer, vocab_size).to(device)
d2l.train_and_predict_rnn_pytorch(model, num_hiddens, vocab_size, device,
                                corpus_indices, idx_to_char, char_to_idx,
                                num_epochs, num_steps, lr, clipping_theta,
                                batch_size, pred_period, pred_len, prefixes)

2. LSTM

Computing the hidden state in an LSTM (see the equations below):
  • W_{xi}, W_{xf}, W_{xo}, W_{xc}\in \mathbb{R}^{n_{vocab}\times n_{hidden}}
  • W_{hi},W_{hf},W_{ho}, W_{hc}\in\mathbb{R}^{n_{hidden}\times n_{hidden}}
  • b_i,b_f,b_o,b_c\in\mathbb{R}^{n_{hidden}}
  • W_{hq}\in\mathbb{R}^{n_{hidden}\times n_{vocab}},\space b_q\in\mathbb{R}^{n_{vocab}}
  • Forget gate: controls how much of the previous time step's memory cell is kept
  • Input gate: controls how much of the current time step's input flows into the memory cell
  • Output gate: controls the flow of information from the memory cell to the hidden state
  • Memory cell: a special kind of hidden state that carries information across time steps
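
For reference, the standard LSTM updates that the code below implements (consistent with the weight shapes listed above):

\begin{aligned}
\boldsymbol{I}_t &= \sigma(\boldsymbol{X}_t \boldsymbol{W}_{xi} + \boldsymbol{H}_{t-1} \boldsymbol{W}_{hi} + \boldsymbol{b}_i)\\
\boldsymbol{F}_t &= \sigma(\boldsymbol{X}_t \boldsymbol{W}_{xf} + \boldsymbol{H}_{t-1} \boldsymbol{W}_{hf} + \boldsymbol{b}_f)\\
\boldsymbol{O}_t &= \sigma(\boldsymbol{X}_t \boldsymbol{W}_{xo} + \boldsymbol{H}_{t-1} \boldsymbol{W}_{ho} + \boldsymbol{b}_o)\\
\tilde{\boldsymbol{C}}_t &= \tanh(\boldsymbol{X}_t \boldsymbol{W}_{xc} + \boldsymbol{H}_{t-1} \boldsymbol{W}_{hc} + \boldsymbol{b}_c)\\
\boldsymbol{C}_t &= \boldsymbol{F}_t \odot \boldsymbol{C}_{t-1} + \boldsymbol{I}_t \odot \tilde{\boldsymbol{C}}_t\\
\boldsymbol{H}_t &= \boldsymbol{O}_t \odot \tanh(\boldsymbol{C}_t)
\end{aligned}
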
def lstm(inputs, state, params):
    [W_xi, W_hi, b_i, W_xf, W_hf, b_f, W_xo, W_ho, b_o, W_xc, W_hc, b_c, W_hq, b_q] = params
    (H, C) = state
    outputs = []
    for X in inputs:
        I = torch.sigmoid(torch.matmul(X, W_xi) + torch.matmul(H, W_hi) + b_i)   # input gate
        F = torch.sigmoid(torch.matmul(X, W_xf) + torch.matmul(H, W_hf) + b_f)   # forget gate
        O = torch.sigmoid(torch.matmul(X, W_xo) + torch.matmul(H, W_ho) + b_o)   # output gate
        C_tilda = torch.tanh(torch.matmul(X, W_xc) + torch.matmul(H, W_hc) + b_c)  # candidate memory cell
        C = F * C + I * C_tilda          # keep part of the old cell, add part of the candidate
        H = O * C.tanh()                 # hidden state: gated read-out of the memory cell
        Y = torch.matmul(H, W_hq) + b_q  # output layer
        outputs.append(Y)
    return outputs, (H, C)
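
Unlike the GRU, the LSTM carries two state tensors: the hidden state H and the memory cell C. A minimal sketch of the initial state (the helper name init_lstm_state is my own; get_params would follow the same pattern as the GRU sketch above, with the fourteen parameters listed in the code):

import torch

def init_lstm_state(batch_size, num_hiddens, device):
    # both the hidden state and the memory cell start as zeros
    return (torch.zeros((batch_size, num_hiddens), device=device),
            torch.zeros((batch_size, num_hiddens), device=device))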

Concise PyTorch implementation

num_hiddens=256
num_epochs, num_steps, batch_size, lr, clipping_theta = 160, 35, 32, 1e2, 1e-2
pred_period, pred_len, prefixes = 40, 50, ['分开', '不分开']

lr = 1e-2 # note: adjust the learning rate
lstm_layer = nn.LSTM(input_size=vocab_size, hidden_size=num_hiddens)
model = d2l.RNNModel(lstm_layer, vocab_size).to(device)
d2l.train_and_predict_rnn_pytorch(model, num_hiddens, vocab_size, device,
                                corpus_indices, idx_to_char, char_to_idx,
                                num_epochs, num_steps, lr, clipping_theta,
                                batch_size, pred_period, pred_len, prefixes)

3. Deep and Bidirectional RNNs

Deep RNN
num_hiddens=256
num_epochs, num_steps, batch_size, lr, clipping_theta = 160, 35, 32, 1e2, 1e-2
pred_period, pred_len, prefixes = 40, 50, ['分开', '不分开']

lr = 1e-2 # note: adjust the learning rate

lstm_layer = nn.LSTM(input_size=vocab_size, hidden_size=num_hiddens, num_layers=2)  # num_layers=2 stacks two LSTM layers
model = d2l.RNNModel(lstm_layer, vocab_size).to(device)
d2l.train_and_predict_rnn_pytorch(model, num_hiddens, vocab_size, device,
                                corpus_indices, idx_to_char, char_to_idx,
                                num_epochs, num_steps, lr, clipping_theta,
                                batch_size, pred_period, pred_len, prefixes)
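
A quick standalone shape check (independent of the d2l helpers) of what num_layers=2 does: the recurrent state gains a leading dimension of size num_layers, one slice per stacked layer, while the per-step output comes from the top layer only.

import torch
import torch.nn as nn

deep_lstm = nn.LSTM(input_size=10, hidden_size=16, num_layers=2)
x = torch.randn(35, 32, 10)          # (num_steps, batch_size, input_size)
output, (h_n, c_n) = deep_lstm(x)
print(output.shape)  # torch.Size([35, 32, 16]) -- outputs of the top layer
print(h_n.shape)     # torch.Size([2, 32, 16])  -- one final hidden state per stacked layer
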
Bidirectional RNN

\begin{aligned}
\overrightarrow{\boldsymbol{H}}_t &= \phi(\boldsymbol{X}_t \boldsymbol{W}_{xh}^{(f)} + \overrightarrow{\boldsymbol{H}}_{t-1} \boldsymbol{W}_{hh}^{(f)} + \boldsymbol{b}_h^{(f)})\\
\overleftarrow{\boldsymbol{H}}_t &= \phi(\boldsymbol{X}_t \boldsymbol{W}_{xh}^{(b)} + \overleftarrow{\boldsymbol{H}}_{t+1} \boldsymbol{W}_{hh}^{(b)} + \boldsymbol{b}_h^{(b)})
\end{aligned}\\
\boldsymbol{H}_t = (\overrightarrow{\boldsymbol{H}}_t, \overleftarrow{\boldsymbol{H}}_t)\\
\boldsymbol{O}_t = \boldsymbol{H}_t \boldsymbol{W}_{hq} + \boldsymbol{b}_q

num_hiddens=128
num_epochs, num_steps, batch_size, lr, clipping_theta = 160, 35, 32, 1e-2, 1e-2
pred_period, pred_len, prefixes = 40, 50, ['分开', '不分开']

lr = 1e-2 # note: adjust the learning rate

gru_layer = nn.GRU(input_size=vocab_size, hidden_size=num_hiddens,bidirectional=True)
model = d2l.RNNModel(gru_layer, vocab_size).to(device)
d2l.train_and_predict_rnn_pytorch(model, num_hiddens, vocab_size, device,
                                corpus_indices, idx_to_char, char_to_idx,
                                num_epochs, num_steps, lr, clipping_theta,
                                batch_size, pred_period, pred_len, prefixes)
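
Because the forward and backward hidden states are concatenated at every time step, the per-step output of a bidirectional layer has 2 * num_hiddens features, and the output layer must expect that width (the course's d2l.RNNModel presumably accounts for this internally). A standalone shape check, independent of the d2l helpers:

import torch
import torch.nn as nn

bi_gru = nn.GRU(input_size=10, hidden_size=128, bidirectional=True)
x = torch.randn(35, 32, 10)          # (num_steps, batch_size, input_size)
output, h_n = bi_gru(x)
print(output.shape)  # torch.Size([35, 32, 256]) -- forward and backward states concatenated
print(h_n.shape)     # torch.Size([2, 32, 128])  -- one final state per direction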
