1. nn.RNN example:
import torch
import torch.nn as nn
# arg 1: feature dimension of input x (the word-embedding dimension)
# arg 2: number of hidden units
# arg 3: number of recurrent layers
rnn = nn.RNN(5, 6, 1)
# With the default batch_first=False, input is (seq_len, batch, input_size):
# 1: sequence_length, 3: batch size, 5: word-embedding dimension
input1 = torch.randn(1, 3, 5)
# h0 is (num_layers, batch, hidden_size): 1: number of layers, 3: batch size, 6: hidden units
h0 = torch.randn(1, 3, 6)
output, hn = rnn(input1, h0)
print(output)
print(hn)
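The shape conventions above can be checked explicitly. A minimal sketch (using the same hyperparameters as the example): `output` stacks the top layer's hidden state at every time step, while `hn` holds each layer's final hidden state.

```python
import torch
import torch.nn as nn

rnn = nn.RNN(input_size=5, hidden_size=6, num_layers=1)
input1 = torch.randn(1, 3, 5)   # (seq_len=1, batch=3, input_size=5)
h0 = torch.randn(1, 3, 6)       # (num_layers=1, batch=3, hidden_size=6)
output, hn = rnn(input1, h0)
print(output.shape)  # (seq_len, batch, hidden_size) -> torch.Size([1, 3, 6])
print(hn.shape)      # (num_layers, batch, hidden_size) -> torch.Size([1, 3, 6])
```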
# 2. nn.LSTM example:
# arg 1: feature dimension of input x (the word-embedding dimension)
# arg 2: number of hidden units
# arg 3: number of recurrent layers
lstm = nn.LSTM(5, 6, 2)
# 1: sequence_length, 3: batch size, 5: word-embedding dimension
input1 = torch.randn(1, 3, 5)
# h0/c0 are (num_layers, batch, hidden_size): 2: number of layers, 3: batch size, 6: hidden units
h0 = torch.randn(2, 3, 6)
c0 = torch.randn(2, 3, 6)
output, (hn, cn) = lstm(input1, (h0, c0))
print("output:", output)
print("output shape:", output.shape)
print("hn:", hn)
print("cn:", cn)
print("cn shape:", cn.shape)
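If you prefer batch-major tensors, `nn.LSTM` accepts `batch_first=True`, which swaps the first two input/output dimensions; note that `h0` and `c0` keep the (num_layers, batch, hidden_size) layout regardless. A short sketch (the seq_len of 4 here is an arbitrary choice for illustration):

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=5, hidden_size=6, num_layers=2, batch_first=True)
x = torch.randn(3, 4, 5)        # (batch=3, seq_len=4, input_size=5)
h0 = torch.randn(2, 3, 6)       # h0/c0 are still (num_layers, batch, hidden_size)
c0 = torch.randn(2, 3, 6)
output, (hn, cn) = lstm(x, (h0, c0))
print(output.shape)  # (batch, seq_len, hidden_size) -> torch.Size([3, 4, 6])
print(hn.shape)      # (num_layers, batch, hidden_size) -> torch.Size([2, 3, 6])
```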
# 3. nn.GRU example:
gru = nn.GRU(5, 6, 2)
input2 = torch.randn(1, 3, 5)
# h0 is (num_layers, batch, hidden_size): 2: number of layers, 3: batch size, 6: hidden units
h0 = torch.randn(2, 3, 6)
output, hn = gru(input2, h0)
print("output:\n", output)
print("hn:\n", hn)
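A useful sanity check on these return values: the last time step of `output` is the final hidden state of the top layer, so it must match the last slice of `hn`. A minimal sketch (seq_len of 4 chosen just for illustration):

```python
import torch
import torch.nn as nn

gru = nn.GRU(input_size=5, hidden_size=6, num_layers=2)
x = torch.randn(4, 3, 5)        # (seq_len=4, batch=3, input_size=5)
h0 = torch.zeros(2, 3, 6)
output, hn = gru(x, h0)
# output[-1] is the top layer's state at the last step; hn[-1] is the top layer's final state
print(torch.allclose(output[-1], hn[-1]))  # True
```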
2. Output:
![](https://img.haomeiwen.com/i6909373/03cce5f55729cfac.png)
Output results.png