torch.flatten vs torch.nn.Flatten

Author: LabVIEW_Python | Published 2023-03-05 09:58

    Both torch.flatten and torch.nn.Flatten flatten a multi-dimensional Tensor. The difference:

    • torch.flatten is a function: it needs no instantiation before use and flattens from dimension 0 by default, which makes it the more general-purpose of the two.

    torch.flatten(input, start_dim=0, end_dim=-1)

    • torch.nn.Flatten is a class: it must be instantiated before use. Because it lives in the torch.nn module and is meant for flattening neural-network data, where dimension 0 is usually the Batch_Size and should not be flattened, it flattens from dimension 1 by default.

    class torch.nn.Flatten(start_dim=1, end_dim=-1)

    A test program:

    import torch
    
    input_tensor = torch.randn(32, 4, 5, 5)   # dimension 0 is the Batch_Size
    m = torch.nn.Flatten()                    # instantiate Flatten; start_dim=1 by default
    output1 = m(input_tensor)
    print(output1.shape)                      # batch dimension is preserved
    output2 = torch.flatten(input_tensor)     # start_dim=0 by default
    print(output2.shape)                      # flattened all the way to 1-D
    

    The output:

    torch.Size([32, 100])
    torch.Size([3200])
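
    To get the same result from the function form, pass start_dim=1 explicitly so the batch dimension is preserved. A minimal sketch continuing the example above:

    # torch.flatten(x, start_dim=1) matches nn.Flatten()'s default behavior
    output3 = torch.flatten(input_tensor, start_dim=1)
    print(output3.shape)                   # torch.Size([32, 100])
    print(torch.equal(output1, output3))   # True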

    In addition, torch.nn.Flatten works well as a layer added directly into a neural network, for example:

    import torch.nn as nn

    # Method excerpt from a YOLO v1 style model class
    def _create_fcs(self, split_size, num_boxes, num_classes):
        S, B, C = split_size, num_boxes, num_classes
        return nn.Sequential(
            nn.Flatten(),
            nn.Linear(1024 * S * S, 4096),
            # Usually, dropout is placed on the fully connected layers only.
            # A rule of thumb is to set the keep probability (1 - drop probability)
            # to 0.5 when dropout is applied to fully connected layers.
            # https://stackoverflow.com/questions/46841362/where-dropout-should-be-inserted-fully-connected-layer-convolutional-layer
            nn.Dropout(0.5),
            nn.LeakyReLU(0.1),
            # The predictions are encoded as an S × S × (B ∗ 5 + C) tensor
            nn.Linear(4096, S * S * (B * 5 + C)),  # 7*7*(2*5+20) = 1470
        )
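
    As a quick sanity check, the same stack can be built standalone (dropping self, which only matters inside the model class) and fed a dummy backbone feature map. The batch size of 2 and the (1024, 7, 7) feature-map shape below are assumptions chosen to match the 1024 * S * S input size and the 7*7*(2*5+20)=1470 comment above:

    import torch
    import torch.nn as nn

    S, B, C = 7, 2, 20                   # split_size, num_boxes, num_classes (assumed, per the comment above)
    fcs = nn.Sequential(
        nn.Flatten(),                    # (N, 1024, 7, 7) -> (N, 1024*7*7)
        nn.Linear(1024 * S * S, 4096),
        nn.Dropout(0.5),
        nn.LeakyReLU(0.1),
        nn.Linear(4096, S * S * (B * 5 + C)),
    )
    dummy = torch.randn(2, 1024, S, S)   # dummy backbone output (assumed shape)
    print(fcs(dummy).shape)              # torch.Size([2, 1470])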
    
