
Deep Learning (4): PyTorch API

Author: WallE瓦力狗 | Published 2019-07-12 12:01

    Author: aiqiu_gogogo
    Source: CSDN
    Original: https://blog.csdn.net/aiqiu_gogogo/article/details/78645887

    torch.nn.Module
    Print all direct child modules:

    for sub_module in model.children():
        print(sub_module)
    

    Print child modules by name:

    for name, module in model.named_children():
        if name in ['conv4', 'conv5']:
            print(module)
    

    Print all modules (recursively, including the model itself):

    for module in model.modules():
        print(module)
    

    Print all modules by name:

    for name, module in model.named_modules():
        if name in ['conv4', 'conv5']:
            print(module)
    

    Print all model parameters:

    for param in model.parameters():
        print(type(param.data), param.size())
    

    Print the names of all model parameters:

    model.state_dict().keys()
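
    The snippets above assume a model already exists. A minimal, self-contained sketch (the model below and its layer names 'conv4' and 'conv5' are hypothetical, chosen only so the name filters in the examples match something):

    import torch.nn as nn

    # Hypothetical model whose direct children are named 'conv4' and 'conv5'.
    class Net(nn.Module):
        def __init__(self):
            super(Net, self).__init__()
            self.conv4 = nn.Conv2d(3, 16, 3)
            self.conv5 = nn.Conv2d(16, 32, 3)

    model = Net()

    for name, module in model.named_children():   # direct children: conv4, conv5
        print(name, module)

    for param in model.parameters():               # weights and biases of both convs
        print(type(param.data), param.size())

    print(model.state_dict().keys())               # odict_keys(['conv4.weight', 'conv4.bias', ...])
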
    model.cpu(): moves all model parameters and buffers to the CPU;
    model.cuda(): moves all model parameters and buffers to the GPU;
    model.double(): casts the model's parameters and buffers to double;
    model.eval(): puts the model in evaluation (test) mode; this only affects layers such as Dropout and BatchNorm;
    model.float(): casts the model's parameters and buffers to float;
    model.half(): casts the model's parameters and buffers to half precision;
    model.load_state_dict(state_dict): loads model parameters by copying the parameters and buffers in state_dict into this module and its descendants; the keys in state_dict must match the keys returned by model.state_dict() (see the sketch after this list);
    model.state_dict(): returns a dictionary containing the whole state of the module;
    model.train(): puts the model in training mode;
    model.zero_grad(): sets the gradients of all model parameters to zero;
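
    A common pattern combining several of these methods is saving and restoring the weights via state_dict and switching between training and evaluation mode. A minimal sketch (the file name is arbitrary):

    import torch

    torch.save(model.state_dict(), 'checkpoint.pth')      # save all parameters and buffers
    model.load_state_dict(torch.load('checkpoint.pth'))   # keys must match model.state_dict()

    model.train()      # training mode: Dropout active, BatchNorm uses batch statistics
    model.eval()       # evaluation mode: Dropout disabled, BatchNorm uses running statistics
    model.zero_grad()  # clear any accumulated gradients
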
    Example of a torch.nn.Sequential model:

    import torch.nn as nn
    from collections import OrderedDict

    model = nn.Sequential(
              nn.Conv2d(1, 20, 5),
              nn.ReLU(),
              nn.Conv2d(20, 64, 5),
              nn.ReLU()
            )
    # or, with named layers:
    model = nn.Sequential(OrderedDict([
              ('conv1', nn.Conv2d(1, 20, 5)),
              ('relu1', nn.ReLU()),
              ('conv2', nn.Conv2d(20, 64, 5)),
              ('relu2', nn.ReLU())
            ]))
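
    Either form is callable directly; a quick shape check with a recent PyTorch (the input size is an arbitrary example):

    import torch

    x = torch.randn(1, 1, 32, 32)   # one 1-channel 32x32 input
    y = model(x)
    print(y.size())                 # torch.Size([1, 64, 24, 24]) after two 5x5 convolutions
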
    

    Convolution layers

    1D convolution:

    torch.nn.Conv1d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True)
    Input/output size relation:
    L_out = floor((L_in + 2*padding − dilation*(kernel_size − 1) − 1) / stride + 1)

    2D convolution:

    torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True)
    Input/output size relation:
    H_out = floor((H_in + 2*padding[0] − dilation[0]*(kernel_size[0] − 1) − 1) / stride[0] + 1)
    W_out = floor((W_in + 2*padding[1] − dilation[1]*(kernel_size[1] − 1) − 1) / stride[1] + 1)
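
    A quick numerical check of this relation (the layer settings and input size below are arbitrary examples):

    import torch
    import torch.nn as nn

    conv = nn.Conv2d(3, 8, kernel_size=3, stride=2, padding=1, dilation=1)
    x = torch.randn(1, 3, 32, 32)
    print(conv(x).size())   # torch.Size([1, 8, 16, 16]): floor((32 + 2*1 - 1*(3-1) - 1)/2 + 1) = 16
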

    3D convolution:

    torch.nn.Conv3d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True)
    Input/output size relation:
    D_out = floor((D_in + 2*padding[0] − dilation[0]*(kernel_size[0] − 1) − 1) / stride[0] + 1)
    H_out = floor((H_in + 2*padding[1] − dilation[1]*(kernel_size[1] − 1) − 1) / stride[1] + 1)
    W_out = floor((W_in + 2*padding[2] − dilation[2]*(kernel_size[2] − 1) − 1) / stride[2] + 1)

    1D transposed convolution (deconvolution):

    torch.nn.ConvTranspose1d(in_channels,out_channels,kernel_size,stride=1,padding=0,output_padding=0,groups=1,bias=True)
    Input/output size relation:
    L_out = (L_in − 1)*stride − 2*padding + kernel_size + output_padding

    2D transposed convolution:

    torch.nn.ConvTranspose2d(in_channels,out_channels,kernel_size,stride=1,padding=0,output_padding=0,groups=1,bias=True)
    Input/output size relation:
    H_out = (H_in − 1)*stride[0] − 2*padding[0] + kernel_size[0] + output_padding[0]
    W_out = (W_in − 1)*stride[1] − 2*padding[1] + kernel_size[1] + output_padding[1]
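
    A quick numerical check of this relation (arbitrary example settings):

    import torch
    import torch.nn as nn

    deconv = nn.ConvTranspose2d(8, 3, kernel_size=3, stride=2, padding=1, output_padding=1)
    x = torch.randn(1, 8, 16, 16)
    print(deconv(x).size())   # torch.Size([1, 3, 32, 32]): (16-1)*2 - 2*1 + 3 + 1 = 32
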

    3D transposed convolution:

    torch.nn.ConvTranspose3d(in_channels,out_channels,kernel_size,stride=1,padding=0,output_padding=0,groups=1,bias=True)
    Input/output size relation:
    D_out = (D_in − 1)*stride[0] − 2*padding[0] + kernel_size[0] + output_padding[0]
    H_out = (H_in − 1)*stride[1] − 2*padding[1] + kernel_size[1] + output_padding[1]
    W_out = (W_in − 1)*stride[2] − 2*padding[2] + kernel_size[2] + output_padding[2]

    Pooling layers

    1D max pooling:

    torch.nn.MaxPool1d(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False)
    Input/output size relation:
    L_out = floor((L_in + 2*padding − dilation*(kernel_size − 1) − 1) / stride + 1)

    2D max pooling:

    torch.nn.MaxPool2d(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False)
    Input/output size relation:
    H_out = floor((H_in + 2*padding[0] − dilation[0]*(kernel_size[0] − 1) − 1) / stride[0] + 1)
    W_out = floor((W_in + 2*padding[1] − dilation[1]*(kernel_size[1] − 1) − 1) / stride[1] + 1)
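
    A quick numerical check for the 2D case (arbitrary example settings):

    import torch
    import torch.nn as nn

    pool = nn.MaxPool2d(kernel_size=2, stride=2)
    x = torch.randn(1, 8, 32, 32)
    print(pool(x).size())   # torch.Size([1, 8, 16, 16]): floor((32 + 0 - 1*(2-1) - 1)/2 + 1) = 16
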

    3D max pooling:

    torch.nn.MaxPool3d(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False)
    Input/output size relation:
    D_out = floor((D_in + 2*padding[0] − dilation[0]*(kernel_size[0] − 1) − 1) / stride[0] + 1)
    H_out = floor((H_in + 2*padding[1] − dilation[1]*(kernel_size[1] − 1) − 1) / stride[1] + 1)
    W_out = floor((W_in + 2*padding[2] − dilation[2]*(kernel_size[2] − 1) − 1) / stride[2] + 1)

    1D max unpooling:

    torch.nn.MaxUnpool1d(kernel_size, stride=None, padding=0)
    Input/output size relation:
    L_out = (L_in − 1)*stride − 2*padding + kernel_size

    2D max unpooling:

    torch.nn.MaxUnpool2d(kernel_size, stride=None, padding=0)
    Input/output size relation:
    H_out = (H_in − 1)*stride[0] − 2*padding[0] + kernel_size[0]
    W_out = (W_in − 1)*stride[1] − 2*padding[1] + kernel_size[1]

    3D max unpooling:

    torch.nn.MaxUnpool3d(kernel_size, stride=None, padding=0)
    Input/output size relation:
    D_out = (D_in − 1)*stride[0] − 2*padding[0] + kernel_size[0]
    H_out = (H_in − 1)*stride[1] − 2*padding[1] + kernel_size[1]
    W_out = (W_in − 1)*stride[2] − 2*padding[2] + kernel_size[2]
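
    Unpooling needs the indices returned by the matching pooling layer (created with return_indices=True); a minimal 2D sketch:

    import torch
    import torch.nn as nn

    pool = nn.MaxPool2d(2, stride=2, return_indices=True)
    unpool = nn.MaxUnpool2d(2, stride=2)

    x = torch.randn(1, 1, 8, 8)
    y, indices = pool(x)      # y: [1, 1, 4, 4]
    z = unpool(y, indices)    # z: [1, 1, 8, 8], i.e. (4-1)*2 - 0 + 2 = 8 per spatial dim
    print(z.size())
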
    For the other pooling operations, see: https://pytorch-cn.readthedocs.io/zh/latest/package_references/torch-nn/#containers
    Non-linear activation layers
    torch.nn.ReLU(inplace=False): ReLU(x) = max(0, x);
    torch.nn.ReLU6(inplace=False): ReLU6(x) = min(max(0, x), 6);
    torch.nn.ELU(alpha=1.0, inplace=False): ELU(x) = max(0, x) + min(0, alpha*(exp(x) − 1));
    torch.nn.PReLU(num_parameters=1, init=0.25): PReLU(x) = max(0, x) + a*min(0, x), where a is a learnable parameter;
    torch.nn.Threshold(threshold, value, inplace=False): a generalization of ReLU;
    torch.nn.Hardtanh(min_value=-1, max_value=1, inplace=False);
    torch.nn.Sigmoid();
    torch.nn.Tanh();
    torch.nn.LogSigmoid();
    torch.nn.Softplus(beta=1, threshold=20);
    torch.nn.Softshrink(lambd=0.5);
    torch.nn.Softsign();
    torch.nn.Softmin();
    torch.nn.Softmax();
    torch.nn.LogSoftmax();
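
    All of these activations are used the same way: construct the module once and apply it to a tensor. A small example (values arbitrary; recent PyTorch versions expect an explicit dim argument for Softmax):

    import torch
    import torch.nn as nn

    x = torch.tensor([-2.0, -0.5, 0.0, 1.5, 7.0])
    print(nn.ReLU()(x))          # tensor([0.0000, 0.0000, 0.0000, 1.5000, 7.0000])
    print(nn.ReLU6()(x))         # tensor([0.0000, 0.0000, 0.0000, 1.5000, 6.0000])
    print(nn.Softmax(dim=0)(x))  # probabilities along dim 0, summing to 1
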

    Batch normalization (BN) layers

    1D BN layer: torch.nn.BatchNorm1d(num_features, eps=1e-05, momentum=0.1, affine=True);

    2D BN layer: torch.nn.BatchNorm2d(num_features, eps=1e-05, momentum=0.1, affine=True);

    3D BN layer: torch.nn.BatchNorm3d(num_features, eps=1e-05, momentum=0.1, affine=True);
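
    num_features must equal the channel dimension of the input; a minimal 2D example (sizes arbitrary):

    import torch
    import torch.nn as nn

    bn = nn.BatchNorm2d(16)       # expects input of shape (N, 16, H, W)
    x = torch.randn(4, 16, 8, 8)
    y = bn(x)                     # per-channel normalization with learnable affine parameters
    print(y.size())               # torch.Size([4, 16, 8, 8])
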

    Upsampling (interpolation) layers

    2D nearest-neighbor upsampling: torch.nn.UpsamplingNearest2d(size=None, scale_factor=None);

    2D bilinear upsampling: torch.nn.UpsamplingBilinear2d(size=None, scale_factor=None);
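
    Both layers enlarge the spatial dimensions, either to an explicit size or by a scale_factor; a quick sketch (the factor is arbitrary):

    import torch
    import torch.nn as nn

    up = nn.UpsamplingBilinear2d(scale_factor=2)
    x = torch.randn(1, 3, 16, 16)
    print(up(x).size())   # torch.Size([1, 3, 32, 32])
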

    Other important layers

    Fully connected layer: torch.nn.Linear(in_features, out_features, bias=True);

    Dropout layers:

    torch.nn.Dropout(p=0.5, inplace=False)
    torch.nn.Dropout2d(p=0.5, inplace=False)
    torch.nn.Dropout3d(p=0.5, inplace=False)
    Pairwise distance layer: torch.nn.PairwiseDistance(p=2, eps=1e-06);
    L1 loss: torch.nn.L1Loss(size_average=True);
    L2 (MSE) loss: torch.nn.MSELoss(size_average=True);
    Cross-entropy loss: torch.nn.CrossEntropyLoss(weight=None, size_average=True);
    For how to use the loss layers, and for the many other loss functions, see the (Chinese) PyTorch documentation; a short cross-entropy example follows below;
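
    As a usage sketch for the cross-entropy loss (shapes and class count are arbitrary): the input is a tensor of raw scores of shape (N, C) and the target holds one class index per sample.

    import torch
    import torch.nn as nn

    criterion = nn.CrossEntropyLoss()
    logits = torch.randn(4, 10, requires_grad=True)  # N=4 samples, C=10 classes (raw scores)
    target = torch.tensor([1, 0, 9, 3])              # class index for each sample
    loss = criterion(logits, target)
    loss.backward()                                  # gradients flow back into logits
    print(loss.item())
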
    Multi-GPU usage
    Key function: torch.nn.DataParallel(module, device_ids=None, output_device=None, dim=0)
    Explanation: this container parallelizes the given module by splitting the mini-batch across the listed devices. During the forward pass the module is replicated onto each device and each replica processes a slice of the input; during the backward pass the gradients from the replicas are accumulated back into the original module. For concrete usage, see a multi-GPU example post and the sketch below;
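
    A minimal usage sketch (the device ids are an example and assume the machine actually has two GPUs):

    import torch
    import torch.nn as nn

    net = nn.Conv2d(3, 16, 3)                              # any module can be wrapped
    net = nn.DataParallel(net, device_ids=[0, 1]).cuda()   # replicas live on GPUs 0 and 1
    x = torch.randn(8, 3, 32, 32).cuda()
    y = net(x)                                             # the batch of 8 is split across the replicas
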

    torch.nn.functional
    This namespace provides a large number of functional counterparts to the torch.nn modules: the same operations, but exposed as stateless functions rather than layer objects.
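
    For example, F.relu and F.max_pool2d are the stateless counterparts of nn.ReLU and nn.MaxPool2d; they are typically called inside forward() rather than registered as submodules:

    import torch
    import torch.nn.functional as F

    x = torch.randn(1, 3, 8, 8)
    y = F.relu(x)                       # same computation as nn.ReLU()(x)
    z = F.max_pool2d(x, kernel_size=2)  # same computation as nn.MaxPool2d(2)(x)
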
