
torch MSELoss

Author: JeremyL | Published 2020-04-28 20:49

*CLASS* torch.nn.MSELoss(size_average=None, reduce=None, reduction='mean')

torch.nn.functional.mse_loss(input, target, size_average=None, reduce=None, reduction='mean') → Tensor
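The module form and the functional form compute the same value; a minimal sketch of the equivalence (the tensor names are only illustrative):

import torch
import torch.nn as nn
import torch.nn.functional as F

input = torch.randn(3, 5)
target = torch.randn(3, 5)

# nn.MSELoss is the module wrapper around the functional mse_loss
module_loss = nn.MSELoss(reduction='mean')(input, target)
functional_loss = F.mse_loss(input, target, reduction='mean')
assert torch.allclose(module_loss, functional_loss)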


Parameters

  • size_average: defaults to True; the loss is averaged over all elements in the batch. Ignored when reduce is False.
  • reduce: defaults to True; the per-element losses are averaged or summed over the batch, depending on size_average.
    • reduce = False: size_average is ignored and the loss is returned element-wise, with the same shape as the input;
    • reduce = True: the loss is reduced to a scalar:
      • size_average = True: returns loss.mean();
      • size_average = False: returns loss.sum().
  • reduction: 'none' | 'mean' | 'sum', defaults to 'mean'. If size_average or reduce is specified (both are deprecated), reduction is not used, and vice versa; the mapping is shown in the sketch after this list.
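The legacy size_average/reduce pairs map onto the three reduction modes; a small sketch of the relationship (variable names are illustrative):

import torch
import torch.nn as nn

input = torch.randn(3, 5)
target = torch.randn(3, 5)

elementwise = nn.MSELoss(reduction='none')(input, target)  # shape (3, 5), like reduce=False
mean_loss = nn.MSELoss(reduction='mean')(input, target)    # scalar, like reduce=True, size_average=True
sum_loss = nn.MSELoss(reduction='sum')(input, target)      # scalar, like reduce=True, size_average=False

assert torch.allclose(mean_loss, elementwise.mean())
assert torch.allclose(sum_loss, elementwise.sum())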

Inputs

"mse_cpu" not implemented for 'Int'

  • Input: (N,∗) where *∗ means, any number of additional dimensions;input.float()
  • Target: (N,∗) , same shape as the input;Target.float()
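A small sketch of the dtype requirement (the int32 tensors here are just for illustration):

import torch
import torch.nn as nn

loss = nn.MSELoss()
a = torch.randint(0, 10, (3, 5), dtype=torch.int32)
b = torch.randint(0, 10, (3, 5), dtype=torch.int32)

# loss(a, b)  # would raise: RuntimeError: "mse_cpu" not implemented for 'Int'
output = loss(a.float(), b.float())  # works after converting to float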

Examples

import torch
import torch.nn as nn

# reduce=False: element-wise loss, same shape as the input
# (legacy arguments; equivalent to reduction='none')
loss = nn.MSELoss(reduce=False, size_average=False)
input = torch.randn(3, 5)
target = torch.randn(3, 5)
output = loss(input.float(), target.float())
print(output)

tensor([[1.2459e+01, 5.8741e-02, 1.8397e-01, 4.9688e-01, 7.3362e-02],
        [8.0921e-01, 1.8580e+00, 4.5180e+00, 7.5342e-01, 4.1929e-01],
        [2.6371e-02, 1.5204e+00, 1.5778e+00, 1.1634e+00, 9.5338e-03]])
# reduce=True, size_average=True: scalar mean loss
# (equivalent to the default reduction='mean')
loss = nn.MSELoss(reduce=True, size_average=True)
input = torch.randn(3, 5)
target = torch.randn(3, 5)
output = loss(input, target)
print(output)

tensor(1.2368)
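As a check, the scalar above is simply the mean of the element-wise squared differences; reusing input and target from the block above:

manual = ((input - target) ** 2).mean()
print(manual)  # same value as the MSELoss output above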

References

TORCH.NN
torch.nn.functional
