For multi-label classification, the loss function is usually BCELoss or BCEWithLogitsLoss. The two differ as follows:
- BCELoss operates on probabilities, i.e. outputs that have already been passed through Sigmoid
- BCEWithLogitsLoss takes raw logits and fuses the two steps, Sigmoid + BCELoss, into one (see the sketch just below)
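To make the equivalence concrete, here is a minimal sketch; the tensor names logits and labels are placeholders, not from the original example:
import torch
import torch.nn as nn
logits = torch.randn(4, 3)                    # raw scores, one row per sample
labels = torch.randint(0, 2, (4, 3)).float()  # multi-label 0/1 targets
# BCEWithLogitsLoss on logits should match BCELoss on sigmoid(logits)
loss_a = nn.BCEWithLogitsLoss()(logits, labels)
loss_b = nn.BCELoss()(torch.sigmoid(logits), labels)
print(torch.allclose(loss_a, loss_b))         # True (up to floating-point error)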
A concrete worked example:
- Prepare the input tensor input (raw logits):
import torch
import torch.nn as nn
input = torch.tensor([[-0.4089, -1.2471, 0.5907],
                      [-0.4897, -0.8267, -0.7349],
                      [0.5241, -0.1246, -0.4751]])
print(input)
tensor([[-0.4089, -1.2471,  0.5907],
        [-0.4897, -0.8267, -0.7349],
        [ 0.5241, -0.1246, -0.4751]])
- Sigmoid squashes the values into the range (0, 1):
m = nn.Sigmoid()
S_input = m(input)
print(S_input)
tensor([[0.3992, 0.2232, 0.6435],
        [0.3800, 0.3043, 0.3241],
        [0.6281, 0.4689, 0.3834]])
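As a quick check of what nn.Sigmoid computes, the elementwise formula sigmoid(x) = 1 / (1 + exp(-x)) reproduces the same values (a minimal sketch reusing input and S_input from above):
# sigmoid applied elementwise: 1 / (1 + exp(-x))
manual_sigmoid = 1 / (1 + torch.exp(-input))
print(torch.allclose(S_input, manual_sigmoid))  # True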
- Prepare the target tensor target:
target = torch.FloatTensor([[0, 1, 1], [0, 0, 1], [1, 0, 1]])
print(target)
tensor([[0., 1., 1.],
        [0., 0., 1.],
        [1., 0., 1.]])
- Then use BCELoss to compute the loss:
BCELoss = nn.BCELoss()
loss = BCELoss(S_input, target)
print(loss)
tensor(0.7193)
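As an aside, BCELoss averages over all elements by default; passing reduction='none' exposes the per-element losses before averaging (a minimal sketch reusing S_input and target):
# per-element 3x3 loss matrix; its mean is the value above
per_elem = nn.BCELoss(reduction='none')(S_input, target)
print(per_elem.shape)   # torch.Size([3, 3])
print(per_elem.mean())  # tensor(0.7193)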
- BCELoss computes the binary cross-entropy elementwise, loss = -(target * log(p) + (1 - target) * log(1 - p)), and then averages over all elements. The manual implementation below verifies that this reproduces the result above:
loss = 0.0
for i in range(S_input.shape[0]):
    for j in range(S_input.shape[1]):
        loss += -(target[i][j] * torch.log(S_input[i][j]) + (1 - target[i][j]) * torch.log(1 - S_input[i][j]))
print(loss / (S_input.shape[0] * S_input.shape[1]))  # the default reduction is the mean
tensor(0.7193)
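The same computation can be written without Python loops using elementwise tensor ops (equivalent to the loop above):
# vectorized binary cross-entropy, averaged over all elements
loss_vec = -(target * torch.log(S_input) + (1 - target) * torch.log(1 - S_input)).mean()
print(loss_vec)  # tensor(0.7193)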
- BCEWithLogitsLoss simply fuses the Sigmoid step and the log-based loss computation above into a single call:
BCEWithLogitsLoss = nn.BCEWithLogitsLoss()
loss = BCEWithLogitsLoss(input, target)
print(loss)
tensor(0.7193)
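Beyond convenience, the fused version is also preferred for numerical stability. A common stable formulation (a sketch; PyTorch's internal kernel may differ in detail) rewrites the per-element loss for logit x and target y as max(x, 0) - x*y + log(1 + exp(-|x|)):
# numerically stable per-element loss, then the mean
stable = (input.clamp(min=0) - input * target
          + torch.log1p(torch.exp(-input.abs()))).mean()
print(stable)  # tensor(0.7193)
Because exp is only ever applied to non-positive arguments here, the computation cannot overflow for large-magnitude logits.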