Softmax, Cross-Entropy Loss, and Their Derivatives

Author: 迎风漂扬 | Published 2019-04-13 13:18

    Consider a simple neural network whose output layer uses Softmax as its activation function. By definition, the output value of each node in the output layer is:
    y_{j}=\frac{e^{u_{j}}}{\sum_{k} e^{u_{k}}}

    where u_{j} is the input to that node:
    u_{j}=\sum_{i} w_{i j} x_{i}

    x_{i} is the output value of a node in the previous layer and w_{ij} is a weight, so:
    y_{j}=\operatorname{softmax}\left(u_{j}\right)=\frac{e^{u_{j}}}{\sum_{k} e^{u_{k}}}
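
    As a quick illustration, the forward pass above can be sketched in NumPy (a minimal sketch; the layer sizes and values below are made up for illustration and are not from the original post):

    ```python
    import numpy as np

    def softmax(u):
        """Softmax of the output-layer inputs u; subtracting max(u)
        avoids overflow in exp() without changing the result."""
        e = np.exp(u - np.max(u))
        return e / e.sum()

    # Hypothetical sizes: 4 inputs x_i feeding 3 output nodes through w[i, j].
    x = np.array([0.5, -1.2, 0.3, 2.0])
    w = np.random.randn(4, 3)

    u = x @ w          # u_j = sum_i w_ij * x_i
    y = softmax(u)     # y_j = e^{u_j} / sum_k e^{u_k}
    print(y, y.sum())  # positive components that sum to 1
    ```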

    Next, consider the loss function:
    L=-\sum_{j} z_{j} \ln y_{j}

    Here z_{j} is the label value of the training instance:
    Z=\left\{z_{1}, z_{2}, \cdots, z_{n}\right\}
    Since exactly one class is the correct one, only a single component of this vector is 1 and the rest are 0:
    Z=\left\{0,0, \cdots, z_{t}, \cdots\right\}=\{0,0, \cdots, 1, \cdots\}

    where t is the index of the correct class, so:
    L=-\ln y_{t}

    For example, in a three-class task where the correct class is the second one, if the output is [0.3, 0.5, 0.2] the loss is:
    L=-\ln (0.5)=0.693
    If instead the output is [0.4, 0.15, 0.45]:
    L=-\ln (0.15)=1.897
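
    These numbers are easy to verify (a small sketch assuming NumPy; np.log is the natural logarithm):

    ```python
    import numpy as np

    def cross_entropy(y, t):
        """L = -ln y_t, where t is the index of the correct class."""
        return -np.log(y[t])

    print(cross_entropy(np.array([0.3, 0.5, 0.2]), 1))    # 0.693...
    print(cross_entropy(np.array([0.4, 0.15, 0.45]), 1))  # 1.897...
    ```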
    Clearly, the loss is 0 when the output is [0,1,0]. We now use the loss to find the gradient of w_{ij}:
    \frac{\partial L}{\partial w_{i j}}=\frac{\partial L}{\partial u_{j}} \frac{\partial u_{j}}{\partial w_{i j}}=\frac{\partial L}{\partial u_{j}} x_{i}
    \frac{\partial L}{\partial u_{j}}=-\frac{\partial \ln y_{t}}{\partial u_{j}}=-\frac{\partial \ln \frac{e^{u_{t}}}{\sum_{k} e^{u_{k}}}}{\partial u_{j}}
    \ln \frac{e^{u_{t}}}{\sum_{k} e^{u_{k}}}=\ln e^{u_{t}}-\ln \sum_{k} e^{u_{k}}

    Since this is the gradient of L with respect to u_{j}, two cases must be considered. The first is j=t:
    \frac{\partial \ln \frac{e^{u_{t}}}{\sum_{k} e^{u_{k}}}}{\partial u_{j}}=\frac{\partial\left(\ln e^{u_{j}}-\ln \sum_{k} e^{u_{k}}\right)}{\partial u_{j}}=\frac{\partial \ln e^{u_{j}}}{\partial u_{j}}-\frac{\partial \ln \sum_{k} e^{u_{k}}}{\partial u_{j}}=1-\frac{\partial \ln \sum_{k} e^{u_{k}}}{\partial u_{j}}
    \frac{\partial \ln \sum_{k} e^{u_{k}}}{\partial u_{j}}=\frac{1}{\sum_{k} e^{u_{k}}} \frac{\partial \sum_{k} e^{u_{k}}}{\partial u_{j}}=\frac{e^{u_{j}}}{\sum_{k} e^{u_{k}}}=y_{j}

    Therefore:
    \frac{\partial L}{\partial u_{j}}=y_{j}-1
    \frac{\partial L}{\partial w_{i j}}=\frac{\partial L}{\partial u_{j}} \frac{\partial u_{j}}{\partial w_{i j}}=\left(y_{j}-1\right) x_{i}

    When j \neq t, on the other hand, u_{j} does not affect e^{u_{t}}, so:
    \frac{\partial \ln \frac{e^{u_{t}}}{\sum_{k} e^{u_{k}}}}{\partial u_{j}}=\frac{\partial\left(\ln e^{u_{t}}-\ln \sum_{k} e^{u_{k}}\right)}{\partial u_{j}}=-y_{j}
    \frac{\partial L}{\partial u_{j}}=y_{j}
    \frac{\partial L}{\partial w_{i j}}=y_{j} x_{i}
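
    Since z_{t}=1 and z_{j}=0 for j \neq t, the two cases collapse into the single expression \frac{\partial L}{\partial u_{j}}=y_{j}-z_{j}, and therefore \frac{\partial L}{\partial w_{i j}}=\left(y_{j}-z_{j}\right) x_{i}. The sketch below (hypothetical values, assuming NumPy) checks this analytic gradient against a finite-difference approximation:

    ```python
    import numpy as np

    def softmax(u):
        e = np.exp(u - np.max(u))
        return e / e.sum()

    u = np.array([1.0, 2.0, 0.5])   # hypothetical output-layer inputs
    t = 1                           # index of the correct class
    z = np.array([0.0, 1.0, 0.0])   # one-hot label vector

    # Analytic gradient from the derivation: y_j - 1 for j = t, y_j otherwise,
    # which is exactly y - z.
    grad = softmax(u) - z

    # Finite-difference check of dL/du_j, with L = -ln softmax(u)[t].
    eps = 1e-6
    for j in range(len(u)):
        up, dn = u.copy(), u.copy()
        up[j] += eps
        dn[j] -= eps
        num = (np.log(softmax(dn)[t]) - np.log(softmax(up)[t])) / (2 * eps)
        print(j, grad[j], num)  # the two values should agree closely

    # The weight gradient then follows as dL/dw_ij = (y_j - z_j) * x_i,
    # i.e. np.outer(x, grad) for an input vector x.
    ```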
