
Repost: on KL divergence and cross entropy

Author: neroop | Published 2019-01-14 18:35

    KL divergence is a natural way to measure the difference between two probability distributions. The entropy H(p) of a distribution p gives the minimum possible number of bits per message that would be needed (on average) to losslessly encode events drawn from p. Achieving this bound would require using an optimal code designed for p, which assigns shorter code words to higher probability events. D_{KL}(p \parallel q) can be interpreted as the expected number of extra bits per message needed to encode events drawn from true distribution p, if using an optimal code for distribution q rather than p. It has some nice properties for comparing distributions. For example, if p and q are equal, then the KL divergence is 0.
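
    To make the bit-counting picture concrete, here is a minimal NumPy sketch (my own addition, not from the original answer; the function names and the toy distributions are just for illustration) that computes H(p) in bits and D_{KL}(p \parallel q) as the expected extra bits:

```python
import numpy as np

def entropy_bits(p):
    """H(p) in bits: average optimal code length per event drawn from p."""
    p = np.asarray(p, dtype=float)
    nz = p > 0                                # convention: 0 * log 0 = 0
    return -np.sum(p[nz] * np.log2(p[nz]))

def kl_bits(p, q):
    """D_KL(p || q) in bits: expected extra bits when coding p with a code built for q."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    nz = p > 0
    return np.sum(p[nz] * np.log2(p[nz] / q[nz]))

p = np.array([0.5, 0.25, 0.25])
q = np.array([1/3, 1/3, 1/3])
print(entropy_bits(p))    # 1.5 bits
print(kl_bits(p, q))      # positive: extra bits paid for using q's code
print(kl_bits(p, p))      # 0.0: no penalty when q equals p
```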

    The cross entropy H(p,q) can be interpreted as the number of bits per message needed (on average) to encode events drawn from true distribution p, if using an optimal code for distribution q. Note the difference: D_{KL}(p \parallel q) measures the average number of extra bits per message, whereas H(p,q) measures the average number of total bits per message. It's true that, for fixed p, H(p,q) will grow as q becomes increasingly different from p. But, if p isn't held fixed, it's hard to interpret H(p,q) as an absolute measure of the difference, because it grows with the entropy of p.
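
    The cross entropy fits the same kind of sketch; again this is my own illustration with an assumed function name, reusing the toy distributions from above:

```python
import numpy as np

def cross_entropy_bits(p, q):
    """H(p, q) in bits: average total code length when coding events from p with a code built for q."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    nz = p > 0
    return -np.sum(p[nz] * np.log2(q[nz]))

p = np.array([0.5, 0.25, 0.25])
q = np.array([1/3, 1/3, 1/3])
print(cross_entropy_bits(p, q))   # total bits per message under q's code
print(cross_entropy_bits(p, p))   # equals H(p) = 1.5 bits, not zero
```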

    KL divergence and cross entropy are related as:

    D_{KL}(p \parallel q)=H(p,q)−H(p)

    We can see from this expression that, when p and q are equal, the cross entropy is not zero; rather, it's equal to the entropy of p.
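
    A quick numerical check of this identity on the same toy distributions (again a sketch of my own, not part of the original answer):

```python
import numpy as np

p = np.array([0.5, 0.25, 0.25])
q = np.array([1/3, 1/3, 1/3])

H_p  = -np.sum(p * np.log2(p))        # entropy H(p)
H_pq = -np.sum(p * np.log2(q))        # cross entropy H(p, q)
D_kl =  np.sum(p * np.log2(p / q))    # KL divergence D_KL(p || q)

assert np.isclose(D_kl, H_pq - H_p)                  # D_KL(p || q) = H(p, q) - H(p)
assert np.isclose(-np.sum(p * np.log2(p)), H_p)      # H(p, p) = H(p), not zero
assert np.isclose(np.sum(p * np.log2(p / p)), 0.0)   # D_KL(p || p) = 0
```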

    Cross entropy commonly shows up in loss functions in machine learning. In many of these situations, p is treated as the 'true' distribution, and q as the model that we're trying to optimize. For example, in classification problems, the commonly used cross entropy loss (aka log loss) measures the cross entropy between the empirical distribution of the labels (given the inputs) and the distribution predicted by the classifier. The empirical distribution for each data point simply assigns probability 1 to the class of that data point, and 0 to all other classes. Side note: The cross entropy in this case turns out to be proportional to the negative log likelihood, so minimizing it is equivalent to maximizing the likelihood.
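
    As a small sketch of that side note (my own illustration; the probabilities and labels are made-up numbers): with one-hot empirical distributions, the cross entropy reduces to the average negative log probability assigned to the observed class, i.e. the negative log likelihood (here in nats, the usual convention in ML code, rather than bits):

```python
import numpy as np

# Predicted class probabilities for 3 data points over 4 classes (made-up numbers).
probs  = np.array([[0.70, 0.10, 0.10, 0.10],
                   [0.20, 0.50, 0.20, 0.10],
                   [0.05, 0.05, 0.10, 0.80]])
labels = np.array([0, 1, 3])                    # observed ('true') classes

# Cross entropy against the one-hot empirical distribution of the labels.
one_hot = np.eye(4)[labels]
cross_entropy = -np.mean(np.sum(one_hot * np.log(probs), axis=1))

# Negative log likelihood of the observed classes under the model.
nll = -np.mean(np.log(probs[np.arange(len(labels)), labels]))

assert np.isclose(cross_entropy, nll)           # the two losses coincide
print(cross_entropy)
```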

    Note that p (the empirical distribution in this example) is fixed. So, it would be equivalent to say that we're minimizing the KL divergence between the empirical distribution and the predicted distribution. As we can see in the expression above, the two are related by the additive term H(p) (the entropy of the empirical distribution). Because p is fixed, H(p) doesn't change with the parameters of the model, and can be disregarded in the loss function. We might still want to talk about the KL divergence for theoretical/philosophical reasons but, in this case, they're equivalent from the perspective of solving the optimization problem. This may not be true for other uses of cross entropy and KL divergence, where p might vary.
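
    A toy demonstration of that equivalence (my own sketch): sweep a one-parameter family of models q = [theta, 1 - theta] against a fixed empirical p; the cross entropy and the KL divergence differ only by the constant H(p), so the same theta minimizes both:

```python
import numpy as np

p = np.array([0.7, 0.3])                        # fixed empirical distribution
thetas = np.linspace(0.01, 0.99, 99)            # candidate models q = [theta, 1 - theta]

cross_entropy = np.array([-(p[0] * np.log(t) + p[1] * np.log(1 - t)) for t in thetas])
kl_divergence = np.array([p[0] * np.log(p[0] / t) + p[1] * np.log(p[1] / (1 - t)) for t in thetas])

entropy_p = -np.sum(p * np.log(p))              # constant: does not depend on theta

assert np.allclose(cross_entropy - kl_divergence, entropy_p)   # differ by H(p) only
assert np.argmin(cross_entropy) == np.argmin(kl_divergence)    # same optimal model
print(thetas[np.argmin(kl_divergence)])         # ~0.7, matching p
```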

    t-SNE fits a distribution p in the input space. Each data point is mapped into the embedding space, where the corresponding distribution q is fit. The algorithm attempts to adjust the embedding to minimize D_{KL}(p \parallel q). As above, p is held fixed. So, from the perspective of the optimization problem, minimizing the KL divergence and minimizing the cross entropy are equivalent. Indeed, van der Maaten and Hinton (2008) say in section 2: "A natural measure of the faithfulness with which q_{j|i} models p_{j|i} is the Kullback-Leibler divergence (which is in this case equal to the cross-entropy up to an additive constant)."
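
    For reference, a tiny sketch of that objective (not the authors' code; the matrices below are made-up joint probabilities, whereas the quoted passage refers to the conditional p_{j|i} and q_{j|i}): t-SNE keeps P fixed and adjusts the embedding, from which Q is recomputed, so as to shrink the KL cost:

```python
import numpy as np

def kl_cost(P, Q, eps=1e-12):
    """sum_ij P_ij * log(P_ij / Q_ij): the KL-style cost minimized over the embedding."""
    P = np.maximum(P, eps)                      # guard against log(0)
    Q = np.maximum(Q, eps)
    return np.sum(P * np.log(P / Q))

# Toy pairwise similarity matrices for 3 points (diagonals are zero by convention).
P = np.array([[0.0, 0.3, 0.2],
              [0.3, 0.0, 0.1],
              [0.2, 0.1, 0.0]])
P = P / P.sum()                                 # fixed: built from the input space

Q = np.full((3, 3), 1 / 6)
np.fill_diagonal(Q, 0.0)
Q = Q / Q.sum()                                 # would be recomputed from the embedding

print(kl_cost(P, Q))                            # >= 0; the embedding is adjusted to shrink this
print(kl_cost(P, P))                            # 0.0 when Q matches P
```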

    van der Maaten, L., & Hinton, G. (2008). Visualizing data using t-SNE. Journal of Machine Learning Research, 9, 2579–2605.

    Original source: StackExchange question #265966.
