word2vec comprises two model architectures: CBOW (Continuous Bag-of-Words Model) and Skip-gram (Continuous Skip-gram Model), as shown in the figure below. Both are framed as word-prediction tasks: CBOW predicts P(w|Context(w)), while Skip-gram predicts P(Context(w)|w). When this prediction task is jointly optimized over every word in the vocabulary, the resulting word vectors are exactly the representation we want.
![](https://img.haomeiwen.com/i9608551/9b2c3a2881181197.png)
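As a quick worked-equation summary of the two tasks (standard formulation; the notation here is mine, not taken from the figure): over a corpus $\mathcal{C}$, the two models maximize the log-likelihoods

```latex
% CBOW: predict the center word w from its context
\mathcal{L}_{\mathrm{CBOW}} = \sum_{w \in \mathcal{C}} \log p\bigl(w \mid \mathrm{Context}(w)\bigr)

% Skip-gram: predict each context word from the center word w
\mathcal{L}_{\mathrm{SG}} = \sum_{w \in \mathcal{C}} \log p\bigl(\mathrm{Context}(w) \mid w\bigr)
```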
word2vec offers two computational schemes specifically for speeding up training: Hierarchical Softmax and Negative Sampling.
This post only presents the mathematical derivations of the models; for the remaining details, see peghoty's word2vec 中的数学, which is also the reference I learned from.
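Before the derivations, a minimal sketch of how these four combinations surface in practice, assuming gensim 4.x (the toy sentences are illustrative only; older gensim versions use different parameter names):

```python
from gensim.models import Word2Vec

# Tiny illustrative corpus: a list of tokenized sentences.
sentences = [["the", "quick", "brown", "fox"],
             ["jumps", "over", "the", "lazy", "dog"]]

# sg=0 -> CBOW, sg=1 -> Skip-gram
# hs=1 -> Hierarchical Softmax; hs=0 with negative=k -> Negative Sampling with k samples
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1,
                 sg=1, hs=0, negative=5)

print(model.wv["fox"])  # the learned 50-dimensional vector for "fox"
```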
Hierarchical Softmax with Continuous Bag-of-Words Model
![](https://img.haomeiwen.com/i9608551/9422833f31693a44.jpg)
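As a companion to the handwritten derivation above, here is the core factorization in peghoty's notation, given as a sketch (the image remains the authoritative derivation): $\mathbf{x}_w$ is the sum of the context word vectors, $l^w$ the length of $w$'s Huffman path, $d_j^w \in \{0,1\}$ the $j$-th code bit, and $\theta_{j-1}^w$ the vector of the corresponding inner node.

```latex
p\bigl(w \mid \mathrm{Context}(w)\bigr)
  = \prod_{j=2}^{l^w} p\bigl(d_j^w \mid \mathbf{x}_w,\, \theta_{j-1}^w\bigr),
\qquad
p\bigl(d_j^w \mid \mathbf{x}_w,\, \theta_{j-1}^w\bigr)
  = \bigl[\sigma(\mathbf{x}_w^{\top}\theta_{j-1}^w)\bigr]^{1-d_j^w}
    \bigl[1-\sigma(\mathbf{x}_w^{\top}\theta_{j-1}^w)\bigr]^{d_j^w}
```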
Hierarchical Softmax with Continuous Skip-gram Model
![](https://img.haomeiwen.com/i9608551/a849676592ecc953.jpg)
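Likewise for Skip-gram with Hierarchical Softmax, a sketch in the same notation: $\mathbf{v}(w)$ is the input vector of the center word, and each context word $u$ is scored along its own Huffman path of length $l^u$.

```latex
p\bigl(\mathrm{Context}(w) \mid w\bigr)
  = \prod_{u \in \mathrm{Context}(w)} p(u \mid w),
\qquad
p(u \mid w)
  = \prod_{j=2}^{l^u}
    \bigl[\sigma(\mathbf{v}(w)^{\top}\theta_{j-1}^u)\bigr]^{1-d_j^u}
    \bigl[1-\sigma(\mathbf{v}(w)^{\top}\theta_{j-1}^u)\bigr]^{d_j^u}
```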
Negative Sampling with Continuous Bag-of-Words Model
![](https://img.haomeiwen.com/i9608551/1a0d7fd737b39752.jpg)
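For CBOW with Negative Sampling, a sketch of the per-word objective in peghoty's notation: $\mathrm{NEG}(w)$ is the set of sampled negative words, $L^w(u) = 1$ if $u = w$ and $0$ otherwise, and $\theta^u$ is the auxiliary vector of word $u$.

```latex
g(w) = \prod_{u \in \{w\} \cup \mathrm{NEG}(w)}
       \bigl[\sigma(\mathbf{x}_w^{\top}\theta^{u})\bigr]^{L^{w}(u)}
       \bigl[1-\sigma(\mathbf{x}_w^{\top}\theta^{u})\bigr]^{1-L^{w}(u)}
```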
Negative Sampling with Continuous Skip-gram Model
![](https://img.haomeiwen.com/i9608551/ee82de712a870565.jpg)
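For Skip-gram with Negative Sampling, a sketch of the standard per-pair objective from Mikolov et al. (it may differ notationally from the handwritten derivation): for a center word $w$ and context word $u$, with $k$ negatives $z_i$ drawn from a noise distribution $P_n(z)$, the model maximizes

```latex
\log \sigma\bigl(\mathbf{v}'(u)^{\top}\mathbf{v}(w)\bigr)
  + \sum_{i=1}^{k} \mathbb{E}_{z_i \sim P_n(z)}
    \bigl[\log \sigma\bigl(-\mathbf{v}'(z_i)^{\top}\mathbf{v}(w)\bigr)\bigr]
```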
References
Tomas Mikolov et al., Efficient Estimation of Word Representations in Vector Space
peghoty, word2vec 中的数学