Hands-On Language Model Training with TensorFlow

Author: 风驰电掣一瓜牛 | Published 2017-05-11 13:44

    Experiment 1: The PTB Dataset

    Tutorial: https://www.tensorflow.org/versions/r0.12/tutorials/recurrent/

    Data: http://www.fit.vutbr.cz/~imikolov/rnnlm/simple-examples.tgz

    After downloading and extracting, ./simple-examples/data contains the following files:

    README
    ptb.char.test.txt
    ptb.char.train.txt
    ptb.char.valid.txt
    ptb.test.txt
    ptb.train.txt
    ptb.valid.txt
    

    The ptb.*.txt files share one format: one sentence per line, with words separated by spaces. They serve as the training, validation, and test sets.

    The ptb.char.*.txt files likewise share one format: characters are separated by spaces, and words are separated by "_".
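
    As a quick sanity check, you can load the word-level data yourself. Below is a minimal sketch of what the tutorial's reader.py does (the "<eos>" convention is taken from that file; the path assumes the extraction location above):

    import collections

    # The tutorial's reader treats each newline as an end-of-sentence
    # token "<eos>" before splitting on whitespace.
    with open("./simple-examples/data/ptb.train.txt") as f:
        words = f.read().replace("\n", "<eos>").split()

    # Build a frequency-ordered vocabulary; PTB is preprocessed to a
    # fixed 10,000-word vocabulary.
    counter = collections.Counter(words)
    vocab = {w: i for i, (w, _) in enumerate(counter.most_common())}
    print(len(vocab), "word types,", len(words), "tokens")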

    Code: https://github.com/tensorflow/models/blob/master/tutorials/rnn/ptb/ptb_word_lm.py

    Run the code:

    cd models/tutorials/rnn/ptb
    python ptb_word_lm.py --data_path=./simple-examples/data/ --model medium
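
    The --model medium flag selects the MediumConfig class in ptb_word_lm.py. Its key values, as I recall them from the r0.12 tutorial code (check the script itself for the authoritative numbers):

    class MediumConfig(object):
        """Medium config (values recalled from the tutorial; verify against the script)."""
        init_scale = 0.05
        learning_rate = 1.0
        max_grad_norm = 5
        num_layers = 2
        num_steps = 35
        hidden_size = 650
        max_epoch = 6        # epochs at the initial learning rate
        max_max_epoch = 39   # total epochs, matching the 39-epoch run below
        keep_prob = 0.5      # dropout keep probability
        lr_decay = 0.8
        batch_size = 20
        vocab_size = 10000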
    

    Training runs for 39 epochs; the output from the final two epochs:

    Epoch: 38 Learning rate: 0.001
    0.008 perplexity: 53.276 speed: 8650 wps 
    0.107 perplexity: 47.396 speed: 8614 wps 
    0.206 perplexity: 49.082 speed: 8635 wps 
    0.306 perplexity: 48.002 speed: 8643 wps 
    0.405 perplexity: 47.800 speed: 8646 wps 
    0.505 perplexity: 47.917 speed: 8649 wps 
    0.604 perplexity: 47.110 speed: 8650 wps 
    0.704 perplexity: 47.361 speed: 8651 wps 
    0.803 perplexity: 46.620 speed: 8652 wps 
    0.903 perplexity: 45.850 speed: 8652 wps 
    Epoch: 38 Train Perplexity: 45.906
    Epoch: 38 Valid Perplexity: 88.246
    Epoch: 39 Learning rate: 0.001
    0.008 perplexity: 52.994 speed: 8653 wps 
    0.107 perplexity: 47.077 speed: 8655 wps 
    0.206 perplexity: 48.910 speed: 8493 wps 
    0.306 perplexity: 48.088 speed: 8545 wps 
    0.405 perplexity: 47.966 speed: 8573 wps 
    0.505 perplexity: 47.977 speed: 8589 wps 
    0.604 perplexity: 47.122 speed: 8601 wps 
    0.704 perplexity: 47.305 speed: 8609 wps 
    0.803 perplexity: 46.564 speed: 8615 wps 
    0.903 perplexity: 45.826 speed: 8620 wps 
    Epoch: 39 Train Perplexity: 45.873
    Epoch: 39 Valid Perplexity: 88.185
    Test Perplexity: 83.922
    

    Training took about 70 minutes on a Tesla M40 (24 GB).
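
    The reported perplexity is just the exponential of the average per-word cross-entropy, which is how ptb_word_lm.py computes it, and the final 0.001 learning rate falls out of the medium config's decay schedule. A sketch of both calculations:

    import numpy as np

    # ptb_word_lm.py accumulates the total cost and the number of
    # predicted words over an epoch and reports np.exp(costs / iters).
    def perplexity(costs, iters):
        return np.exp(costs / iters)

    # Medium config: learning_rate=1.0, lr_decay=0.8, max_epoch=6, so at
    # printed epoch 38 the rate is 0.8**(38 - 6) ~= 0.0008, which the
    # script displays to three decimals as 0.001.
    print(1.0 * 0.8 ** (38 - 6))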

    Other references:
    http://www.cnblogs.com/edwardbi/p/5554353.html

    Experiment 2: Char-RNN

    Code and tutorial: https://github.com/sherjilozair/char-rnn-tensorflow

    Training data: the complete Sherlock Holmes stories (download link in the original post)

    The download is a plain-text file of 66,766 lines. Following the tutorial, put it under ./data/sherlock and rename it input.txt.

    Goal: train a language model, then sample sentences from it.

    Training

    python train.py --data_dir=./data/sherlock > 1.log 2>&1 &
    

    There are many tunable parameters; checkpoints are saved under ./save by default. Training took about 1 hour 22 minutes in total.

    The default is 50 epochs, but with the default parameters I found that the training loss stopped decreasing after roughly 10 epochs, so you can shorten the run by passing --num_epochs 10 (an example invocation is sketched below).
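
    For example, a run that bumps the network size and uses the shorter schedule might look like this (flag names as defined in the repo's train.py; the values here are just illustrative):

    python train.py --data_dir=./data/sherlock --rnn_size 256 --num_layers 2 \
                    --seq_length 50 --batch_size 50 --num_epochs 10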

    Sampling

    python sample.py --save_dir ./save -n 100
    

    This outputs 100 characters:

    Sample 1 (whitespace as generated)
       very occasion I could never see, this people, for if Lestrade to the Fingers for me. These pinded
    Sample 2 (whitespace as generated)
       CHAPTER V CORA" 2I Uppard in his leggy. You will give she.
    
         "But you
         remember that
    Sample 3 (whitespace as generated)
       CHAPTEBENII
         But the pushfuit who had honour had danger with such an instrumented. This sprang
    

    The sentences are not very fluent, but the words themselves are mostly real words.
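
    The repo's sample.py also accepts a --prime flag to seed generation with some starting text, which tends to keep the output more on topic; for example:

    python sample.py --save_dir ./save -n 100 --prime "Sherlock Holmes "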

    To improve the results further, you could clean the corpus so that every input line is a complete sentence (a rough sketch follows), and try different model parameters.
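
    A minimal cleaning pass might re-join the e-text's hard-wrapped lines into one paragraph per line before training. A sketch, assuming blank lines separate paragraphs (adjust to the actual file's layout):

    # Merge consecutive non-blank lines into single paragraph lines.
    paragraphs, current = [], []
    with open("./data/sherlock/input.txt") as f:
        for line in f:
            line = line.strip()
            if line:
                current.append(line)
            elif current:
                paragraphs.append(" ".join(current))
                current = []
    if current:
        paragraphs.append(" ".join(current))

    with open("./data/sherlock/input_clean.txt", "w") as f:
        f.write("\n".join(paragraphs))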

    Even more worth trying, though, is Chinese data; next time I'll look for a Chinese novel to train on.


      Comments

      • a87693e1609a: How do I actually use the model once it's trained?
        I'm a beginner (●'◡'●)
        风驰电掣一瓜牛: @喵_喵喵 A language model computes the probability of a sentence. For concrete applications, look at machine translation or speech recognition; in machine translation, for example, the probability of a translated sentence = language-model probability * translation-model probability.
