An essential step in building a chatbot is training word vectors: whether the bot is generative or retrieval-based, its text has to be turned into word vectors first. The most popular word-vector training model right now is word2vec, so in this post we will train word vectors on the Chinese Wikipedia dump.
4. Converting traditional Chinese to simplified Chinese
The previous post covered extracting the articles from the XML dump. The next step is to convert traditional characters to simplified ones, for which we use the OpenCC tool. First download OpenCC from: https://bintray.com/package/files/byvoid/opencc/OpenCC
Once the download is unpacked, run the conversion with:
opencc -i wiki.zh.text -o wiki.zh.jian.text -c t2s.json
The command produces wiki.zh.jian.text, in which all traditional characters have been converted to their simplified forms.
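If you prefer to stay inside Python, OpenCC also has Python bindings. Below is a minimal sketch assuming the opencc-python-reimplemented package (pip install opencc-python-reimplemented); the official opencc wheel exposes a nearly identical API.
from opencc import OpenCC

# 't2s' is the built-in configuration that maps traditional to simplified Chinese
cc = OpenCC('t2s')

with open('wiki.zh.text', encoding='utf8') as src, \
        open('wiki.zh.jian.text', 'w', encoding='utf8') as dst:
    for line in src:
        dst.write(cc.convert(line))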
5. Segmenting the articles
We use the jieba tokenizer to segment the articles. The code is as follows:
import codecs
import jieba

# Read the simplified-character dump line by line, segment each line with jieba,
# and write the space-separated tokens to a new file.
f = codecs.open('wiki.zh.jian.text', 'r', encoding='utf8')
target = codecs.open('wiki.zh.jian.seg.txt', 'w', encoding='utf8')
print('open files')

line_num = 1
line = f.readline()
while line:
    print('---- processing', line_num, 'article ----------------')
    line_seg = " ".join(jieba.cut(line))
    target.write(line_seg)
    line_num = line_num + 1
    line = f.readline()

f.close()
target.close()
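Segmenting the whole dump in a single process is slow. On POSIX systems jieba can run its segmentation in parallel; a minimal sketch (the worker count of 4 is only an example), to be called before the loop above:
import jieba

# Enable jieba's multiprocessing mode (POSIX only, not available on Windows);
# the argument is the number of worker processes.
jieba.enable_parallel(4)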
6. Training the word vectors
Now we can train the word vectors. The code is as follows:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import logging
import os
import sys
import multiprocessing

from gensim.models import Word2Vec
from gensim.models.word2vec import LineSentence

if __name__ == '__main__':
    program = os.path.basename(sys.argv[0])
    logger = logging.getLogger(program)
    logging.basicConfig(format='%(asctime)s: %(levelname)s: %(message)s')
    logging.root.setLevel(level=logging.INFO)
    logger.info("running %s" % ' '.join(sys.argv))

    # check and process input arguments
    if len(sys.argv) < 4:
        print("Usage: python train_word2vec_model.py <segmented_corpus> <model_output> <vector_output>")
        sys.exit(1)
    inp, outp1, outp2 = sys.argv[1:4]

    # train 400-dimensional word vectors over the segmented corpus (CBOW by default)
    model = Word2Vec(LineSentence(inp), size=400, window=5, min_count=5,
                     workers=multiprocessing.cpu_count(), iter=100)

    # trim unneeded model memory = use (much) less RAM
    # model.init_sims(replace=True)
    model.save(outp1)
    model.wv.save_word2vec_format(outp2, binary=False)
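Note that size and iter are the parameter names used by gensim 3.x; in gensim 4.0 and later they were renamed to vector_size and epochs, so on a newer install the constructor call would look roughly like this:
# gensim >= 4.0 renamed 'size' to 'vector_size' and 'iter' to 'epochs'
model = Word2Vec(LineSentence(inp), vector_size=400, window=5, min_count=5,
                 workers=multiprocessing.cpu_count(), epochs=100)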
Start the training with the command:
python train_word2vec_model.py wiki.zh.jian.seg.txt wiki.zh.text.model wiki.zh.text.vector
and the training log will start streaming to the console.
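Once training finishes, a quick sanity check that the model was saved correctly might look like the sketch below (the query word 足球 is only an example; any frequent word will do):
from gensim.models import Word2Vec

# Reload the saved model and confirm the vector size and a nearest-neighbour query
model = Word2Vec.load('wiki.zh.text.model')
print(model.wv.vector_size)           # should print 400
print(model.wv.most_similar('足球'))  # nearest neighbours of an example word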
That's all for today; in the next post we'll explore the word2vec training results together.