Python Word Cloud

Author: 夏林的每个蓝天 | Published 2017-02-07 15:20

    I rarely keep notes on what I learn, but I'm forgetful and have only just picked up Python, so I'm writing this down for future reference and reflection.

    This post uses Python's jieba segmenter together with wordcloud to visualize words: stop words are removed and word frequencies are computed by hand. Jieba's built-in keyword extraction could be used instead (shown at the end).

    Reference links — jieba: https://github.com/fxsjy/jieba

    wordcloud: https://github.com/amueller/word_cloud
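
    To make the rest concrete, here is a minimal sketch of what jieba segmentation produces (the sample sentence is made up for illustration; Python 2 is assumed, as in the rest of the post):

    # -*- coding: utf-8 -*-
    import jieba

    # jieba.cut returns a generator of unicode tokens
    sample = u'我在学习用Python做词云'
    print('/'.join(jieba.cut(sample)))   # e.g. 我/在/学习/用/Python/做/词云 (actual split may differ)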

    # import the packages we need

    import pandas as pd

    import numpy as np

    import jieba

    import jieba.analyse

    from wordcloud import WordCloud

    import os

    import matplotlib.pyplot as plt

    import matplotlib

    matplotlib.style.use('ggplot')

    %matplotlib inline

    # the file to process is on drive D; it has a dataframe-like structure

    os.chdir('D:\\')

    content = pd.read_csv('dataframe.csv', dtype='object')

    # join every value in the keyword column into one string for easier processing

    action = ''

    for kw in content['keyword']:

        action += kw.strip() + ' '

    stopwords = open('stopword.txt').read().strip().splitlines()

    seg = jieba.cut(action)

    seg = ' '.join(seg).split()      # after segmentation every token is unicode

    words = ''

    for word in seg:

        word = word.encode('utf-8')      # the stop words file is utf-8, so encode each token to utf-8 as well

        if word not in stopwords:

            words += word.strip() + ' '

    words = words.decode('utf-8')

    At this point you could already draw the word cloud directly with wordcloud's generate() function; a sketch of that shortcut follows below.
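
    A minimal sketch (the font path below is a placeholder; any TTF font that covers Chinese characters is assumed):

    # let WordCloud do the counting itself; generate() takes a whitespace-separated string
    wc = WordCloud(font_path='msyh.ttf',       # placeholder: point at a Chinese-capable font
                   background_color='white',
                   max_words=200).generate(words)
    plt.imshow(wc)
    plt.axis('off')
    plt.show()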

    # compute word frequencies

    words = words.split()

    word_freq = {}

    for word in words:

        if word in word_freq:

            word_freq[word] += 1

        else:

            word_freq[word] = 1

    # sort by frequency: turn the dict into a list of (word, freq) tuples

    sort_word = []

    for word,freq in word_freq.items():

        sort_word.append((word,freq))

    sorted_word = sorted(sort_word, key=lambda x: x[1], reverse=True)
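
    (As an aside, the counting and sorting above can also be done with collections.Counter from the standard library; an equivalent sketch:)

    from collections import Counter

    # most_common() returns (word, freq) pairs already sorted by descending frequency
    sorted_word = Counter(words).most_common()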

    # inspect the 100 most frequent words

    for word in sorted_word[:100]:

        print word[0],word[1]

    # many results are single characters, so keep only words longer than one character

    lengther = []

    for word in sorted_word:

        if len(word[0]) > 1:

            lengther.append(word)

    # draw the word cloud

    wordcloud1 = WordCloud(font_path='..matplotlib\\mpl-data\\fonts\\ttf\\msyh.ttf',
                           background_color='white', max_words=200,
                           stopwords=stopwords).generate_from_frequencies(dict(lengther))

    plt.imshow(wordcloud1)

    plt.axis('off')

    plt.show()

    jieba's automatic keyword extraction (tf-idf, textrank):

    tfidf = jieba.analyse.extract_tags(action, topK=200, withWeight=True)

    textrank = jieba.analyse.textrank(action, topK=200, withWeight=True)

    The plotting part is omitted since it is the same as above; a sketch using the tf-idf weights follows.
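
    For completeness, a sketch of that omitted step (extract_tags with withWeight=True returns (word, weight) pairs; the font path is the same placeholder assumption as above):

    # turn the weighted keywords into a dict and reuse generate_from_frequencies
    wordcloud2 = WordCloud(font_path='msyh.ttf',      # placeholder font path
                           background_color='white',
                           max_words=200).generate_from_frequencies(dict(tfidf))
    plt.imshow(wordcloud2)
    plt.axis('off')
    plt.show()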

    Note: this article is original; please credit the source when reposting.
