
Gensim Tutorials

By chaaffff | Published 2017-09-20 09:28

    Preliminaries

    All the examples can be directly copied to your Python interpreter shell. IPython’s cpaste command is especially handy for copy-pasting code fragments, including the leading >>> characters.

    Gensim uses Python’s standardloggingmodule to log various stuff at various priority levels; to activate logging (this is optional), run

    import logging
    logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)

    Quick Example

    First, let’s import gensim and create a small corpus of nine documents and twelve features[1]:

    from gensim import corpora, models, similarities

    corpus = [[(0, 1.0), (1, 1.0), (2, 1.0)],
              [(2, 1.0), (3, 1.0), (4, 1.0), (5, 1.0), (6, 1.0), (8, 1.0)],
              [(1, 1.0), (3, 1.0), (4, 1.0), (7, 1.0)],
              [(0, 1.0), (4, 2.0), (7, 1.0)],
              [(3, 1.0), (5, 1.0), (6, 1.0)],
              [(9, 1.0)],
              [(9, 1.0), (10, 1.0)],
              [(9, 1.0), (10, 1.0), (11, 1.0)],
              [(8, 1.0), (10, 1.0), (11, 1.0)]]

    In gensim, a corpus is simply an object which, when iterated over, returns its documents represented as sparse vectors. In this case we’re using a list of lists of tuples. If you’re not familiar with the vector space model, we’ll bridge the gap between raw strings, corpora and sparse vectors in the next tutorial on Corpora and Vector Spaces.

    If you’re familiar with the vector space model, you’ll probably know that the way you parse your documents and convert them to vectors has a major impact on the quality of any subsequent applications.

    In this example, the whole corpus is stored in memory, as a Python list. However, the corpus interface only dictates that a corpus must support iteration over its constituent documents. For very large corpora, it is advantageous to keep the corpus on disk, and access its documents sequentially, one at a time. All the operations and transformations are implemented in such a way that makes them independent of the size of the corpus, memory-wise.
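    Because only iteration is required, a corpus can just as well stream its documents from disk. Here is a minimal sketch of such a streamed corpus (the file name mycorpus.txt and the one-document-per-line, feature_id:weight text format are assumptions made for illustration, not part of gensim’s API):

    class MyStreamedCorpus:
        """Yield one sparse document vector at a time, never holding the whole corpus in memory."""
        def __init__(self, path):
            self.path = path

        def __iter__(self):
            with open(self.path) as f:
                for line in f:
                    # each line holds one document as "feature_id:weight" tokens, e.g. "0:1.0 4:2.0"
                    yield [(int(fid), float(weight))
                           for fid, weight in (token.split(':') for token in line.split())]

    corpus_from_disk = MyStreamedCorpus('mycorpus.txt')  # hypothetical file

    Any object with such an __iter__ method can be passed to gensim wherever a corpus is expected.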

    Next, let’s initialize a transformation:

    tfidf_model = models.TfidfModel(corpus)

    A transformation is used to convert documents from one vector representation into another:

    vec = [(0, 1), (4, 1)]
    print(tfidf_model[vec])  # note the square brackets here
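    With gensim’s default settings, this should print approximately:

    [(0, 0.8075244), (4, 0.5898342)]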

    Here, we used Tf-Idf, a simple transformation which takes documents represented as bag-of-words counts and applies a weighting which discounts common terms (or, equivalently, promotes rare terms). It also scales the resulting vector to unit length (in the Euclidean norm).
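    To see where those numbers come from, here is a back-of-the-envelope check. It assumes gensim’s default global weight, idf = log2(num_docs / doc_freq), followed by Euclidean normalization; treat the formula as an assumption about the library’s defaults rather than a guarantee:

    import math
    num_docs = 9
    doc_freq = {0: 2, 4: 3}  # feature 0 occurs in 2 of the 9 documents, feature 4 in 3
    raw = {f: 1.0 * math.log2(num_docs / df) for f, df in doc_freq.items()}  # tf * idf with tf = 1
    norm = math.sqrt(sum(w * w for w in raw.values()))  # Euclidean length, used to scale to unit length
    print({f: round(w / norm, 7) for f, w in raw.items()})
    # {0: 0.8075244, 4: 0.5898342} -- matching the transformed vector above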

    Transformations are covered in detail in the tutorial on Topics and Transformations.

    To transform the whole corpus via TfIdf and index it, in preparation for similarity queries:

    index = similarities.SparseMatrixSimilarity(tfidf_model[corpus], num_features=12)

    and to query the similarity of our query vector against every document in the corpus:

    sims = index[tfidf_model[vec]]
    print(list(enumerate(sims)))

    [(0, 0.4662244), (1, 0.19139354), (2, 0.24600551), (3, 0.82094586), (4, 0.0), (5, 0.0), (6, 0.0), (7, 0.0), (8, 0.0)]

    How to read this output? Document number zero (the first document) has a similarity score of 0.466 = 46.6%, the second document has a similarity score of 19.1%, etc.

    Thus, according to the TfIdf document representation and the cosine similarity measure, the document most similar to our query vec is document no. 3, with a similarity score of 82.1%. Note that in the TfIdf representation, any documents which do not share any features with the query at all (documents no. 4–8) get a similarity score of 0.0. See the Similarity Queries tutorial for more detail.
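    If you would rather see the hits ranked by relevance than listed in document order, sorting the scores is a one-liner (a small convenience sketch, not part of the original example):

    ranked = sorted(enumerate(sims), key=lambda item: -item[1])  # highest similarity first
    print(ranked)  # document no. 3 (score ~0.821) comes out on top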
