Similarity Queries for Security

Author: blade_he | Published: 2018-05-17 10:59

    Introduction to Gensim

    Gensim is a free Python library designed to automatically extract semantic topics from documents, as efficiently (computer-wise) and painlessly (human-wise) as possible.

    Gensim is designed to process raw, unstructured digital texts (“plain text”). The algorithms in gensim, such as Latent Semantic Analysis, Latent Dirichlet Allocation and Random Projections discover semantic structure of documents by examining statistical co-occurrence patterns of the words within a corpus of training documents. These algorithms are unsupervised, which means no human input is necessary – you only need a corpus of plain text documents.

    Once these statistical patterns are found, any plain text documents can be succinctly expressed in the new, semantic representation and queried for topical similarity against other documents.
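
    As a quick illustration of that idea, here is a minimal, self-contained sketch of the usual gensim workflow (the toy corpus below is hypothetical and not part of the original POC):

    from gensim import corpora, models, similarities

    # Toy corpus: three tiny "documents" (hypothetical, for illustration only)
    texts = [
        ['human', 'computer', 'interaction'],
        ['graph', 'minors', 'survey'],
        ['human', 'system', 'survey'],
    ]

    dictionary = corpora.Dictionary(texts)              # map each word to an integer id
    corpus = [dictionary.doc2bow(t) for t in texts]     # bag-of-words vectors

    lsi = models.LsiModel(corpus, id2word=dictionary, num_topics=2)  # semantic (LSI) space
    index = similarities.MatrixSimilarity(lsi[corpus])               # similarity index

    query = dictionary.doc2bow('human computer survey'.split())
    print(list(index[lsi[query]]))  # cosine similarity of the query against each document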

    Flowchart Diagram

    (Original flowchart diagram; there is no corresponding diagram on the Gensim official website.)


    [Figure: flowchart of Similarity Queries for Security Name by Gensim]

    Code Example

    Train data sample:
    F1234567OX~Undrly Alba (Crus) Gth Prop 2 Life~Undrly Alba (Crus) Gth Prop 2 Life
    F7654321OY~Undrly Alba (Crus) Mixed Pen~Undrly Alba (Crus) Mixed Pen
    FABCDEF9P0~Undrly Alba (Crus) Nth Am Pen~Undrly Alba (Crus) Nth Am Pen
    FFEDCBA9P4~Undrly Alba (Crus) Secure Inc Pen~Undrly Alba (Crus) Secure Inc Pen
    F1234567P5~Undrly Alba (Crus) UK Pen~Undrly Alba (Crus) UK Pen
    Each line has the format: security id~security name~security legal name.
    The code splits every line on the '~' character and uses only the security legal name to build the dictionary and the model.

    # stoplist and securitynamepath are defined in the full script below
    stoplist = set('for a of the and to in'.split())
    print('Begin read data source')
    data_train = []
    for security in open(securitynamepath, encoding='utf-8'):
        if len(security.split('~')) == 3:
            data_train.append([word for word in security.split('~')[2].lower().split()
                               if word not in stoplist])
    print('End read data source')
    

    To compute the similarity of security names, the POC applies the TF-IDF algorithm to build the model.
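
    As a rough sketch of what that weighting does (the toy documents and the tfidf helper below are hypothetical, and the formula reflects the default behaviour of gensim's TfidfModel as far as I know: raw term frequency times log2(N/df), followed by length normalization):

    import math

    # Hypothetical toy documents: tokenized security names
    docs = [
        ['undrly', 'alba', 'laspen', 'property', 'pp'],
        ['undrly', 'alba', 'laspen', 'uk', 'equity', 'pp'],
        ['undrly', 'alba', 'mixed', 'pen'],
    ]
    N = len(docs)

    # Document frequency of each term
    df = {}
    for doc in docs:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1

    def tfidf(doc):
        # Term counts weighted by IDF, then normalized to unit length
        counts = {t: doc.count(t) for t in set(doc)}
        weights = {t: c * math.log2(N / df[t]) for t, c in counts.items()}
        norm = math.sqrt(sum(w * w for w in weights.values())) or 1.0
        return {t: w / norm for t, w in weights.items()}

    # Terms that occur in every document ('undrly', 'alba') get weight 0,
    # so rarer terms like 'property' dominate the similarity.
    print(tfidf(docs[0]))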

    The sample code is less than 100 lines.
    After the dictionary and the model have been initialized as shown below, getting a query result takes less than one second.

    import time
    from gensim import corpora, models, similarities
    from collections import defaultdict
    import os
     
    dictpath = './data/model/security.dict'
    modelpath = './data/model/security.mm'
    securitynamepath = './data/security/securityname.txt'
    start = time.time()
    alltext = [security for security in open(securitynamepath, encoding='utf-8')]
    end = time.time()
    print('Read security name list cost: ', end - start)
     
    def startjob(regeneratemodel=False, usertext='DSP BlackRock FMP Sr 229 51 Mn Dir Gr'):
        if regeneratemodel or (not os.path.exists(dictpath) or not os.path.exists(modelpath)):
            generatemodel()
     
        time_start = time.time()
        print('Load model start')
        load_start = time.time()
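        # Load the serialized corpus and dictionary, then rebuild the TF-IDF model and the sparse similarity index in memory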
        corpus = corpora.MmCorpus(modelpath)
        dictionary = corpora.Dictionary.load(dictpath)
        tfidf_model = models.TfidfModel(corpus)
        index = similarities.SparseMatrixSimilarity(
            tfidf_model[corpus],
            num_features=len(dictionary.keys()))
        load_end = time.time()
        print('Load model cost: ', load_end - load_start)
        print('Load model end')
        ###############By LSI#####################
        # corpus_tfidf = tfidf_model[corpus]
        # dictionary = corpora.Dictionary.load(dictpath)
        # lsi_model = models.LsiModel(corpus_tfidf, id2word=dictionary, num_topics=2)
        # corpus_lsi = lsi_model[corpus_tfidf]
        # corpus_simi_matrix = similarities.MatrixSimilarity(corpus_lsi)
        # Compute the similarity between a new text and the existing texts
        # test_text = usertext.lower().split()
        # test_bow = dictionary.doc2bow(test_text)
        # test_tfidf = tfidf_model[test_bow]
        # test_lsi = lsi_model[test_tfidf]
        # test_simi = corpus_simi_matrix[test_lsi]
        # test_simi = sorted(enumerate(test_simi), key=lambda item: -item[1])
        ###############By LSI#####################
     
        ###############By tfidf#####################
        print('Query start')
        query_start = time.time()
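        # Convert the query text to a bag-of-words vector, weight it with TF-IDF and score it against every indexed document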
        test_text = usertext.lower().split()
        doc_test_vec = dictionary.doc2bow(test_text)
        
        test_simi = index[tfidf_model[doc_test_vec]]
        test_simi = sorted(enumerate(test_simi), key=lambda item: -item[1])
        ###############By tfidf#####################
     
        outputlist = [test for test in test_simi if test[1] > 0.3]
        for output in outputlist:
            print(alltext[output[0]], output[1])
            if len(alltext[output[0]].split('~')) == 3 and alltext[output[0]].split('~')[1] == usertext:
                print("Congratulations, you find the right answer!")
                break
        time_end = time.time()
        print('Query cost: ', time_end - query_start)
        print('Totally cost: ', time_end - time_start)
        print('Query end')
     
    def generatemodel():
        print('Begin generate model')
        stoplist = set('for a of the and to in'.split())
        print('Begin read data source')
        data_train = []
        count = 0
        for security in open(securitynamepath, encoding='utf-8'):
            if len(security.split('~')) == 3:
                data_train.append([word for word in security.split('~')[2].lower().split()
                       if word not in stoplist])
            count += 1
            print(count)
        print('End read data source')
        # Remove words that appear only once; this feature is not needed for security name similarity queries
        # frequency = defaultdict(int)
        # for text in data_train:
        #     for token in text:
        #         frequency[token] += 1
        # data_train = [[token for token in text if frequency[token] > 1]
        #               for text in data_train]
        print(data_train)
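        # Build the token dictionary and the bag-of-words corpus, then serialize both to disk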
        dictionary = corpora.Dictionary(data_train)
        dictionary.save(dictpath)
        corpus = [dictionary.doc2bow(text) for text in data_train]
        corpora.MmCorpus.serialize(modelpath, corpus)
        print('End generate model')
     
    if __name__ == '__main__':
        startjob(False, u'Undrly Alba LASPEN Property')
    

    Output Analysis

    The console output of the run is:


    [Screenshot: PyCharm console output for the similarsecurity run]

    Notes on the output:

    "Using TensorFlow backend": does it means Gensim using TensorFlow? But there is no official description about it

    The output also contains timing information:

    Load security name list (599214 records): 0.26 seconds

    Load dictionary and model: 8.94 seconds (see the sketch below for one way to shorten this step)

    Similarity query for the test text: 19.87 seconds
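
    Note that in the POC the TF-IDF model and the sparse similarity index are rebuilt from the serialized corpus on every run. If the load time matters, both objects can themselves be persisted and reloaded; a minimal sketch (the .tfidf and .index file names are hypothetical):

    from gensim import corpora, models, similarities

    # One-off: build and persist the TF-IDF model and the similarity index
    corpus = corpora.MmCorpus('./data/model/security.mm')
    dictionary = corpora.Dictionary.load('./data/model/security.dict')
    tfidf_model = models.TfidfModel(corpus)
    index = similarities.SparseMatrixSimilarity(tfidf_model[corpus],
                                                num_features=len(dictionary))
    tfidf_model.save('./data/model/security.tfidf')
    index.save('./data/model/security.index')

    # Later runs: load the persisted objects instead of rebuilding them
    tfidf_model = models.TfidfModel.load('./data/model/security.tfidf')
    index = similarities.SparseMatrixSimilarity.load('./data/model/security.index')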

    User test text:

    Undrly Alba LASPEN Property

    Similarity query result:

    Gensim returns the result as (index, score) pairs, for example 10:0.9348, meaning that the security name at index 10 has a similarity score of 0.9348.

    To make the result easier to read, the security id, the abbreviated security name and the security legal name are printed for each result index (see the sketch after the sample output below).

    The results are sorted in descending order by similarity score, for example:

    F1234567PM~Undrly Alba LASPEN Property PP~Undrly Alba LASPEN Property PP
    0.9348221
    F7654321PI~Undrly Alba LASPEN UK Equity PP~Undrly Alba LASPEN UK Equity PP
    0.83671427
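
    For completeness, turning one (index, score) pair from the result back into the three fields is just a split on '~' against the corresponding line in alltext; a minimal sketch:

    def describe(result_pair, alltext):
        # result_pair is one (index, score) entry from test_simi;
        # assumes the line is well-formed (exactly three '~'-separated fields)
        idx, score = result_pair
        security_id, name, legal_name = alltext[idx].rstrip('\n').split('~')
        return security_id, name, legal_name, float(score)

    # e.g. describe((10, 0.9348), alltext)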
