Sentence and Word Tokenization with NLTK

Author: sunney0 | Published 2020-01-08 17:00

    1. Split a paragraph into sentences (Punkt sentence tokenizer)

    import nltk
    import nltk.data

    def splitSentence(paragraph):
        # Load the pre-trained English Punkt model
        # (run nltk.download('punkt') first if the data is not installed)
        tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')
        sentences = tokenizer.tokenize(paragraph)
        return sentences

    if __name__ == '__main__':
        print(splitSentence("My name is Tom. I am a boy. I like soccer!"))
    The output is ['My name is Tom.', 'I am a boy.', 'I like soccer!'].
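Punkt is a trained, unsupervised model, which is why it needs a data file on disk. For intuition (and as a dependency-free fallback when the NLTK data is unavailable), a rough regex split approximates the same behavior on simple English text. This is a sketch, not a replacement for Punkt; the function name `naive_split_sentences` is my own.

```python
import re

def naive_split_sentences(paragraph):
    """Rough approximation of sentence splitting: break after ., ! or ?
    followed by whitespace. Unlike Punkt, this mishandles abbreviations
    such as "Dr." or "e.g."."""
    parts = re.split(r'(?<=[.!?])\s+', paragraph.strip())
    return [p for p in parts if p]

print(naive_split_sentences("My name is Tom. I am a boy. I like soccer!"))
# → ['My name is Tom.', 'I am a boy.', 'I like soccer!']
```

The Punkt model earns its keep on text with abbreviations and initials, where a plain regex over-splits.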
    2. Split a sentence into words

    from nltk.tokenize import WordPunctTokenizer

    def wordtokenizer(sentence):
        # Split the sentence into word and punctuation tokens
        words = WordPunctTokenizer().tokenize(sentence)
        return words

    if __name__ == '__main__':
        print(wordtokenizer("My name is Tom."))
    The output is ['My', 'name', 'is', 'Tom', '.'].
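Unlike Punkt, WordPunctTokenizer needs no data file: it is a regex tokenizer that, to my understanding, matches runs of word characters or runs of non-word, non-space characters. The equivalent pattern can be applied directly with the standard library, which also makes its behavior on contractions visible (a sketch; `word_punct_tokenize` is my own name for it):

```python
import re

def word_punct_tokenize(sentence):
    # Runs of word characters (\w+), or runs of punctuation ([^\w\s]+).
    # This mirrors how WordPunctTokenizer splits text.
    return re.findall(r'\w+|[^\w\s]+', sentence)

print(word_punct_tokenize("My name is Tom."))
# → ['My', 'name', 'is', 'Tom', '.']
print(word_punct_tokenize("Don't stop!"))
# → ['Don', "'", 't', 'stop', '!']
```

Note how the apostrophe splits "Don't" into three tokens; NLTK's `word_tokenize` (which uses the Punkt data plus Treebank rules) keeps contractions together as `['Do', "n't"]`, so the right tokenizer depends on the downstream task.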

    Reposted from: https://my.oschina.net/u/3346994/blog/911733

Article link: https://www.haomeiwen.com/subject/qepsactx.html