WSTA Review (1): Preprocessing

Author: 阿漠不冷漠 | Published 2018-05-03 11:59

1. Words and Corpora

  • corpus (plural: corpora)
    A collection of text or speech.
  • lemma
    A lemma is a set of forms having the same stem, the same major part-of-speech, and the same word sense.
  • Word Type
    Types are the distinct words in a corpus; the number of types is the vocabulary size |V|.
  • Word Token
    Tokens are the total number N of running words.

e.g. They picnicked by the pool, then lay back on the grass and looked at the stars.

The sentence above has 16 tokens (ignoring punctuation) and 14 types (the occurs three times).
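
These counts are easy to check with NLTK's tokenizer; a quick sketch, filtering out punctuation tokens so they are not counted:

    import nltk

    sentence = ("They picnicked by the pool, then lay back on the grass "
                "and looked at the stars.")
    # keep only alphabetic tokens so punctuation is excluded from the count
    tokens = [t for t in nltk.word_tokenize(sentence) if t.isalpha()]
    print(len(tokens))       # 16 tokens
    print(len(set(tokens)))  # 14 types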

  • Herdan's Law or Heaps' Law
    $$|V|=kN^\beta$$
    • The number of types (the vocabulary size) ----- |V|

    • The number of tokens ----- N

    • k and β are positive constants, with 0 < β < 1.
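
A small numeric illustration of this sublinear growth, with k = 10 and β = 0.6 chosen purely for illustration (real values depend on the corpus and genre):

    # vocabulary size predicted by Heaps' law: |V| = k * N^beta
    k, beta = 10, 0.6  # illustrative constants, not fitted to any real corpus
    for n in (10**4, 10**6, 10**8):
        print(n, round(k * n**beta))
    # tokens grow 10,000-fold while types grow only about 250-fold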

2. Text Normalization

Possible procedures:

  • Remove unwanted formatting (e.g. HTML)
  • Segment structure (e.g. sentences)
  • Tokenise words
  • Normalise words
  • Remove unwanted words (e.g. stop words)
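
A minimal end-to-end sketch of these steps with NLTK (assumes the punkt and stopwords data packages have been downloaded; the HTML stripping is a crude regex, not a real parser):

    import re
    import nltk
    from nltk.corpus import stopwords

    raw = "<p>They picnicked by the pool.</p><p>Then they lay back.</p>"
    text = re.sub(r'<[^>]+>', ' ', raw)        # 1. remove formatting (HTML)
    stops = set(stopwords.words('english'))
    for sent in nltk.sent_tokenize(text):      # 2. segment into sentences
        tokens = nltk.word_tokenize(sent)      # 3. tokenise words
        tokens = [t.lower() for t in tokens]   # 4. normalise (case folding)
        tokens = [t for t in tokens
                  if t.isalpha() and t not in stops]  # 5. remove stop words
        print(tokens)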

2.1 Segmenting/Tokenizing

Tokenization: the task of segmenting running text into words.

  • expanding clitics: the 'm in I'm expands to am (see the sketch below)
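
NLTK's default (Treebank-style) tokenizer splits clitics off as separate tokens; it does not expand 'm all the way to am, but it does detach it:

    import nltk
    print(nltk.word_tokenize("I'm sure they're here."))
    # ['I', "'m", 'sure', 'they', "'re", 'here', '.']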

2.2 Normalizing

  • Normalization: the task of putting words/tokens in a standard format
  • Case folding: everything mapped to lower case
  • Removing morphology
    • Lemmatization: the task of determining that two words have the same root, despite their surface differences (e.g. is, are, and am share the lemma be):
import nltk

# WordNet-based lemmatizer; requires the 'wordnet' NLTK data package
lemmatizer = nltk.stem.wordnet.WordNetLemmatizer()

def lemmatize(word):
    # try the verb lemma first; if the word is unchanged, fall back to the noun lemma
    lemma = lemmatizer.lemmatize(word, 'v')
    if lemma == word:
        lemma = lemmatizer.lemmatize(word, 'n')
    return lemma
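
For example:

    print(lemmatize('looked'))  # -> 'look' (found via the verb lemma)
    print(lemmatize('stars'))   # -> 'star'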
  • Stemming:

    • stems: the central morpheme of the word, supplying the main meaning; often not an actual lexical item.
    • affixes: add 'additional' meanings of various kinds.

    Stemming strips off all affixes, leaving a stem, e.g. automate --> automat.

    • the Porter Stemmer (the most popular stemmer for English):
    import nltk
    stemmer = nltk.stem.porter.PorterStemmer()
    # stem each token in tokenized_sentence (a list of word tokens, as above)
    print([stemmer.stem(token) for token in tokenized_sentence])
    
  • Correct spelling

  • expanding abbreviations

2.3 Segmenting

  • the MaxMatch algorithm
    used for word segmentation in languages written without spaces between words, such as Chinese.

The maximum matching algorithm starts by pointing at the beginning of a string. It chooses the longest word in the dictionary that matches the input at the current position. The pointer is then advanced to the end of that word in the string. If no word matches, the pointer is instead advanced one character. The algorithm is then iteratively applied again starting from the new pointer position.[1]

The code below is an example of the MaxMatch algorithm applied to English; note, though, that the algorithm works far better on Chinese than on English.

def max_match(text_string, dictionary, word_list):
    '''Greedily segment text_string into dictionary words.

    text_string: an unsegmented string of alphabetic characters
    dictionary: a collection of known words (lemmas)
    word_list: an accumulator that collects the matched words

    Relies on the lemmatize() helper from section 2.2, since prefixes
    are matched against the dictionary by their lemma.
    '''
    if len(text_string) == 0:
        return word_list
    # try the longest prefix first, shrinking it one character at a time
    for i in range(len(text_string), 1, -1):
        first_word = text_string[:i]
        remainder = text_string[i:]
        if lemmatize(first_word) in dictionary:
            break
    else:
        # no multi-character prefix matched: consume a single character
        first_word = text_string[0]
        remainder = text_string[1:]

    word_list.append(first_word)
    return max_match(remainder, dictionary, word_list)
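
A quick way to try it, using the NLTK words corpus as a stand-in dictionary (assumes nltk.download('words') has been run and the lemmatize() helper from section 2.2 is in scope):

    import nltk

    dictionary = set(nltk.corpus.words.words())
    print(max_match('wecanonlyseeashortdistanceahead', dictionary, []))
    # greedy matching often mis-segments English, e.g. grabbing 'canon'
    # out of 'can only'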

  1. Jurafsky & Martin, Speech and Language Processing (3rd ed. draft), Chapter 2, p. 15.
