Literature Review: Neural Architectures for Named Entity Recognition

Author: hxiaom | Published 2017-12-26 11:03

    Lample, G., Ballesteros, M., Subramanian, S., Kawakami, K., & Dyer, C. (2016). Neural Architectures for Named Entity Recognition. https://doi.org/10.18653/v1/N16-1030

    Research Gap

    State-of-the-art named entity recognition systems rely heavily on hand-crafted features and domain-specific knowledge in order to learn effectively from the small, supervised training corpora that are available.

    Research Work

They introduce two new neural architectures:

    • one based on bidirectional LSTMs and conditional random fields
    • one that constructs and labels segments using a transition-based approach inspired by shift-reduce parsers.
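In the first architecture, the BiLSTM produces per-token tag scores and the CRF layer adds transition scores between adjacent tags, so the best tag sequence is found jointly with Viterbi decoding. A minimal sketch of that decoding step, with a toy tag set and scores that are illustrative only (not taken from the paper):

```python
import numpy as np

def viterbi_decode(emissions, transitions):
    """Return the highest-scoring tag sequence and its score.

    emissions:   (seq_len, num_tags) per-token scores, e.g. from a BiLSTM
    transitions: (num_tags, num_tags) score of moving from tag i to tag j
    """
    seq_len, num_tags = emissions.shape
    score = emissions[0].copy()                      # best score ending in each tag
    backptr = np.zeros((seq_len, num_tags), dtype=int)
    for t in range(1, seq_len):
        # candidate[i, j] = best score ending in tag i, plus transition i -> j
        candidate = score[:, None] + transitions
        backptr[t] = candidate.argmax(axis=0)
        score = candidate.max(axis=0) + emissions[t]
    # follow back-pointers from the best final tag
    best = [int(score.argmax())]
    for t in range(seq_len - 1, 0, -1):
        best.append(int(backptr[t, best[-1]]))
    return best[::-1], float(score.max())

# Toy example: tags 0=O, 1=B-PER, 2=I-PER; transitions[0, 2] penalizes
# the illegal move O -> I-PER.
emissions = np.array([[0., 2., 0.],
                      [0., 0., 2.],
                      [2., 0., 0.]])
transitions = np.zeros((3, 3))
transitions[0, 2] = -10.0
path, score = viterbi_decode(emissions, transitions)
print(path)   # [1, 2, 0] -> B-PER I-PER O
```

The transition matrix is what lets the CRF forbid inconsistent taggings (such as I-PER directly after O), which independent per-token classification cannot do.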

Their models rely on two sources of information about words:

    • character-based word representations learned from the supervised corpus.
    • unsupervised word representations learned from unannotated corpora.

    Token-level evidence for "being a name" includes both

• orthographic evidence (what does the word being tagged as a name look like?)
  • a character-based word representation model (Ling et al., 2015b) captures orthographic sensitivity.
• distributional evidence (where does the word being tagged tend to occur in a corpus?)
  • distributional representations (Mikolov et al., 2013b) capture distributional sensitivity.
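The two evidence sources are combined by concatenating a character-derived vector with a pretrained word embedding before the word-level tagger. A simplified sketch of that combination: in the paper the character vector comes from a character-level BiLSTM and the word vectors are pretrained on unannotated text, whereas here a plain average of character vectors stands in and both lookup tables are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder lookup tables for illustration only: the paper learns character
# vectors during training and pretrains word vectors on unannotated corpora.
char_dim, word_dim = 4, 6
char_vecs = {c: rng.standard_normal(char_dim) for c in "abcdefghijklmnopqrstuvwxyz"}
word_vecs = {"london": rng.standard_normal(word_dim)}

def char_representation(word):
    # Stand-in for the char BiLSTM of Lample et al.: averaging character
    # vectors keeps only the shape of the idea, not the model.
    vecs = [char_vecs[c] for c in word.lower() if c in char_vecs]
    return np.mean(vecs, axis=0)

def word_representation(word):
    # Orthographic evidence (char-based) + distributional evidence (pretrained),
    # concatenated into one input vector; unknown words get a zero word vector.
    distributional = word_vecs.get(word.lower(), np.zeros(word_dim))
    return np.concatenate([char_representation(word), distributional])

vec = word_representation("London")
print(vec.shape)   # (10,) = char_dim + word_dim
```

Because the character-based half is built from the word's spelling, even out-of-vocabulary words receive a non-trivial representation, which is part of why the approach needs no gazetteers.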

    Results

They obtain state-of-the-art performance in NER in four languages (English, German, Dutch, and Spanish).

    Innovation

They do so without resorting to any language-specific knowledge or resources such as gazetteers.

Original link: https://www.haomeiwen.com/subject/blsdgxtx.html