Note 2: ELMo

Author: qin7zhen | Published 2020-07-11 16:45

Deep contextualized word representations

Peters et al., 2018
  1. ELMo (Embeddings from Language Models) learns a linear combination of the vectors stacked above each input word for each end task, which markedly improves performance over just using the top LSTM layer.
    • Higher-level LSTM states capture context-dependent aspects of word meaning.
    • Lower-level states capture aspects of basic syntax.
    • Unlike traditional word embeddings, ELMo word representations are functions of the entire input sentence (see the usage sketch below).
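
As a quick illustration of this sentence-level conditioning, here is a minimal usage sketch. It assumes an AllenNLP release that ships allennlp.modules.elmo together with locally downloaded ELMo option/weight files; the file paths and sentences below are placeholders, not taken from this note or the paper.

```python
# Minimal usage sketch (assumes AllenNLP with allennlp.modules.elmo available).
from allennlp.modules.elmo import Elmo, batch_to_ids

options_file = "elmo_options.json"   # hypothetical local path
weight_file = "elmo_weights.hdf5"    # hypothetical local path

# num_output_representations=1 -> one task-specific weighted combination per token
elmo = Elmo(options_file, weight_file, num_output_representations=1, dropout=0.0)

# The same surface word ("bank") appears in two different sentences.
sentences = [["I", "sat", "on", "the", "bank", "of", "the", "river"],
             ["I", "deposited", "cash", "at", "the", "bank"]]
character_ids = batch_to_ids(sentences)              # (batch, max_len, 50) char ids

# Because ELMo conditions on the whole sentence, the two "bank" vectors differ.
embeddings = elmo(character_ids)["elmo_representations"][0]  # (batch, max_len, 1024)
```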


      [Figure from Devlin et al., 2019]

2. Bidirectional language models (biLM)

Given a sequence of N tokens [t_1, t_2, \ldots, t_N],

  • A forward language model computes the probability of the sequence by modeling the probability of token t_k given the history [t_1,\ldots,t_{k-1}]:
    p(t_1, \ldots, t_N)=\prod_{k=1}^{N}{p(t_k|t_1, \ldots, t_{k-1})}
  • A backward language model computes the probability of the sequence by modeling the probability of token t_k given the future context [t_{k+1},\ldots,t_N]:
    p(t_1, \ldots, t_N)=\prod_{k=1}^{N}{p(t_k|t_{k+1}, \ldots, t_N)}
  • A biLM combines a forward and a backward LM and jointly maximizes the log likelihood of both directions (a minimal sketch follows this list):
    \sum_{k=1}^{N}{\log{p(t_k|t_1,\ldots,t_{k-1};\Theta_x, \overrightarrow{\Theta}_{LSTM}, \Theta_s)} \\+ \log{p(t_k|t_{k+1},\ldots, t_N;\Theta_x, \overleftarrow{\Theta}_{LSTM}, \Theta_s)}}
    where the forward and backward LMs share the token-representation parameters \Theta_x and the softmax-layer parameters \Theta_s, but maintain separate LSTM parameters \overrightarrow{\Theta}_{LSTM} and \overleftarrow{\Theta}_{LSTM}.
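
The following is a minimal sketch of this joint objective under illustrative assumptions: a shared token embedding (\Theta_x), a shared softmax projection (\Theta_s), and two separate LSTMs, trained without special boundary tokens. Sizes and names are placeholders rather than the paper's configuration (the original biLM uses a character-CNN token encoder and two stacked LSTM layers with projections).

```python
# Sketch of the biLM objective: shared embedding (Theta_x), shared softmax
# projection (Theta_s), separate forward/backward LSTMs (Theta_LSTM).
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, emb_dim, hidden_dim = 10000, 128, 256

embed = nn.Embedding(vocab_size, emb_dim)                   # Theta_x (shared)
fwd_lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)   # forward Theta_LSTM
bwd_lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)   # backward Theta_LSTM
softmax_proj = nn.Linear(hidden_dim, vocab_size)            # Theta_s (shared)

def bilm_log_likelihood(tokens):
    """tokens: LongTensor (batch, N). Returns the summed forward + backward
    log-likelihood that the biLM jointly maximizes."""
    x = embed(tokens)                                        # (batch, N, emb_dim)

    # Forward LM: predict t_k from t_1 ... t_{k-1}.
    fwd_out, _ = fwd_lstm(x[:, :-1])                         # states after t_1..t_{N-1}
    fwd_logits = softmax_proj(fwd_out)                       # predictions for t_2..t_N
    fwd_ll = -F.cross_entropy(fwd_logits.reshape(-1, vocab_size),
                              tokens[:, 1:].reshape(-1), reduction="sum")

    # Backward LM: predict t_k from t_{k+1} ... t_N (run the LSTM over the
    # reversed sequence and read off the shifted targets).
    rev_x = torch.flip(x, dims=[1])
    bwd_out, _ = bwd_lstm(rev_x[:, :-1])                     # states after t_N..t_2
    bwd_logits = softmax_proj(bwd_out)                       # predictions for t_{N-1}..t_1
    bwd_targets = torch.flip(tokens, dims=[1])[:, 1:]
    bwd_ll = -F.cross_entropy(bwd_logits.reshape(-1, vocab_size),
                              bwd_targets.reshape(-1), reduction="sum")

    return fwd_ll + bwd_ll
```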

3. ELMo

  • ELMo is a task-specific combination of the intermediate layer representations in the biLM.
  • For each token t_k, an L-layer biLM computes a set of 2L+1 representations:
    R_k=\{x_k^{LM}, \overrightarrow{h}_{k,j}^{LM}, \overleftarrow{h}_{k,j}^{LM}|j=1,\ldots,L\}=\{h_{k,j}^{LM}|j=0,\ldots,L\}
    • h_{k,0}^{LM} is the token layer, i.e. the context-independent representation x_k^{LM}.
    • h_{k,j}^{LM}=[\overrightarrow{h}_{k,j}^{LM}; \overleftarrow{h}_{k,j}^{LM}] is the concatenation of the outputs of the j-th forward and backward LSTM layers at position k.
  • To feed ELMo into a downstream model, all layers in R_k are collapsed into a single vector.
    • In the simplest case, ELMo selects only the top layer: E(R_k)=h_{k,L}^{LM}.
    • In general, ELMo computes a task-specific weighting of all biLM layers (a minimal sketch follows this list):
      {ELMo}_k^{task}=E(R_k; \Theta^{task})=\gamma^{task}\sum_{j=0}^{L}{s_j^{task}h_{k,j}^{LM}}
      where s^{task} are softmax-normalized weights and the scalar parameter \gamma^{task} allows the task model to scale the entire ELMo vector.
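
Below is a minimal sketch of this task-specific combination, assuming the biLM layer outputs h_{k,j}^{LM} are already computed and stacked into one tensor; class and variable names are illustrative, not from the original release.

```python
# Task-specific scalar mixing of biLM layers (Eq. above); names are illustrative.
import torch
import torch.nn as nn

class TaskScalarMix(nn.Module):
    """ELMo_k^{task} = gamma^{task} * sum_j softmax(s^{task})_j * h_{k,j}^{LM}."""
    def __init__(self, num_layers):
        super().__init__()
        self.s = nn.Parameter(torch.zeros(num_layers))   # s^{task} (pre-softmax)
        self.gamma = nn.Parameter(torch.ones(1))         # gamma^{task}

    def forward(self, layer_outputs):
        # layer_outputs: (L+1, batch, seq_len, dim) holding h_{k,0..L}^{LM}
        weights = torch.softmax(self.s, dim=0)           # softmax-normalized weights
        mixed = (weights.view(-1, 1, 1, 1) * layer_outputs).sum(dim=0)
        return self.gamma * mixed                        # (batch, seq_len, dim)

# Usage with L=2 biLM layers, i.e. L+1=3 representations per token (fake data):
layer_outputs = torch.randn(3, 8, 20, 1024)
elmo_vectors = TaskScalarMix(num_layers=3)(layer_outputs)   # (8, 20, 1024)
```

In the paper, s^{task} and \gamma^{task} are trained jointly with the downstream task model while the biLM weights stay frozen.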

Reference

Peters, M. E., Neumann, M., Iyyer, M., Gardner, M., Clark, C., Lee, K., & Zettlemoyer, L. (2018). Deep contextualized word representations. arXiv preprint arXiv:1802.05365.
Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
