Text Summarization Notes

Author: James | Published 2019-02-19 13:25

    https://github.com/ematvey/tensorflow-seq2seq-tutorials tensorflow-seq2seq-tutorials
    https://github.com/tensorflow/nmt#hands-on--lets-train-an-nmt-model tensorflow/nmt
    https://github.com/OpenNMT/OpenNMT-py OpenNMT/OpenNMT-py
    https://github.com/lmthang/thesis Thang Luong's Thesis on Neural Machine Translation
    https://github.com/google/seq2seq google/seq2seq
    https://google.github.io/seq2seq/
    https://github.com/HadoopIt/rnn-nlu Attention-based RNN model for Spoken Language Understanding (Intent Detection & Slot Filling)
    https://plmsmile.github.io/2017/10/10/attention-model/
    https://github.com/applenob/RNN-for-Joint-NLU/blob/master/tensorflow_dynamic_seq2seq.md
    http://www.crownpku.com/2017/09/27/%E6%B5%85%E8%B0%88%E5%9E%82%E7%9B%B4%E9%A2%86%E5%9F%9F%E7%9A%84chatbot.html
    http://www.crownpku.com/2017/07/27/%E7%94%A8Rasa_NLU%E6%9E%84%E5%BB%BA%E8%87%AA%E5%B7%B1%E7%9A%84%E4%B8%AD%E6%96%87NLU%E7%B3%BB%E7%BB%9F.html
    https://plmsmile.github.io/2018/03/31/36-alime-chat/
    https://github.com/crownpku/rasa_nlu_chi
    https://www.jianshu.com/p/3a9f49834c4a
    https://github.com/HadoopIt/rnn-nlu
    https://rasa.com/docs/nlu/
    https://www.cnblogs.com/azheng333/p/5908025.html Understanding LSTMs, a type of recurrent neural network (RNN); recurrent network architecture
    https://blog.csdn.net/rockingdingo/article/details/55224282
    http://opennmt.net/OpenNMT-py/main.html
    https://github.com/pytorch/fairseq
    https://github.com/facebookresearch/DrQA
    https://github.com/allenai/allennlp
    https://jianwenjun.xyz/2018/07/18/Seq2Seq%E7%9A%84%E9%82%A3%E4%BA%9B%E4%BA%8B/ All about Seq2Seq
    http://opennmt.net/OpenNMT-py/Summarization.html
    http://www.52nlp.cn/tag/opennmt
    https://github.com/OpenNMT/OpenNMT-tf
    http://opennmt.net/
    https://github.com/tensorflow/nmt
    https://github.com/tensorflow/nmt#hands-on--lets-train-an-nmt-model Neural Machine Translation (seq2seq) Tutorial
    http://opennmt.net/OpenNMT-tf/package/opennmt.html
    https://www.jianshu.com/p/f65397983d07 Study notes on the PyTorch version of OpenNMT
    https://github.com/OpenNMT/Hackathon/tree/master/unsupervised-nmt
    http://fanyi.aipatent.com/
    https://blog.csdn.net/liuchonge/article/details/78555958 Memory networks applied to dialogue systems
    https://blog.csdn.net/xizero00/article/details/51182003 Paper reading: End-To-End Memory Networks
    https://zhuanlan.zhihu.com/p/43822482
    http://opennmt.net/OpenNMT-py/onmt.modules.html#copy-attention
    https://github.com/OpenNMT/OpenNMT-py/issues/741
    http://opennmt.net/OpenNMT-py/options/train.html#

    The idea is essentially the same as in abisee's pointer-generator: for each sequence there is
    * a generation probability over the (decoder) vocabulary,
    * a soft copy switch (the probability to copy), and
    * a copy probability over the source tokens (here the attention scores are used, eq. 2 of See 2017).
    The final scores take all of that into account, therefore there is no "coverage loss"; it is just part of the loss calculation.
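
    A minimal numpy sketch of how such a final distribution can be assembled (the array names, the toy numbers, and the extended-vocabulary indexing are illustrative assumptions, not OpenNMT-py's actual code):

    import numpy as np

    def final_distribution(p_vocab, attn, p_gen, src_ids, extended_vocab_size):
        """Combine generation and copy distributions as in See et al. (2017).

        p_vocab: (V,) softmax over the decoder vocabulary
        attn:    (S,) attention scores over the source tokens (sum to 1)
        p_gen:   scalar in [0, 1], the soft copy switch
        src_ids: (S,) ids of the source tokens in the extended vocabulary
        """
        p_final = np.zeros(extended_vocab_size)
        p_final[:len(p_vocab)] = p_gen * p_vocab            # generation part
        np.add.at(p_final, src_ids, (1.0 - p_gen) * attn)   # copy part, summed per source token
        return p_final

    # toy example: a 5-word vocabulary plus one out-of-vocabulary source token (id 5)
    p_vocab = np.array([0.1, 0.2, 0.3, 0.25, 0.15])
    attn    = np.array([0.5, 0.3, 0.2])
    src_ids = np.array([2, 5, 2])
    print(final_distribution(p_vocab, attn, p_gen=0.7, src_ids=src_ids, extended_vocab_size=6))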

    http://opennmt.net/OpenNMT/
    OpenNMT is a generic deep learning framework mainly specialized in sequence-to-sequence models, covering a variety of tasks such as machine translation, summarization, image-to-text, and speech recognition. The framework has also been extended to other, non-sequence-to-sequence tasks such as language modelling and sequence tagging.
    All these applications reuse, and sometimes extend, a collection of easy-to-reuse modules: encoders, decoders, embedding layers, attention layers, and more.
    The framework is implemented to be as generic as possible and can be used via command-line applications, a client-server setup, or as libraries.

    Modeling Coverage for Neural Machine Translation
    https://arxiv.org/pdf/1601.04811.pdf
    http://forum.opennmt.net/

    For Chinese sentences in neural machine translation, should you segment into words or into characters?
    https://www.zhihu.com/question/65878227/answer/341341103
    The now widely used way to balance 1) a huge vocabulary, 2) very long sequences (character-level input makes sequences much longer), and 3) the loss of linguistic information at the character level (split "包子" into "包" and "子" and it is no longer edible) is a hybrid model, i.e. the wordpiece approach described in GNMT; for tooling, Rico Sennrich's rsennrich/subword-nmt works well and is simple to use.
    As for whether characters or words are better for languages that need segmentation, a recent paper from Xiamen University, "Lattice-to-sequence attentional Neural Machine Translation models" (www.sciencedirect.com), concludes that, in experiments with Bahdanau's RNNsearch, any word segmentation beats no segmentation at all; this may partly be due to RNNsearch's own long-distance dependency problems, but it does show that word segmentation is not useless in neural sequence generation models. Personally, I strongly recommend a wordpiece setup that mixes characters, words, and subwords.
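
    As a rough illustration of the subword idea only (this is not the rsennrich/subword-nmt implementation; the merge table and words below are made up), a minimal greedy BPE-style segmenter could look like this:

    def apply_bpe(word, merges):
        """Greedily apply a ranked list of (left, right) merge operations to one word.

        Mirrors the apply step of BPE/wordpiece segmentation: start from single
        characters and repeatedly merge the adjacent pair with the best rank.
        """
        ranks = {pair: i for i, pair in enumerate(merges)}
        symbols = list(word)
        while len(symbols) > 1:
            pairs = [(ranks.get((a, b), float("inf")), i)
                     for i, (a, b) in enumerate(zip(symbols, symbols[1:]))]
            best_rank, best_i = min(pairs)
            if best_rank == float("inf"):
                break  # no learned merge applies any more
            symbols[best_i:best_i + 2] = [symbols[best_i] + symbols[best_i + 1]]
        return symbols

    # toy merge table "learned" from a corpus (purely illustrative)
    merges = [("l", "o"), ("lo", "w"), ("e", "r")]
    print(apply_bpe("lower", merges))   # ['low', 'er']
    print(apply_bpe("lowest", merges))  # ['low', 'e', 's', 't']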

    https://github.com/facebookresearch/fairseq
    https://github.com/deepmind/sonnet

    https://github.com/OpenNMT/OpenNMT-py
    http://opennmt.net/
    http://pavel.surmenok.com/2016/10/15/how-to-run-text-summarization-with-tensorflow/

    OpenNMT-py: Open-Source Neural Machine Translation

    https://zhuanlan.zhihu.com/p/43822482 Paper/project reading notes
    https://github.com/tuzhaopeng/NMT-Coverage NMT-Coverage
    https://blog.csdn.net/thriving_fcl/article/details/74165062 Using the new TensorFlow Seq2Seq API

    Modeling Coverage for Neural Machine Translation
    1601.04811.pdf
    Get To The Point: Summarization with Pointer-Generator Networks
    1704.04368.pdf
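
    A small numpy sketch of the coverage idea from these two papers (variable names and the toy attention matrix are illustrative): the coverage vector accumulates past attention, and See et al. (2017) penalize re-attending already covered source tokens with min(a_t, c_t):

    import numpy as np

    def coverage_loss(attentions):
        """Coverage loss over one decoding run, in the spirit of See et al. (2017).

        attentions: (T, S) array with one attention distribution over the S
        source tokens for each of the T decoder steps.
        Returns the total loss and the final coverage vector.
        """
        coverage = np.zeros(attentions.shape[1])      # c_0 = 0
        loss = 0.0
        for attn in attentions:
            loss += np.minimum(attn, coverage).sum()  # penalize overlap with past attention
            coverage += attn                          # c_{t+1} = c_t + a_t
        return loss, coverage

    # toy run: the decoder attends to source token 0 twice in a row
    attentions = np.array([[0.9, 0.1, 0.0],
                           [0.8, 0.1, 0.1],
                           [0.0, 0.2, 0.8]])
    loss, coverage = coverage_loss(attentions)
    print(loss)      # repeated attention on token 0 dominates the loss
    print(coverage)  # total attention mass each source token has received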

    http://cairohy.github.io/2017/04/11/deeplearning/NLP-Hyperparams-train-arXiv2017-%E3%80%8AMassive%20Exploration%20of%20Neural%20Machine%20Translation%20Architectures%E3%80%8B/
    How to train a seq2seq NMT model: notes on the paper "Massive Exploration of Neural Machine Translation Architectures"
    https://nmt-keras.readthedocs.io/en/latest/index.html
    https://nlp.stanford.edu/projects/nmt/

    https://github.com/tensorflow/nmt Neural Machine Translation (seq2seq) Tutorial
    https://github.com/OpenNMT/OpenNMT-tf
    https://github.com/OpenNMT/OpenNMT-py
    http://opennmt.net/OpenNMT-py/options/train.html#
    http://opennmt.net/OpenNMT-py/Summarization.html
    https://nmt-keras.readthedocs.io/en/latest/index.html#
    https://blog.csdn.net/thormas1996/article/details/81536977

    Papers on the repetition problem:
    Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. Modeling coverage for neural machine translation. In Association for Computational Linguistics.

    Pointer networks:
    Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Neural Information Processing Systems.
