Synced from the WeChat subscription account "arXiv每日论文速递" (arXiv Daily Paper Digest); reply 'search <keyword>' to retrieve the latest related papers.
cs.CL: 22 papers today
[cs.CL]:
【1】 Neural Mention Detection
作者/Authors: Juntao Yu, Massimo Poesio
链接/Link: https://arxiv.org/abs/1907.12524
【2】 Joey NMT: A Minimalist NMT Toolkit for Novices
作者/Authors: Julia Kreutzer, Stefan Riezler
链接/Link: https://arxiv.org/abs/1907.12484
【3】 Leveraging Pre-trained Checkpoints for Sequence Generation Tasks
作者/Authors: Sascha Rothe, Aliaksei Severyn
链接/Link: https://arxiv.org/abs/1907.12461
【4】 A Baseline Neural Machine Translation System for Indian Languages
作者/Authors: Jerin Philip, C.V. Jawahar
链接/Link: https://arxiv.org/abs/1907.12437
【5】 VIANA: Visual Interactive Annotation of Argumentation
作者/Authors: Fabian Sperrle, Mennatallah El-Assady
备注/Comments: Proceedings of IEEE Conference on Visual Analytics Science and Technology (VAST), 2019
链接/Link: https://arxiv.org/abs/1907.12413
【6】 ERNIE 2.0: A Continual Pre-training Framework for Language Understanding
作者/Authors: Yu Sun, Haifeng Wang
链接/Link: https://arxiv.org/abs/1907.12412
【7】 Hierarchical Multi-Label Dialog Act Recognition on Spanish Data
作者/Authors: Eugénio Ribeiro, David Martins de Matos
链接/Link: https://arxiv.org/abs/1907.12316
【8】 A Mathematical Model for Linguistic Universals
作者/Authors: Weinan E, Yajun Zhou
备注/Comments: Main text (9 pages, 6 figures); Materials and Methods (iii+275 pages, 20 figures, 5 tables)
链接/Link: https://arxiv.org/abs/1907.12293
【9】 Legal entity recognition in an agglutinating language and document connection network for EU Legislation and EU/Hungarian Case Law
作者/Authors: György Görög, Péter Weisz
链接/Link: https://arxiv.org/abs/1907.12280
【10】 Hybrid Code Networks using a convolutional neural network as an input layer achieves higher turn accuracy
作者/Authors: Petr Marek
备注/Comments: Proceedings of the International Student Scientific Conference Poster 23/2019
链接/Link: https://arxiv.org/abs/1907.12162
【11】 CAiRE: An End-to-End Empathetic Chatbot
作者/Authors: Zhaojiang Lin, Pascale Fung
链接/Link: https://arxiv.org/abs/1907.12108
【12】 What Should I Ask? Using Conversationally Informative Rewards for Goal-Oriented Visual Dialog
作者/Authors: Pushkar Shukla, William Yang Wang
备注/Comments: Accepted to ACL 2019
链接/Link: https://arxiv.org/abs/1907.12021
【13】 Representation Degeneration Problem in Training Natural Language Generation Models
作者/Authors: Jun Gao, Tie-Yan Liu
备注/Comments: ICLR 2019
链接/Link: https://arxiv.org/abs/1907.12009
【14】 A Hybrid Neural Network Model for Commonsense Reasoning
作者/Authors: Pengcheng He, Jianfeng Gao
链接/Link: https://arxiv.org/abs/1907.11983
【15】 Is BERT Really Robust? Natural Language Attack on Text Classification and Entailment
作者/Authors: Di Jin, Peter Szolovits
链接/Link: https://arxiv.org/abs/1907.11932
【16】 Nefnir: A high accuracy lemmatizer for Icelandic
作者/Authors: Svanhvít Lilja Ingólfsdóttir, Kristín Bjarnadóttir
备注/Comments: Presented at NoDaLiDa 2019, Turku, Finland
链接/Link: https://arxiv.org/abs/1907.11907
【17】 Towards Effective Rebuttal: Listening Comprehension using Corpus-Wide Claim Mining
作者/Authors: Tamar Lavee, Noam Slonim
备注/Comments: 6th Argument Mining Workshop @ ACL 2019
链接/Link: https://arxiv.org/abs/1907.11889
【18】 Analyzing Linguistic Complexity and Scientific Impact
作者/Authors: Chao Lu, Chengzhi Zhang
链接/Link: https://arxiv.org/abs/1907.11843
【19】 Supervised and unsupervised neural approaches to text readability
作者/Authors: Matej Martinc, Marko Robnik Šikonja
链接/Link: https://arxiv.org/abs/1907.11779
【20】 Automatically Learning Construction Injury Precursors from Text
作者/Authors: Henrietta Baker, Antoine J.-P. Tixier
链接/Link: https://arxiv.org/abs/1907.11769
【21】 An Empirical Study on Leveraging Scene Graphs for Visual Question Answering
作者/Authors: Cheng Zhang, Dong Xuan
备注/Comments: Accepted as oral presentation at BMVC 2019
链接/Link: https://arxiv.org/abs/1907.12133
【22】 Probabilistic Models of Relational Implication
作者/Authors: Xavier Holt
链接/Link: https://arxiv.org/abs/1907.12048