2019-10-16 Dialog Systems Survey

Author: 布口袋_天晴了 | Posted 2019-10-16 18:00

2. Zhao Dongyan, Peking University: natural language processing, semantic data management, question answering, dialog systems
h-index

[1] Two are better than one: An ensemble of retrieval- and generation-based dialog systems (citations: 35, year: 2016)
[2] RUBER: An unsupervised method for automatic evaluation of open-domain dialog systems (citations: 30, year: 2018)

1. Two are better than one: An ensemble of retrieval- and generation-based dialog systems

Abstract:
Open-domain human-computer conversation has attracted much attention in the field of NLP. Contrary to rule- or template-based domain-specific dialog systems, open-domain conversation usually requires data-driven approaches, which can be roughly divided into two categories: retrieval-based and generation-based systems.
Retrieval systems search a user-issued utterance (called a query) in a large database, and return a reply that best matches the query.
Generative approaches, typically based on recurrent neural networks (RNNs), can synthesize new replies, but they suffer from the problem of generating short, meaningless utterances.
In this paper, we propose a novel ensemble of retrieval-based and generation-based dialog systems in the open domain.
In our approach, the retrieved candidate, in addition to the original query, is fed to an RNN-based reply generator, so that the neural model is aware of more information.
The generated reply is then fed back as a new candidate for post-reranking.
Experimental results show that such an ensemble outperforms each single part of it by a large margin.
Open-domain human-computer conversation:
1. Unlike rule- or template-based domain-specific dialog systems, open-domain conversation usually takes data-driven approaches, which fall into two categories:
i) retrieval-based dialog systems
ii) generation-based dialog systems
2. Retrieval-based systems search a database for the best match to the query and return it as the reply.
3. Generation-based systems use an RNN to synthesize new replies, but the generated utterances tend to be short and meaningless.
What this paper does:
Ensembles the retrieval operation and the generation operation.
Feeds the retrieved candidate, together with the original query, into the RNN reply generator, enriching the generator's input features.
Feeds the generated reply back as a new candidate into the candidate pool for post-reranking.
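The three-step pipeline summarized above (retrieve, generate conditioned on the retrieved candidate, rerank) can be sketched as a toy program. Everything below is an illustrative assumption: the paper uses a learned matching model and an RNN generator, whereas this sketch substitutes simple word-overlap scoring and a placeholder generator just to make the data flow concrete.

```python
def word_overlap(a, b):
    """Toy relevance score: number of shared words (stand-in for a learned matcher)."""
    return len(set(a.split()) & set(b.split()))

def retrieve(query, database):
    """Step 1: return the database reply that best matches the query."""
    return max(database, key=lambda reply: word_overlap(query, reply))

def generate(query, retrieved):
    """Step 2: placeholder for the RNN generator, which conditions on
    both the query and the retrieved candidate."""
    return f"reply to '{query}' informed by '{retrieved}'"

def ensemble_reply(query, database):
    """Step 3: the generated reply joins the retrieved one as a new
    candidate, and post-reranking picks the final answer."""
    retrieved = retrieve(query, database)
    generated = generate(query, retrieved)
    candidates = [retrieved, generated]
    return max(candidates, key=lambda c: word_overlap(query, c))
```

The key design point the sketch preserves is that generation sees the retrieved candidate as extra input, and reranking sees the generated reply as an extra candidate, so each subsystem can correct the other.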

2. RUBER: An unsupervised method for automatic evaluation of open-domain dialog systems

Abstract:
Open-domain human-computer conversation has been attracting increasing attention over the past few years.
However, there does not exist a standard automatic evaluation metric for open-domain dialog systems; researchers usually resort to human annotation for model evaluation, which is time- and labor-intensive.
In this paper, we propose RUBER, a Referenced metric and Unreferenced metric Blended Evaluation Routine, which evaluates a reply by taking into consideration both a ground-truth reply and a query (the previous user-issued utterance).
Our metric is learnable, but its training does not require labels of human satisfaction. Hence, RUBER is flexible and extensible to different datasets and languages. Experiments on both retrieval and generative dialog systems show that RUBER has a high correlation with human annotation, and that RUBER has fair transferability over different datasets.
Open-domain human-computer dialog has no standard automatic evaluation method.
Human evaluation is time- and labor-intensive.
What this paper does:
Builds an automatic evaluation method for human-computer dialog systems.
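The RUBER idea described above blends two signals: a referenced score (similarity between the candidate reply and the ground-truth reply) and an unreferenced score (relatedness between the reply and the query). The sketch below is a simplifying assumption: RUBER computes the referenced score from word embeddings and learns the unreferenced score with a neural network, while this toy version uses bag-of-words cosine similarity for both, just to show how the two scores are blended.

```python
import math
from collections import Counter

def cosine(u, v):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(u[w] * v[w] for w in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def bow(text):
    return Counter(text.lower().split())

def referenced_score(reply, groundtruth):
    """How close is the reply to the ground-truth reference?"""
    return cosine(bow(reply), bow(groundtruth))

def unreferenced_score(reply, query):
    """How related is the reply to the query? (RUBER trains a neural
    scorer for this; plain cosine is a stand-in.)"""
    return cosine(bow(reply), bow(query))

def ruber_style_score(reply, query, groundtruth):
    """Blend the two scores; the paper compares pooling strategies
    (min, max, means). Arithmetic mean is used here."""
    r = referenced_score(reply, groundtruth)
    u = unreferenced_score(reply, query)
    return (r + u) / 2
```

The unreferenced component is what lets the metric credit reasonable replies that happen not to resemble the single reference, which is the main weakness of reference-only metrics like BLEU in dialog evaluation.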

3. Brainstorming

Training corpora for dialog systems are mostly crawled from websites (Weibo / Baidu Tieba / Baidu Zhidao / Douban). But these corpora are messy and noisy, with many meaningless replies. Normal conversations and long dialog flows are hard to find on public websites.
Deep learning depends heavily on annotated data: there must be enough of it, and it must be high quality.
Data matters! If the data comes first and the mining comes after, there is much more to explore. Many academic research datasets differ greatly from real-world data.


Link: https://www.haomeiwen.com/subject/zavymctx.html