2. 赵东岩 (Dongyan Zhao), Peking University: natural language processing, semantic data management, question answering systems, dialog systems
h-index
![](https://img.haomeiwen.com/i6102062/8762f57098a284a7.png)
| # | Title | Citations | Year |
|---|-------|-----------|------|
| 1 | Two are better than one: An ensemble of retrieval- and generation-based dialog systems | 35 | 2016 |
| 2 | RUBER: An unsupervised method for automatic evaluation of open-domain dialog systems | 30 | 2018 |
1. Two are better than one: An ensemble of retrieval- and generation-based dialog systems
![](https://img.haomeiwen.com/i6102062/f599f653743bc4cf.png)
Abstract:
Open-domain human-computer conversation has attracted much attention in the field of NLP. Contrary to rule- or template-based domain-specific dialog systems, open-domain conversation usually requires data-driven approaches, which can be roughly divided into two categories: retrieval-based and generation-based systems.
Retrieval systems search a user-issued utterance (called a query) in a large database, and return a reply that best matches the query.
Generative approaches, typically based on recurrent neural networks (RNNs), can synthesize new replies, but they suffer from the problem of generating short, meaningless utterances.
In this paper, we propose a novel ensemble of retrieval-based and generation-based dialog systems in the open domain.
In our approach, the retrieved candidate, in addition to the original query, is fed to an RNN-based reply generator, so that the neural model is aware of more information.
The generated reply is then fed back as a new candidate for post-reranking.
Experimental results show that such an ensemble outperforms each single part of it by a large margin.
Open-domain human-computer conversation
1. Unlike rule- or template-based domain-specific dialog systems, open-domain conversation relies on data-driven approaches, which fall into two categories:
i) retrieval-based dialog systems
ii) generation-based dialog systems
2. Retrieval-based systems look up the best match for the user query in a large database and return it as the reply.
3. Generation-based systems use an RNN to synthesize new replies, but the generated utterances tend to be short and meaningless.
What this paper does:
Ensemble the retrieval and generation components.
Feed the retrieved candidate, along with the original query, into the RNN reply generator, enriching the generator's input.
Feed the generated reply back into the candidate set as a new candidate and rerank.
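The three-step pipeline above can be sketched as follows. This is a toy stand-in, not the paper's model: token overlap replaces the learned retrieval matcher and reranker, and a string template replaces the RNN generator; only the control flow (retrieve → generate conditioned on the candidate → rerank both) follows the paper.

```python
def retrieve(query, database):
    """Return the stored reply whose query best overlaps the input (toy matcher)."""
    def overlap(a, b):
        return len(set(a.split()) & set(b.split()))
    return max(database, key=lambda qr: overlap(query, qr[0]))[1]

def generate(query, retrieved_reply):
    """Stand-in for the RNN generator conditioned on the query AND the retrieved candidate."""
    return f"Regarding '{query}', perhaps: {retrieved_reply}"

def rerank(query, candidates):
    """Pick the candidate sharing the most tokens with the query (toy scorer)."""
    def score(reply):
        return len(set(query.lower().split()) & set(reply.lower().split()))
    return max(candidates, key=score)

def ensemble_reply(query, database):
    retrieved = retrieve(query, database)          # step 1: retrieval
    generated = generate(query, retrieved)         # step 2: generation aware of the candidate
    return rerank(query, [retrieved, generated])   # step 3: post-rerank over both candidates

db = [("how is the weather", "it is sunny today"),
      ("what do you like", "i like reading books")]
print(ensemble_reply("how is the weather today", db))
```

The key structural point is that the generator sees the retrieved candidate as extra input, and the reranker sees the generated reply as an extra candidate, so each component can correct the other.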
![](https://img.haomeiwen.com/i6102062/23e10b5c5b2e06ae.png)
![](https://img.haomeiwen.com/i6102062/f072d949ae40b577.png)
2. RUBER: An unsupervised method for automatic evaluation of open-domain dialog systems
![](https://img.haomeiwen.com/i6102062/5349582eb7a1801e.png)
Abstract:
Open-domain human-computer conversation has been attracting increasing attention over the past few years.
However, there does not exist a standard automatic evaluation metric for open-domain dialog systems; researchers usually resort to human annotation for model evaluation, which is time and labor intensive.
In this paper, we propose RUBER, a Referenced metric and Unreferenced metric Blended Evaluation Routine, which evaluates a reply by taking into consideration both a ground-truth reply and a query (the previous user-issued utterance).
Our metric is learnable, but its training does not require labels of human satisfaction. Hence, RUBER is flexible and extensible to different datasets and languages. Experiments on both retrieval and generative dialog systems show that RUBER has a high correlation with human annotation, and that RUBER has fair transferability over different datasets.
Open-domain human-computer dialog has no standard automatic evaluation metric.
Human evaluation is time- and labor-intensive.
What this paper does:
Build an automatic evaluation method for open-domain dialog systems.
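RUBER's core idea, blending a referenced score (reply vs. ground-truth reply) with an unreferenced score (query vs. reply), can be sketched as below. This is an illustrative simplification: the bag-of-words cosine stands in for the paper's pooled word embeddings and its trained query-reply relatedness network, and the `blend` function is a placeholder for the blending heuristics the paper examines.

```python
import math

def _bow(text):
    """Bag-of-words count vector (a stand-in for pooled word embeddings)."""
    vec = {}
    for tok in text.lower().split():
        vec[tok] = vec.get(tok, 0) + 1
    return vec

def _cosine(u, v):
    dot = sum(u[t] * v.get(t, 0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def referenced_score(reply, groundtruth):
    """Similarity between the candidate reply and the ground-truth reply."""
    return _cosine(_bow(reply), _bow(groundtruth))

def unreferenced_score(query, reply):
    """Stand-in for the trained network that rates query-reply relatedness."""
    return _cosine(_bow(query), _bow(reply))

def ruber(query, reply, groundtruth, blend=max):
    """Blend the two scores; `blend` is a placeholder for the paper's heuristics."""
    return blend(referenced_score(reply, groundtruth),
                 unreferenced_score(query, reply))
```

The point of the blend is that a reply can be good without resembling the single ground-truth reply (the unreferenced score catches this), and vice versa, which is why neither score alone correlates as well with human judgment.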
![](https://img.haomeiwen.com/i6102062/688639b6f8f5f2ba.png)
![](https://img.haomeiwen.com/i6102062/698be49bccae23e7.png)