Abstract
We propose an end-to-end dialogue model based on a hierarchical encoder-decoder that employs a discrete latent variable to learn underlying dialogue intentions. The system models both the structure of utterances, governed by the statistics of the language, and the dependencies among utterances in a dialogue, without manual dialogue state design. We argue that the discrete latent variable captures the intentions that guide machine response generation. We also propose a model that can be refined autonomously with reinforcement learning, since intention selection at each dialogue turn can be formulated as a sequential decision-making process. Our experiments show that the exact MLE-optimized model is much more robust than neural variational inference in terms of dialogue success rate, with only a limited sacrifice in BLEU.
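The abstract describes a hierarchical encoder-decoder whose discrete latent intention variable can be marginalized exactly, so the model can be trained by exact MLE rather than a variational bound. Below is a minimal PyTorch sketch of that idea; the class, layer sizes, and training details are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchicalIntentionModel(nn.Module):
    """Illustrative hierarchical encoder-decoder with a discrete latent
    intention variable (names and sizes are assumptions, not the paper's code)."""

    def __init__(self, vocab_size, emb_dim=128, hid_dim=256, n_intentions=10):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Utterance-level encoder: models the structure of single utterances.
        self.utt_encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        # Dialogue-level encoder: models dependencies among utterances.
        self.ctx_encoder = nn.GRU(hid_dim, hid_dim, batch_first=True)
        # Distribution over discrete latent intentions given the dialogue context.
        self.intention_logits = nn.Linear(hid_dim, n_intentions)
        self.intention_embed = nn.Embedding(n_intentions, hid_dim)
        # Decoder generates the response conditioned on context and intention.
        self.decoder = nn.GRU(emb_dim + hid_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, utterances, response):
        # utterances: (batch, n_turns, utt_len); response: (batch, resp_len)
        b, t, l = utterances.shape
        # Encode each utterance independently.
        _, utt_h = self.utt_encoder(self.embed(utterances.view(b * t, l)))
        utt_h = utt_h.view(b, t, -1)
        # Encode the sequence of utterance vectors into a dialogue context.
        _, ctx_h = self.ctx_encoder(utt_h)
        ctx = ctx_h.squeeze(0)                                   # (b, hid)
        # p(z | context) over the discrete intentions.
        p_z = F.softmax(self.intention_logits(ctx), dim=-1)      # (b, n_z)
        # Teacher forcing: predict response[1:] from response[:-1].
        resp_in, resp_tgt = response[:, :-1], response[:, 1:]
        resp_emb = self.embed(resp_in)                           # (b, L-1, emb)
        log_p_x_given_z = []
        for z in range(p_z.size(-1)):
            z_vec = self.intention_embed.weight[z].expand(b, -1)
            dec_in = torch.cat(
                [resp_emb, z_vec.unsqueeze(1).expand(-1, resp_emb.size(1), -1)],
                dim=-1)
            dec_out, _ = self.decoder(dec_in, ctx.unsqueeze(0).contiguous())
            logp = F.log_softmax(self.out(dec_out), dim=-1)
            tok_logp = logp.gather(-1, resp_tgt.unsqueeze(-1)).squeeze(-1)
            log_p_x_given_z.append(tok_logp.sum(-1))             # (b,)
        log_p_x_given_z = torch.stack(log_p_x_given_z, dim=-1)   # (b, n_z)
        # Exact marginal log-likelihood: log sum_z p(z) p(x|z). Tractable because
        # z is a small discrete variable, so no variational bound is needed.
        log_px = torch.logsumexp(torch.log(p_z + 1e-10) + log_p_x_given_z, dim=-1)
        return -log_px.mean()                                    # MLE training loss
```

At inference time, one could pick the most probable intention (or sample one when refining the selection policy with reinforcement learning, as the abstract suggests) and decode a response conditioned on it.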
| Original language | English |
| --- | --- |
| Pages (from-to) | 1989-2002 |
| Number of pages | 14 |
| Journal | World Wide Web |
| Volume | 23 |
| Issue number | 3 |
| Early online date | 7 Jun 2019 |
| DOIs | |
| Publication status | Published - May 2020 |
| Externally published | Yes |
Funding
This work was supported by the Shenzhen Science and Technology Innovation Committee under the Intelligent Question Answering Robot project, grant No. CKCY20170508121036342.
Keywords
- Dialogue intention
- Dialogue model
- Hierarchical encoder-decoder
- Log-likelihood optimization