End-to-end latent-variable task-oriented dialogue system with exact log-likelihood optimization

Haotian XU, Haiyun PENG, Haoran XIE, Erik CAMBRIA, Liuyang ZHOU*, Weiguo ZHENG

*Corresponding author for this work

Research output: Journal Publications › Journal Article (refereed) › peer-review

23 Citations (Scopus)

Abstract

We propose an end-to-end dialogue model based on a hierarchical encoder-decoder that employs a discrete latent variable to learn underlying dialogue intentions. The system is able to model the structure of utterances, governed by the statistics of the language, and the dependencies among utterances in a dialogue without manual dialogue state design. We argue that the discrete latent variable interprets the intentions that guide the generation of machine responses. We also propose a model that can be refined autonomously with reinforcement learning, because intention selection at each dialogue turn can be formulated as a sequential decision-making process. Our experiments show that the model optimized with exact maximum likelihood estimation (MLE) is much more robust than one trained with neural variational inference in terms of dialogue success rate, with only a limited sacrifice in BLEU.
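The contrast drawn in the abstract is between optimizing an exact marginal log-likelihood over a discrete latent intention and optimizing a variational lower bound. The sketch below, which is not the authors' code, illustrates how such an exact objective can be computed when the latent variable takes a small number of discrete values: the decoder likelihood is evaluated once per intention and combined with the intention prior via a log-sum-exp. All names (K_INTENTIONS, context_enc, prior_net, decoder, out_proj) and the simplified single-GRU context encoder are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

# Minimal sketch (assumed, not the authors' implementation): exact log-likelihood
# for a discrete latent "intention" z with K values, marginalized in closed form
# instead of being approximated by a variational bound.

K_INTENTIONS = 10   # assumed number of latent dialogue intentions
HIDDEN = 128
VOCAB = 1000

context_enc = nn.GRU(HIDDEN, HIDDEN, batch_first=True)        # simplified dialogue-context encoder
prior_net = nn.Linear(HIDDEN, K_INTENTIONS)                    # p(z | context)
decoder = nn.GRU(HIDDEN + K_INTENTIONS, HIDDEN, batch_first=True)
out_proj = nn.Linear(HIDDEN, VOCAB)                            # token logits

def exact_log_likelihood(context_emb, response_emb, response_ids):
    """log p(response | context) = logsumexp_z [ log p(z | c) + log p(response | z, c) ].

    context_emb:  (B, T_c, HIDDEN) embedded context tokens
    response_emb: (B, T_r, HIDDEN) embedded response tokens (teacher forcing; shifting omitted)
    response_ids: (B, T_r) target token ids
    """
    _, h = context_enc(context_emb)                            # (1, B, H) context summary
    log_prior = torch.log_softmax(prior_net(h[-1]), dim=-1)    # (B, K) log p(z | c)

    per_z = []
    for z in range(K_INTENTIONS):
        # Condition the decoder on a one-hot encoding of the current intention z.
        z_onehot = torch.zeros(response_emb.size(0), response_emb.size(1), K_INTENTIONS)
        z_onehot[..., z] = 1.0
        dec_in = torch.cat([response_emb, z_onehot], dim=-1)
        dec_out, _ = decoder(dec_in, h)
        log_probs = torch.log_softmax(out_proj(dec_out), dim=-1)              # (B, T_r, V)
        tok_ll = log_probs.gather(-1, response_ids.unsqueeze(-1)).squeeze(-1)  # (B, T_r)
        per_z.append(tok_ll.sum(dim=-1))                       # log p(response | z, c)

    joint = torch.stack(per_z, dim=-1) + log_prior             # (B, K)
    return torch.logsumexp(joint, dim=-1)                      # exact marginal log-likelihood

# Training would maximize exact_log_likelihood(...).mean() directly, rather than a
# variational lower bound, which is the distinction the abstract draws.
```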

Original language: English
Pages (from-to): 1989-2002
Number of pages: 14
Journal: World Wide Web
Volume: 23
Issue number: 3
Early online date: 7 Jun 2019
DOIs
Publication status: Published - May 2020
Externally published: Yes

Funding

This work was supported by the Shenzhen Science and Technology Innovation Committee under the project "Intelligent Question Answering Robot", grant no. CKCY20170508121036342.

Keywords

  • Dialogue intention
  • Dialogue model
  • Hierarchical encoder-decoder
  • Log-likelihood optimization
