
Encoding Word Confusion Networks with Recurrent Neural Networks for Dialog State Tracking

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published
Publication date: 7/09/2017
Host publication: Proceedings of the Workshop on Speech-Centric Natural Language Processing
Place of publication: Stroudsburg, PA
Publisher: Association for Computational Linguistics
Pages: 10-17
Number of pages: 8
ISBN (print): 9781945626920
Original language: English
Event: Speech-Centric Natural Language Processing Workshop, co-located with EMNLP 2017 - Copenhagen, Denmark
Duration: 7/09/2017 → …
https://speechnlp.github.io/2017/

Conference

Conference: Speech-Centric Natural Language Processing Workshop, co-located with EMNLP 2017
Abbreviated title: SCNLP
Country/Territory: Denmark
City: Copenhagen
Period: 7/09/17 → …
Internet address: https://speechnlp.github.io/2017/

Abstract

This paper presents a novel method for encoding word confusion networks, which represent a rich hypothesis space of automatic speech recognition systems, with recurrent neural networks. We demonstrate the utility of this approach for dialog state tracking in spoken dialog systems that rely on automatic speech recognition output. Encoding confusion networks outperforms encoding only the recognizer's best hypothesis in a neural dialog state tracker on the well-known second Dialog State Tracking Challenge dataset.
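
The record does not include code, but the core idea named in the abstract, summarizing each confusion-network time slot as a combination of its word hypotheses weighted by their posterior probabilities and feeding the slot sequence to a recurrent encoder, can be illustrated with a minimal sketch. The sketch below is written in PyTorch; the weighted-sum pooling, the GRU, and all hyperparameters are illustrative assumptions rather than the authors' exact architecture.

import torch
import torch.nn as nn


class ConfusionNetworkEncoder(nn.Module):
    """Encode a word confusion network (WCN) with a recurrent network.

    Each WCN time slot holds alternative word hypotheses with posterior
    probabilities. Here a slot is summarized as the posterior-weighted sum
    of its word embeddings before the slot sequence is passed to a GRU.
    This is an assumed, simplified design, not the paper's exact model.
    """

    def __init__(self, vocab_size: int, emb_dim: int = 100, hidden_dim: int = 128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden_dim, batch_first=True)

    def forward(self, word_ids: torch.Tensor, posteriors: torch.Tensor) -> torch.Tensor:
        # word_ids:   (batch, num_slots, num_alternatives) hypothesis word indices
        # posteriors: (batch, num_slots, num_alternatives) arc posterior probabilities
        emb = self.embedding(word_ids)                       # (B, T, A, E)
        slot_vec = (posteriors.unsqueeze(-1) * emb).sum(2)   # weighted sum per slot -> (B, T, E)
        _, hidden = self.rnn(slot_vec)                       # final hidden state summarizes the WCN
        return hidden.squeeze(0)                             # (B, hidden_dim)


# Tiny usage example: a single confusion network with two slots,
# padded to three alternatives per slot (index 0 as padding).
if __name__ == "__main__":
    enc = ConfusionNetworkEncoder(vocab_size=1000)
    ids = torch.tensor([[[12, 47, 0], [311, 0, 0]]])
    probs = torch.tensor([[[0.7, 0.2, 0.1], [0.9, 0.1, 0.0]]])
    state = enc(ids, probs)
    print(state.shape)  # torch.Size([1, 128])

In a full dialog state tracker, the resulting encoding would feed the downstream state-tracking classifier in place of an encoding of the single best ASR hypothesis.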