A Graphical Digital Personal Assistant That Grounds and Learns Autonomously

Casey Kennington, Aprajita Shukla

Research output: Chapter in Book/Report/Conference proceeding › Chapter

Abstract

We present a speech-driven digital personal assistant that is robust despite little or no training data and autonomously improves as it interacts with users. The system is able to establish and build common ground between itself and users by signaling understanding and by learning, through interaction, a mapping between the words that users actually speak and the system's actions. We evaluated our system with real users and found an overall positive response. We further show through objective measures that autonomous learning improves performance in a simple itinerary-filling task.
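The word-to-action mapping described in the abstract could, in a minimal sketch, be a co-occurrence learner that is updated whenever a user confirms an action. This is an illustrative assumption only; the class, method names, and action labels below are hypothetical and do not reproduce the paper's actual model:

```python
from collections import defaultdict

class GroundedActionLearner:
    """Hypothetical sketch: incrementally learn word -> action associations
    from confirmed user interactions (the paper's model is not reproduced here)."""

    def __init__(self):
        # counts[word][action] = how often `word` co-occurred with `action`
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, utterance, confirmed_action):
        # After the user confirms an action (grounding), credit each word in
        # the utterance with that action.
        for word in utterance.lower().split():
            self.counts[word][confirmed_action] += 1

    def predict(self, utterance):
        # Score each known action by the summed normalized association
        # strengths of the words in the utterance.
        scores = defaultdict(float)
        for word in utterance.lower().split():
            total = sum(self.counts[word].values())
            for action, c in self.counts[word].items():
                scores[action] += c / total
        return max(scores, key=scores.get) if scores else None

learner = GroundedActionLearner()
learner.observe("book a flight to berlin", "add_flight")
learner.observe("book a hotel", "add_hotel")
print(learner.predict("flight to berlin"))  # → add_flight
```

Because every confirmed interaction updates the counts, the mapping improves autonomously with use, mirroring the paper's claim that performance improves over interactions.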

Original language: American English
Title of host publication: HAI '17 Proceedings of the 5th International Conference on Human Agent Interaction
Pages: 353-357
Number of pages: 5
ISBN (Electronic): 9781450351133
DOIs
State: Published - 1 Jan 2017
Event: 5th International Conference on Human Agent Interaction, HAI 2017 - Bielefeld, Germany
Duration: 17 Oct 2017 - 20 Oct 2017

Publication series

Name: HAI 2017 - Proceedings of the 5th International Conference on Human Agent Interaction

Conference

Conference: 5th International Conference on Human Agent Interaction, HAI 2017
Country/Territory: Germany
City: Bielefeld
Period: 17/10/17 - 20/10/17

Keywords

  • grounding
  • interactive dialogue
  • personal assistant

EGS Disciplines

  • Computer Sciences
