Placing Objects in Gesture Space: Toward Incremental Interpretation of Multimodal Spatial Descriptions

Ting Han, Casey Kennington, David Schlangen

Research output: Chapter in Book/Report/Conference proceeding › Chapter

Abstract

When describing routes not in the current environment, a common strategy is to anchor the description in configurations of salient landmarks, complementing the verbal description by “placing” the non-visible landmarks in the gesture space. Understanding such multimodal descriptions and later locating the landmarks in the real world is a challenging task for the hearer, who must interpret speech and gestures in parallel, fuse information from both modalities, build a mental representation of the description, and ground the knowledge to real-world landmarks. In this paper, we model the hearer’s task, using a multimodal spatial description corpus we collected. To reduce the variability of verbal descriptions, we simplified the setup to use simple objects as landmarks. We describe a real-time system to evaluate the separate and joint contributions of the modalities. We show that gestures not only improve overall system performance, even though they largely encode redundant information, but also lead to earlier final correct interpretations. Being able to build and apply representations incrementally will be of use in more dialogical settings, we argue, where it can enable immediate clarification in cases of mismatch.

Original language: American English
Title of host publication: The Thirty-Second AAAI Conference on Artificial Intelligence: The Thirtieth Innovative Applications of Artificial Intelligence
State: Published - 1 Jan 2018

Keywords

  • abstract deixis
  • co-verbal gestures
  • incremental processing
  • multimodal interface
  • real time system

EGS Disciplines

  • Computer Sciences
