Composing and Embedding the Words-as-Classifiers Model of Grounded Semantics

Daniele Moro, Stacy Black, Casey Kennington

Research output: Working paper / Preprint

Abstract

The words-as-classifiers model of grounded lexical semantics learns a semantic fitness score between physical entities and the words that are used to denote those entities. In this paper, we explore how such a model can incrementally perform composition and how the model can be unified with a distributional representation. For the latter, we leverage the classifier coefficients as an embedding. For composition, we leverage the underlying mechanics of three different classifier types (i.e., logistic regression, decision trees, and multilayer perceptrons) to arrive at several systematic approaches to composition unique to each classifier, including both denotational and connotational methods of composition. We compare these approaches to each other and to prior work in a visual reference resolution task using the refCOCO dataset. Our results demonstrate the need to expand upon existing composition strategies and to bring together grounded and distributional representations.
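To make the core idea concrete, the following is a minimal sketch of the words-as-classifiers setup using logistic regression: one binary classifier per word trained on an entity's visual feature vector, the classifier's probability as the fitness score, a simple product over words as one denotational composition strategy, and the classifier coefficients reused as an embedding. The feature vectors, words, and the product-based composition here are illustrative assumptions, not the paper's actual features or full set of composition methods.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy visual features (e.g., color/shape descriptors) for positive and
# negative example entities of each word. These synthetic distributions
# are invented for illustration only.
data = {
    "red":  (rng.normal(1.0, 0.3, (20, 4)), rng.normal(-1.0, 0.3, (20, 4))),
    "ball": (rng.normal(0.5, 0.3, (20, 4)), rng.normal(-0.5, 0.3, (20, 4))),
}

# One logistic-regression classifier per word.
classifiers = {}
for word, (pos, neg) in data.items():
    X = np.vstack([pos, neg])
    y = np.array([1] * len(pos) + [0] * len(neg))
    classifiers[word] = LogisticRegression().fit(X, y)

def fitness(word, entity):
    """Semantic fitness: probability the word's classifier assigns to the entity."""
    return classifiers[word].predict_proba(entity.reshape(1, -1))[0, 1]

def compose_product(words, entity):
    """One simple denotational composition: multiply per-word fitness scores."""
    return float(np.prod([fitness(w, entity) for w in words]))

def embedding(word):
    """Classifier coefficients reused as a distributional-style word embedding."""
    return classifiers[word].coef_.ravel()

entity = rng.normal(1.0, 0.3, 4)            # an entity resembling "red" positives
score = compose_product(["red", "ball"], entity)
emb = embedding("red")                       # 4-dimensional coefficient vector
```

In a reference resolution task, `compose_product` would be applied to every candidate entity in a scene and the referent chosen by the highest composed score; the paper compares classifier-specific composition strategies beyond this simple product.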
Original language: American English
State: Published - 8 Nov 2019

EGS Disciplines

  • Computer Sciences
