TY - GEN
T1 - Predicting Human Interpretations of Affect and Valence in a Social Robot
AU - McNeill, David
AU - Kennington, Casey
N1 - Publisher Copyright:
© 2019, Robotics: Science and Systems. All rights reserved.
PY - 2019
Y1 - 2019
N2 - In this paper we seek to understand how people interpret a social robot’s performance of an emotion, what we term ‘affective display,’ and the positive or negative valence of that affect. To this end, we tasked annotators with observing the Anki Cozmo robot perform its more than 900 pre-scripted behaviors and labeling those behaviors using 16 possible affective display labels (e.g., interest, boredom, disgust). In our first experiment, we trained a neural network to predict the annotated labels given multimodal information about the robot’s movement, face, and audio. The results suggest that pairing affects and predicting the valence between them is more informative, which we confirmed in a second experiment. Both experiments show that certain modalities are more useful than others for predicting displays of affect and valence. For our final experiment, we generated novel robot behaviors and tasked human raters with assigning scores to valence pairs instead of applying labels, then compared our model’s predictions of valence between the affective pairs to the human ratings. We conclude that some modalities carry information that can be contributory or inhibitive when considered in conjunction with other modalities, depending on the emotional valence pair being considered.
AB - In this paper we seek to understand how people interpret a social robot’s performance of an emotion, what we term ‘affective display,’ and the positive or negative valence of that affect. To this end, we tasked annotators with observing the Anki Cozmo robot perform its more than 900 pre-scripted behaviors and labeling those behaviors using 16 possible affective display labels (e.g., interest, boredom, disgust). In our first experiment, we trained a neural network to predict the annotated labels given multimodal information about the robot’s movement, face, and audio. The results suggest that pairing affects and predicting the valence between them is more informative, which we confirmed in a second experiment. Both experiments show that certain modalities are more useful than others for predicting displays of affect and valence. For our final experiment, we generated novel robot behaviors and tasked human raters with assigning scores to valence pairs instead of applying labels, then compared our model’s predictions of valence between the affective pairs to the human ratings. We conclude that some modalities carry information that can be contributory or inhibitive when considered in conjunction with other modalities, depending on the emotional valence pair being considered.
UR - http://www.scopus.com/inward/record.url?scp=85108218958&partnerID=8YFLogxK
UR - https://scholarworks.boisestate.edu/cs_facpubs/315
U2 - 10.15607/RSS.2019.XV.041
DO - 10.15607/RSS.2019.XV.041
M3 - Conference contribution
SN - 9780992374754
T3 - Robotics: Science and Systems
BT - Proceedings of Robotics: Science and Systems
A2 - Bicchi, Antonio
A2 - Kress-Gazit, Hadas
A2 - Hutchinson, Seth
T2 - 15th Robotics: Science and Systems, RSS 2019
Y2 - 22 June 2019 through 26 June 2019
ER -