TY - JOUR
T1 - “My Name is Alexa. What’s Your Name?” The Impact of Reciprocal Self-Disclosure on Post-Interaction Trust in Conversational Agents
AU - Saffarizadeh, Kambiz
AU - Keil, Mark
AU - Boodraj, Maheshwar
AU - Alashoor, Tawfiq
PY - 2024/1/1
Y1 - 2024/1/1
N2 - The use of conversational AI agents (CAs), such as Alexa and Siri, has steadily increased over the past several years. However, the functionality of these agents relies on the personal data obtained from their users. While evidence suggests that user disclosure can be increased through reciprocal self-disclosure (i.e., a process in which a CA discloses information about itself with the expectation that the user will reciprocate by disclosing similar information about themself), it is not clear whether and through which mechanism the process of reciprocal self-disclosure influences users’ post-interaction trust. We theorize that anthropomorphism (i.e., the extent to which a user attributes humanlike attributes to a nonhuman entity) serves as an inductive inference mechanism for understanding reciprocal self-disclosure, enabling users to build conceptually distinct cognitive and affective foundations upon which to form their post-interaction trust. We found strong support for our theory through two randomized experiments that used custom-developed text-based and voice-based CAs. Specifically, we found that reciprocal self-disclosure increases anthropomorphism, and that anthropomorphism in turn increases cognition-based trustworthiness and affect-based trustworthiness. Our results show that reciprocal self-disclosure has an indirect effect on cognition-based trustworthiness and affect-based trustworthiness that is fully mediated by anthropomorphism. These findings conceptually bridge prior research on motivations of anthropomorphism and research on cognitive and affective bases of trust.
AB - The use of conversational AI agents (CAs), such as Alexa and Siri, has steadily increased over the past several years. However, the functionality of these agents relies on the personal data obtained from their users. While evidence suggests that user disclosure can be increased through reciprocal self-disclosure (i.e., a process in which a CA discloses information about itself with the expectation that the user will reciprocate by disclosing similar information about themself), it is not clear whether and through which mechanism the process of reciprocal self-disclosure influences users’ post-interaction trust. We theorize that anthropomorphism (i.e., the extent to which a user attributes humanlike attributes to a nonhuman entity) serves as an inductive inference mechanism for understanding reciprocal self-disclosure, enabling users to build conceptually distinct cognitive and affective foundations upon which to form their post-interaction trust. We found strong support for our theory through two randomized experiments that used custom-developed text-based and voice-based CAs. Specifically, we found that reciprocal self-disclosure increases anthropomorphism, and that anthropomorphism in turn increases cognition-based trustworthiness and affect-based trustworthiness. Our results show that reciprocal self-disclosure has an indirect effect on cognition-based trustworthiness and affect-based trustworthiness that is fully mediated by anthropomorphism. These findings conceptually bridge prior research on motivations of anthropomorphism and research on cognitive and affective bases of trust.
KW - AI Agent
KW - Affect-Based Trust
KW - Anthropomorphism
KW - Chatbot
KW - Cognition-Based Trust
KW - Conversational AI
KW - Reciprocal Self-Disclosure
UR - https://aisel.aisnet.org/jais/vol25/iss3/9
U2 - 10.17705/1jais.00839
DO - 10.17705/1jais.00839
M3 - Article
JO - Journal of the Association for Information Systems
JF - Journal of the Association for Information Systems
ER -