Mental models and expectation violations in conversational AI interactions

Authors:

Highlights:

• Better conversational capabilities result in more positive CA evaluations.

• People have higher expectations when told an agent is human rather than a computer.

• Violating expectations impacts evaluations more strongly than meeting expectations.

• Set low expectations so users will be less impacted by negative violations.

Abstract

Artificial intelligence is increasingly integrated into many aspects of human life. One prominent form of AI is the conversational agent (CA), such as Siri, Alexa, and the chatbots used for customer service on websites and other information systems. It is widely accepted that humans treat systems as social actors. Leveraging this tendency, companies sometimes attempt to pass off a CA as a human customer service representative. Beyond the ethical and legal questions this practice raises, the benefits and drawbacks of a CA pretending to be human remain unclear due to a lack of study. While more human-like interactions can improve outcomes, users who discover that the CA is not human may react negatively, potentially harming the company's reputation. In this research we use Expectation Violation Theory to explain what happens when users bring high or low expectations to a conversation. We conducted an experiment with 175 participants in which some were told they were interacting with a CA while others were told they were interacting with a human. We further divided these groups so that some participants interacted with a CA of low conversational capability while others interacted with one of high conversational capability. The results show that the expectations users form before an interaction change how they evaluate the CA, beyond the CA's actual performance. These findings provide guidance not only to developers of conversational agents, but also for other technologies where users may be uncertain of a system's capabilities.

Keywords: Conversational AI, Chatbots, Conversational agents, Engagement

Article history: Received 30 June 2020, Revised 29 January 2021, Accepted 30 January 2021, Available online 13 February 2021, Version of Record 25 March 2021.

DOI: https://doi.org/10.1016/j.dss.2021.113515