Why Emotions Should Be Integrated into Conversational Agents
When building conversational agents that take part in social interaction with humans, an important question is whether psychological concepts such as emotions or personality need to be incorporated into the agents. In this chapter we argue for integrating an emotion system into a conversational agent so that it can simulate having emotions of its own. We first clarify the concept of emotions and discuss different approaches to modeling emotions and personality in artificial systems. Drawing on our work on the multimodal conversational agent Max, we present motives for integrating emotions as an integral part of an agent's
cognitive architecture. Our approach combines different psychological emotion theories and
distinguishes between primary and secondary emotions, which originate at different levels of this architecture (see the sketch following this section).
of this architecture. Exemplary application scenarios are described to show how the agent’s believability can be increased by the integration of emotions. In a cooperative setting, Max is employed as a virtual interactive guide in a public computer museum, where his emotion module enhances his acceptance as a coequal conversational partner. We further quote an empirical study that yields evidence that the same emotion module supports the believability and lifelikeness of the agent in a competitive gaming scenario.