AI chatbot responds to emotional cuing
Yukun Zhao 1,a; Liying Xu 2,a; Zhen Huang 1; Kaiping Peng 3; Martin Seligman 4; Evelyn Li 5; Feng Yu 6,*

1 Positive Psychology Research Center, School of Social Sciences, Tsinghua University, China
2 School of Marxism, Tsinghua University, China
3 Department of Psychology, Tsinghua University, China
4 Department of Psychology, University of Pennsylvania
5 Department of Psychology, Cornell University
6 Department of Psychology, Wuhan University, China

a Co-first author
* Corresponding author
Abstract
Emotion has long been considered to distinguish humans from Artificial Intelligence (AI).
Previously, AI's ability to interpret and express emotions was seen as mere text interpretation. In
humans, emotions coordinate a suite of behavioral actions, e.g., becoming risk-averse under
negative emotion or generous under positive emotion. We therefore investigated whether AI
chatbots show such coordination in response to emotional cues. We treated AI chatbots like human
participants, prompting them with scenarios that prime positive emotions, negative emotions, or no
emotions. Multiple OpenAI ChatGPT Plus accounts answered questions on investment decisions
and prosocial tendencies. We found that ChatGPT-4 bots primed with positive emotions, negative
emotions, and no emotions exhibited different risk-taking and prosocial actions. These effects were
weaker among ChatGPT-3.5 bots. The ability to coordinate responses with emotional cues may
have become stronger in large language models as they evolved. This highlights the potential of
influencing AI using emotion and suggests that complex AI possesses a necessary capacity for
"having" emotion.
Introduction
Assessing the capabilities of Artificial Intelligence (AI) has been an important research direction
since the inception of AI, and it became more urgent after large language models, especially GPT,
attracted popular attention (Bubeck et al., 2023). Most research focuses on cognitive capabilities,
such as reasoning (Dasgupta et al., 2022), induction (Han et al., 2022), and creativity (Stevenson
et al., 2022; Uludag, 2023). Recently, Bubeck et al. (2023) conducted a wide range of tests on GPT-
4, the latest model developed by OpenAI, exploring its mathematical abilities, multimodal
capabilities, tool usage, and coding. They also examined the model's ability to interact with humans,
primarily focusing on its theory of mind and its ability to explain its own behaviors. The assessment
of AI's emotional capacity has been less researched.
It has long been debated whether AI can have emotions (e.g., Picard, 1997; Minsky, 2007). Some
argue that since emotions are neural activities in the brain, AI that learns to imitate these internal
mechanisms can have emotions; others argue that emotions require physiological reactions, which
are unlikely for AIs (Martínez-Miranda & Aldea, 2005; Minsky, 2007). AI has been able to
interpret and express emotions, but these abilities are generally regarded as mere text interpretation
and imitation (Megill, 2014).
While there is no consensus on an operational definition of AI emotion, our research focuses on
whether AI behaves as if it possesses human emotion. Human emotions serve two main functions:
interpersonal and intrapersonal (Shiota & Kalat, 2018). At the interpersonal level, emotions help