Publication Type
Journal Article
Version
acceptedVersion
Publication Date
9-2025
Abstract
The integration of conversational artificial intelligence (AI) into mental health care promises a new horizon for therapist-client interactions, aiming to closely emulate the depth and nuance of human conversations. Despite this potential, the current landscape of conversational AI is markedly limited by its reliance on single-modal data, constraining the systems’ ability to empathize and provide effective emotional support. This limitation stems from a paucity of resources that encapsulate the multimodal nature of human communication essential for therapeutic counseling. To address this gap, we introduce the Multimodal Emotional Support Conversation (MESC) dataset, a first-of-its-kind resource enriched with comprehensive annotations across text, audio, and video modalities. This dataset captures the intricate interplay of user emotions, system strategies, system emotions, and system responses, setting a new precedent in the field. Leveraging the MESC dataset, we propose a general Sequential Multimodal Emotional Support framework (SMES) grounded in Therapeutic Skills Theory. Tailored for multimodal dialogue systems, the SMES framework incorporates an LLM-based reasoning model that sequentially performs user emotion recognition, system strategy prediction, system emotion prediction, and response generation. Our rigorous evaluations demonstrate that this framework significantly enhances the capability of AI systems to mimic therapist behaviors with heightened empathy and strategic responsiveness. By integrating multimodal data in this innovative manner, we bridge the critical gap between emotion recognition and emotional support, marking a significant advancement in conversational AI for mental health support. This work not only pushes the boundaries of AI’s role in mental health care but also establishes a foundation for developing conversational agents that can provide more empathetic and effective emotional support.
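(Illustrative sketch) The four-stage sequential reasoning described in the abstract could be wired up roughly as follows. This is a minimal, hypothetical sketch, not the authors' implementation: call_llm stands in for any LLM backend, smes_turn is an invented name, and the audio/video inputs are assumed to arrive as pre-extracted textual cue descriptions; the paper's actual prompts and multimodal feature extraction are not reproduced here.

def call_llm(prompt: str) -> str:
    """Placeholder for an LLM call; replace with a real backend."""
    raise NotImplementedError

def smes_turn(dialogue_history: str, audio_cues: str, video_cues: str) -> dict:
    context = (
        f"Dialogue so far:\n{dialogue_history}\n"
        f"Vocal cues: {audio_cues}\nVisual cues: {video_cues}\n"
    )
    # Stage 1: recognize the user's emotion from all three modalities.
    user_emotion = call_llm(context + "Identify the user's current emotion.")
    # Stage 2: choose a support strategy conditioned on that emotion.
    strategy = call_llm(
        context + f"User emotion: {user_emotion}\n"
        "Select an appropriate emotional-support strategy."
    )
    # Stage 3: decide what emotion the system itself should convey.
    system_emotion = call_llm(
        context + f"User emotion: {user_emotion}\nStrategy: {strategy}\n"
        "What emotion should the supporter express?"
    )
    # Stage 4: generate the response, conditioned on all prior outputs.
    response = call_llm(
        context + f"User emotion: {user_emotion}\nStrategy: {strategy}\n"
        f"Supporter emotion: {system_emotion}\n"
        "Write the supporter's next utterance."
    )
    return {"user_emotion": user_emotion, "strategy": strategy,
            "system_emotion": system_emotion, "response": response}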
Keywords
Multimodality, Emotional support conversation
Discipline
Artificial Intelligence and Robotics | Graphics and Human Computer Interfaces
Research Areas
Intelligent Systems and Optimization
Areas of Excellence
Digital transformation
Publication
IEEE Transactions on Multimedia
Volume
27
First Page
8276
Last Page
8287
ISSN
1520-9210
Identifier
10.1109/TMM.2025.3604951
Publisher
Institute of Electrical and Electronics Engineers
Citation
CHU, Yuqi; LIAO, Lizi; ZHOU, Zhiyuan; NGO, Chong-wah; and HONG, Richang.
Towards multimodal emotional support conversation systems. (2025). IEEE Transactions on Multimedia. 27, 8276-8287.
Available at: https://ink.library.smu.edu.sg/sis_research/10618
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Additional URL
https://doi.org/10.1109/TMM.2025.3604951
Included in
Artificial Intelligence and Robotics Commons, Graphics and Human Computer Interfaces Commons