DialogueLLM: Context and emotion knowledge-tuned large language models for emotion recognition in conversations.

Bibliographic Details
Title: DialogueLLM: Context and emotion knowledge-tuned large language models for emotion recognition in conversations.
Authors: Zhang Y (College of Intelligence and Computing, Tianjin University, Tianjin, China; School of Nursing, The Hong Kong Polytechnic University, Hong Kong; yzzhang@zzuli.edu.cn); Wang M (Software Engineering College, Zhengzhou University of Light Industry, Zhengzhou, China; wangmengyao516@outlook.com); Wu Y (School of Artificial Intelligence, Hebei University of Technology, Tianjin, China; wuc@scse.hebut.edu.cn); Tiwari P (School of Information Technology, Halmstad University, Sweden; prayag.tiwari@ieee.org); Li Q (Department of Computer Science, University of Copenhagen, Denmark; qiuchi.li@di.ku.dk); Wang B (School of Data Science, The Chinese University of Hong Kong, Shenzhen, China); Qin J (School of Nursing, The Hong Kong Polytechnic University, Hong Kong; harry.qin@polyu.edu.hk)
Source: Neural networks : the official journal of the International Neural Network Society [Neural Netw] 2025 Dec; Vol. 192, pp. 107901. Date of Electronic Publication: 2025 Jul 23.
Publication Type: Journal Article
Language: English
Journal Info: Publisher: Pergamon Press. Country of Publication: United States. NLM ID: 8805018. Publication Model: Print-Electronic. Cited Medium: Internet. ISSN: 1879-2782 (Electronic). Linking ISSN: 0893-6080. NLM ISO Abbreviation: Neural Netw. Subsets: MEDLINE.
Imprint Name(s): Original Publication: New York : Pergamon Press, [c1988-
MeSH Terms: Emotions*/physiology; Language*; Natural Language Processing*; Recognition, Psychology*/physiology; Neural Networks, Computer*; Humans; Large Language Models
Competing Interests: The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Abstract: Large language models (LLMs) and their variants have shown extraordinary efficacy across numerous downstream natural language processing tasks. Despite their remarkable performance in natural language generation, however, LLMs lack a distinct focus on emotion understanding, so using them for emotion recognition may yield suboptimal and inadequate precision. Another limitation of current LLMs is that they are typically trained without leveraging multi-modal information. To overcome these limitations, we formally model emotion recognition as a text generation task and propose DialogueLLM, a context- and emotion-knowledge-tuned LLM obtained by fine-tuning foundation large language models. In particular, it is a context-aware model that accurately captures the dynamics of emotions throughout a dialogue. We also prompt ERNIE Bot with expert-designed prompts to generate textual descriptions of the videos. To support the training of emotional LLMs, we create a large-scale dataset of over 24K utterances to serve as a knowledge corpus. Finally, we offer a comprehensive evaluation of DialogueLLM on three benchmark datasets, where it significantly outperforms 15 state-of-the-art baselines and 3 state-of-the-art LLMs. On an emotional intelligence test, DialogueLLM achieves a score of 109, surpassing 72% of humans. Additionally, DialogueLLM-7B can be easily reproduced using LoRA on a 40 GB A100 GPU in 5 hours.
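The abstract describes casting emotion recognition in conversations as text generation and fine-tuning a 7B foundation model with LoRA. The sketch below illustrates that general setup; it is not the authors' released code, and the base model name, prompt template, label set, and LoRA hyperparameters are assumptions rather than details taken from the paper.
```python
# Hypothetical sketch of a DialogueLLM-style LoRA fine-tune.
# All specific choices below (base model, prompt format, LoRA settings)
# are illustrative assumptions, not the paper's actual configuration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

BASE_MODEL = "meta-llama/Llama-2-7b-hf"  # assumed 7B foundation model

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL, torch_dtype=torch.float16)

# Attach LoRA adapters to the attention projections (common defaults).
lora_cfg = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only adapter weights are trainable

def build_prompt(context_utterances, target_utterance, labels):
    """Cast emotion recognition as text generation: the model is asked
    to emit the emotion label as the continuation of the prompt."""
    history = "\n".join(f"- {u}" for u in context_utterances)
    return (
        "Conversation so far:\n"
        f"{history}\n"
        f"Target utterance: {target_utterance}\n"
        f"Choose one emotion from {labels}.\nEmotion:"
    )

prompt = build_prompt(
    ["A: I got the job!", "B: No way, congratulations!"],
    "A: I still can't believe it.",
    ["joy", "sadness", "anger", "surprise", "fear", "disgust", "neutral"],
)
inputs = tokenizer(prompt, return_tensors="pt")
# Training would pair such prompts with gold labels as target tokens,
# applying the standard causal-LM loss only to the label continuation.
```
Because only the low-rank adapter weights are updated, a fine-tune of this kind fits on a single 40 GB A100, consistent with the reproduction cost reported in the abstract.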
(Copyright © 2025 Elsevier Ltd. All rights reserved.)
Contributed Indexing: Keywords: Context modeling; Emotion recognition; Large language models; Natural language processing
Entry Date(s): Date Created: 20250802 Date Completed: 20251122 Latest Revision: 20251122
Update Code: 20251122
DOI: 10.1016/j.neunet.2025.107901
PMID: 40752409
Database: MEDLINE