DialogueLLM: Context and emotion knowledge-tuned large language models for emotion recognition in conversations.

Saved in:
Detailed bibliography
Title: DialogueLLM: Context and emotion knowledge-tuned large language models for emotion recognition in conversations.
Authors: Zhang Y; College of Intelligence and Computing, Tianjin University, Tianjin, China; School of Nursing, The Hong Kong Polytechnic University, Hong Kong. Electronic address: yzzhang@zzuli.edu.cn., Wang M; Software Engineering College, Zhengzhou University of Light Industry, Zhengzhou, China. Electronic address: wangmengyao516@outlook.com., Wu Y; School of Artificial Intelligence, Hebei University of Technology, Tianjin, China. Electronic address: wuc@scse.hebut.edu.cn., Tiwari P; School of Information Technology, Halmstad University, Sweden. Electronic address: prayag.tiwari@ieee.org., Li Q; Department of Computer Science, University of Copenhagen, Denmark. Electronic address: qiuchi.li@di.ku.dk., Wang B; School of Data Science, The Chinese University of Hong Kong, Shenzhen, China., Qin J; School of Nursing, The Hong Kong Polytechnic University, Hong Kong. Electronic address: harry.qin@polyu.edu.hk.
Source: Neural networks : the official journal of the International Neural Network Society [Neural Netw] 2025 Dec; Vol. 192, pp. 107901. Date of Electronic Publication: 2025 Jul 23.
Publication Type: Journal Article
Language: English
Journal Information: Publisher: Pergamon Press Country of Publication: United States NLM ID: 8805018 Publication Model: Print-Electronic Cited Medium: Internet ISSN: 1879-2782 (Electronic) Linking ISSN: 08936080 NLM ISO Abbreviation: Neural Netw Subsets: MEDLINE
Imprint Name(s): Original Publication: New York : Pergamon Press, [c1988-
MeSH Terms: Emotions*/physiology; Language*; Natural Language Processing*; Recognition, Psychology*/physiology; Neural Networks, Computer*; Humans; Large Language Models
Abstract: Competing Interests: The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Large language models (LLMs) and their variants have shown extraordinary efficacy across numerous downstream natural language processing tasks. Despite their remarkable performance in natural language generation, LLMs lack a distinct focus on emotion understanding, so using them for emotion recognition may yield suboptimal precision. Another limitation of current LLMs is that they are typically trained without leveraging multi-modal information. To overcome these limitations, we formally model emotion recognition as a text generation task and propose DialogueLLM, a context- and emotion-knowledge-tuned LLM obtained by fine-tuning foundation large language models. In particular, it is a context-aware model that can accurately capture the dynamics of emotions throughout a dialogue. We also prompt ERNIE Bot with expert-designed prompts to generate textual descriptions of the videos. To support the training of emotional LLMs, we create a large-scale dataset of over 24K utterances to serve as a knowledge corpus. Finally, we offer a comprehensive evaluation of DialogueLLM on three benchmark datasets, where it significantly outperforms 15 state-of-the-art baselines and 3 state-of-the-art LLMs. On an emotional intelligence test, DialogueLLM scores 109, surpassing 72% of humans. Additionally, DialogueLLM-7B can be reproduced with LoRA on a single 40 GB A100 GPU in 5 hours.
(Copyright © 2025 Elsevier Ltd. All rights reserved.)
Contributed Indexing: Keywords: Context modeling; Emotion recognition; Large language models; Natural language processing
Entry Date(s): Date Created: 20250802 Date Completed: 20251122 Latest Revision: 20251122
Update Code: 20251122
DOI: 10.1016/j.neunet.2025.107901
PMID: 40752409
Database: MEDLINE