Modality-uncertainty-aware knowledge distillation framework for multimodal sentiment analysis
Saved in:
| Published in: | Complex & Intelligent Systems, Vol. 12, Issue 1, pp. 14 - 22 |
|---|---|
| Main authors: | Wang, Nan; Wang, Qi |
| Format: | Journal Article |
| Language: | English |
| Published: | Cham: Springer International Publishing; Springer Nature B.V; Springer, 2026-01-01 |
| Subjects: | Attention mechanism; Knowledge distillation; Multimodal sentiment analysis; Multimodal learning |
| ISSN: | 2199-4536, 2198-6053 |
| Online access: | Full text |
| Abstract | Multimodal sentiment analysis (MSA) has become increasingly important for understanding human emotions, with applications in areas such as human-computer interaction, social media analysis, and emotion recognition. MSA leverages multimodal data, including text, audio, and visual inputs, to achieve better performance in emotion recognition. However, existing methods face challenges, particularly when dealing with missing modalities. While some approaches attempt to handle modality dropout, they often fail to effectively recover missing information or account for the complex interactions between different modalities. Moreover, many models treat modalities equally, not fully utilizing the unique strengths of each modality. To address these limitations, we propose the Modality-Uncertainty-aware Knowledge Distillation Framework (MUKDF). Specifically, we introduce a modality random missing strategy that enhances the model's adaptability to uncertain modality scenarios. To further improve performance, we incorporate a Dual-Branch Modality Knowledge Extractor (DMKE) that balances feature contributions across modalities and a multimodal masked transformer (MMT) designed to capture nuanced interactions between modalities. Additionally, we present a contrastive feature-level and align-based representation distillation mechanism (CFD&ARD), which strengthens the alignment between teacher and student representations, ensuring effective knowledge transfer and improved robustness in learning. Comprehensive experiments conducted on two benchmark datasets demonstrate that MUKDF outperforms several baseline models, achieving superior performance not only under complete modality conditions but also in the more challenging scenario with incomplete modalities. This highlights the effectiveness of our framework in handling the uncertainty and complexities inherent in multimodal sentiment analysis. |
|---|---|
| ArticleNumber | 14 |
| Author | Wang, Nan; Wang, Qi |
| Author_xml | – sequence: 1 givenname: Nan surname: Wang fullname: Wang, Nan organization: School of Management Science and Information Engineering, Jilin University of Finance and Economics, Institute of Big Data and Interdisciplinary Sciences, Jilin University of Finance and Economics – sequence: 2 givenname: Qi orcidid: 0000-0003-1058-5068 surname: Wang fullname: Wang, Qi email: 6231193013@s.jlufe.edu.cn organization: School of Management Science and Information Engineering, Jilin University of Finance and Economics |
| ContentType | Journal Article |
| Copyright | The Author(s) 2025 The Author(s) 2025. This work is published under http://creativecommons.org/licenses/by-nc-nd/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. |
| DOI | 10.1007/s40747-025-02135-w |
| Discipline | Engineering; Mathematics |
| EISSN | 2198-6053 |
| EndPage | 22 |
| GrantInformation_xml | – fundername: National Natural Science Foundation of China grantid: 62076108 funderid: http://dx.doi.org/10.13039/501100001809 – fundername: School-level Projects of Jilin University of Finance and Economics grantid: 2023YB021, 2024PY010 – fundername: Scientific Research Project of Jilin Provincial Department of Education grantid: JJKH20250758KJ |
| ISSN | 2199-4536 |
| IsDoiOpenAccess | true |
| IsOpenAccess | true |
| IsPeerReviewed | true |
| IsScholarly | true |
| Issue | 1 |
| Keywords | Attention mechanism; Knowledge distillation; Multimodal sentiment analysis; Multimodal learning |
| Language | English |
| ORCID | 0000-0003-1058-5068 |
| OpenAccessLink | https://doaj.org/article/30093164cd1943c3a0fdc7a35f63f8bd |
| PageCount | 22 |
| PublicationCentury | 2000 |
| PublicationDate | 2026-01-01 |
| PublicationDateYYYYMMDD | 2026-01-01 |
| PublicationDate_xml | – month: 01 year: 2026 text: 2026-01-01 day: 01 |
| PublicationDecade | 2020 |
| PublicationPlace | Cham |
| PublicationPlace_xml | – name: Cham – name: Heidelberg |
| PublicationTitle | Complex & intelligent systems |
| PublicationTitleAbbrev | Complex Intell. Syst |
| PublicationYear | 2026 |
| Publisher | Springer International Publishing Springer Nature B.V Springer |
| StartPage | 14 |
| SubjectTerms | Attention mechanism; Audio data; Complexity; Computational Intelligence; Data Structures and Information Theory; Deep learning; Effectiveness; Emotion recognition; Emotions; Engineering; Knowledge; Knowledge distillation; Knowledge representation; Multimodal learning; Multimodal sentiment analysis; Original Paper; Performance enhancement; Semantics; Sentiment analysis; Social networks; Uncertainty; User generated content |
| SummonAdditionalLinks | – databaseName: Advanced Technologies & Aerospace Database dbid: P5Z link: http://cvtisr.summon.serialssolutions.com/2.0.0/link/0/eLvHCXMwpV1Lb9QwEB5B4UAPQClVd1mQD72B1Y3HiZ0TAkTFgVY9AKqQkBW_qkqw2-5uu38fj_NYtVK5cIsSa5Rk7JnxzPj7AA6cDqISynJBzWEyesHrIBwX3oqp9dUUc7ngx1d1cqLPzurTLuG27Noqe5uYDbWfO8qRH6JQU01IM_j-8ooTaxRVVzsKjYfwiFASiLrhtPzZzycpFaLcuG9ZJiE900zOwShF8FfEP1fUNZe5kjkeTtdJApfnxPeaHCGWfH3Ld2WI_1tx6Z1SavZQR8_-99uew9MuNmUf2sm0Aw_C7AVsHw_Arstd-HU89zly58kftt0E6bpZN4vAhvQc82Q4frdddiz27V8sxccsNzD-ISGMjj1lbgHWdMgoL-H70edvn77wjqGBO1mWK442SkKbdC7te2xVhxT9WdsgBY1a-kr7gMpbj1Q19tqmh0WsYq1dQaMC7sHWbD4L-8BkUMnsulhXkgAso9UKi8K6iI13ScAI3vb_3ly2QBxmgFzOmjJJUyZryqxH8JHUM4wkEO18Y744N92aNEjpnLRddL6oJTpsptE71WAZK4za-hFMenWZbmUvzUZXI3jXK3zz-P5XGv9b2it4ItJ-uM3uTGBrtbgOr-Gxu1ldLBdv8rz-C_Jr-_I priority: 102 providerName: ProQuest – databaseName: SpringerLINK dbid: C24 link: http://cvtisr.summon.serialssolutions.com/2.0.0/link/0/eLvHCXMwnV1Lb9QwEB6VlgMcoDyqblsqH7iBpY3Hie0jVFQcaNVDQb0gK34hJLpb7W7Zv4_HeWyL1APckthykvFjxjOfvwF463UUjVCOCwKHyRQEN1F4LoITUxeaKZZwwbcv6vxcX12Zi_5Q2HJAuw8hybJSj4fdJHG9c0q_mvUS1nz9CHbqShsC8p1sOMelVIhyo7RljUIN-WWK50UpIr2irHOVMVyW-OXBw6-5p7EKsf89a_SvAGrRS6fP_--PduFZb4eyD93AeQFbcfYSnt5hJ8x3ZyOl6_IVfD-bh2Kz86wJOxxBvm7X7SKy0THHAi0Zvzp8HUsD8Itly5gV6OI1NcLowFPJKsDanhPlNXw9_XR58pn3uRm4l3W94uiSJJ5J7_OOxzUmZrvPuRbJXNQyNDpEVMEFpHhx0C4XVqlJRvuKakXcg-3ZfBb3gcmo8oLrk2kkUVcmpxVWlfMJ2-BzAxN4N8jf3nQUHHYkWy4itFmEtojQrifwkbporEn02eXBfPHD9rPRIjly8kbRh8pI9NhOU_CqxTo1mLQLEzgaOtj2c3pp8zCaauI6wgm8Hzp0U_zwJx38W_VDeCLyzrjz8xzB9mpxG9_AY_979XO5OC5j_Q8R9PaW priority: 102 providerName: Springer Nature |
| Title | Modality-uncertainty-aware knowledge distillation framework for multimodal sentiment analysis |
| URI | https://link.springer.com/article/10.1007/s40747-025-02135-w https://www.proquest.com/docview/3270872163 https://doaj.org/article/30093164cd1943c3a0fdc7a35f63f8bd |
| Volume | 12 |
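Illustrative sketch: the abstract describes a modality random missing strategy (to make the model robust to absent modalities) and a contrastive feature-level plus alignment-based representation distillation mechanism (CFD&ARD) that transfers knowledge from a complete-modality teacher to an incomplete-modality student. The PyTorch snippet below is a minimal sketch of how those two ideas could look in principle; it is not the authors' implementation, and the function names (`random_modality_missing`, `distillation_loss`), the masking probability, and the temperature are illustrative assumptions.

```python
# Minimal sketch, assuming PyTorch. Hypothetical names and hyperparameters;
# not the MUKDF / CFD&ARD code from the paper.

import torch
import torch.nn.functional as F


def random_modality_missing(feats: dict, p_missing: float = 0.3) -> dict:
    """Zero-mask each non-text modality independently with probability p_missing.

    feats maps a modality name ("text", "audio", "visual") to a tensor of
    shape (batch, seq_len, dim); masking a modality simulates it being absent.
    """
    masked = {}
    for name, x in feats.items():
        if name != "text" and torch.rand(1).item() < p_missing:
            masked[name] = torch.zeros_like(x)  # modality treated as missing
        else:
            masked[name] = x
    return masked


def distillation_loss(student_feat: torch.Tensor,
                      teacher_feat: torch.Tensor,
                      temperature: float = 0.07) -> torch.Tensor:
    """Alignment (cosine) term plus an InfoNCE-style contrastive term.

    Both inputs are pooled utterance-level representations of shape (batch, dim).
    """
    s = F.normalize(student_feat, dim=-1)
    t = F.normalize(teacher_feat, dim=-1).detach()  # no gradients into the teacher

    # Alignment: pull each student feature toward its own teacher feature.
    align = (1.0 - (s * t).sum(dim=-1)).mean()

    # Contrastive: matching teacher/student pairs in the batch are positives,
    # all other pairs act as negatives.
    logits = s @ t.T / temperature                  # (batch, batch) similarity
    labels = torch.arange(s.size(0), device=s.device)
    contrastive = F.cross_entropy(logits, labels)

    return align + contrastive


if __name__ == "__main__":
    batch, dim = 8, 64
    feats = {m: torch.randn(batch, 10, dim) for m in ("text", "audio", "visual")}
    feats = random_modality_missing(feats, p_missing=0.5)
    loss = distillation_loss(torch.randn(batch, dim), torch.randn(batch, dim))
    print({k: v.abs().sum().item() for k, v in feats.items()}, loss.item())
```

Detaching the teacher features reflects the usual knowledge-distillation setup in which the teacher is frozen and only the student receives gradients; how the paper weights the alignment and contrastive terms, and which modalities it allows to drop, is specified only in the full text.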