Content-aware sentiment understanding: cross-modal analysis with encoder-decoder architectures


Bibliographic Details
Published in:Journal of computational social science Vol. 8; no. 2
Main Authors: Pakdaman, Zahra, Koochari, Abbas, Sharifi, Arash
Format: Journal Article
Language:English
Published: Singapore: Springer Nature Singapore, 01.05.2025
ISSN:2432-2717, 2432-2725
Abstract The analysis of sentiment from social media data has attracted significant attention due to the proliferation of user-generated opinions and comments on these platforms. Social media content is often multi-modal, frequently combining images and text within single posts. To effectively estimate user sentiment across multiple content types, this study proposes a multimodal content-aware approach. It distinguishes text-dominant images, memes, and regular images, extracting embedded text from memes or text-dominant images. Using the Swin Transformer-GPT-2 (encoder-decoder) architecture, captions are generated for image analysis. The user’s sentiment is then estimated by analyzing embedded text, generated captions, and user-provided captions through a BiLSTM-LSTM (encoder-decoder) architecture and fully connected layers. The proposed method demonstrates superior performance, achieving 93% accuracy on the MVSA-Single dataset, 79% accuracy on the MVSA-Multiple dataset, and 90% accuracy on the TWITTER (Large) dataset, surpassing current state-of-the-art methods.
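The content-aware routing the abstract describes (classify each attached image as meme, text-dominant, or regular; extract embedded text only from the text-bearing kinds; then pool the embedded text, generated caption, and user-provided caption for the sentiment model) can be sketched as follows. This is a minimal illustration only: the function names, the 0.5 text-coverage threshold, and the string inputs are assumptions standing in for the paper's meme detector, OCR stage, and Swin Transformer-GPT-2 captioner, not the authors' implementation.

```python
# Hypothetical sketch of the content-aware routing step; thresholds and
# names are illustrative assumptions, not the published method.

def route_image(text_coverage: float, is_meme: bool) -> str:
    """Classify an attached image by how text-dominant it is."""
    if is_meme:
        return "meme"
    if text_coverage > 0.5:        # assumed text-coverage threshold
        return "text_dominant"
    return "regular"

def build_text_inputs(image_kind: str, user_caption: str,
                      ocr_text: str, generated_caption: str) -> list:
    """Collect the text streams fed to the BiLSTM-LSTM sentiment model."""
    inputs = [user_caption, generated_caption]   # caption is always generated
    if image_kind in ("meme", "text_dominant"):
        inputs.append(ocr_text)                  # embedded text is extracted
    return [t for t in inputs if t]              # drop empty streams

print(build_text_inputs(route_image(0.7, False),
                        "great day!", "SO TRUE", "a sunny beach"))
# → ['great day!', 'a sunny beach', 'SO TRUE']
```

A regular image would skip the OCR branch, so only the user-provided and generated captions reach the sentiment head in that case.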
ArticleNumber 37
Author Sharifi, Arash
Koochari, Abbas
Pakdaman, Zahra
Author_xml – sequence: 1
  givenname: Zahra
  surname: Pakdaman
  fullname: Pakdaman, Zahra
  organization: Department of Computer Engineering, Science and Research Branch, Islamic Azad University
– sequence: 2
  givenname: Abbas
  orcidid: 0000-0003-0584-6470
  surname: Koochari
  fullname: Koochari, Abbas
  email: koochari@iau.ac.ir
  organization: Department of Computer Engineering, Science and Research Branch, Islamic Azad University
– sequence: 3
  givenname: Arash
  surname: Sharifi
  fullname: Sharifi, Arash
  organization: Department of Computer Engineering, Science and Research Branch, Islamic Azad University
ContentType Journal Article
Copyright The Author(s), under exclusive licence to Springer Nature Singapore Pte Ltd. 2025 Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
DOI 10.1007/s42001-025-00374-y
Discipline Social Sciences (General)
EISSN 2432-2725
ISICitedReferencesCount 0
ISSN 2432-2717
IsPeerReviewed true
IsScholarly true
Issue 2
Keywords Sentiment analysis
Large language model
Transformer
Image captioning
Meme detection
Language English
ORCID 0000-0003-0584-6470
PublicationCentury 2000
PublicationDate 20250500
2025-05-00
PublicationDateYYYYMMDD 2025-05-01
PublicationDate_xml – month: 5
  year: 2025
  text: 20250500
PublicationDecade 2020
PublicationPlace Singapore
PublicationPlace_xml – name: Singapore
PublicationTitle Journal of computational social science
PublicationTitleAbbrev J Comput Soc Sc
PublicationYear 2025
Publisher Springer Nature Singapore
Publisher_xml – name: Springer Nature Singapore
SubjectTerms Big Data/Analytics
Complex Systems
Computational Linguistics
Research Article
Simulation and Modeling
Social Media
Social Sciences
Title Content-aware sentiment understanding: cross-modal analysis with encoder-decoder architectures
URI https://link.springer.com/article/10.1007/s42001-025-00374-y
Volume 8