Visual language transformer framework for multimodal dance performance evaluation and progression monitoring
Saved in:
| Published in: | Scientific Reports, Volume 15, Issue 1, Article 30649 (22 pages) |
|---|---|
| Main author: | Chen, Lei |
| Format: | Journal Article |
| Language: | English |
| Publication details: | London: Nature Publishing Group UK (Nature Publishing Group; Nature Portfolio), 20.08.2025 |
| Subject: | Deep learning; Multimodal analysis; Graph convolutional network; Transformer; Dance performance monitoring |
| ISSN: | 2045-2322 |
| Online access: | Get full text |
| Abstract | Dance is often perceived as complex due to the need for coordinating multiple body movements and precisely aligning them with musical rhythm and content. Research in automatic dance performance assessment has the potential to enhance individuals’ sensorimotor skills and motion analysis. Recent studies on dance performance assessment primarily focus on evaluating simple dance movements using a single task, typically estimating final performance scores. We propose a novel transformer-based visual-language framework for multi-modal dance performance evaluation and progression monitoring. Our approach addresses two core challenges: learning feature representations for complex dance movements synchronized with music across diverse styles, genres, and expertise levels, and capturing the multi-task nature of dance performance evaluation. To achieve this, we integrate contrastive self-supervised learning, spatiotemporal graph convolutional networks (STGCN), long short-term memory networks (LSTM), and transformer-based text prompting. Our model evaluates three key tasks: (i) multilabel dance classification, (ii) dance quality estimation, and (iii) dance-music synchronization, leveraging primitive-based segmentation and multi-modal inputs. During the pre-training phase, we utilize a contrastive loss to capture primitive-based features from complex dance motion and music data. For downstream tasks, we propose a transformer-based text prompting approach to conduct multi-task evaluations for the three assessment objectives. Our model outperforms the baseline across diverse downstream tasks. For multilabel dance classification, it achieves a score of 75.20, a 10.25% improvement over CotrastiveDance; on the dance quality estimation task, it achieves a 92.09% lower loss than CotrastiveDance; and for dance-music synchronization, it scores 2.52, outperforming CotrastiveDance by 48.67%. (A schematic code sketch of this pipeline is given after the record metadata below.) |
|---|---|
| ArticleNumber | 30649 |
| Author | Chen, Lei |
| Author details | Chen, Lei (Art College, Chengdu Sport University); email: 15208201601@163.com |
| PubMed record | https://www.ncbi.nlm.nih.gov/pubmed/40835876 |
| ContentType | Journal Article |
| Copyright | The Author(s) 2025. This work is published under http://creativecommons.org/licenses/by/4.0/ (the "License"). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. |
| DOI | 10.1038/s41598-025-16345-2 |
| Discipline | Biology; Music; Dance |
| EISSN | 2045-2322 |
| EndPage | 22 |
| ExternalDocumentID | oai_doaj_org_article_2480b4c4ad154af4adb0c77156447bbc PMC12368089 40835876 10_1038_s41598_025_16345_2 |
| Genre | Journal Article |
| ISSN | 2045-2322 |
| IsDoiOpenAccess | true |
| IsOpenAccess | true |
| IsPeerReviewed | true |
| IsScholarly | true |
| Issue | 1 |
| Keywords | Deep learning; Multimodal analysis; Graph convolutional network; Transformer; Dance performance monitoring |
| Language | English |
| License | 2025. The Author(s). Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. |
| OpenAccessLink | https://doaj.org/article/2480b4c4ad154af4adb0c77156447bbc |
| PMID | 40835876 |
| PQID | 3241448854 |
| PQPubID | 2041939 |
| PageCount | 22 |
| PublicationDate | 2025-08-20 |
| PublicationPlace | London |
| PublicationTitle | Scientific reports |
| PublicationTitleAbbrev | Sci Rep |
| PublicationTitleAlternate | Sci Rep |
| PublicationYear | 2025 |
| Publisher | Nature Publishing Group UK; Nature Publishing Group; Nature Portfolio |
| StartPage | 30649 |
| SubjectTerms | 639/705/1042; 639/705/117; 639/705/258; 639/705/794; Adaptability; Choreography; Classification; Dance; Dance performance monitoring; Dancers & choreographers; Dancing - physiology; Deep learning; Genre; Graph convolutional network; Humanities and Social Sciences; Humans; Language; Learning; Literature reviews; Long short-term memory; Methods; multidisciplinary; Multimedia; Multimodal analysis; Music; Neural Networks, Computer; Performance assessment; Performance evaluation; Psychomotor Performance; Rhythm; Science; Science (multidisciplinary); Sensorimotor system; Synchronization; Transformer |
| Title | Visual language transformer framework for multimodal dance performance evaluation and progression monitoring |
| URI | https://link.springer.com/article/10.1038/s41598-025-16345-2 https://www.ncbi.nlm.nih.gov/pubmed/40835876 https://www.proquest.com/docview/3241448854 https://www.proquest.com/docview/3246405168 https://pubmed.ncbi.nlm.nih.gov/PMC12368089 https://doaj.org/article/2480b4c4ad154af4adb0c77156447bbc |
| Volume | 15 |
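The abstract describes a two-stage architecture: contrastive self-supervised pre-training of motion (STGCN/LSTM) and music encoders over primitive-based segments, followed by transformer-based text prompting for three downstream tasks. The sketch below is a minimal illustration of that idea, not the authors' implementation: it assumes PyTorch, stands in a single simplified graph convolution plus an LSTM for the full STGCN/LSTM stack, uses an InfoNCE-style objective as the contrastive loss, and uses learned task-prompt embeddings in place of actual text prompts. All class names, tensor shapes, and hyperparameters are illustrative.

```python
# Hedged sketch of the pipeline described in the abstract; not the paper's code.
# Assumptions (not from the source): PyTorch, a simplified graph convolution
# instead of a full STGCN block, toy shapes, and an InfoNCE-style contrastive
# loss; the prompt embeddings and task heads are illustrative stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MotionEncoder(nn.Module):
    """Skeleton-sequence encoder: one graph conv over joints + temporal LSTM."""

    def __init__(self, num_joints=25, in_dim=3, hid=128, out_dim=256):
        super().__init__()
        # A learnable adjacency matrix stands in for the skeleton graph of an STGCN.
        self.adj = nn.Parameter(torch.eye(num_joints))
        self.gc = nn.Linear(in_dim, hid)                # per-joint feature transform
        self.lstm = nn.LSTM(hid * num_joints, out_dim, batch_first=True)

    def forward(self, x):                               # x: (B, T, J, 3) joint coords
        B, T, J, _ = x.shape
        h = self.gc(x)                                  # (B, T, J, hid)
        h = torch.einsum("jk,btkd->btjd", self.adj, h)  # graph message passing
        h = h.reshape(B, T, -1)
        _, (hn, _) = self.lstm(h)                       # temporal summary per clip
        return hn[-1]                                   # (B, out_dim)


class MusicEncoder(nn.Module):
    """Audio-feature encoder over per-frame features (e.g. MFCCs)."""

    def __init__(self, in_dim=20, out_dim=256):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, out_dim, batch_first=True)

    def forward(self, a):                               # a: (B, T, in_dim)
        _, (hn, _) = self.lstm(a)
        return hn[-1]


def contrastive_loss(z_motion, z_music, tau=0.07):
    """InfoNCE over paired dance/music clips: matching pairs are positives."""
    z1 = F.normalize(z_motion, dim=-1)
    z2 = F.normalize(z_music, dim=-1)
    logits = z1 @ z2.t() / tau                          # (B, B) similarity matrix
    labels = torch.arange(z1.size(0))
    return F.cross_entropy(logits, labels)


class PromptedMultiTaskHead(nn.Module):
    """Transformer block over a learned task-prompt token (text-prompting stand-in)."""

    def __init__(self, dim=256, num_tasks=3, num_classes=10):
        super().__init__()
        self.prompts = nn.Embedding(num_tasks, dim)     # one prompt per task
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.cls_head = nn.Linear(dim, num_classes)     # multilabel classification
        self.reg_head = nn.Linear(dim, 1)               # quality / synchronization score

    def forward(self, fused, task_id):
        B = fused.size(0)
        prompt = self.prompts(torch.full((B, 1), task_id, dtype=torch.long))
        seq = torch.cat([prompt, fused.unsqueeze(1)], dim=1)   # (B, 2, dim)
        out = self.encoder(seq)[:, 0]                          # read out the prompt token
        return self.cls_head(out) if task_id == 0 else self.reg_head(out)


if __name__ == "__main__":
    motion = torch.randn(8, 120, 25, 3)     # 8 clips, 120 frames, 25 joints
    audio = torch.randn(8, 120, 20)         # matching per-frame music features
    m_enc, a_enc = MotionEncoder(), MusicEncoder()
    z_m, z_a = m_enc(motion), a_enc(audio)
    print("pre-training loss:", contrastive_loss(z_m, z_a).item())

    head = PromptedMultiTaskHead()
    fused = z_m + z_a                       # naive fusion, for illustration only
    print("classification logits:", head(fused, task_id=0).shape)
    print("quality score:", head(fused, task_id=1).shape)
```

In a setup like the one the abstract outlines, the contrastive stage would first be trained on paired dance/music primitives, after which a prompt-conditioned head would be fine-tuned per task (multilabel classification, quality regression, synchronization scoring); the `__main__` block here only demonstrates tensor flow under these assumptions.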