Visual language transformer framework for multimodal dance performance evaluation and progression monitoring


Bibliographic Details
Published in: Scientific Reports, Vol. 15, Issue 1, Article 30649
First author: Chen, Lei
Format: Journal Article
Language: English
Published: London: Nature Publishing Group UK, 20 Aug 2025
ISSN: 2045-2322
Online access: Full text
Description
Abstract: Dance is often perceived as complex due to the need for coordinating multiple body movements and precisely aligning them with musical rhythm and content. Research in automatic dance performance assessment has the potential to enhance individuals' sensorimotor skills and motion analysis. Recent studies on dance performance assessment primarily focus on evaluating simple dance movements with a single task, typically estimating final performance scores. We propose a novel transformer-based visual-language framework for multimodal dance performance evaluation and progression monitoring. Our approach addresses two core challenges: learning feature representations for complex dance movements synchronized with music across diverse styles, genres, and expertise levels, and capturing the multi-task nature of dance performance evaluation. To achieve this, we integrate contrastive self-supervised learning, spatiotemporal graph convolutional networks (STGCN), long short-term memory networks (LSTM), and transformer-based text prompting. Our model addresses three key tasks: (i) multilabel dance classification, (ii) dance quality estimation, and (iii) dance-music synchronization, leveraging primitive-based segmentation and multimodal inputs. During the pre-training phase, we use a contrastive loss to capture primitive-based features from complex dance motion and music data. For downstream tasks, we propose a transformer-based text prompting approach to conduct multi-task evaluation across the three assessment objectives. Our model outperforms prior work on diverse downstream tasks. For multilabel dance classification, it achieves a score of 75.20, a 10.25% improvement over ContrastiveDance; on dance quality estimation, it achieves a 92.09% lower loss than ContrastiveDance; and on dance-music synchronization, it scores 2.52, outperforming ContrastiveDance by 48.67%.
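
The abstract describes the pre-training stage (contrastive alignment of primitive-segmented motion and music features, with an STGCN/LSTM motion branch) without implementation detail. The following is a minimal PyTorch sketch of that idea; the encoder layouts, dimensions, and the symmetric InfoNCE objective are illustrative assumptions, not the authors' code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MotionEncoder(nn.Module):
    # Simplified stand-in for the paper's STGCN + LSTM motion branch:
    # one graph-convolution-like mixing step over skeleton joints per
    # frame, then an LSTM that summarizes a primitive segment.
    def __init__(self, num_joints=25, in_dim=3, hidden=128, out_dim=256):
        super().__init__()
        self.adj = nn.Parameter(torch.eye(num_joints))  # learnable joint graph (assumption)
        self.proj = nn.Linear(in_dim, hidden)
        self.lstm = nn.LSTM(hidden, out_dim, batch_first=True)

    def forward(self, x):
        # x: (batch, frames, joints, 3) joint coordinates for one primitive
        x = self.proj(x)                                          # (B, T, J, H)
        x = torch.einsum("ij,btjh->btih", self.adj.softmax(-1), x)
        x = x.relu().mean(dim=2)                                  # pool joints: (B, T, H)
        _, (h, _) = self.lstm(x)
        return h[-1]                                              # (B, out_dim)

class MusicEncoder(nn.Module):
    # Placeholder music branch over mel-spectrogram frames (assumption).
    def __init__(self, n_mels=80, out_dim=256):
        super().__init__()
        self.gru = nn.GRU(n_mels, out_dim, batch_first=True)

    def forward(self, mel):                                       # mel: (B, T, n_mels)
        _, h = self.gru(mel)
        return h[-1]                                              # (B, out_dim)

def info_nce(z_motion, z_music, temperature=0.07):
    # Symmetric InfoNCE: the matched motion/music primitive in each row
    # is the positive; every other pairing in the batch is a negative.
    z_motion = F.normalize(z_motion, dim=-1)
    z_music = F.normalize(z_music, dim=-1)
    logits = z_motion @ z_music.t() / temperature                 # (B, B)
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

if __name__ == "__main__":
    B, T, J = 8, 60, 25
    motion = torch.randn(B, T, J, 3)      # primitive-segmented pose sequences
    mel = torch.randn(B, 120, 80)         # time-aligned mel-spectrogram slices
    loss = info_nce(MotionEncoder()(motion), MusicEncoder()(mel))
    print(f"contrastive pre-training loss: {loss.item():.3f}")

In the full framework as described, the downstream heads would consume such pre-trained embeddings together with transformer text prompts to perform the three evaluation tasks.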
DOI: 10.1038/s41598-025-16345-2