Temporal-spatial skeleton modeling for real-time human dance behavior recognition using evolutionary algorithms

Published in: PeerJ Computer Science, Volume 11; p. e3359
Main Authors: Liu, Keyin; Fan, Di
Format: Journal Article
Language: English
Publication Details: PeerJ Inc, 20.11.2025
ISSN: 2376-5992
Description
Summary: In the context of online dance instruction, accurately recognizing and assessing students’ movements in real time remains a key challenge due to occlusions, low-resolution input, and inconsistent lighting. To address these limitations, this study proposes a real-time human behavior detection framework that integrates evolutionary algorithms with a skeleton-based deep learning model. The system captures continuous video streams via a Camera Serial Interface (CSI) camera module, extracts skeletal joint coordinates, and models movement through a set of hybrid temporal and spatial features—including torso angle, joint positions, and limb velocity. These features are fused into a frame-level expansion vector, which is subsequently fed into a behavior classifier built on a long short-term memory (LSTM) network. The evolutionary algorithm optimizes the feature representation and classification structure to enhance generalization and avoid local optima. Experimental evaluation on the UTKinect dataset demonstrates that the proposed method achieves an average recognition accuracy of 95.47%, outperforming traditional RGB- and depth-based baselines. Furthermore, the system demonstrates real-time performance with a recognition latency below 0.6 s and a frame processing time under 0.08 s. The results validate the effectiveness of multi-feature fusion and evolutionary enhancement in dynamic action scenarios. This framework offers a reliable tool for performance evaluation in online dance instruction and can be extended to domains such as rehabilitation, virtual fitness, and intelligent surveillance.
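The abstract describes fusing torso angle, joint positions, and limb velocity into a frame-level feature vector before classification by an LSTM. The sketch below illustrates only that fusion step under stated assumptions: 2-D joint coordinates, a hypothetical joint ordering (neck at index 0, hip center at index 1), and a 30 fps frame interval. The paper's actual feature definitions, joint set, and vector layout are not given in the abstract and may differ.

```python
import numpy as np

def torso_angle(neck: np.ndarray, hip: np.ndarray) -> float:
    """Angle of the neck-to-hip segment relative to vertical, in radians.

    Assumes image coordinates where y increases downward; a small epsilon
    avoids division-by-zero when the two joints coincide.
    """
    v = hip - neck
    return float(np.arctan2(abs(v[0]), abs(v[1]) + 1e-9))

def frame_feature(joints_t: np.ndarray,
                  joints_prev: np.ndarray,
                  dt: float = 1.0 / 30.0) -> np.ndarray:
    """Fuse spatial and temporal cues into one frame-level vector.

    joints_t, joints_prev: (J, 2) arrays of joint coordinates for the
    current and previous frame. Limb velocity is approximated by a
    first-order finite difference. The result concatenates
    [torso angle, flattened positions, flattened velocities],
    giving a vector of length 1 + 2J + 2J.
    """
    velocity = (joints_t - joints_prev) / dt
    angle = torso_angle(joints_t[0], joints_t[1])  # assumed joint indices
    return np.concatenate([[angle], joints_t.ravel(), velocity.ravel()])
```

In the paper's pipeline, a sequence of such per-frame vectors would form the input tensor to the LSTM classifier; the evolutionary optimization of feature weights and network structure is a separate stage not shown here.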
DOI: 10.7717/peerj-cs.3359