Multi-scale motion perception fall detection algorithm based on video swin transformer
| Published in: | Signal, Image and Video Processing, Vol. 19, Issue 10, p. 800 |
|---|---|
| Main authors: | , , , , |
| Format: | Journal Article |
| Language: | English |
| Published: | London: Springer London, 01.10.2025 (Springer Nature B.V.) |
| Subjects: | |
| ISSN: | 1863-1703, 1863-1711 |
| Online access: | Full text |
| Abstract: | Fall detection is a prominent subject in healthcare. Advancements in modern monitoring and deep learning have sparked significant social interest in visual fall detection. Despite the success of various deep learning methods in video fall detection owing to their superior feature extraction capabilities, they still encounter challenges in analyzing long-range or short-range spatiotemporal correlations. Taking this into account, a multi-scale motion perception fall detection algorithm based on video swin transformer is proposed in this study. Our proposed method employs video swin transformer as the backbone to fully model the global and local spatiotemporal information from videos and optimizes the backbone with two integrated modules. On one hand, we design a multi-scale motion information aggregation module to overcome the difficulty of the model in focusing on key multi-scale motion features. On the other hand, we propose a token pruning module to reduce the computational cost by pruning redundant temporal tokens. Experimental results demonstrate that the proposed algorithm exhibits promising outcomes, with an accuracy of 96.11% and 97.05% on the Le2i and UR fall detection datasets, respectively, thus outperforming some existing advanced algorithms. |
|---|---|
| DOI: | 10.1007/s11760-025-04358-3 |
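The temporal token pruning idea mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual module: the importance score used here (per-token feature L2 norm) and the `keep_ratio` parameter are assumptions standing in for whatever learned scoring the authors employ.

```python
import numpy as np

def prune_temporal_tokens(tokens: np.ndarray, keep_ratio: float = 0.5) -> np.ndarray:
    """Keep the most informative temporal tokens.

    tokens: array of shape (T, C) -- T temporal tokens with C feature channels.
    Scores each token by its L2 feature norm (an assumed proxy for a learned
    importance score) and keeps the top ceil(T * keep_ratio) tokens, preserving
    their original temporal order so downstream attention stays causal-friendly.
    """
    T = tokens.shape[0]
    k = max(1, int(np.ceil(T * keep_ratio)))
    scores = np.linalg.norm(tokens, axis=1)   # one importance score per token
    keep = np.sort(np.argsort(scores)[-k:])   # indices of top-k, back in time order
    return tokens[keep]

# Example: 8 temporal tokens, 4 channels; odd frames are near-zero (redundant),
# so pruning at keep_ratio=0.5 should retain only the even, motion-carrying frames.
rng = np.random.default_rng(0)
tokens = np.zeros((8, 4))
tokens[::2] = rng.normal(size=(4, 4))
pruned = prune_temporal_tokens(tokens, keep_ratio=0.5)
print(pruned.shape)  # (4, 4)
```

Pruning redundant tokens before the later transformer stages reduces attention cost, which scales quadratically with sequence length; the abstract's reported efficiency gain plausibly comes from this kind of reduction, though the exact scoring mechanism is not specified there.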