Multi-scale motion perception fall detection algorithm based on video swin transformer
| Published in: | Signal, Image and Video Processing, Vol. 19, No. 10, p. 800 |
|---|---|
| Main Authors: | , , , , |
| Format: | Journal Article |
| Language: | English |
| Published: | London: Springer London, 01.10.2025 (Springer Nature B.V.) |
| Subjects: | |
| ISSN: | 1863-1703, 1863-1711 |
| Online Access: | Get full text |
| Summary: | Fall detection is a prominent subject in healthcare. Advancements in modern monitoring and deep learning have sparked significant social interest in visual fall detection. Although various deep learning methods have succeeded in video fall detection owing to their superior feature extraction capabilities, they still face challenges in analyzing long-range and short-range spatiotemporal correlations. Taking this into account, a multi-scale motion perception fall detection algorithm based on the video swin transformer is proposed in this study. The proposed method employs the video swin transformer as its backbone to fully model global and local spatiotemporal information from videos, and optimizes the backbone with two integrated modules. On one hand, a multi-scale motion information aggregation module is designed to overcome the model's difficulty in focusing on key multi-scale motion features. On the other hand, a token pruning module is proposed to reduce computational cost by pruning redundant temporal tokens. Experimental results demonstrate that the proposed algorithm achieves promising results, with accuracies of 96.11% and 97.05% on the Le2i and UR fall detection datasets, respectively, outperforming some existing advanced algorithms. |
| DOI: | 10.1007/s11760-025-04358-3 |
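The summary describes two ideas that can be illustrated concretely: scoring frames by motion at multiple temporal scales, and pruning the least salient temporal tokens to cut computation. The sketch below is a minimal NumPy illustration of these two ideas only; the function names (`motion_saliency`, `prune_temporal_tokens`), the frame-difference scoring, and the top-k selection are hypothetical stand-ins, not the paper's actual modules, which operate on transformer tokens inside the video swin backbone.

```python
import numpy as np

def motion_saliency(frames, scales=(1, 2, 4)):
    """Score each frame by its motion magnitude aggregated over several
    temporal strides (hypothetical stand-in for multi-scale motion
    information aggregation). `frames` has shape (T, H, W)."""
    T = frames.shape[0]
    scores = np.zeros(T)
    for s in scales:
        # Absolute difference between frames s steps apart: shape (T - s, H, W).
        diff = np.abs(frames[s:] - frames[:-s])
        mag = diff.reshape(diff.shape[0], -1).mean(axis=1)
        scores[s:] += mag / len(scales)  # accumulate, averaged over scales
    return scores

def prune_temporal_tokens(frames, keep_ratio=0.5):
    """Keep only the most motion-salient frames, in temporal order
    (a sketch of pruning redundant temporal tokens)."""
    scores = motion_saliency(frames)
    k = max(1, int(round(frames.shape[0] * keep_ratio)))
    keep = np.sort(np.argsort(scores)[-k:])  # top-k indices, order preserved
    return frames[keep], keep

# Synthetic clip: 8 static frames, with a step change from frame 4 on.
clip = np.zeros((8, 4, 4))
clip[4:] = 1.0
pruned, kept = prune_temporal_tokens(clip, keep_ratio=0.5)
```

On this toy clip the four frames around and after the motion onset survive pruning, while the redundant static frames are dropped; in the paper, an analogous selection would shrink the temporal token sequence fed to later transformer stages.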