Real-time fall detection algorithm based on FFD-AlphaPose and CTR–GCN

Bibliographic Details
Published in: Journal of Real-Time Image Processing, Vol. 22, No. 3, p. 109
Main Authors: Yang, Xuecun, Wang, Yixiang, Dong, Zhonghua, Li, Jiayu, Zhang, Qingyun, Qiang, Shushan
Format: Journal Article
Language:English
Published: Berlin/Heidelberg: Springer Berlin Heidelberg (Springer Nature B.V.), 01.06.2025
ISSN:1861-8200, 1861-8219
Summary: With the increasing prevalence of an aging population, falls pose a substantial risk to the health of older adults, making fall detection and prevention a pressing societal concern. To address the inadequate real-time performance and low accuracy of existing methods, this paper proposes a lightweight AlphaPose variant based on FFD-YOLO and incorporates the channel-topology-refinement graph convolutional network (CTR-GCN) to improve fall detection. To relieve the efficiency and accuracy bottlenecks, the paper first presents a novel C2fPDR module that strengthens long-sequence processing and enlarges the feature receptive field, maintaining efficiency while reducing the parameter count and thereby preserving detection stability and accuracy within a lightweight design. The neck is restructured with a gather-and-distribute (GD) mechanism to optimize multi-layer feature fusion, and MobileNetV4 is integrated into the backbone to markedly improve detection speed. Experimental results show that the proposed FFD-YOLO model reduces the parameter count by 43.0% relative to the original network, raises throughput by 11.9 frames per second (FPS), achieves a mean average precision (mAP) of 94.3%, and outperforms other classical object detection algorithms adapted for AlphaPose. After embedding in AlphaPose, pose-estimation average precision (AP) reaches 74.3%, improvements of 0.7%, 1.0%, and 0.8% over the most recent literature (Liang et al., J Supercomput 81:1–20, 2025; Xu et al., Neurocomputing 619:129154, 2025; Miao et al., Adv Neural Inf Process Syst 37:44791–44813, 2025), respectively. GPU throughput reaches 45.8 FPS, 32.4 FPS faster than OpenPose.
When combined with CTR-GCN for action recognition, precision reaches 98.62%, improvements of 8.57%, 2.20%, and 1.61% over the most recent literature (Cheng et al., Multimed Syst 31:67, 2025; Raza et al., Eng Appl Artif Intell 143:109809, 2025; Yu et al., Pervasive Mob Comput 107:102016, 2025). These experiments validate the substantial advantages of the proposed algorithm for fall detection.
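The abstract describes a three-stage pipeline: person detection (FFD-YOLO), top-down pose estimation (AlphaPose), then skeleton-based action recognition (CTR-GCN) over a window of keypoint frames. The sketch below illustrates only this data flow; every function and class name is a hypothetical stand-in, not the authors' actual API or implementation.

```python
# Minimal sketch of the detection -> pose -> action-recognition pipeline.
# All names (Detection, detect_persons, estimate_pose, classify_action)
# are illustrative placeholders, not the paper's real code.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Detection:
    box: Tuple[float, float, float, float]  # (x1, y1, x2, y2) person box
    score: float                            # detector confidence

def detect_persons(frame) -> List[Detection]:
    """Stand-in for the lightweight FFD-YOLO person detector."""
    return [Detection(box=(10, 20, 110, 220), score=0.97)]

def estimate_pose(frame, det: Detection) -> List[Tuple[float, float, float]]:
    """Stand-in for AlphaPose top-down keypoint estimation:
    returns (x, y, confidence) for each of 17 COCO-style joints."""
    return [(0.0, 0.0, 0.9)] * 17

def classify_action(keypoint_sequence) -> str:
    """Stand-in for CTR-GCN classification over a sliding window of
    per-frame skeletons (here: a trivial rule just to show the flow)."""
    return "fall" if len(keypoint_sequence) >= 30 else "unknown"

def run_pipeline(frames) -> str:
    """Accumulate per-frame skeletons, then classify the window."""
    window = []
    for frame in frames:
        for det in detect_persons(frame):
            window.append(estimate_pose(frame, det))
    return classify_action(window)

if __name__ == "__main__":
    # With these stubs, a 30-frame clip yields a "fall" classification.
    print(run_pipeline([None] * 30))
```

The top-down structure shown here (detect first, then estimate pose per detection) is why a faster, lighter detector directly improves end-to-end FPS, which is the motivation the abstract gives for the FFD-YOLO redesign.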
DOI:10.1007/s11554-025-01687-x