Plantar Pressure-Based Gait Recognition with and Without Carried Object by Convolutional Neural Network-Autoencoder Architecture

Bibliographic Details
Published in: Biomimetics (Basel, Switzerland), Vol. 10, No. 2, p. 79
Main Authors: Wu, Chin-Cheng; Tsai, Cheng-Wei; Wu, Fei-En; Chiang, Chi-Hsuan; Chiou, Jin-Chern
Format: Journal Article
Language: English
Published: Switzerland: MDPI AG, 01.02.2025
ISSN: 2313-7673
Description
Summary: Convolutional neural networks (CNNs) have been widely and successfully applied to closed-set recognition in gait identification, but they still lack robustness in open-set recognition of unknown classes. To address this limitation, we propose a convolutional neural network-autoencoder (CNN-AE) architecture for user classification based on plantar pressure gait recognition. The model extracts gait features from pressure-sensitive mats, focusing on foot pressure distribution and foot size during walking. Preprocessing techniques, including region of interest (ROI) selection, feature image extraction, and horizontal flipping of the data, were used to establish a CNN model that assessed gait recognition accuracy under two conditions: walking without a carried item and walking while carrying a 500 g object. To extend the CNN to open-set recognition of unauthorized personnel, the proposed CNN-AE architecture compresses the average foot pressure map into a 64-dimensional feature vector and determines identity from the distances between these vectors. Among 60 participants, 48 were designated authorized and 12 unauthorized. Without a carried object, the model achieved an accuracy of 91.218%, precision of 93.676%, recall of 90.369%, and an F1-score of 91.993%, indicating that it successfully identified most actual positives. When carrying a 500 g object, accuracy was 85.648%, precision 94.459%, recall 84.423%, and the F1-score was 89.603%.
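The open-set decision rule described in the summary, in which a probe embedding is matched against enrolled users by vector distance and rejected as unknown beyond a threshold, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the gallery vectors, the 4-dimensional stand-ins for the 64-dimensional embeddings, and the threshold value are all assumptions.

```python
import math

def euclidean(a, b):
    # Euclidean distance between two equal-length feature vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify_open_set(probe, gallery, threshold):
    """Match the probe embedding to the nearest enrolled identity,
    or reject it as 'unknown' when even the nearest enrolled vector
    is farther away than the (assumed) rejection threshold."""
    best_id, best_dist = None, float("inf")
    for identity, vector in gallery.items():
        d = euclidean(probe, vector)
        if d < best_dist:
            best_id, best_dist = identity, d
    return best_id if best_dist <= threshold else "unknown"

# Toy 4-dimensional stand-ins for the paper's 64-dimensional embeddings
gallery = {
    "user_A": [0.1, 0.2, 0.3, 0.4],
    "user_B": [0.9, 0.8, 0.7, 0.6],
}
print(classify_open_set([0.12, 0.21, 0.29, 0.41], gallery, threshold=0.5))  # user_A
print(classify_open_set([5.0, 5.0, 5.0, 5.0], gallery, threshold=0.5))      # unknown
```

In the paper's setting the embeddings come from the autoencoder's 64-dimensional bottleneck; here any real encoder output could be substituted for the toy vectors.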
DOI: 10.3390/biomimetics10020079