Sparse and Contractive Graph-Based Variational Encoder-Decoder with Multihead Attention for Robust Spatiotemporal Activity Recognition

Bibliographic Details
Published in: IEEE International Conference on Electro Information Technology, pp. 109-114
Main Authors: Saffari, Mohsen; Singh, Yash Pratap; Khodayar, Mahdi
Format: Conference Proceeding
Language: English
Published: IEEE, 29.05.2025
ISSN: 2154-0373
Description
Summary: With the increasing adoption of various sensors, human action recognition has gained significant attention across multiple domains, including person surveillance and human-robot interaction. However, existing data-driven approaches struggle with effectively modeling the spatiotemporal dynamics of sensory data and suffer from limited generalization capability. To address these challenges, this paper introduces a novel graph-based deep learning framework, incorporating a Graph-Attentive Variational Sparse Contractive Peephole LSTM (GAVSC-PLSTM) model. The proposed architecture effectively captures spatiotemporal correlations among sensory data from different body parts and introduces a novel encoder-decoder generative framework to extract task-relevant deep spatiotemporal features. Extensive experiments on three widely used public datasets demonstrate that the proposed model outperforms recent baseline methods.
DOI: 10.1109/eIT64391.2025.11103603
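
Illustrative sketch: the abstract only names the components of the GAVSC-PLSTM architecture (multihead graph attention over body-part sensors, a peephole LSTM, and a variational encoder-decoder with sparse and contractive regularization). The PyTorch sketch below shows one plausible way such a pipeline could be assembled; it is not the authors' implementation. All class and function names (PeepholeLSTMCell, GraphAttentiveVariationalEncoderDecoder, loss_fn), layer sizes, the mean-pooling over body-part nodes, and the simple sparsity and contractive surrogates in the loss are assumptions made for illustration only.

import torch
import torch.nn as nn
import torch.nn.functional as F


class PeepholeLSTMCell(nn.Module):
    """LSTM cell with peephole connections (gates also see the cell state)."""

    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.hidden_size = hidden_size
        self.x2h = nn.Linear(input_size, 4 * hidden_size)
        self.h2h = nn.Linear(hidden_size, 4 * hidden_size)
        # Peephole weights: diagonal connections from the cell state to the gates.
        self.w_ci = nn.Parameter(torch.zeros(hidden_size))
        self.w_cf = nn.Parameter(torch.zeros(hidden_size))
        self.w_co = nn.Parameter(torch.zeros(hidden_size))

    def forward(self, x, state):
        h, c = state
        gates = self.x2h(x) + self.h2h(h)
        i, f, g, o = gates.chunk(4, dim=-1)
        i = torch.sigmoid(i + self.w_ci * c)
        f = torch.sigmoid(f + self.w_cf * c)
        g = torch.tanh(g)
        c_new = f * c + i * g
        o = torch.sigmoid(o + self.w_co * c_new)
        h_new = o * torch.tanh(c_new)
        return h_new, c_new


class GraphAttentiveVariationalEncoderDecoder(nn.Module):
    """Sketch: multihead attention over body-part nodes, a peephole LSTM
    temporal encoder, a variational bottleneck, a sequence decoder for
    reconstruction, and a classification head for activity labels."""

    def __init__(self, num_nodes, feat_dim, hidden=128, latent=64,
                 num_classes=10, heads=4):
        super().__init__()
        self.node_proj = nn.Linear(feat_dim, hidden)
        # Multihead attention treats body-part nodes as a fully connected graph.
        self.graph_attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.encoder_cell = PeepholeLSTMCell(hidden, hidden)
        self.to_mu = nn.Linear(hidden, latent)
        self.to_logvar = nn.Linear(hidden, latent)
        self.decoder = nn.LSTM(latent, hidden, batch_first=True)
        self.recon_head = nn.Linear(hidden, num_nodes * feat_dim)
        self.cls_head = nn.Linear(latent, num_classes)

    def forward(self, x):
        # x: (batch, time, num_nodes, feat_dim)
        B, T, N, D = x.shape
        h = x.new_zeros(B, self.encoder_cell.hidden_size)
        c = torch.zeros_like(h)
        for t in range(T):
            nodes = self.node_proj(x[:, t])               # (B, N, hidden)
            attn_out, _ = self.graph_attn(nodes, nodes, nodes)
            frame = attn_out.mean(dim=1)                  # pool body-part nodes
            h, c = self.encoder_cell(frame, (h, c))
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        dec_out, _ = self.decoder(z.unsqueeze(1).repeat(1, T, 1))
        recon = self.recon_head(dec_out).view(B, T, N, D)
        logits = self.cls_head(z)
        return recon, logits, mu, logvar, h


def loss_fn(x, recon, logits, labels, mu, logvar, h,
            beta=1e-3, sparse_w=1e-4, contract_w=1e-4):
    """Classification + reconstruction + KL terms, plus simple surrogates for
    sparsity and contraction on the encoder representation (illustrative only)."""
    ce = F.cross_entropy(logits, labels)
    rec = F.mse_loss(recon, x)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    sparse = h.abs().mean()          # L1 sparsity surrogate on hidden codes
    contract = (h ** 2).mean()       # crude stand-in for a Jacobian penalty
    return ce + rec + beta * kl + sparse_w * sparse + contract_w * contract


if __name__ == "__main__":
    # Toy forward/loss pass: 2 sequences, 16 frames, 20 body-part nodes, 3 channels.
    model = GraphAttentiveVariationalEncoderDecoder(num_nodes=20, feat_dim=3)
    x = torch.randn(2, 16, 20, 3)
    labels = torch.randint(0, 10, (2,))
    recon, logits, mu, logvar, h = model(x)
    print(loss_fn(x, recon, logits, labels, mu, logvar, h))

The sparsity and contractive penalties above are deliberately simple placeholders; the paper's actual regularizers and training objective should be taken from the publication itself.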