LDD-Track: An energy-efficient deep reinforcement learning framework for multi-subject tracking in mobile crowdsensing
Saved in:

| Published in: | Computer networks (Amsterdam, Netherlands : 1999), Vol. 272, p. 111735 |
|---|---|
| Main Authors: | , , |
| Format: | Journal Article |
| Language: | English |
| Published: | Elsevier B.V, 01.11.2025 |
| Subjects: | |
| ISSN: | 1389-1286 |
| Online Access: | Get full text |
| Summary: | Multi-subject tracking in Mobile Crowdsensing Systems (MCS) is a challenging task due to dynamic mobility, limited energy resources, and the need for real-time decisions. Traditional models such as Kalman Filters and Hidden Markov Models struggle in these conditions, while Transformer-based deep learning methods offer high accuracy but are too computationally demanding for mobile use. Unlike previous studies, which focus on one-to-one or collaborative group tracking and often lack scalability and adaptability to real-world complexity, we propose LDD-Track, a novel multi-subject tracking framework that integrates Long Short-Term Memory (LSTM) networks with an adaptive attention mechanism, Density-Based Spatial Clustering of Applications with Noise (DBSCAN), and Deep Q-Network (DQN)-based user allocation. The LSTM model, enhanced with attention mechanisms, dynamically assigns weights α_t to past trajectory points, filtering noise and improving prediction accuracy. The DBSCAN clustering technique groups subjects based on predicted movement, optimizing resource allocation and reducing computational overhead. The DQN-based user assignment strategy models resource optimization as a Markov Decision Process (MDP), leveraging the Q-value function Q(s_t, a_t) to ensure adaptive and energy-efficient user allocation. Extensive experiments on the Taxi Mobility in Rome dataset demonstrate the superiority of LDD-Track: the framework achieves a 51% reduction in energy consumption, a 39% increase in Coverage Completion Rate (CCR), and a 9.7% improvement in resource allocation efficiency compared to state-of-the-art methods. These findings validate the effectiveness of integrating attention-based prediction and deep reinforcement learning in large-scale, real-time MCS environments. |
|---|---|
| ISSN: | 1389-1286 |
| DOI: | 10.1016/j.comnet.2025.111735 |
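The abstract above describes three cooperating stages: attention-weighted trajectory prediction (weights α_t over past points), DBSCAN grouping of the predicted positions, and DQN-based user allocation driven by the Q-value function Q(s_t, a_t). The sketch below is not the authors' implementation; it only illustrates, under stated assumptions, how such a pipeline could be chained. The recency-based attention scores (standing in for LSTM-derived attention), the DBSCAN parameters (eps, min_samples), the toy reward function, the helper predict_next_position, and the tabular Q-learning update (standing in for a neural DQN) are all hypothetical choices made here, not details taken from the paper.

```python
# Minimal illustrative sketch of the three stages named in the abstract.
# All hyperparameters and the reward model are assumptions, not paper values.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)

# --- 1) Attention-weighted prediction over past trajectory points ------------
# Each subject has T past (x, y) points. The abstract says attention weights
# alpha_t filter noisy history; a real system would derive alpha_t from LSTM
# hidden states, while here the scores are simply recency-based (an assumption).
def predict_next_position(track: np.ndarray) -> np.ndarray:
    T = len(track)
    scores = np.arange(T, dtype=float)             # later points score higher
    alpha = np.exp(scores) / np.exp(scores).sum()  # softmax -> attention weights alpha_t
    return alpha @ track                           # weighted combination as the prediction

tracks = [rng.normal(loc=c, scale=0.3, size=(8, 2))
          for c in ((0.0, 0.0), (0.2, 0.1), (5.0, 5.0))]
predicted = np.array([predict_next_position(t) for t in tracks])

# --- 2) DBSCAN groups subjects by predicted movement -------------------------
# eps and min_samples are illustrative values only.
labels = DBSCAN(eps=1.0, min_samples=1).fit_predict(predicted)
print("cluster labels:", labels)  # nearby subjects share a label, so one user can cover them

# --- 3) DQN-style allocation as an MDP (tabular stand-in for a neural DQN) ---
# Q(s, a) follows the standard update
# Q(s, a) <- Q(s, a) + lr * (r + gamma * max_a' Q(s', a') - Q(s, a)).
n_clusters, n_users = len(set(labels)), 2
Q = np.zeros((n_clusters, n_users))
gamma, lr = 0.9, 0.1

def reward(cluster: int, user: int) -> float:
    # Hypothetical reward: full coverage credit minus a penalty when the user
    # is "far" from the cluster, standing in for the energy/coverage trade-off.
    return 1.0 - 0.5 * abs(cluster - user)

for _ in range(500):
    s = rng.integers(n_clusters)   # random exploration of states and actions
    a = rng.integers(n_users)
    s_next = rng.integers(n_clusters)
    Q[s, a] += lr * (reward(s, a) + gamma * Q[s_next].max() - Q[s, a])

print("user assigned to each cluster:", Q.argmax(axis=1))
```

In this toy run the learned policy assigns the nearby user to each cluster, which mirrors the allocation behaviour the abstract attributes to the DQN stage, but only at the level of the update rule, not of the actual network or reward used in LDD-Track.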