Context-Aware Proactive Edge Caching for Vehicular Edge Computing Based on Asynchronous Federated Learning

Published in: IEEE Internet of Things Journal, Volume 12, Issue 13, pp. 23195-23206
Main authors: Liao, Zhuofan; Liu, Pang; Zheng, Bin; Tang, XiaoYong
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.07.2025
ISSN: 2327-4662
Description
Summary: Edge caching is a promising technique for effectively reducing backhaul pressure and content access latency in the Internet of Vehicles (IoV). Existing content caching solutions still face the following challenges: 1) contents cached on edge servers quickly become outdated as time and user preferences change; 2) the large amount of vehicle data causes huge communication overhead; and 3) edge servers have limited storage resources. Simultaneously considering these issues to reduce transmission latency is a large-scale 0-1 constrained problem, which is NP-hard, and boosting the cache hit rate is a key entry point. In this work, we propose a context-aware proactive caching strategy (CPCS) based on asynchronous federated learning (AFL), which works as follows. To improve the accuracy of content popularity prediction, and thus the cache hit rate, we combine contextual information between different contents and use long short-term memory (LSTM) networks to analyze the dynamic preferences of vehicle users. Vehicles then train the model locally and upload it via asynchronous federated learning to complete the popularity prediction. To address the problem of local models becoming outdated in AFL, CPCS integrates model compression algorithms, enhancing system efficiency and prediction accuracy. Based on the prediction results, CPCS applies a content placement algorithm to approximate the optimal caching scheme. Simulation results show that CPCS can improve the cache hit rate by up to 17% compared to existing state-of-the-art caching strategies.
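
The record contains no code, so the following is only an illustrative Python sketch, not the authors' implementation: a small LSTM that maps a sequence of contextual request features to per-content popularity scores, followed by a greedy, capacity-constrained placement that approximates the 0-1 caching decision described in the summary. The model shape, feature layout, and the popularity-per-size heuristic are assumptions made for illustration.

# Minimal sketch (assumed names and shapes, not the CPCS code):
# LSTM popularity prediction + greedy capacity-constrained placement.
import torch
import torch.nn as nn

class PopularityLSTM(nn.Module):
    """Predict popularity scores for num_contents items from a sequence
    of contextual request features (feature layout is hypothetical)."""
    def __init__(self, num_contents, feature_dim, hidden_dim=64):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_contents)

    def forward(self, x):               # x: (batch, time, feature_dim)
        _, (h_n, _) = self.lstm(x)      # h_n: (num_layers, batch, hidden_dim)
        return self.head(h_n[-1])       # (batch, num_contents) popularity logits

def greedy_placement(popularity, sizes, capacity):
    """Approximate the 0-1 placement problem: cache the contents with the
    highest popularity per unit size until edge storage is exhausted."""
    order = sorted(range(len(popularity)),
                   key=lambda i: popularity[i] / sizes[i], reverse=True)
    cached, used = [], 0.0
    for i in order:
        if used + sizes[i] <= capacity:
            cached.append(i)
            used += sizes[i]
    return cached

# Toy usage: predict scores for 5 contents, then fill a cache of capacity 4.
model = PopularityLSTM(num_contents=5, feature_dim=8)
scores = torch.softmax(model(torch.randn(1, 10, 8)), dim=-1).squeeze(0).tolist()
print(greedy_placement(scores, sizes=[2, 1, 3, 2, 1], capacity=4))

In the paper's federated setting, vehicles would train such a local predictor on their own request histories and upload (compressed) model updates asynchronously; the server-side placement step above is one common greedy approximation for the NP-hard 0-1 constraint problem the summary mentions.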
DOI: 10.1109/JIOT.2025.3552682