Deep Reinforcement Learning techniques for dynamic task offloading in the 5G edge-cloud continuum
| Published in: | Journal of Cloud Computing: Advances, Systems and Applications, Volume 13, Issue 1, Article 94 (24 pages) |
|---|---|
| Main authors: | Nieto, Gorka; de la Iglesia, Idoia; Lopez-Novoa, Unai; Perfecto, Cristina |
| Medium: | Journal Article |
| Language: | English |
| Published: | Berlin/Heidelberg: Springer Berlin Heidelberg (Springer Nature B.V.; SpringerOpen), 01.12.2024 |
| ISSN: | 2192-113X |
| Online access: | Get full text: https://link.springer.com/10.1186/s13677-024-00658-0 |
| Abstract | The integration of new Internet of Things (IoT) applications and services relies heavily on task offloading to external devices because of the constrained computing and battery resources of IoT devices. Until now, the Cloud Computing (CC) paradigm has been a good fit for tasks where latency is not critical, but it falls short when latency matters, which is where Multi-access Edge Computing (MEC) comes in. In this work, we propose a distributed Deep Reinforcement Learning (DRL) tool to optimize the binary task offloading decision, that is, the independent decision of where to execute each computing task, depending on many factors. The optimization goal is to maximize the Quality-of-Experience (QoE) when performing tasks, defined as a metric related to the battery level of the User Equipment (UE), subject to satisfying the tasks’ latency requirements. This distributed DRL approach, specifically an Actor-Critic (AC) algorithm running on each UE, is evaluated through the simulation of two distinct scenarios and outperforms the analyzed baselines in terms of QoE values and/or energy consumption in dynamic environments, also demonstrating that decisions need to be adapted to the environment’s evolution. |
|---|---|
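
The abstract outlines the method at a high level: a distributed Actor-Critic agent runs on each UE and repeatedly makes a binary decision (execute locally vs. offload to the edge) so as to maximize a battery-related QoE metric while respecting task latency deadlines. Purely as an illustration of that idea, and not the authors' implementation, the following Python/TensorFlow sketch wires up such a per-UE agent; the state features, network sizes, learning rate, and reward shaping are assumptions made for the example.

```python
# Minimal illustrative sketch (not the authors' code) of a per-UE Actor-Critic agent
# for the binary offloading decision described in the abstract. The state features,
# network sizes, learning rate and reward shaping below are assumptions.
import numpy as np
import tensorflow as tf

STATE_DIM = 4   # assumed features: [battery level, task size, channel gain, latency budget]
GAMMA = 0.99    # discount factor


def build_actor_critic(state_dim: int) -> tf.keras.Model:
    """Shared trunk with a 2-way policy head (local vs. offload) and a value head."""
    inputs = tf.keras.Input(shape=(state_dim,))
    h = tf.keras.layers.Dense(64, activation="relu")(inputs)
    h = tf.keras.layers.Dense(64, activation="relu")(h)
    policy = tf.keras.layers.Dense(2, activation="softmax", name="policy")(h)
    value = tf.keras.layers.Dense(1, name="value")(h)
    return tf.keras.Model(inputs, [policy, value])


model = build_actor_critic(STATE_DIM)
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)


def qoe_reward(battery: float, latency: float, latency_budget: float) -> float:
    """Assumed QoE-style reward: tracks the remaining battery level and applies a
    fixed penalty whenever the task's latency requirement is violated."""
    return battery if latency <= latency_budget else battery - 1.0


def select_action(state: np.ndarray) -> int:
    """Sample the binary decision: 0 = execute locally, 1 = offload to the MEC host."""
    probs, _ = model(state[None, :].astype(np.float32))
    return int(np.random.choice(2, p=probs.numpy()[0]))


def train_step(state, action, reward, next_state, done):
    """One-step advantage Actor-Critic update on a single (s, a, r, s') transition."""
    state = tf.convert_to_tensor(state[None, :], dtype=tf.float32)
    next_state = tf.convert_to_tensor(next_state[None, :], dtype=tf.float32)
    with tf.GradientTape() as tape:
        probs, value = model(state)
        _, next_value = model(next_state)
        td_target = reward + GAMMA * tf.squeeze(next_value) * (1.0 - done)
        advantage = td_target - tf.squeeze(value)             # TD error as advantage estimate
        log_prob = tf.math.log(probs[0, action] + 1e-8)
        actor_loss = -log_prob * tf.stop_gradient(advantage)  # policy-gradient term
        critic_loss = tf.square(advantage)                    # value-regression term
        loss = actor_loss + 0.5 * critic_loss
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return float(loss)
```

In the distributed setting the abstract describes, each UE would run its own copy of such an agent and update it from its locally observed transitions, which is what lets the offloading decisions track the evolution of the environment.
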
| ArticleNumber | 94 |
| Author | Nieto, Gorka; Lopez-Novoa, Unai; de la Iglesia, Idoia; Perfecto, Cristina |
| Author details | 1. Gorka Nieto (gnieto@ikerlan.es): Ikerlan Technology Research Centre, Basque Research and Technology Alliance (BRTA); University of the Basque Country (UPV/EHU), School of Engineering in Bilbao. 2. Idoia de la Iglesia: Ikerlan Technology Research Centre, Basque Research and Technology Alliance (BRTA). 3. Unai Lopez-Novoa: University of the Basque Country (UPV/EHU), School of Engineering in Bilbao. 4. Cristina Perfecto: University of the Basque Country (UPV/EHU), School of Engineering in Bilbao. |
| ContentType | Journal Article |
| Copyright | The Author(s) 2024. This work is published under http://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. |
| DOI | 10.1186/s13677-024-00658-0 |
| Discipline | Computer Science |
| EISSN | 2192-113X |
| EndPage | 24 |
| GrantInformation | Basque Government, grant KK-2023/00038 |
| ISICitedReferencesCount | 14 |
| ISSN | 2192-113X |
| IsDoiOpenAccess | true |
| IsOpenAccess | true |
| IsPeerReviewed | true |
| IsScholarly | true |
| Issue | 1 |
| Keywords | Performance evaluation; Energy consumption; Internet of Things (IoT); Quality-of-Experience (QoE); Multi-access Edge Computing (MEC); Edge-Cloud-Continuum; Task offloading; Reinforcement Learning (RL) |
| Language | English |
| OpenAccessLink | https://link.springer.com/10.1186/s13677-024-00658-0 |
| PageCount | 24 |
| PublicationCentury | 2000 |
| PublicationDate | 2024-12-01 |
| PublicationDecade | 2020 |
| PublicationPlace | Berlin/Heidelberg |
| PublicationSubtitle | Advances, Systems and Applications |
| PublicationTitle | Journal of cloud computing : advances, systems and applications |
| PublicationTitleAbbrev | J Cloud Comp |
| PublicationYear | 2024 |
| Publisher | Springer Berlin Heidelberg Springer Nature B.V SpringerOpen |
| StartPage | 94 |
| SubjectTerms | Algorithms Cloud computing Computation offloading Computer Communication Networks Computer Science Computer System Implementation Computer Systems Organization and Communication Networks Deep learning Edge computing Energy consumption Information Systems Applications (incl.Internet) Internet of Things Mobile computing Mobile Edge Computing Meets AI Multi-access Edge Computing (MEC) Performance evaluation Quality-of-Experience (QoE) Reinforcement Learning (RL) Software Engineering/Programming and Operating Systems Special Purpose and Application-Based Systems Task offloading |
| Title | Deep Reinforcement Learning techniques for dynamic task offloading in the 5G edge-cloud continuum |
| URI | https://link.springer.com/article/10.1186/s13677-024-00658-0 https://www.proquest.com/docview/3050350163 https://doaj.org/article/34d162c1282748dda8541d32a5f7da22 |
| Volume | 13 |