When Learning Joins Edge: Real-Time Proportional Computation Offloading via Deep Reinforcement Learning
| Published in: | 2019 IEEE 25th International Conference on Parallel and Distributed Systems (ICPADS) pp. 414 - 421 |
|---|---|
| Main Authors: | Chen, Ning; Zhang, Sheng; Qian, Zhuzhong; Wu, Jie; Lu, Sanglu |
| Format: | Conference Proceeding |
| Language: | English |
| Published: | IEEE, 01.12.2019 |
| Subjects: | Advanced Deep Q Network; Bandwidth; Base stations; Computation offloading; Deep reinforcement learning; Delays; Energy consumption; Expert Buffer Mechanism; NP-hard problem; Optimization; Real-time systems; Servers |
| Online Access: | https://ieeexplore.ieee.org/document/8975787 |
| Abstract | Computation offloading is essential to the interaction between users and compute-intensive applications. Prior research has focused on deciding whether to execute an application locally or remotely, but has ignored the specific proportion of the application to offload; full offloading cannot make the best use of client and server resources. In this paper, we propose an innovative reinforcement learning (RL) method to solve the proportional computation offloading problem. We consider a common offloading scenario with time-variant bandwidth and heterogeneous devices, in which the device generates applications constantly. For each application, the client has to choose whether to execute it locally or remotely, and determines the proportion to be offloaded. We formalize the problem as a long-term optimization problem and then propose an RL-based algorithm to solve it. The basic idea is to estimate the benefit of possible decisions and select the decision with the maximum benefit. Instead of adopting the original Deep Q Network (DQN), we propose Advanced DQN (ADQN), which adds a Priority Buffer Mechanism and an Expert Buffer Mechanism; these improve the utilization of samples and overcome the cold-start problem, respectively. The experimental results show ADQN's high feasibility and efficiency compared with several traditional policies, such as the None Offloading Policy, Random Offloading Policy, Link Capacity Optimal Policy, and Computing Capability Optimal Policy. Finally, we analyse the effect of expert buffer size and learning rate on ADQN's performance. |
|---|---|
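The record contains no code, but the abstract's core idea — discretize the offloading proportion, estimate the long-term benefit (Q-value) of each candidate decision, pick the decision with the maximum benefit, and train from a replay buffer that mixes prioritized agent experience with a fixed set of expert decisions — can be sketched briefly. The sketch below is an illustrative approximation, not the paper's ADQN implementation: the action discretization, the buffer class, and the names (`ReplayBuffer`, `choose_proportion`, `q_function`) are assumptions introduced here for illustration.

```python
import random
from collections import deque

import numpy as np

# Hypothetical discretization of the offloading decision: each action is the
# fraction of the application to offload (0.0 = fully local, 1.0 = fully remote).
ACTIONS = np.linspace(0.0, 1.0, num=11)


class ReplayBuffer:
    """Experience buffer with a separate, never-evicted 'expert' partition.

    Sampling mixes expert transitions with the agent's own transitions so that
    early training is guided by known-good decisions -- a rough stand-in for the
    paper's Expert Buffer Mechanism (addressing the cold-start problem).
    """

    def __init__(self, capacity=10_000, expert_fraction=0.25):
        self.agent = deque(maxlen=capacity)    # (priority, transition) pairs
        self.expert = []                       # seeded once, never overwritten
        self.expert_fraction = expert_fraction

    def seed_expert(self, transitions):
        self.expert.extend(transitions)

    def push(self, transition, priority=1.0):
        # Higher priority -> sampled more often; a simplified version of the
        # Priority Buffer Mechanism (better utilization of informative samples).
        self.agent.append((priority, transition))

    def sample(self, batch_size):
        n_expert = min(len(self.expert), int(batch_size * self.expert_fraction))
        batch = random.sample(self.expert, n_expert) if n_expert else []
        if self.agent:
            priorities = np.array([p for p, _ in self.agent], dtype=float)
            probs = priorities / priorities.sum()
            idx = np.random.choice(len(self.agent), size=batch_size - n_expert, p=probs)
            batch += [self.agent[i][1] for i in idx]
        return batch


def choose_proportion(q_function, state, epsilon=0.1):
    """Pick the offloading proportion whose estimated long-term benefit is largest."""
    if random.random() < epsilon:
        return float(random.choice(ACTIONS))           # explore
    q_values = [q_function(state, a) for a in ACTIONS]
    return float(ACTIONS[int(np.argmax(q_values))])    # exploit: maximum estimated benefit
```

In the paper's setting, the state would describe the current bandwidth, the client's and server's computing capabilities, and the generated application, and `q_function` would be the trained ADQN; both are left abstract in this sketch.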
| Author | Chen, Ning (Nanjing University, China); Zhang, Sheng (Nanjing University, China); Qian, Zhuzhong (Nanjing University, China); Wu, Jie (Temple University, USA); Lu, Sanglu (Nanjing University, China) |
| CODEN | IEEPAD |
| DOI | 10.1109/ICPADS47876.2019.00066 |
| EISBN | 9781728125831; 1728125839 |
| Genre | orig-research |
| ISICitedReferencesCount | 18 |
| PageCount | 8 |