Presentation | 2021-03-04 Application Offloading Mechanism based on Distributed Reinforcement Learning in MEC Environment Soh Takamura, Takao Kondo, Fumio Teraoka |
---|---|
Abstract(in Japanese) | (See Japanese page) |
Abstract(in English) | This paper proposes a mechanism, based on distributed reinforcement learning, for determining the offloading strategy of an application running on a User Equipment (UE) with battery and computational resource constraints, assuming a Multi-access Edge Computing (MEC) environment. The MEC environment envisioned in this paper consists of a large number of UEs, multiple MEC servers, and a cloud server. Each UE executing an application requests and collects estimated reward values from neighboring MEC servers; these values serve as decision indicators, and the UE selects the MEC server to offload to based on the collected estimates. The UE then sends the results of the application execution to the cloud server via the MEC server. The cloud server updates the learning model at regular intervals and synchronizes it with the MEC servers. This process is expected to reduce the battery consumption of UEs in an unstable wireless communication environment. In the evaluation, we assessed offload selection for an image-processing application on the UEs, assuming game rendering, in terms of learning convergence speed and energy consumption, varying the number of UEs according to the UE mobility scenario: with 20 UEs, learning converged 9 times faster than with 1 UE and 2.4 times faster than with 10 UEs. |
Keyword(in Japanese) | (See Japanese page) |
Keyword(in English) | MEC / Deep Reinforcement Learning / Distributed Reinforcement Learning |
Paper # | IN2020-67 |
Date of Issue | 2021-02-25 (IN) |
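The abstract describes each UE collecting estimated reward values from neighboring MEC servers and choosing an offload destination from them. The paper does not give the selection rule, so the following is only a minimal sketch assuming a standard epsilon-greedy policy over the collected estimates; the function and parameter names (`select_offload_target`, `reward_estimates`, `epsilon`) are hypothetical, not from the paper.

```python
import random

def select_offload_target(reward_estimates, epsilon=0.1, rng=random):
    """Pick a MEC server ID given a dict {server_id: estimated_reward}.

    With probability epsilon, explore a uniformly random server;
    otherwise exploit the server with the highest estimated reward.
    Returns None when no MEC server responded (i.e. run locally).
    """
    if not reward_estimates:
        return None  # no reachable MEC server: execute on the UE itself
    if rng.random() < epsilon:
        return rng.choice(list(reward_estimates))  # exploration step
    return max(reward_estimates, key=reward_estimates.get)  # exploitation

# Example: with exploration disabled, the highest estimate wins.
target = select_offload_target({"mec-a": 1.2, "mec-b": 3.5}, epsilon=0.0)
```

With `epsilon=0.0` the choice is deterministic (`"mec-b"` above); in the paper's setting the estimates themselves would come from the model that the cloud server periodically synchronizes to the MEC servers.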
Conference Information | |
Committee | IN / NS |
---|---|
Conference Date | 2021/3/4 (2 days) |
Place (in Japanese) | (See Japanese page) |
Place (in English) | Online |
Topics (in Japanese) | (See Japanese page) |
Topics (in English) | General |
Chair | Kenji Ishida(Hiroshima City Univ.) / Akihiro Nakao(Univ. of Tokyo) |
Vice Chair | Kunio Hato(Internet Multifeed) / Tetsuya Oishi(NTT) |
Secretary | Kunio Hato(Hiroshima City Univ.) / Tetsuya Oishi(KDDI Research) |
Assistant | / Shinya Kawano(NTT) |
Paper Information | |
Registration To | Technical Committee on Information Networks / Technical Committee on Network Systems |
---|---|
Language | JPN |
Title (in Japanese) | (See Japanese page) |
Sub Title (in Japanese) | (See Japanese page) |
Title (in English) | Application Offloading Mechanism based on Distributed Reinforcement Learning in MEC Environment |
Sub Title (in English) | |
Keyword(1) | MEC |
Keyword(2) | Deep Reinforcement Learning |
Keyword(3) | Distributed Reinforcement Learning |
1st Author's Name | Soh Takamura |
1st Author's Affiliation | Keio University(Keio Univ.) |
2nd Author's Name | Takao Kondo |
2nd Author's Affiliation | Keio University(Keio Univ.) |
3rd Author's Name | Fumio Teraoka |
3rd Author's Affiliation | Keio University(Keio Univ.) |
Date | 2021-03-04 |
Paper # | IN2020-67 |
Volume (vol) | vol.120 |
Number (no) | IN-414 |
Page | pp.79-84 (IN) |
#Pages | 6 |
Date of Issue | 2021-02-25 (IN) |