TY - GEN
T1 - DTA-RL
T2 - 2024 IEEE Global Communications Conference, GLOBECOM 2024
AU - Fu, Lianhao
AU - Cheng, Nan
AU - Wang, Xiucheng
AU - Sun, Ruijin
AU - Lu, Ning
AU - Su, Zhou
AU - Li, Changle
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Mobile edge computing (MEC) enhances data processing by enabling users to offload tasks to edge servers with sufficient computation resources. In multi-user, multi-server scenarios, offloading scheduling is overwhelmingly complex and significantly influences the processing delay, making deep learning (DL) an appealing approach. Yet, prior DL-based methods often overlook dynamic topology challenges due to the inflexibility of fixed neural network structures, leading to constrained performance. To tackle this challenge, a novel reinforcement learning framework named dynamic topology adaptive reinforcement learning (DTA-RL) is proposed in this paper. The MEC network is modeled as a graph based on the communication relationships between users and servers, and the offloading process is formulated as a Markov decision process (MDP). Building on the graph model and MDP, DTA-RL leverages graph attention networks to handle dynamic observation spaces and incorporates an attention mechanism for decision-making in environments with evolving action spaces. Simulation results illustrate that DTA-RL effectively reduces task processing delays and offloading failure rates within the MEC system. Furthermore, the pre-trained model can be seamlessly deployed in networks with new topologies without significant performance degradation. The code is available at https://github.com/UNIC-Lab/DTA-RL.
AB - Mobile edge computing (MEC) enhances data processing by enabling users to offload tasks to edge servers with sufficient computation resources. In multi-user, multi-server scenarios, offloading scheduling is overwhelmingly complex and significantly influences the processing delay, making deep learning (DL) an appealing approach. Yet, prior DL-based methods often overlook dynamic topology challenges due to the inflexibility of fixed neural network structures, leading to constrained performance. To tackle this challenge, a novel reinforcement learning framework named dynamic topology adaptive reinforcement learning (DTA-RL) is proposed in this paper. The MEC network is modeled as a graph based on the communication relationships between users and servers, and the offloading process is formulated as a Markov decision process (MDP). Building on the graph model and MDP, DTA-RL leverages graph attention networks to handle dynamic observation spaces and incorporates an attention mechanism for decision-making in environments with evolving action spaces. Simulation results illustrate that DTA-RL effectively reduces task processing delays and offloading failure rates within the MEC system. Furthermore, the pre-trained model can be seamlessly deployed in networks with new topologies without significant performance degradation. The code is available at https://github.com/UNIC-Lab/DTA-RL.
KW - attention mechanism
KW - dynamic topology
KW - mobile edge computing networks
KW - reinforcement learning
UR - https://www.scopus.com/pages/publications/105000818990
U2 - 10.1109/GLOBECOM52923.2024.10901650
DO - 10.1109/GLOBECOM52923.2024.10901650
M3 - Conference contribution
AN - SCOPUS:105000818990
T3 - Proceedings - IEEE Global Communications Conference, GLOBECOM
SP - 3334
EP - 3339
BT - GLOBECOM 2024 - 2024 IEEE Global Communications Conference
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 8 December 2024 through 12 December 2024
ER -