TY - GEN
T1 - Digital Twin-Assisted Efficient Reinforcement Learning for Edge Task Scheduling
AU - Wang, Xiucheng
AU - Ma, Longfei
AU - Li, Haocheng
AU - Yin, Zhisheng
AU - Luan, Tom
AU - Cheng, Nan
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - Task scheduling is a critical problem when one user offloads multiple different tasks to the edge server. When a user has multiple tasks to offload and only one task can be transmitted to the server at a time, while the server processes tasks according to the transmission order, the problem is NP-hard. It is difficult for traditional optimization methods to quickly obtain the optimal solution, while approaches based on reinforcement learning face the challenge of an excessively large action space and slow convergence. In this paper, we propose a Digital Twin (DT)-assisted RL-based task scheduling method to improve the performance and convergence of RL. We use the DT to simulate the results of different decisions made by the agent, so that one agent can try multiple actions at a time or, equivalently, multiple agents can interact with the environment in parallel in the DT. In this way, the exploration efficiency of RL can be significantly improved, so RL converges faster and is less likely to fall into local optima. In particular, two algorithms are designed to make task scheduling decisions, i.e., DT-assisted asynchronous Q-learning (DTAQL) and DT-assisted exploring Q-learning (DTEQL). Simulation results show that both algorithms significantly improve the convergence speed of Q-learning by increasing exploration efficiency.
AB - Task scheduling is a critical problem when one user offloads multiple different tasks to the edge server. When a user has multiple tasks to offload and only one task can be transmitted to the server at a time, while the server processes tasks according to the transmission order, the problem is NP-hard. It is difficult for traditional optimization methods to quickly obtain the optimal solution, while approaches based on reinforcement learning face the challenge of an excessively large action space and slow convergence. In this paper, we propose a Digital Twin (DT)-assisted RL-based task scheduling method to improve the performance and convergence of RL. We use the DT to simulate the results of different decisions made by the agent, so that one agent can try multiple actions at a time or, equivalently, multiple agents can interact with the environment in parallel in the DT. In this way, the exploration efficiency of RL can be significantly improved, so RL converges faster and is less likely to fall into local optima. In particular, two algorithms are designed to make task scheduling decisions, i.e., DT-assisted asynchronous Q-learning (DTAQL) and DT-assisted exploring Q-learning (DTEQL). Simulation results show that both algorithms significantly improve the convergence speed of Q-learning by increasing exploration efficiency.
KW - digital twin
KW - exploration efficiency
KW - reinforcement learning
KW - task scheduling
UR - https://www.scopus.com/pages/publications/85137770199
U2 - 10.1109/VTC2022-Spring54318.2022.9860495
DO - 10.1109/VTC2022-Spring54318.2022.9860495
M3 - Conference contribution
AN - SCOPUS:85137770199
T3 - IEEE Vehicular Technology Conference
BT - 2022 IEEE 95th Vehicular Technology Conference - Spring, VTC 2022-Spring - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 95th IEEE Vehicular Technology Conference - Spring, VTC 2022-Spring
Y2 - 19 June 2022 through 22 June 2022
ER -