TY - CHAP
T1 - Aerial-Assisted Intelligent Resource Allocation
AU - Peng, Haixia
AU - Ye, Qiang
AU - Shen, Xuemin Sherman
N1 - Publisher Copyright:
© 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
PY - 2022
Y1 - 2022
N2 - In this chapter, we investigate multi-dimensional resource management for UAV-assisted MVNETs. To efficiently provide on-demand resource access, the MeNB and UAV, each equipped with an MEC server, cooperatively make association decisions and allocate appropriate amounts of resources to vehicles. First, we introduce an SADDPG-based scheme that centrally allocates the multi-dimensional resources via a central controller installed at the MeNB. Then, to avoid the extra time and spectrum consumed by communications between the MEC servers and a central controller, we formulate resource allocation at the MEC servers as a distributed optimization problem with the objective of maximizing the number of offloaded tasks while satisfying their heterogeneous QoS requirements, and solve it with an MADDPG-based method. By centrally training the MADDPG model offline, the MEC servers, acting as learning agents, can rapidly make vehicle-server association and resource allocation decisions during the online execution stage. Simulation results show that the MADDPG-based method converges within 200 training episodes, comparable to the SADDPG-based one, and that both proposed schemes achieve higher delay/QoS satisfaction ratios than a random scheme.
KW - Multi-access edge computing
KW - Multi-agent DDPG
KW - Multi-dimensional resource management
KW - Single-agent DDPG
KW - Unmanned aerial vehicle
KW - Vehicular networks
UR - https://www.scopus.com/pages/publications/85127863219
U2 - 10.1007/978-3-030-96507-5_5
DO - 10.1007/978-3-030-96507-5_5
M3 - Chapter
AN - SCOPUS:85127863219
T3 - Wireless Networks (United Kingdom)
SP - 111
EP - 143
BT - Wireless Networks (United Kingdom)
PB - Springer Nature
ER -