Abstract
In this paper, we investigate multi-dimensional resource management for unmanned aerial vehicle (UAV)-assisted vehicular networks. To efficiently provide on-demand resource access, the macro eNodeB and the UAV, both mounted with multi-access edge computing (MEC) servers, cooperatively make association decisions and allocate proper amounts of resources to vehicles. Since there is no central controller, we formulate the resource allocation at the MEC servers as a distributed optimization problem that maximizes the number of offloaded tasks while satisfying their heterogeneous quality-of-service (QoS) requirements, and solve it with a multi-agent deep deterministic policy gradient (MADDPG)-based method. By centrally training the MADDPG model offline, the MEC servers, acting as learning agents, can then rapidly make vehicle association and resource allocation decisions during the online execution stage. Simulation results show that the MADDPG-based method converges within 200 training episodes, comparable to the single-agent DDPG (SADDPG)-based method, and that the proposed MADDPG-based resource management scheme achieves higher delay/QoS satisfaction ratios than the SADDPG-based and random schemes.
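The centralized-training/decentralized-execution pattern described in the abstract can be sketched as follows. This is a minimal illustrative toy, not the paper's exact model: the two agents (the eNodeB-mounted and UAV-mounted MEC servers), the observation contents, and the network sizes are all hypothetical placeholders, and the actors/critic are plain linear maps rather than trained deep networks.

```python
# Sketch of MADDPG-style execution: each MEC-server agent acts on its
# *local* observation, while a centralized critic (used only during
# offline training) evaluates the joint observations and actions.
# All dimensions and parameters here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

class Actor:
    """Deterministic policy: local observation -> resource-allocation action."""
    def __init__(self, obs_dim, act_dim):
        self.W = rng.standard_normal((act_dim, obs_dim)) * 0.1

    def act(self, obs):
        # Squash to (0, 1): fractions of spectrum/computing resources allocated.
        return 1.0 / (1.0 + np.exp(-self.W @ obs))

class CentralCritic:
    """Q(s_all, a_all): sees every agent's observation and action,
    which is only needed during centralized offline training."""
    def __init__(self, joint_dim):
        self.w = rng.standard_normal(joint_dim) * 0.1

    def q_value(self, joint):
        return float(self.w @ joint)

obs_dim, act_dim, n_agents = 4, 2, 2  # two agents: eNodeB MEC and UAV MEC
actors = [Actor(obs_dim, act_dim) for _ in range(n_agents)]
critic = CentralCritic(n_agents * (obs_dim + act_dim))

# Online execution stage: each agent uses only its own local observation.
local_obs = [rng.random(obs_dim) for _ in range(n_agents)]
actions = [actor.act(obs) for actor, obs in zip(actors, local_obs)]

# Offline training stage would evaluate the joint state-action value
# to compute policy gradients for each actor.
joint = np.concatenate(local_obs + actions)
q = critic.q_value(joint)
```

The key design point this mirrors is that the critic's joint input breaks the non-stationarity seen by each learner during training, while the actors remain independently executable online with no central controller.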
| Original language | English |
|---|---|
| Article number | 9254093 |
| Pages (from-to) | 131-141 |
| Number of pages | 11 |
| Journal | IEEE Journal on Selected Areas in Communications |
| Volume | 39 |
| Issue number | 1 |
| DOIs | |
| State | Published - Jan 2021 |
| Externally published | Yes |
Keywords
- Vehicular networks
- multi-access edge computing
- multi-agent DDPG
- multi-dimensional resource management
- unmanned aerial vehicle
Title: Multi-Agent Reinforcement Learning Based Resource Management in MEC- and UAV-Assisted Vehicular Networks