Multi-Agent Reinforcement Learning Based Resource Management in MEC- and UAV-Assisted Vehicular Networks

Research output: Contribution to journal › Article › peer-review

493 Scopus citations

Abstract

In this paper, we investigate multi-dimensional resource management for unmanned aerial vehicle (UAV)-assisted vehicular networks. To efficiently provide on-demand resource access, the macro eNodeB and the UAV, each equipped with a multi-access edge computing (MEC) server, cooperatively make association decisions and allocate proper amounts of resources to vehicles. Since there is no central controller, we formulate the resource allocation at the MEC servers as a distributed optimization problem to maximize the number of offloaded tasks while satisfying their heterogeneous quality-of-service (QoS) requirements, and then solve it with a multi-agent deep deterministic policy gradient (MADDPG)-based method. By training the MADDPG model centrally offline, the MEC servers, acting as learning agents, can then rapidly make vehicle association and resource allocation decisions during the online execution stage. Our simulation results show that the MADDPG-based method converges within 200 training episodes, comparable to the single-agent DDPG (SADDPG)-based one. Moreover, the proposed MADDPG-based resource management scheme achieves higher delay/QoS satisfaction ratios than the SADDPG-based and random schemes.
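The centralized-training, decentralized-execution pattern the abstract describes can be sketched as follows. This is a minimal structural illustration only: the dimensions, the sigmoid actor, and the linear critic are assumptions for the sketch, not the paper's actual model. The key property shown is that each agent's actor uses only its local observation at execution time, while the per-agent centralized critics (used only during offline training) condition on the joint observations and actions of all agents.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): each MEC server (agent)
# observes a local state and outputs a resource-allocation action.
N_AGENTS = 2   # e.g., the macro eNodeB and the UAV, each hosting an MEC server
OBS_DIM = 6    # size of each agent's local observation (assumed)
ACT_DIM = 3    # size of each agent's allocation decision (assumed)

def actor(theta, obs):
    """Deterministic policy: maps a LOCAL observation to a bounded action."""
    return 1.0 / (1.0 + np.exp(-(theta @ obs)))  # sigmoid keeps allocations in (0, 1)

def centralized_critic(w, joint_obs, joint_act):
    """Q-value conditioned on ALL agents' observations and actions (training only)."""
    x = np.concatenate([joint_obs, joint_act])
    return float(w @ x)

# One actor parameter set per agent; one centralized critic per agent.
actors = [rng.normal(size=(ACT_DIM, OBS_DIM)) for _ in range(N_AGENTS)]
critics = [rng.normal(size=N_AGENTS * (OBS_DIM + ACT_DIM)) for _ in range(N_AGENTS)]

# --- Online execution: each agent acts on its own local observation only. ---
local_obs = [rng.normal(size=OBS_DIM) for _ in range(N_AGENTS)]
actions = [actor(th, o) for th, o in zip(actors, local_obs)]

# --- Offline training: each agent's critic sees the joint observation-action. ---
joint_obs = np.concatenate(local_obs)
joint_act = np.concatenate(actions)
q_values = [centralized_critic(w, joint_obs, joint_act) for w in critics]
```

In the full MADDPG algorithm, the critics would be trained by temporal-difference updates over a replay buffer and the actors by the deterministic policy gradient through their own critic; the sketch above only shows the information flow that makes decentralized execution possible.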

Original language: English
Article number: 9254093
Pages (from-to): 131-141
Number of pages: 11
Journal: IEEE Journal on Selected Areas in Communications
Volume: 39
Issue number: 1
DOIs
State: Published - Jan 2021
Externally published: Yes

Keywords

  • Vehicular networks
  • multi-access edge computing
  • multi-agent DDPG
  • multi-dimensional resource management
  • unmanned aerial vehicle
