Adaptive mean field multi-agent reinforcement learning

Research output: Contribution to journal › Article › peer-review

12 Scopus citations

Abstract

Large-scale Multi-Agent Reinforcement Learning (MARL) is fundamentally challenging due to the curse of dimensionality. In a homogeneous multi-agent setting, mean field theory provides an effective route to scalable MARL by abstracting all other agents into a virtual mean agent, under the assumption that every agent exerts an equal, infinitesimal influence. In many real scenarios, however, only a few neighboring agents, rather than all agents, affect an agent's decision-making, and different neighbors may influence it to varying degrees. In this paper, without being restricted to a homogeneous setting, we propose adaptive mean field MARL, which builds on the attention mechanism and handles many-agent scenarios in which the influence relationships among agents may differ. Specifically, we first derive the mean field approximation with adaptive weights and give an error bound for the approximation. We then propose adaptive mean field Q-Learning and describe how the adaptive weights are obtained. In addition, we discuss how the proposed approach differs from existing mean field MARL methods. Finally, experiments on simulation platforms show that the proposed approach outperforms the state-of-the-art method.
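To make the core idea of the abstract concrete, the following is a minimal sketch of an attention-weighted mean action, the ingredient that distinguishes adaptive mean field methods from the standard mean field approximation. The embedding dimensions, the scaled dot-product scoring rule, and the variable names (`query`, `keys`, `neighbor_actions`) are illustrative assumptions, not the paper's exact formulation; with uniform weights the computation reduces to the ordinary mean of neighbor actions used in standard mean field Q-Learning.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def adaptive_mean_action(query, keys, neighbor_actions):
    """Attention-weighted mean of neighbor actions (illustrative sketch).

    query:            (d,) embedding of the focal agent  -- assumed feature
    keys:             (k, d) embeddings of the k neighbors
    neighbor_actions: (k, m) one-hot (or probability) action vectors

    Returns an (m,) weighted mean action. Because the attention weights
    sum to 1, the result is a convex combination of neighbor actions.
    """
    scores = keys @ query / np.sqrt(len(query))  # scaled dot-product scores
    weights = softmax(scores)                    # adaptive weights, sum to 1
    return weights @ neighbor_actions

# Toy usage: 3 neighbors, 2 discrete actions, 4-dim embeddings.
rng = np.random.default_rng(0)
d, k, m = 4, 3, 2
query = rng.normal(size=d)
keys = rng.normal(size=(k, d))
neighbor_actions = np.eye(m)[rng.integers(0, m, size=k)]  # one-hot rows
mean_action = adaptive_mean_action(query, keys, neighbor_actions)
print(mean_action.shape, float(mean_action.sum()))
```

This weighted mean action would then replace the uniform mean action as the extra input to the per-agent Q-function, letting each agent emphasize the neighbors that matter most to its decision.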

Original language: English
Article number: 120560
Journal: Information Sciences
Volume: 669
DOIs
State: Published - May 2024

Keywords

  • Adaptive mean field approximation
  • Large scale
  • Multi-agent reinforcement learning
