Consensus-based Distributed Reinforcement Learning with Primal-Dual Update for Networked Microgrids On-Line Coordination

Research output: Contribution to journal › Article › peer-review

Abstract

This paper develops a distributed reinforcement learning (RL) method to coordinate cooperative microgrids (MGs). The high uncertainty of power loads and renewable energy sources motivates the operator to perform real-time dispatch. On the one hand, existing online methods usually rely on approximate models, which can lead to intractable constraint violations. A common remedy is to relax the constraint into a chance constraint, yet even then its satisfaction is hard to guarantee in practice. On the other hand, some MGs may wish to keep their local costs and states private. To address these problems, we make the following contributions. First, the coordination problem is reformulated as a constrained multi-agent Markov decision process. Second, a distributed RL algorithm with a theoretical convergence guarantee is developed. Third, to further preserve local private information and improve performance, the algorithm is augmented with a local feature extraction module for each agent; this module can also be regarded as encrypting the local state information. Fourth, numerical experiments are carried out to validate the effectiveness of the modified algorithm.
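To give a flavor of the primal-dual update underlying such constrained formulations, here is a minimal, self-contained sketch on a hypothetical one-state, two-action problem (not the paper's microgrid formulation): the policy (primal variable) ascends the Lagrangian while the multiplier (dual variable) ascends on constraint violation, so the averaged policy settles near the cost budget. All numbers and names below are illustrative assumptions.

```python
# Hypothetical toy problem: the "dispatch" action earns reward 1.0 but
# incurs constraint cost 1.0; the "idle" action earns 0.5 at zero cost.
# Constraint: expected cost must not exceed the budget 0.3.
r_hi, r_lo = 1.0, 0.5   # rewards of the costly / free action
budget = 0.3            # allowed expected constraint cost

p = 0.5      # primal variable: probability of the costly action
lam = 0.0    # dual variable: Lagrange multiplier (price of the constraint)
lr_p = lr_d = 0.02      # primal / dual step sizes
p_sum = 0.0
steps = 5000

for _ in range(steps):
    # Primal ascent on L(p, lam) = p*r_hi + (1-p)*r_lo - lam*(p - budget):
    # dL/dp = (r_hi - r_lo) - lam
    p += lr_p * ((r_hi - r_lo) - lam)
    p = min(1.0, max(0.0, p))   # project the probability back onto [0, 1]
    # Dual ascent: raise the price while the constraint is violated
    lam += lr_d * (p - budget)
    lam = max(0.0, lam)         # multipliers stay non-negative
    p_sum += p

p_avg = p_sum / steps
print(p_avg, lam)  # averaged policy hovers near the cost budget
```

The individual iterates of such gradient descent-ascent schemes typically oscillate around the saddle point, which is why the averaged iterate `p_avg` is reported; this mirrors the standard analysis device for primal-dual methods.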

Original language: English
Journal: IEEE Transactions on Automation Science and Engineering
DOIs
State: Accepted/In press - 2025

Keywords

  • Constrained Markov decision processes
  • Distribution network
  • Microgrids
  • Multi-agent system
  • Reinforcement learning
