Enhancing the Tolerance of Voltage Regulation to Cyber Contingencies via Graph-Based Deep Reinforcement Learning

Research output: Contribution to journal › Article › peer-review


Abstract

The volatility introduced by the high penetration of distributed energy resources (DERs) makes distribution networks more susceptible to voltage violations. Moreover, with the increasing coupling of the cyber and physical sides of modern power systems, the risk of cyber contingencies (CCs) is rising, which can undermine existing voltage regulation methods. To address these issues, this paper proposes a novel graph-based deep reinforcement learning (DRL) framework for enhancing the tolerance of voltage regulation to CCs. First, typical CCs, including data missing, data noise, and time delay, are modeled in a unified manner based on the cyber-physical architecture of the distribution network. The voltage regulation problem is formulated as a Markov decision process (MDP) with a purpose-designed reward function, and the partial observability that arises in scenarios involving CCs is characterized. Within the proposed framework, a novel graph feature representation (GFR) algorithm, which fully exploits the graph information in the cyber-physical distribution network to mitigate the impact of CCs, is developed in detail and embedded into the proximal policy optimization (PPO) algorithm; its implementation is specified to ensure feasibility. Case studies on the 33-bus and 141-bus networks demonstrate the effectiveness of the proposed method and its tolerance to CCs of different severities.
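The three contingency types named in the abstract (data missing, data noise, and time delay) can be illustrated with a minimal sketch of how each would corrupt the measurement vector an agent observes. This is not the paper's implementation; the 30% dropout rate, noise standard deviation, delay length, and zero-fill convention are all illustrative assumptions.

```python
import numpy as np


def apply_cyber_contingency(history, kind, rng, noise_std=0.02, delay=2):
    """Corrupt the latest measurement snapshot with one cyber contingency.

    history : array of shape (T, n) -- rows are past measurement vectors
              (e.g. bus voltages); history[-1] is the current snapshot.
    kind    : 'missing' | 'noise' | 'delay'
    rng     : numpy.random.Generator
    Returns the corrupted current observation, shape (n,).
    """
    current = history[-1].copy()
    if kind == "missing":
        # Data missing: randomly drop measurements (zero-fill placeholder).
        mask = rng.random(current.shape) < 0.3
        current[mask] = 0.0
    elif kind == "noise":
        # Data noise: additive Gaussian perturbation on every channel.
        current = current + rng.normal(0.0, noise_std, size=current.shape)
    elif kind == "delay":
        # Time delay: the agent receives a stale snapshot from `delay` steps ago.
        current = history[max(0, len(history) - 1 - delay)].copy()
    else:
        raise ValueError(f"unknown contingency kind: {kind}")
    return current
```

Under such a corruption model, the agent's observation no longer equals the true grid state, which is why the abstract frames the CC-affected problem as partially observable.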

Original language: English
Pages (from-to): 4661-4673
Number of pages: 13
Journal: IEEE Transactions on Power Systems
Volume: 39
Issue number: 2
DOIs
State: Published - 1 Mar 2024

Keywords

  • cyber contingencies
  • cyber-physical distribution network
  • deep reinforcement learning
  • graph feature representation
  • voltage regulation

