A deep reinforcement learning-based charging scheduling approach with augmented Lagrangian for electric vehicles

Research output: Contribution to journal › Article › peer-review

13 Scopus citations

Abstract

The adoption of electric vehicles (EVs) is increasingly recognized as a promising path to decarbonization, and large numbers of EVs have been integrated into transportation and power systems in recent years. The operating states of the transportation and power systems strongly influence EV behavior, introducing uncertainty into EVs' driving patterns and energy demand. Such uncertainty makes it challenging to optimize the operations of charging stations, which provide both charging and electric grid services such as demand response. To address this challenge, this paper models the chargers' operation decisions as a constrained Markov decision process (CMDP). By synergistically combining the augmented Lagrangian method and the soft actor-critic algorithm, a novel safe off-policy reinforcement learning (RL) approach is proposed in this paper to solve the CMDP. The actor network is updated in a policy-gradient manner with the Lagrangian value function. A double-critic network is adopted to estimate the action-value function while avoiding overestimation bias. The proposed algorithm does not require a strong convexity guarantee for the examined problems and is sample efficient. Comprehensive numerical experiments with real-world electricity prices demonstrate that our proposed algorithm achieves high solution optimality and constraint compliance.
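The combination described above can be sketched in a minimal form. The snippet below is an illustrative assumption, not the paper's implementation: it shows how an augmented-Lagrangian penalty turns a constrained objective (maximize reward return J_r subject to cost return J_c ≤ d) into a single scalar loss, plus the dual-ascent update that keeps the multiplier non-negative. The function names and the penalty weight `rho` are hypothetical.

```python
def augmented_lagrangian_loss(j_r, j_c, limit, lam, rho):
    """Scalar actor loss (negated reward plus augmented-Lagrangian penalty).

    j_r:   estimated reward return
    j_c:   estimated cost return
    limit: constraint threshold d (require j_c <= limit)
    lam:   current Lagrange multiplier (>= 0)
    rho:   penalty coefficient (> 0)
    """
    violation = j_c - limit  # positive when the constraint is violated
    # Standard augmented-Lagrangian term for an inequality constraint:
    # (max(0, lam + rho*violation)^2 - lam^2) / (2*rho)
    penalty = (max(0.0, lam + rho * violation) ** 2 - lam ** 2) / (2.0 * rho)
    return -j_r + penalty


def update_multiplier(j_c, limit, lam, rho):
    """Dual-ascent step on the multiplier, projected onto lam >= 0."""
    return max(0.0, lam + rho * (j_c - limit))
```

When the constraint is satisfied and the multiplier is zero, the penalty vanishes and the loss reduces to the unconstrained objective; repeated violations drive the multiplier up, progressively weighting constraint satisfaction in the actor update.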

Original language: English
Article number: 124706
Journal: Applied Energy
Volume: 378
DOIs
State: Published - 15 Jan 2025

Keywords

  • Constrained Markov decision process
  • Electric vehicle
  • Safe reinforcement learning
