Demand Charge Control for Energy-intensive Enterprises based on Deep Reinforcement Learning

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

With the advancement of the dual-carbon goal and the energy consumption revolution, demand-side management based on smart grids has gradually become a focus of related research. Because the power load of energy-intensive enterprises exhibits pronounced surges and high uncertainty, it is difficult to obtain the optimal control strategy for demand charges. In this paper, we address the problem of multi-equipment demand charge control for multi-batch tasks with discontinuous production by controlling the load of controllable equipment to reduce electricity costs. To handle the long, time-coupled horizon and the complexity of a system that is difficult to model, we formulate real-time demand charge control as a Markov decision process (MDP). To avoid the curse of dimensionality caused by the growing state space, we introduce the deep Q-network (DQN) algorithm, which solves MDP problems with large state spaces. Moreover, because the problem involves a large number of action constraints, we introduce constrained deep Q-learning (CDQN), which selects the optimal action from the feasible action zone instead of the whole action space, improving training efficiency and data utilization. Finally, we conducted simulation case experiments. Under a basic day-ahead production scheduling plan, real-time demand charge control reduces costs by 10.4% compared with the uncontrolled baseline, indicating that this method performs well in obtaining demand charge control strategies.
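The core of the CDQN idea described above is that action selection is restricted to the feasible action zone rather than the whole action space. A minimal sketch of that masking step, assuming Q-values are available as a NumPy array and the constraint check has already produced a boolean feasibility mask (the function name and signature here are illustrative, not the authors' implementation):

```python
import numpy as np

def select_constrained_action(q_values, feasible_mask, epsilon=0.0, rng=None):
    """Pick an action from the feasible zone only (constrained Q-learning idea).

    q_values: 1-D array of Q(s, a) estimates for all actions.
    feasible_mask: boolean array, True where the action satisfies the constraints.
    epsilon: exploration rate; random exploration is also restricted to feasible actions.
    """
    rng = rng or np.random.default_rng()
    feasible_idx = np.flatnonzero(feasible_mask)
    if feasible_idx.size == 0:
        raise ValueError("no feasible action in this state")
    if rng.random() < epsilon:
        # Explore, but only within the feasible action zone.
        return int(rng.choice(feasible_idx))
    # Infeasible actions get -inf so they can never be the argmax.
    masked = np.where(feasible_mask, q_values, -np.inf)
    return int(np.argmax(masked))

# Example: action 2 has the highest Q-value but violates a load constraint,
# so the controller falls back to the best feasible action (index 0).
q = np.array([1.5, 0.2, 3.0, -0.7])
mask = np.array([True, True, False, True])
print(select_constrained_action(q, mask))  # → 0
```

Confining both the greedy choice and the exploration step to feasible actions is what makes every sampled transition usable for training, which is the data-utilization benefit the abstract attributes to CDQN.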

Original language: English
Title of host publication: Proceeding - 2021 China Automation Congress, CAC 2021
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 6791-6796
Number of pages: 6
ISBN (Electronic): 9781665426473
DOIs
State: Published - 2021
Event: 2021 China Automation Congress, CAC 2021 - Beijing, China
Duration: 22 Oct 2021 - 24 Oct 2021

Publication series

Name: Proceeding - 2021 China Automation Congress, CAC 2021

Conference

Conference: 2021 China Automation Congress, CAC 2021
Country/Territory: China
City: Beijing
Period: 22/10/21 - 24/10/21

Keywords

  • deep reinforcement learning
  • demand charge control
  • ultra-short-term scheduling
