TY - JOUR
T1 - Event-triggered-based online integral reinforcement learning for optimal control of unknown constrained nonlinear systems
AU - Han, Xiumei
AU - Zhao, Xudong
AU - Wang, Ding
AU - Wang, Bohui
N1 - Publisher Copyright:
© 2022 Informa UK Limited, trading as Taylor & Francis Group.
PY - 2024
Y1 - 2024
N2 - For unknown nonlinear systems with actuator saturation, an online policy iteration-based algorithm is employed to solve the optimal event-triggered control problem. To learn the system dynamics, a novel identifier is proposed to make the estimation error converge quickly, and the experience replay technique is employed to relax the persistence of excitation condition. To approximate the cost function and the event-triggered control law, we present event-triggered-based critic and actor networks, whose weights are updated only at triggered instants. During the policy iteration process, an event-triggered-based integral reinforcement learning method is proposed to solve the Hamilton–Jacobi–Bellman equation. By utilising integral reinforcement learning, network resources are saved and learning efficiency is improved. Based on the Lyapunov method, the stability of the closed-loop system and the estimation errors of the three networks are analysed. Finally, simulation results of two numerical examples are used to show the effectiveness of the proposed method.
AB - For unknown nonlinear systems with actuator saturation, an online policy iteration-based algorithm is employed to solve the optimal event-triggered control problem. To learn the system dynamics, a novel identifier is proposed to make the estimation error converge quickly, and the experience replay technique is employed to relax the persistence of excitation condition. To approximate the cost function and the event-triggered control law, we present event-triggered-based critic and actor networks, whose weights are updated only at triggered instants. During the policy iteration process, an event-triggered-based integral reinforcement learning method is proposed to solve the Hamilton–Jacobi–Bellman equation. By utilising integral reinforcement learning, network resources are saved and learning efficiency is improved. Based on the Lyapunov method, the stability of the closed-loop system and the estimation errors of the three networks are analysed. Finally, simulation results of two numerical examples are used to show the effectiveness of the proposed method.
KW - Optimal event-triggered control
KW - constrained control input
KW - event-triggered-based integral reinforcement learning
KW - unknown nonlinear systems
UR - https://www.scopus.com/pages/publications/85142143322
U2 - 10.1080/00207179.2022.2137852
DO - 10.1080/00207179.2022.2137852
M3 - Article
AN - SCOPUS:85142143322
SN - 0020-7179
VL - 97
SP - 213
EP - 225
JO - International Journal of Control
JF - International Journal of Control
IS - 2
ER -