Robust control for affine nonlinear systems under the reinforcement learning framework

  • Wenxin Guo
  • Weiwei Qin
  • Xuguang Lan
  • Jieyu Liu
  • Zhaoxiang Zhang

Research output: Contribution to journal › Article › peer-review

4 Scopus citations

Abstract

This article investigates the robust control problem for affine nonlinear systems with both additive and multiplicative uncertainty. Unlike existing actor-critic (AC) algorithms for adaptive dynamic programming (ADP), we introduce an uncertainty estimator and propose an actor-critic-estimator (ACE) algorithm. The proposed algorithm alternates among value evaluation, uncertainty estimation, and policy update to generate an adaptive robust control law without knowledge of the system dynamics. In particular, in the uncertainty estimation step, we approximate the uncertainty with a radial basis function neural network (RBFNN) and design the utility function accordingly, instead of using the supremum of the uncertainty as in existing studies. Stability and convergence are established via the Lyapunov stability theorem. We further show that the uncertain affine nonlinear system is uniformly ultimately bounded (UUB) stable under the learned adaptive robust control law. The performance of the proposed algorithm is demonstrated on a torsion pendulum system and an inverted pendulum system.
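The RBFNN-based uncertainty estimation mentioned in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the uncertainty term `d(x)`, the sampling scheme, and the least-squares weight update are hypothetical stand-ins for the estimator update derived in the paper, shown only to make the idea of approximating an unknown uncertainty with Gaussian radial basis functions concrete.

```python
import numpy as np

def rbf_features(x, centers, width):
    # Gaussian radial basis features: phi_i(x) = exp(-||x - c_i||^2 / width^2)
    diff = x[:, None, :] - centers[None, :, :]
    return np.exp(-np.sum(diff ** 2, axis=-1) / width ** 2)

def uncertainty(x):
    # Hypothetical unknown uncertainty term d(x) to be estimated from data
    return np.sin(x[:, 0]) * x[:, 1]

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 2))        # sampled system states
centers = rng.uniform(-1.0, 1.0, size=(25, 2))   # fixed RBF centers
Phi = rbf_features(X, centers, width=0.5)

# Fit estimator weights w so that Phi @ w approximates the observed
# uncertainty values; the paper's online update rule is replaced here
# by a batch least-squares fit for simplicity.
w, *_ = np.linalg.lstsq(Phi, uncertainty(X), rcond=None)
err = np.max(np.abs(Phi @ w - uncertainty(X)))
```

In the full ACE scheme the estimated uncertainty would then enter the utility function used by the critic, rather than a conservative supremum bound.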

Original language: English
Article number: 127631
Journal: Neurocomputing
Volume: 587
DOIs
State: Published - 28 Jun 2024

Keywords

  • Adaptive dynamic programming
  • Robust control
  • Uncertainty estimation
  • Utility function
