TY - JOUR
T1 - Adaptive Critic Learning-Based Optimal Bipartite Consensus for Multiagent Systems with Prescribed Performance
AU - Yan, Lei
AU - Liu, Junhe
AU - Lai, Guanyu
AU - Philip Chen, C. L.
AU - Wu, Zongze
AU - Liu, Zhi
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2025
Y1 - 2025
N2 - Developing a distributed bipartite optimal consensus scheme that ensures user-predefined performance is essential in practical applications. Existing approaches to this problem typically require a complex controller structure because they adopt an identifier-actor-critic framework, and prescribed performance cannot be guaranteed. In this work, an adaptive critic learning (ACL)-based optimal bipartite consensus scheme is developed to bridge this gap. A newly designed error scaling function, which defines the user-predefined settling time and steady-state accuracy without relying on the initial conditions, is integrated into a cost function. The backstepping framework combines the ACL and integral reinforcement learning (IRL) algorithms to develop the adaptive optimal bipartite consensus scheme, which yields a critic-only controller structure by removing the identifier and actor networks used in existing methods. The adaptive law of the critic network is derived via the gradient descent algorithm and experience replay to minimize the IRL-based residual error. It is shown that this compute-saving learning mechanism achieves the optimal consensus, and that the error variables of the closed-loop system are uniformly ultimately bounded (UUB). Moreover, for any bounded initial condition, the evolution of the bipartite consensus error remains within a user-prescribed boundary. Illustrative simulation results validate the efficacy of the approach.
AB - Developing a distributed bipartite optimal consensus scheme that ensures user-predefined performance is essential in practical applications. Existing approaches to this problem typically require a complex controller structure because they adopt an identifier-actor-critic framework, and prescribed performance cannot be guaranteed. In this work, an adaptive critic learning (ACL)-based optimal bipartite consensus scheme is developed to bridge this gap. A newly designed error scaling function, which defines the user-predefined settling time and steady-state accuracy without relying on the initial conditions, is integrated into a cost function. The backstepping framework combines the ACL and integral reinforcement learning (IRL) algorithms to develop the adaptive optimal bipartite consensus scheme, which yields a critic-only controller structure by removing the identifier and actor networks used in existing methods. The adaptive law of the critic network is derived via the gradient descent algorithm and experience replay to minimize the IRL-based residual error. It is shown that this compute-saving learning mechanism achieves the optimal consensus, and that the error variables of the closed-loop system are uniformly ultimately bounded (UUB). Moreover, for any bounded initial condition, the evolution of the bipartite consensus error remains within a user-prescribed boundary. Illustrative simulation results validate the efficacy of the approach.
KW - Adaptive critic learning (ACL)
KW - integral reinforcement learning (IRL)
KW - optimal bipartite consensus
KW - prescribed performance
UR - https://www.scopus.com/pages/publications/86000428396
U2 - 10.1109/TNNLS.2024.3379503
DO - 10.1109/TNNLS.2024.3379503
M3 - Article
C2 - 38709609
AN - SCOPUS:86000428396
SN - 2162-237X
VL - 36
SP - 5417
EP - 5427
JO - IEEE Transactions on Neural Networks and Learning Systems
JF - IEEE Transactions on Neural Networks and Learning Systems
IS - 3
ER -