TY - JOUR
T1 - Game-Based Adaptive Fuzzy Optimal Bipartite Containment of Nonlinear Multiagent Systems
AU - Yan, Lei
AU - Liu, Junhe
AU - Chen, C. L.Philip
AU - Zhang, Yun
AU - Wu, Zongze
AU - Liu, Zhi
N1 - Publisher Copyright:
© 1993-2012 IEEE.
PY - 2024/3/1
Y1 - 2024/3/1
N2 - Existing adaptive optimal consensus approaches for high-order nonlinear multiagent systems (MASs) are limited by their complicated and computation-intensive identifier-actor-critic structure and ignore the conflict of interest between agents. This article proposes a graphical game-based adaptive fuzzy optimal bipartite containment scheme that removes these restrictions. The optimal containment is formulated as an N-player game over the communication topology for high-order nonlinear MASs by defining a cost function that integrates the agent's control inputs, including those of its neighbors, and the local tracking errors. To seek the Nash equilibrium, integral reinforcement learning is adopted, which does not involve the system drift dynamics. This approach eliminates the need for an identifier network and simplifies the control scheme using adaptive critic learning. To drive the online learning mechanism, the Bellman residual error is utilized, and a fuzzy logic system is used to approximate the optimal value functions of the critic networks. The updating laws incorporate an experience stack, resulting in an easy-to-check persistent excitation condition. It is proven that the synchronization error is uniformly ultimately bounded, and the bipartite containment of the outputs of followers is achieved. An illustrative example is presented to verify the effectiveness of the developed control scheme.
AB - Existing adaptive optimal consensus approaches for high-order nonlinear multiagent systems (MASs) are limited by their complicated and computation-intensive identifier-actor-critic structure and ignore the conflict of interest between agents. This article proposes a graphical game-based adaptive fuzzy optimal bipartite containment scheme that removes these restrictions. The optimal containment is formulated as an N-player game over the communication topology for high-order nonlinear MASs by defining a cost function that integrates the agent's control inputs, including those of its neighbors, and the local tracking errors. To seek the Nash equilibrium, integral reinforcement learning is adopted, which does not involve the system drift dynamics. This approach eliminates the need for an identifier network and simplifies the control scheme using adaptive critic learning. To drive the online learning mechanism, the Bellman residual error is utilized, and a fuzzy logic system is used to approximate the optimal value functions of the critic networks. The updating laws incorporate an experience stack, resulting in an easy-to-check persistent excitation condition. It is proven that the synchronization error is uniformly ultimately bounded, and the bipartite containment of the outputs of followers is achieved. An illustrative example is presented to verify the effectiveness of the developed control scheme.
KW - Adaptive optimal consensus
KW - bipartite containment
KW - differential graphical game
KW - integral reinforcement learning (IRL)
UR - https://www.scopus.com/pages/publications/85181821491
U2 - 10.1109/TFUZZ.2023.3327699
DO - 10.1109/TFUZZ.2023.3327699
M3 - Article
AN - SCOPUS:85181821491
SN - 1063-6706
VL - 32
SP - 1455
EP - 1465
JO - IEEE Transactions on Fuzzy Systems
JF - IEEE Transactions on Fuzzy Systems
IS - 3
ER -