TY - JOUR
T1 - Memory, Attention, and Muscle Synergies Based Reinforcement and Transfer Learning for Musculoskeletal Robots Under Imperfect Observation
AU - Chen, Jiahao
AU - Wu, Yaxiong
AU - Qiao, Hong
N1 - Publisher Copyright:
© 1996-2012 IEEE.
PY - 2025
Y1 - 2025
AB - Compared to traditional robots employing joint-link structures, biologically inspired musculoskeletal robots offer superior compliance, dexterity, and robustness. However, applying reinforcement learning methods to such robots in real-world scenarios is challenged by imperfect observation of feedback states, including partial observation, noise interference, and time delay. To address these constraints and enhance motion learning in musculoskeletal robots, a memory, attention, and muscle synergies based reinforcement and transfer learning method is proposed. Specifically, a neuromuscular controller is introduced based on memory, attention, and muscle synergies. The controller is trained by a proximal policy optimization-based reinforcement learning method. In addition, to enhance motion learning for new tasks, a transfer learning method that leverages previously acquired muscle synergies is proposed. The effectiveness of the proposed method is validated using both a simulated model and a hardware system of the musculoskeletal robot. The results indicate that the proposed method outperforms existing methods, achieving faster learning and higher movement precision under imperfect observation conditions.
KW - Muscle synergy
KW - musculoskeletal robots
KW - reinforcement learning
UR - https://www.scopus.com/pages/publications/85194063942
U2 - 10.1109/TMECH.2024.3401045
DO - 10.1109/TMECH.2024.3401045
M3 - Article
AN - SCOPUS:85194063942
SN - 1083-4435
VL - 30
SP - 1853
EP - 1864
JO - IEEE/ASME Transactions on Mechatronics
JF - IEEE/ASME Transactions on Mechatronics
IS - 3
ER -