TY - GEN
T1 - An Approach for Upper Limb Movement Intention Recognition Using EEG and sEMG Fusion based on the MCPSA-CIIM
AU - Zhang, Weiming
AU - Zhang, Xiaodong
AU - Xu, Cheng
AU - Zhou, Guchuan
AU - Zhang, Teng
AU - Wang, Yu
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - The upper limb motion intention recognition method based on electroencephalography (EEG) and surface electromyography (sEMG) fusion has achieved significant results in fields such as prosthetic control. However, most existing fusion methods select features manually, cannot capture temporal and spatial features at different scales, and ignore the correlation features between the two types of signals. To address these issues, this article proposes a fusion recognition method for upper limb motion intention EEG and sEMG based on Multi-scale Convolution, Polarized Self-Attention, and a Cross Intelligence Integration Module. This article extracts multidimensional temporal and spatial features of EEG and sEMG through multi-scale convolution and introduces a polarized self-attention mechanism to filter the extracted multi-scale features. A cross enhancement strategy is simultaneously used to extract correlation features between EEG and sEMG. Finally, the features are input into the classification network for recognition. This method was validated on the Jeong database, and the results showed that, compared with CNN-LSTM and EEGNet, the recognition accuracy of this method increased by 2.63% and 3.15%, respectively.
AB - The upper limb motion intention recognition method based on electroencephalography (EEG) and surface electromyography (sEMG) fusion has achieved significant results in fields such as prosthetic control. However, most existing fusion methods select features manually, cannot capture temporal and spatial features at different scales, and ignore the correlation features between the two types of signals. To address these issues, this article proposes a fusion recognition method for upper limb motion intention EEG and sEMG based on Multi-scale Convolution, Polarized Self-Attention, and a Cross Intelligence Integration Module. This article extracts multidimensional temporal and spatial features of EEG and sEMG through multi-scale convolution and introduces a polarized self-attention mechanism to filter the extracted multi-scale features. A cross enhancement strategy is simultaneously used to extract correlation features between EEG and sEMG. Finally, the features are input into the classification network for recognition. This method was validated on the Jeong database, and the results showed that, compared with CNN-LSTM and EEGNet, the recognition accuracy of this method increased by 2.63% and 3.15%, respectively.
UR - https://www.scopus.com/pages/publications/85174184347
U2 - 10.1109/WRCSARA60131.2023.10261809
DO - 10.1109/WRCSARA60131.2023.10261809
M3 - Conference contribution
AN - SCOPUS:85174184347
T3 - 2023 WRC Symposium on Advanced Robotics and Automation, WRC SARA 2023
SP - 402
EP - 407
BT - 2023 WRC Symposium on Advanced Robotics and Automation, WRC SARA 2023
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 5th World Robot Conference Symposium on Advanced Robotics and Automation, WRC SARA 2023
Y2 - 19 August 2023
ER -