EEG-Based Multimodal Emotion Recognition: A Machine Learning Perspective

  • Huan Liu
  • Tianyu Lou
  • Yuzhe Zhang
  • Yixiao Wu
  • Yang Xiao
  • Christian S. Jensen
  • Dalin Zhang

Research output: Contribution to journal › Article › peer-review

88 Scopus citations

Abstract

Emotion, a fundamental trait of human beings, plays a pivotal role in shaping aspects of our lives, including our cognitive and perceptual abilities. Hence, emotion recognition is also central to human communication, decision-making, learning, and other activities. Emotion recognition from electroencephalography (EEG) signals has garnered substantial attention due to advantages such as noninvasiveness, high speed, and high temporal resolution; driven also by the complementarity between EEG and other physiological signals in revealing emotions, recent years have seen a surge in proposals for EEG-based multimodal emotion recognition (EMER). In short, EEG-based emotion recognition is a promising technology in medical measurements and health monitoring. While existing reviews explore emotion recognition from multimodal physiological signals, they focus mostly on general combinations of modalities and do not emphasize studies that center on EEG as the fundamental modality. Furthermore, existing reviews take a methodology-agnostic perspective, primarily concentrating on the biomedical basis or experimental paradigms, thereby giving little attention to the methodological characteristics unique to this field. To address these gaps, we present a comprehensive review of current EMER studies, with a focus on multimodal machine learning models. The review is structured around three key aspects: multimodal feature representation learning, multimodal physiological signal fusion, and incomplete multimodal learning models. In doing so, the review sheds light on the advances and challenges in the field of EMER, thus offering researchers who are new to the field a holistic understanding. The review also aims to provide valuable insight that may guide new research in this exciting and rapidly evolving field.

Original language: English
Article number: 4003729
Pages (from-to): 1-29
Number of pages: 29
Journal: IEEE Transactions on Instrumentation and Measurement
Volume: 73
DOIs
State: Published - 2024

UN SDGs

This output contributes to the following UN Sustainable Development Goals (SDGs)

  1. SDG 3 - Good Health and Well-being

Keywords

  • Electroencephalography (EEG)
  • emotion recognition
  • machine learning
  • multimodal learning
  • multimodal physiological signal
