EveryBrain: Generate EEG Responses From Images For Specified Individuals

Research output: Contribution to journal › Article › peer-review

Abstract

This paper presents EveryBrain, a method for generating electroencephalographic (EEG) responses to visual stimuli from images. Because individuals exhibit distinct EEG responses to the same visual stimulus, EveryBrain captures these individual characteristics during signal generation. The framework operates in two stages. In Stage 1, leveraging the temporal properties of EEG signals and the spatial features of images, EveryBrain trains a self-supervised framework that simultaneously reconstructs EEG signals and performs contrastive learning between image and EEG features. In Stage 2, through additional training focused on individual EEG differences, an ID number (representing a specific person) is injected into image features via a cross-modal projector. The resulting personalized EEG latent codes, supervised by the Stage 1 encoder, are then decoded into vivid, individualized EEG responses. Experiments validate the accuracy of EveryBrain in generating EEG signals for various individuals in response to visual stimuli. Overall, the proposed method tackles key challenges in EEG generation from images, such as cross-modal alignment, individual variability, and waveform stability, yielding promising results. Additionally, the novel approach of joint learning between images and EEG demonstrates positive effects on decoding visual neural representations. Both quantitative and qualitative evaluations demonstrate the effectiveness of the proposed method, marking a significant step toward portable and cost-effective "image-to-thought" translation.
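The Stage 2 identity-injection idea described in the abstract can be sketched minimally. The NumPy snippet below is an illustrative toy, not the paper's implementation: all dimensions, parameter shapes, and names (`id_table`, `personalized_eeg_latent`) are hypothetical, and the projector is reduced to a single linear layer. It shows how a per-subject ID embedding could be fused with an image feature to produce a personalized EEG latent code.

```python
import numpy as np

rng = np.random.default_rng(0)

FEAT_DIM, ID_DIM, LATENT_DIM, NUM_SUBJECTS = 64, 16, 32, 5

# Hypothetical learned parameters, randomly initialized for illustration.
id_table = rng.normal(size=(NUM_SUBJECTS, ID_DIM))          # one embedding per subject ID
W = rng.normal(size=(FEAT_DIM + ID_DIM, LATENT_DIM)) * 0.1  # cross-modal projector weights
b = np.zeros(LATENT_DIM)

def personalized_eeg_latent(image_feat: np.ndarray, subject_id: int) -> np.ndarray:
    """Inject a subject's ID embedding into an image feature and project the
    fused vector to a personalized EEG latent code (toy Stage 2 analogue)."""
    fused = np.concatenate([image_feat, id_table[subject_id]])
    return fused @ W + b

image_feat = rng.normal(size=FEAT_DIM)  # stand-in for a Stage 1 image encoding
z_a = personalized_eeg_latent(image_feat, subject_id=0)
z_b = personalized_eeg_latent(image_feat, subject_id=1)
print(z_a.shape)  # (32,)
```

With the same image feature but different subject IDs, the projector emits different latent codes, which is the behavior the abstract attributes to the personalized embedding; in the actual method these latents would additionally be supervised by the Stage 1 encoder before decoding.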

Keywords

  • cross-modal learning
  • image-to-EEG generation
  • personalized embedding
  • self-supervised learning
  • visual neural decoding
