Posted: 2023/02/24 17:35 | Author: NICA

The first author of the article is Yukun Zhang. The article is titled “Multimodal Motor Imagery Decoding Method Based on Temporal Spatial Feature Alignment and Fusion”.

Abstract
Objective: A motor imagery-based brain-computer interface (MI-BCI) translates spontaneous movement intention in the brain into commands for external devices. A multimodal MI-BCI that uses multiple neural signals contains rich common and complementary information and is promising for enhancing decoding accuracy. However, the heterogeneity of the different modalities makes the multimodal decoding task difficult, and how to effectively utilize multimodal information remains to be further studied.
Approach: In this study, a multimodal MI decoding neural network was proposed. Spatial feature alignment losses were designed to enhance the feature representations extracted from the heterogeneous data and to guide the fusion of features from different modalities. An attention-based modality fusion module was built to align and fuse the features in the temporal dimension. To evaluate the proposed decoding method, a five-class motor imagery electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) dataset was constructed.
Main results and significance: The comparison experiments showed that the proposed decoding method achieved higher decoding accuracy than the compared methods on both the self-collected dataset and a public dataset. The ablation results verified the effectiveness of each part of the proposed method. Feature distribution visualizations showed that the proposed losses enhanced the feature representations of the EEG and fNIRS modalities. The proposed method based on EEG and fNIRS modalities has significant potential for improving the decoding performance of MI tasks.
Keywords: brain-computer interface, motor imagery, multimodal, EEG-fNIRS, center loss
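To give a flavor of the two ideas named in the abstract and keywords, here is a minimal, hypothetical PyTorch sketch of attention-based EEG–fNIRS feature fusion combined with a center-loss-style alignment term. It is not the authors' implementation; all module names, dimensions, loss weights, and the pooling/attention choices are assumptions made for illustration only (see the article link below for the actual method).

```python
# Minimal sketch (not the authors' implementation): cross-modal attention fusion of
# EEG and fNIRS feature sequences plus a center-loss-style alignment term.
# Dimensions, names, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionFusion(nn.Module):
    """Fuse per-time-step EEG and fNIRS features with multi-head cross attention."""

    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        # EEG features attend to fNIRS features (queries from EEG, keys/values from fNIRS).
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, eeg_feat: torch.Tensor, fnirs_feat: torch.Tensor) -> torch.Tensor:
        # eeg_feat, fnirs_feat: (batch, time, dim); time lengths may differ across modalities.
        attended, _ = self.cross_attn(eeg_feat, fnirs_feat, fnirs_feat)
        fused = self.proj(torch.cat([eeg_feat, attended], dim=-1))  # (batch, time, dim)
        return fused.mean(dim=1)  # temporal pooling -> (batch, dim)


def center_alignment_loss(features: torch.Tensor, labels: torch.Tensor,
                          centers: torch.Tensor) -> torch.Tensor:
    """Center-loss-style term: pull each sample's feature toward its class center."""
    return F.mse_loss(features, centers[labels])


if __name__ == "__main__":
    batch, t_eeg, t_fnirs, dim, n_classes = 8, 100, 20, 64, 5
    eeg = torch.randn(batch, t_eeg, dim)      # stand-in for an EEG branch output
    fnirs = torch.randn(batch, t_fnirs, dim)  # stand-in for an fNIRS branch output
    labels = torch.randint(0, n_classes, (batch,))
    centers = torch.randn(n_classes, dim, requires_grad=True)  # learnable class centers

    fusion = AttentionFusion(dim)
    classifier = nn.Linear(dim, n_classes)

    fused = fusion(eeg, fnirs)
    loss = F.cross_entropy(classifier(fused), labels) \
        + 0.1 * center_alignment_loss(fused, labels, centers)  # weight is an assumption
    loss.backward()
    print(f"total loss: {loss.item():.4f}")
```

In this toy setup, the cross-attention step lets each EEG time step pool information from the fNIRS sequence despite their different lengths, while the center-style term encourages fused features of the same class to cluster; the paper's actual alignment losses and fusion module may differ.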

Article link: https://iopscience.iop.org/article/10.1088/1741-2552/acbfdf