The paper's first author is 张裕坤 (Zhang Yukun); the title is "Filter Bank Adversarial Domain Adaptation For Motor Imagery Brain Computer Interface", published at the 2021 International Joint Conference on Neural Networks (IJCNN).
The paper's abstract follows:
Abstract:
Motor imagery (MI) based brain-computer interface (BCI) is a promising BCI paradigm that can help patients with neuromuscular injuries recover or replace their motor abilities. However, electroencephalography (EEG) based MI-BCI suffers from long calibration times and low classification accuracy, which restrict its application. It is therefore important to reduce the calibration time of MI-BCI and enhance its prediction accuracy. In this study, we propose a filter bank Wasserstein adversarial domain adaptation framework (FBWADA) that uses a small amount of training data from a new target subject together with all collected data from an existing subject. A convolutional neural network (CNN) based feature extractor is designed to extract features from EEG data. A filter bank strategy is employed to extract features from multiple sub-bands and integrate the predictions from all sub-bands. A Wasserstein Generative Adversarial Network (WGAN) based domain adaptation network aligns the marginal and conditional distributions of the target and source domains. We evaluate our method on Dataset 2a of BCI Competition IV. Experimental results show that our method achieves the best performance among the compared methods under different amounts of training data. The performance of our method trained with a given number of blocks of data is similar to or better than that of the best competing method trained with one additional block, indicating that our method can reduce the amount of training data needed by at least one block.
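The filter bank strategy in the abstract can be sketched minimally: each EEG trial is band-passed into several sub-bands, and a per-band feature extractor/classifier would then run on each copy before the per-band predictions are integrated. The sampling rate, band edges, and the ideal FFT band-pass below are illustrative assumptions, not the paper's actual settings:

```python
import numpy as np

def bandpass(x, fs, low, high):
    """Ideal FFT band-pass: zero all frequency bins outside [low, high] Hz.
    (A stand-in for whatever filter design the paper actually uses.)"""
    spectrum = np.fft.rfft(x, axis=-1)
    freqs = np.fft.rfftfreq(x.shape[-1], d=1.0 / fs)
    spectrum[..., (freqs < low) | (freqs > high)] = 0.0
    return np.fft.irfft(spectrum, n=x.shape[-1], axis=-1)

def filter_bank(eeg, fs=250, bands=((4, 8), (8, 12), (12, 16), (16, 20),
                                    (20, 24), (24, 28), (28, 32))):
    """Decompose one EEG trial (channels x samples) into sub-band copies;
    downstream, one model per band would classify each copy and the
    per-band predictions would be integrated."""
    return np.stack([bandpass(eeg, fs, lo, hi) for lo, hi in bands])

rng = np.random.default_rng(0)
trial = rng.standard_normal((22, 1000))   # e.g. 22 channels, 4 s at 250 Hz
banked = filter_bank(trial)
print(banked.shape)  # (7, 22, 1000): one band-passed copy per sub-band
```

Each of the seven copies keeps the trial's shape, so the same CNN feature extractor can be applied per band unchanged.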
Paper link: https://ieeexplore.ieee.org/document/9534286
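The Wasserstein adversarial alignment mentioned in the abstract can be illustrated with a toy critic: the critic is trained to separate source-domain from target-domain features, and its objective value estimates the distribution gap that, in the full framework, the feature extractor would be trained to shrink. The linear critic, weight clipping, learning rate, and feature shapes below are assumptions for illustration, not the paper's network:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16
source = rng.standard_normal((100, d)) + 1.0   # shifted source-domain features
target = rng.standard_normal((100, d))          # target-domain features

w = np.zeros(d)            # linear critic f(x) = x @ w (a toy stand-in)
lr, clip = 0.1, 0.5
for _ in range(50):
    # Critic ascends E[f(source)] - E[f(target)]; for a linear critic the
    # gradient w.r.t. w is just the difference of the feature means.
    grad = source.mean(axis=0) - target.mean(axis=0)
    w = np.clip(w + lr * grad, -clip, clip)     # weight clipping as in WGAN

# The trained critic's objective value: a rough estimate of the domain gap.
gap = source.mean(axis=0) @ w - target.mean(axis=0) @ w
print(gap > 0)  # True: the critic separates the two domains
```

In the actual framework the feature extractor would take gradient steps to reduce this gap, aligning the two domains' feature distributions.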
The paper's first author is 毛嘉宇 (Mao Jiayu); the title is "A Cross-Modal Guiding and Fusion Method for Multi-Modal RSVP-based Image Retrieval", published at the 2021 International Joint Conference on Neural Networks (IJCNN).
The paper's abstract follows:
Abstract:
Rapid Serial Visual Presentation (RSVP) is an important paradigm in brain-computer interfaces (BCI), used in spellers, image retrieval, anomaly detection, etc. The RSVP paradigm embeds a small number of target pictures in a rapidly presented picture sequence to induce specific event-related potential (ERP) components. However, the application of RSVP-based BCI is limited by the accuracy of ERP detection. The goal of this study is therefore to introduce other related modalities into the traditional EEG-based BCI to make robust predictions and improve detection performance. First, we introduce the eye-movement modality into the RSVP-based BCI and simultaneously collect a multi-modality RSVP-based dataset during an image retrieval task. Second, we design a simple but efficient CNN-based network with two modality-fusion modules that fully utilize the multi-modality data in two stages. In the feature extraction stage, we propose a Cross-modality-Guided Feature Calibration (cm-GFC) module that enables the EEG modality features to modify the eye-movement modality features, with the aim of making the two modalities' features more complementary. In the feature fusion stage, we propose a Dynamic Gated Fusion (DGF) module, which applies modality-specific gates to retain the complementary information of the two modalities and reduce their redundant information. To evaluate our method, we conduct extensive experiments on the dataset, whose EEG and eye-movement data come from 20 subjects. The proposed method achieves a high balanced classification accuracy of 87.83 ± 2.31%, outperforming a series of single-modality and multi-modality approaches.
Paper link: https://ieeexplore.ieee.org/document/9534465