Posted on: 2022/06/30 17:40 | Author: NICA

The first author of the paper is 李叙锦, and the title is "TFF-Former: Temporal-Frequency Fusion Transformer for Zero-training Decoding of Two BCI Tasks".

The abstract of the paper is as follows: Brain-computer interface (BCI) systems provide a direct connection between the human brain and external devices. Visual evoked BCI systems, including event-related potential (ERP) and steady-state visual evoked potential (SSVEP) systems, have attracted extensive attention because of their strong brain responses and wide applications. Previous studies have made some breakthroughs in within-subject decoding algorithms for specific tasks. However, current decoding algorithms for BCI systems face two challenges. First, they cannot accurately classify the EEG signals of a new subject without that subject's calibration data, yet the calibration procedure is time-consuming. Second, algorithms are tailored to extract features for one specific task, which limits their application across tasks. In this study, we proposed a Temporal-Frequency Fusion Transformer (TFF-Former) for zero-training decoding across two BCI tasks. EEG data were organized into temporal-spatial and frequency-spatial forms, which can be regarded as two views. In the TFF-Former framework, two symmetrical Transformer streams were designed to extract view-specific features. A cross-view module based on the cross-attention mechanism was proposed to guide each stream to strengthen the representations shared across EEG views. Additionally, an attention-based fusion module was built to fuse the representations from the two views effectively. A mean mask mechanism was applied to adaptively reduce the aggregation of redundant EEG tokens when integrating the common representations. We validated our method on a self-collected RSVP dataset and a benchmark SSVEP dataset. Experimental results demonstrated that the TFF-Former model achieved competitive performance compared with models within each of the above paradigms, which can further promote the application of visual evoked EEG-based BCI systems.
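The cross-view and fusion ideas described in the abstract can be pictured with a short sketch. The code below is a minimal, hypothetical illustration rather than the authors' implementation: queries from one EEG view attend to keys/values of the other view, and an attention-based module weights and sums the two pooled view representations. All module names, token counts, and dimensions are assumptions made for the example.

```python
# Minimal sketch (not the paper's code) of cross-view attention between a
# temporal-spatial view and a frequency-spatial view, followed by an
# attention-based fusion of the two pooled view representations.
import torch
import torch.nn as nn

class CrossViewAttention(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        # nn.MultiheadAttention accepts separate query and key/value inputs,
        # which is what cross-attention between two views requires.
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x_self, x_other):
        # x_self:  tokens of the current view   (batch, n_tokens, dim)
        # x_other: tokens of the opposite view  (batch, n_tokens, dim)
        out, _ = self.attn(query=x_self, key=x_other, value=x_other)
        return self.norm(x_self + out)  # residual keeps view-specific features

class AttentionFusion(nn.Module):
    """Weight the two view representations with learned attention scores."""
    def __init__(self, dim=64):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, t_feat, f_feat):
        stacked = torch.stack([t_feat, f_feat], dim=1)       # (batch, 2, dim)
        weights = torch.softmax(self.score(stacked), dim=1)  # (batch, 2, 1)
        return (weights * stacked).sum(dim=1)                # (batch, dim)

# Toy usage with hypothetical sizes: 32 tokens per view, 64-dim embeddings.
temporal = torch.randn(8, 32, 64)
frequency = torch.randn(8, 32, 64)
cross = CrossViewAttention()
t_out = cross(temporal, frequency).mean(dim=1)  # pool tokens per view
f_out = cross(frequency, temporal).mean(dim=1)
logits = nn.Linear(64, 2)(AttentionFusion()(t_out, f_out))
print(logits.shape)  # torch.Size([8, 2])
```

The residual connection in the sketch is one plausible way to let each stream keep its view-specific features while being guided toward the shared ones; the paper's mean mask mechanism for discarding redundant tokens is not reproduced here.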

The first author of the paper is 程昕钰, and the title is "VigilanceNet: Decouple Intra- and Inter-Modality Learning for Multimodal Vigilance Estimation in RSVP-Based BCI".

The abstract of the paper is as follows: Recently, brain-computer interface (BCI) technology has made impressive progress and has been developed for many applications. Among them, the BCI system based on rapid serial visual presentation (RSVP) is a promising information detection technology. However, the effectiveness of RSVP depends closely on the user's performance, which can be influenced by their vigilance level. It is therefore crucial to detect vigilance levels in RSVP-based BCI. In this paper, we conducted a long-term RSVP target detection experiment to collect electroencephalography (EEG) and electrooculogram (EOG) data at different vigilance levels. In addition, to estimate vigilance levels in RSVP-based BCI, we propose a multimodal method named VigilanceNet that uses both EEG and EOG. First, we define multiplicative relationships over conventional EOG features, which better describe the interactions among these features, and design an outer product embedding module to extract them. Second, we propose to decouple the learning of intra- and inter-modality information to improve multimodal learning. Specifically, for intra-modality learning, we introduce an intra-modality representation learning (intra-RL) method that obtains effective representations of each modality by letting each modality independently predict vigilance levels during multimodal training. For inter-modality learning, we employ a cross-modal Transformer based on cross-attention to capture the complementary information between EEG and EOG, attending only to inter-modality relations. Extensive experiments and ablation studies are conducted on the RSVP dataset and the public SEED-VIG dataset. The results demonstrate the effectiveness of the method in terms of regression error and correlation.
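To make the two distinctive pieces of VigilanceNet more concrete, here is a minimal sketch under stated assumptions rather than the paper's actual code: an outer-product embedding that exposes multiplicative relationships between hand-crafted EOG features, and intra-modality heads that let EEG and EOG each predict vigilance on their own while a fused head is trained jointly. The cross-modal Transformer is simplified to a concatenation here, and all feature sizes and layer choices are hypothetical.

```python
# Minimal sketch (assumptions, not the paper's implementation) of the
# outer-product EOG embedding and the intra-RL auxiliary heads.
import torch
import torch.nn as nn

class OuterProductEmbedding(nn.Module):
    """Map an EOG feature vector to all pairwise products f_i * f_j."""
    def __init__(self, n_feats=36, dim=64):
        super().__init__()
        self.proj = nn.Linear(n_feats * n_feats, dim)

    def forward(self, f):
        # f: (batch, n_feats); the outer product covers every multiplicative pair.
        outer = torch.einsum('bi,bj->bij', f, f)       # (batch, n_feats, n_feats)
        return self.proj(outer.flatten(start_dim=1))   # (batch, dim)

class VigilanceSketch(nn.Module):
    def __init__(self, eeg_dim=310, eog_feats=36, dim=64):
        super().__init__()
        self.eeg_enc = nn.Sequential(nn.Linear(eeg_dim, dim), nn.ReLU())
        self.eog_enc = OuterProductEmbedding(eog_feats, dim)
        # Intra-modality heads: each modality predicts vigilance by itself.
        self.eeg_head = nn.Linear(dim, 1)
        self.eog_head = nn.Linear(dim, 1)
        # Fused head over the concatenated representations (a cross-attention
        # Transformer would replace this concatenation in the full method).
        self.fuse_head = nn.Linear(2 * dim, 1)

    def forward(self, eeg, eog):
        e, o = self.eeg_enc(eeg), self.eog_enc(eog)
        fused = self.fuse_head(torch.cat([e, o], dim=-1))
        return self.eeg_head(e), self.eog_head(o), fused

# Training-time loss: fused prediction plus the two intra-modality terms,
# so each modality is pushed to carry vigilance information on its own.
model = VigilanceSketch()
eeg, eog = torch.randn(4, 310), torch.randn(4, 36)
y = torch.rand(4, 1)
p_eeg, p_eog, p_fused = model(eeg, eog)
loss = sum(nn.functional.mse_loss(p, y) for p in (p_fused, p_eeg, p_eog))
loss.backward()
```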