
Attention-based Multi-Source-Free Domain Adaptation for EEG Emotion Recognition
Abstract
Electroencephalography (EEG) based emotion recognition in affective brain-computer interfaces has advanced significantly in recent years. Unsupervised domain adaptation (UDA) methods have been successfully used to mitigate the need for the large amounts of training data otherwise required due to the inter-subject variability of EEG signals. Typical UDA solutions require access to raw source data to transfer the knowledge learned from the labelled source domains (previous subjects) to the target domain (a new subject), raising privacy concerns. To tackle this issue, we propose Attention-based Multi-Source-Free Domain Adaptation (AMFDA) for EEG emotion recognition. AMFDA transfers the knowledge of source models to the target domain by aggregating adapted source models according to a set of learnable weights, without accessing the source data. While the classifiers of the source models are frozen, the learnable weights and the feature extractors are trained via information maximization and a novel self-supervised pseudo-labelling method. A channel-wise attention layer is also used in the proposed framework to enhance the performance of the source models, which in turn improves the performance of the target model. We conducted extensive experiments on the SEED and SEED-IV datasets. The experimental results demonstrate that the proposed AMFDA method performs comparably to state-of-the-art UDA methods.
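The core aggregation idea described above can be illustrated with a minimal sketch: each source model's (softmaxed) prediction on the target data is combined through a set of learnable, normalised weights, after a channel-wise attention layer re-weights the EEG channels. All shapes, names, and the use of simple linear classifiers here are illustrative assumptions, not the paper's actual architecture; in practice the feature extractors would be deep networks trained with information maximization and pseudo-labelling.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical shapes: K=3 source models, batch of 4 target trials,
# 62 EEG channels x 5 band features, 3 emotion classes (as in SEED).
K, B, C, F, n_classes = 3, 4, 62, 5, 3
x = rng.standard_normal((B, C, F))           # target-domain EEG features

# Channel-wise attention: one learnable score per EEG channel,
# normalised and used to re-weight the channels (assumed form).
channel_scores = rng.standard_normal(C)
attn = softmax(channel_scores)               # (C,)
x_att = x * attn[None, :, None]              # (B, C, F), channels re-weighted

# Each source model: an (adapted) feature path + a frozen classifier,
# stood in for here by a fixed linear map per source.
Ws = [rng.standard_normal((C * F, n_classes)) for _ in range(K)]
probs = np.stack(
    [softmax(x_att.reshape(B, -1) @ W) for W in Ws]
)                                            # (K, B, n_classes)

# Learnable aggregation weights over the K source models,
# normalised so the combined output remains a distribution.
alpha = softmax(rng.standard_normal(K))      # (K,)
target_probs = np.einsum('k,kbc->bc', alpha, probs)  # (B, n_classes)
pred = target_probs.argmax(axis=1)           # target-domain predictions
```

In source-free training, `alpha`, `channel_scores`, and the feature extractors would be the quantities optimised on unlabelled target data, while `Ws` (the source classifiers) stay frozen.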