Transfer Learning for a Multimodal Hybrid EEG-fTCD Brain–Computer Interface
Humans can transfer knowledge previously acquired from a specific task to new and unknown tasks. Recently, transfer learning (TL) has been used extensively in brain–computer interface (BCI) research to reduce training/calibration requirements. BCI systems are designed to provide alternative communication or control access through computers to individuals with limited speech and physical abilities (LSPA). These systems generally require a calibration session to train the BCI before each use, and such a session may be burdensome for individuals with LSPA. In this article, we introduce a multimodal hybrid BCI based on electroencephalography (EEG) and functional transcranial Doppler ultrasound (fTCD) and present a TL approach to reduce the calibration requirements. In the hybrid BCI, EEG and fTCD are used simultaneously to measure electrical brain activity and cerebral blood flow velocity, respectively, in response to motor imagery (MI) tasks. Using data collected from ten healthy individuals, we perform dimensionality reduction with regularized discriminant analysis (RDA). From the RDA scores, we learn class conditional probability distributions for each individual and use these distributions to perform TL across participants. Specifically, to reduce the calibration requirements for a given individual, we augment that individual's training data with recorded data from other individuals, selected according to the probabilistic similarity between their class conditional distributions. For the final classification, we use the RDA scores after TL as input features for three different classifiers: quadratic discriminant analysis (QDA), linear discriminant analysis (LDA), and support vector machines (SVMs). Using our experimental data, we show that TL decreases the calibration requirements by up to $87.5\%$. Comparing SVM, LDA, and QDA, we also observe that SVM provides the best classification performance.
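The following is a minimal Python sketch of the transfer-learning step described above, under several assumptions not stated in the abstract: one-dimensional RDA scores per trial, Gaussian class conditional distributions, and the Bhattacharyya coefficient as the probabilistic similarity measure. All names (`fit_class_conditionals`, `augment_and_train`, `n_sources`, etc.) are illustrative, not the authors' code.

```python
import numpy as np
from scipy.stats import norm
from sklearn.svm import SVC

def fit_class_conditionals(scores, labels):
    """Fit a 1-D Gaussian to the RDA scores of each class."""
    return {c: norm(scores[labels == c].mean(), scores[labels == c].std(ddof=1))
            for c in np.unique(labels)}

def similarity(dists_a, dists_b, grid=np.linspace(-10.0, 10.0, 2001)):
    """Average Bhattacharyya coefficient between matching class conditionals
    (higher = more similar); an assumed choice of similarity measure."""
    coeffs = []
    for c in dists_a:
        pa, pb = dists_a[c].pdf(grid), dists_b[c].pdf(grid)
        coeffs.append(np.trapz(np.sqrt(pa * pb), grid))
    return float(np.mean(coeffs))

def augment_and_train(target_scores, target_labels, source_data, n_sources=3):
    """Augment the target participant's small calibration set with data from
    the most similar source participants, then train an SVM on RDA scores.

    source_data: list of dicts {"scores": 1-D array, "labels": 1-D array},
    one per source participant.
    """
    target_dists = fit_class_conditionals(target_scores, target_labels)
    ranked = sorted(
        source_data,
        key=lambda s: similarity(
            target_dists, fit_class_conditionals(s["scores"], s["labels"])),
        reverse=True)
    X_parts, y_parts = [target_scores], [target_labels]
    for s in ranked[:n_sources]:
        X_parts.append(s["scores"])
        y_parts.append(s["labels"])
    X = np.concatenate(X_parts).reshape(-1, 1)
    y = np.concatenate(y_parts)
    return SVC(kernel="rbf").fit(X, y)
```

In this sketch, only the subjects whose class conditional distributions are most similar to the target participant's contribute to the augmented training set; the same augmented RDA scores could equally be fed to LDA or QDA for the comparison reported above.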