Mutual Information for Transfer Learning in SSVEP Hybrid EEG-fTCD Brain-Computer Interfaces
Brain-computer interfaces (BCIs) allow individuals with limited speech and physical abilities to communicate with their surrounding environment. Such BCIs require calibration sessions, which are burdensome for these individuals. We introduce a transfer learning approach for our novel hybrid BCI, in which brain electrical activity and cerebral blood velocity are recorded simultaneously using electroencephalography (EEG) and functional transcranial Doppler ultrasound (fTCD), respectively, in response to flickering mental rotation (MR) and word generation (WG) tasks. To reduce the calibration requirements, for each BCI user we used mutual information to identify the most similar datasets collected from other users. From these datasets and the current user's dataset, features derived from the power spectra of the EEG and fTCD signals were calculated. Mutual information and support vector machines were used for feature selection and classification, respectively. Using the hybrid combination, an average accuracy of 93.04% was achieved for MR versus baseline, whereas WG versus baseline yielded an average accuracy of 90.94%. For MR versus WG, the hybrid combination achieved an average accuracy of 92.64%, compared with 88.14% for EEG alone. Average bit rates of 11.45, 17.24, and 19.72 bits/min were achieved for MR versus WG, MR versus baseline, and WG versus baseline, respectively. The proposed system outperforms state-of-the-art EEG-fNIRS BCIs in terms of accuracy and/or bit rate.
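As a rough illustration of the described pipeline, the Python sketch below shows how mutual information could drive both steps: ranking other users' datasets by similarity to the current user, and selecting the most informative features before fitting a support vector machine on the pooled data. All names (e.g., select_similar_users, train_transfer_svm) and the MI-based similarity proxy (low mutual information between pooled features and a dataset-identity label indicating similar feature distributions) are illustrative assumptions, not the authors' exact implementation.

```python
# A minimal sketch, assuming power-spectrum feature matrices have already
# been extracted from the EEG/fTCD signals. Function names and the
# similarity criterion are assumptions for illustration only.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.svm import SVC

def select_similar_users(target_X, other_users_X, k=3):
    """Rank other users' datasets by similarity to the target user's data
    and return the indices of the top-k most similar ones."""
    scores = []
    for X in other_users_X:
        n = min(len(target_X), len(X))
        # Pool the two datasets and label each trial by its dataset of origin.
        pooled = np.vstack([target_X[:n], X[:n]])
        origin = np.r_[np.zeros(n), np.ones(n)]
        # Low MI between features and dataset identity suggests the two
        # feature distributions are hard to tell apart, i.e., similar.
        mi = mutual_info_classif(pooled, origin, random_state=0).mean()
        scores.append(mi)
    return np.argsort(scores)[:k]  # ascending MI = most similar first

def train_transfer_svm(target_X, target_y, pool_X, pool_y, n_features=20):
    """Augment the target user's data with similar users' trials, select
    the highest-MI features, and fit an SVM classifier."""
    X = np.vstack([target_X, pool_X])
    y = np.r_[target_y, pool_y]
    # Mutual information between each feature and the class labels.
    mi = mutual_info_classif(X, y, random_state=0)
    top = np.argsort(mi)[::-1][:n_features]  # keep most informative features
    clf = SVC(kernel="linear").fit(X[:, top], y)
    return clf, top
```

In this sketch, select_similar_users would be called once per target user to pick donor datasets, whose trials are then stacked into pool_X/pool_y and passed to train_transfer_svm together with the user's own calibration data.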