In this paper, we introduce a transfer learning approach for our novel hybrid brain-computer interface in which electroencephalography and functional transcranial Doppler ultrasound are used simultaneously to record brain electrical activity and cerebral blood velocity, respectively, in response to flickering mental rotation and word generation tasks. We reduced each trial to a scalar score using Regularized Discriminant Analysis (RDA). For each individual, the class conditional probability distribution of each mental task was estimated from the RDA scores of the trials corresponding to that task. Similarities between class conditional distributions across individuals were measured using the Kullback-Leibler divergence and the Bhattacharyya and Hellinger distances. The classification task was performed using Quadratic Discriminant Analysis (QDA), Linear Discriminant Analysis (LDA), and Support Vector Machines (SVM). We demonstrate that transfer learning can reduce calibration requirements by up to 87.5%. Moreover, QDA provided the largest performance improvement over the case in which no transfer learning is employed.
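As a minimal sketch of the similarity step described above, the snippet below fits a Gaussian to each set of scalar RDA scores and evaluates the three closed-form divergences for univariate Gaussians; the variable names and data are illustrative placeholders, not the paper's actual scores.

import numpy as np

def gaussian_divergences(scores_a, scores_b):
    """Compare two sets of scalar scores via Gaussian fits: returns the
    KL divergence D(a||b), the Bhattacharyya distance, and the Hellinger distance."""
    mu_a, var_a = np.mean(scores_a), np.var(scores_a)
    mu_b, var_b = np.mean(scores_b), np.var(scores_b)

    # Kullback-Leibler divergence between two univariate Gaussians.
    kl = 0.5 * (np.log(var_b / var_a) + (var_a + (mu_a - mu_b) ** 2) / var_b - 1.0)

    # Bhattacharyya distance (closed form for Gaussians).
    var_m = 0.5 * (var_a + var_b)
    bhatta = 0.125 * (mu_a - mu_b) ** 2 / var_m + 0.5 * np.log(var_m / np.sqrt(var_a * var_b))

    # Hellinger distance: H^2 = 1 - BC, with Bhattacharyya coefficient BC = exp(-bhatta).
    hellinger = np.sqrt(1.0 - np.exp(-bhatta))
    return kl, bhatta, hellinger

# Example: scores of the same mental task from two different users (synthetic).
rng = np.random.default_rng(0)
print(gaussian_divergences(rng.normal(0.0, 1.0, 200), rng.normal(0.5, 1.2, 200)))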
Analysis of vertex-varying spectral content of signals on graphs challenges the assumption of vertex invariance and requires the introduction of vertex-frequency representations as a new tool for graph signal analysis. Local smoothness, an important parameter of vertex-varying graph signals, is introduced and defined in this paper. Basic properties of this parameter are given. By using the local smoothness, an ideal vertex-frequency distribution is introduced. The local smoothness estimation is performed based on several forms of the vertex-frequency distributions, including the graph spectrogram, the graph Rihaczek distribution, and a vertex-frequency distribution with reduced interferences. The presented theory is illustrated through numerical examples.
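For intuition only, the sketch below evaluates one natural vertex-wise smoothness measure, the ratio of the Laplacian-filtered signal to the signal itself, which equals the corresponding Laplacian eigenvalue at every vertex for a single graph-spectral component and varies across vertices for a mixture of components; the small path graph and signals are illustrative, and the paper's exact definitions and estimators may differ.

import numpy as np

# Laplacian of a short path graph (illustrative domain, not from the paper).
N = 8
A = np.zeros((N, N))
for n in range(N - 1):
    A[n, n + 1] = A[n + 1, n] = 1.0
L = np.diag(A.sum(axis=1)) - A

# Graph Fourier basis: eigenvectors/eigenvalues of the Laplacian.
lam, U = np.linalg.eigh(L)

# A single spectral component u_k has constant local smoothness lam[k] ...
x = U[:, 3]
print(np.round((L @ x) / x, 6), lam[3])

# ... while for a sum of components the ratio varies from vertex to vertex,
# which is what vertex-frequency distributions aim to capture.
y = U[:, 1] + 0.5 * U[:, 5]
print(np.round((L @ y) / y, 3))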
Graphs are irregular structures which naturally account for data integrity; however, traditional approaches have been established outside Signal Processing and largely focus on analyzing the underlying graphs rather than signals on graphs. Given the rapidly increasing availability of multisensor and multinode measurements, likely recorded on irregular or ad hoc grids, it would be extremely advantageous to analyze such structured data as graph signals and thus benefit from the ability of graphs to incorporate spatial awareness of the sensing locations, sensor importance, and local versus global sensor association. The aim of this lecture note is therefore to establish a common language between graph signals, defined on irregular signal domains, and some of the most fundamental paradigms in DSP, such as spectral analysis of multichannel signals, system transfer functions, digital filter design, parameter estimation, and optimal filter design. This is achieved through a physically meaningful and intuitive real-world example of geographically distributed multisensor temperature estimation. A similar spatial multisensor arrangement is already widely used in Signal Processing curricula to introduce minimum variance estimators and Kalman filters \cite{HM}, and by adopting this framework we facilitate a seamless integration of graph theory into the curriculum of existing DSP courses. By bridging the gap between standard approaches and graph signal processing, we also show that standard methods can be thought of as special cases of their graph counterparts, evaluated on linear graphs. It is hoped that our approach will not only help to demystify graph-theoretic approaches in education and research but will also empower practitioners to explore a whole host of otherwise prohibitive modern applications.
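To make the "special case on linear graphs" point concrete, the sketch below treats an ordinary uniformly sampled signal as a graph signal on an unweighted path graph and obtains its graph Fourier transform from the Laplacian eigenbasis, which for this graph coincides (up to ordering and sign) with a DCT-like basis familiar from standard DSP; the data and sizes are illustrative, not taken from the lecture note.

import numpy as np

# A classical, uniformly sampled signal viewed as a signal on a path graph.
N = 64
t = np.arange(N)
x = np.sin(2 * np.pi * 3 * t / N) + 0.3 * np.sin(2 * np.pi * 10 * t / N)

# Path ("linear") graph: each sample connected to its immediate neighbors.
A = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
L = np.diag(A.sum(axis=1)) - A

# Graph Fourier transform = projection onto the Laplacian eigenbasis.
lam, U = np.linalg.eigh(L)          # lam: graph frequencies, U: basis vectors
X_graph = U.T @ x

# For the path graph the eigenvalues are 2 - 2*cos(pi*k/N), so low graph
# frequencies correspond to low classical frequencies.
print(np.argsort(np.abs(X_graph))[-4:])   # indices of the dominant spectral components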
Brain-computer interfaces (BCIs) allow individuals with limited speech and physical abilities to communicate with the surrounding environment. Such BCIs require calibration sessions, which are burdensome for these individuals. We introduce a transfer learning approach for our novel hybrid BCI in which brain electrical activity and cerebral blood velocity are recorded simultaneously using Electroencephalography (EEG) and functional transcranial Doppler ultrasound (fTCD), respectively, in response to flickering mental rotation (MR) and word generation (WG) tasks. With the aim of reducing the calibration requirements, for each BCI user we used mutual information to identify the most similar datasets collected from other users. Using these datasets together with the dataset of the current user, features derived from the power spectra of the EEG and fTCD signals were calculated. Mutual information and support vector machines were used for feature selection and classification, respectively. Using the hybrid combination, an average accuracy of 93.04% was achieved for MR versus baseline, whereas WG versus baseline yielded an average accuracy of 90.94%. For MR versus WG, the hybrid combination obtained an average accuracy of 92.64%, compared to 88.14% obtained with EEG only. Average bit rates of 11.45, 17.24, and 19.72 bits/min were achieved for MR versus WG, MR versus baseline, and WG versus baseline, respectively. The proposed system outperforms state-of-the-art EEG-fNIRS BCIs in terms of accuracy and/or bit rate.
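A minimal sketch of the feature selection and classification step, assuming precomputed power-spectrum feature vectors and using scikit-learn's mutual information ranking with an SVM; the data, the number of retained features, and the classifier parameters are placeholders rather than the paper's choices.

import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: rows = trials, columns = EEG/fTCD power-spectrum features.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 200))
y = rng.integers(0, 2, size=120)          # e.g. MR trials vs. baseline trials

# Rank features by mutual information with the class label and keep the top k.
mi = mutual_info_classif(X, y, random_state=0)
top_k = np.argsort(mi)[-20:]
X_sel = X[:, top_k]

# SVM classifier on the selected features, evaluated by cross-validation.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print(cross_val_score(clf, X_sel, y, cv=5).mean())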
Objective. We aim to develop a hybrid brain–computer interface (BCI) that utilizes electroencephalography (EEG) and functional transcranial Doppler (fTCD). In this hybrid BCI, EEG and fTCD are used simultaneously to measure electrical brain activity and cerebral blood velocity, respectively, in response to flickering mental rotation (MR) and word generation (WG) tasks. In this paper, we improve both the accuracy and the information transfer rate (ITR) of this novel hybrid BCI that we designed in our previous work. Approach. To achieve this aim, we extended our feature extraction approach by using template matching and multi-scale analysis to extract EEG and fTCD features, respectively. In particular, template matching was used to analyze the EEG data, whereas 5-level wavelet decomposition was applied to the fTCD data. Significant EEG and fTCD features were selected using the Wilcoxon signed rank test. A support vector machine (SVM) classifier was used to project the selected EEG and fTCD features of each trial into scalar SVM scores. Moreover, instead of concatenating the EEG and fTCD feature vectors corresponding to each trial, we proposed a Bayesian approach to fuse the EEG and fTCD evidence. Main results. An average accuracy of 98.11% and an average ITR of 21.29 bits/min were achieved for WG versus MR classification, while MR versus baseline yielded 86.27% average accuracy and 8.95 bits/min average ITR. In addition, an average accuracy of 85.29% and an average ITR of 8.34 bits/min were obtained for WG versus baseline. Significance. The proposed analysis techniques significantly improved the hybrid BCI performance. Specifically, for the MR/WG versus baseline problems, we achieved twice the ITRs obtained in our previous study. Moreover, the ITR of the WG versus MR problem is four times the ITR we obtained previously for the same problem. The current analysis methods boosted the performance of our EEG-fTCD BCI such that it outperformed the existing EEG-fNIRS BCIs in comparison.
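A minimal sketch of the evidence fusion idea, assuming each modality has already been reduced to a scalar SVM score per trial, that the class conditional score densities are modeled as Gaussians estimated from training trials, and that the two modalities are conditionally independent given the class; all names and numbers below are illustrative, not the paper's exact formulation.

import numpy as np
from scipy.stats import norm

def fit_class_densities(scores, labels):
    """Fit a Gaussian to the scores of each class (labels are 0/1)."""
    return {c: norm(scores[labels == c].mean(), scores[labels == c].std() + 1e-9)
            for c in (0, 1)}

def fuse_and_classify(eeg_score, ftcd_score, eeg_dens, ftcd_dens, prior=(0.5, 0.5)):
    """Pick the class maximizing the fused posterior, assuming the two
    modalities are conditionally independent given the class."""
    log_post = [np.log(prior[c])
                + eeg_dens[c].logpdf(eeg_score)
                + ftcd_dens[c].logpdf(ftcd_score)
                for c in (0, 1)]
    return int(np.argmax(log_post))

# Illustrative training scores for two classes (e.g. WG vs. MR trials).
rng = np.random.default_rng(1)
labels = rng.integers(0, 2, 100)
eeg_scores = rng.normal(labels * 1.5, 1.0)
ftcd_scores = rng.normal(labels * 0.8, 1.0)

eeg_dens = fit_class_densities(eeg_scores, labels)
ftcd_dens = fit_class_densities(ftcd_scores, labels)
print(fuse_and_classify(1.2, 0.9, eeg_dens, ftcd_dens))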
Humans can transfer knowledge previously acquired from a specific task to new and unknown ones. Recently, transfer learning (TL) has been extensively used in brain–computer interface (BCI) research to reduce training/calibration requirements. BCI systems have been designed to provide alternative communication or control access through computers to individuals with limited speech and physical abilities (LSPA). These systems generally require a calibration session to train the BCI before each usage. Such a calibration session may be burdensome for individuals with LSPA. In this article, we introduce a multimodal hybrid BCI based on electroencephalography (EEG) and functional transcranial Doppler ultrasound (fTCD) and present a TL approach to reduce the calibration requirements. In the hybrid BCI, EEG and fTCD are used simultaneously to measure the electrical brain activity and cerebral blood velocity, respectively, in response to motor imagery (MI) tasks. Using the data we collected from ten healthy individuals, we perform dimensionality reduction utilizing regularized discriminant analysis (RDA). Using the scores from RDA, we learn class conditional probability distributions for each individual. We use these class conditional distributions to perform TL across different participants. More specifically, in order to reduce the calibration requirements for each individual, we choose recorded data from other individuals to augment the training data for that specific individual. We choose the data for augmentation based on the probabilistic similarities between the class conditional distributions. For the final classification, we use the RDA scores after TL as input features to three different classifiers: quadratic discriminant analysis (QDA), linear discriminant analysis (LDA), and support vector machines (SVMs). Using our experimental data, we show that TL decreases the calibration requirements by up to 87.5%. Also, by comparing SVM, LDA, and QDA, we observe that the SVM provides the best classification performance.
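As a rough sketch of the dimensionality reduction step, the snippet below reduces a trial's feature vector to a single scalar by taking the difference of two regularized quadratic discriminant values, in the generic spirit of RDA; the shrinkage scheme, parameter values, and data are placeholders rather than the paper's exact formulation.

import numpy as np

def rda_score(x, X0, X1, lam=0.5, gamma=0.1):
    """Scalar score for trial x: class-1 minus class-0 regularized discriminant.
    Covariances are shrunk toward the pooled covariance (lam) and toward a
    scaled identity (gamma); both values are illustrative placeholders."""
    d = X0.shape[1]
    pooled = np.cov(np.vstack([X0, X1]).T)

    def discriminant(x, Xc):
        mu = Xc.mean(axis=0)
        cov = (1 - lam) * np.cov(Xc.T) + lam * pooled
        cov = (1 - gamma) * cov + gamma * (np.trace(cov) / d) * np.eye(d)
        diff = x - mu
        _, logdet = np.linalg.slogdet(cov)
        return -0.5 * logdet - 0.5 * diff @ np.linalg.solve(cov, diff)

    return discriminant(x, X1) - discriminant(x, X0)

# Illustrative use: score one trial against two classes of training trials.
rng = np.random.default_rng(2)
X0 = rng.normal(0.0, 1.0, size=(40, 5))      # e.g. "rest" trials
X1 = rng.normal(0.7, 1.0, size=(40, 5))      # e.g. "motor imagery" trials
print(rda_score(rng.normal(0.7, 1.0, 5), X0, X1))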
Swallowing is a sensorimotor activity by which food, liquids, and saliva pass from the oral cavity to the stomach. It is considered one of the most complex sensorimotor functions because of the high level of coordination needed to accomplish the swallowing task over a very short period of 1-2 s and the multiple subsystems it involves. Dysphagia (i.e., swallowing difficulties) refers to any swallowing disorder and is commonly caused by a variety of neurological conditions (e.g., stroke, cerebral palsy, Parkinson disease), head and neck cancer and its treatment, genetic syndromes, and iatrogenic conditions or trauma. The signs and symptoms of dysphagia range from anterior loss of food while eating, difficulty chewing, and subjective difficulty swallowing food or liquids to choking or coughing before, during, or after eating because of impaired clearance of swallowed material from the throat into the digestive system. When not effectively treated, dysphagia can cause malnutrition, dehydration, immune system failure, psychosocial degradation, and generally decreased quality of life.
Millions of people across the globe suffer from swallowing difficulties, known as dysphagia, which can lead to malnutrition, pneumonia, and even death. Cervical auscultation of swallowing, which has been suggested as a noninvasive screening method for dysphagia, has not yet been associated with any physical events. In this paper, we compared the hyoid bone displacement extracted from videofluoroscopy images of 31 swallows to the signal features extracted from cervical auscultation recordings captured with a tri-axial accelerometer and a microphone. First, the vertical displacement of the anterior part of the hyoid bone is related to the entropy rate of the superior–inferior swallowing vibrations and to the kurtosis of the swallowing sounds. Second, the vertical displacement of the posterior part of the hyoid bone is related to the bandwidth of the medial–lateral swallowing vibrations. Third, the horizontal displacements of the posterior and anterior parts of the hyoid bone are related to the spectral centroid of the superior–inferior swallowing vibrations and to the peak frequency of the medial–lateral swallowing vibrations, respectively. Finally, the airway protection scores and the command characteristics were associated with the vertical and horizontal displacements, respectively, of the posterior part of the hyoid bone. Additional associations between the patients' characteristics and the auscultation signals were also observed. The maximal displacement of the hyoid bone is a cause of swallowing vibrations and sounds. High-resolution cervical auscultation may offer a noninvasive alternative for dysphagia screening and additional diagnostic information.
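For orientation only, the sketch below computes a few of the signal features named above (kurtosis, spectral centroid, peak frequency, and bandwidth) from a generic accelerometer trace; the exact definitions, preprocessing, and entropy-rate estimator used in the paper may differ, and the sampling rate and data here are placeholders.

import numpy as np
from scipy.stats import kurtosis

def spectral_features(signal, fs):
    """Kurtosis, spectral centroid, peak frequency, and bandwidth of a signal."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    psd /= psd.sum()

    centroid = np.sum(freqs * psd)                                  # spectral centroid (Hz)
    peak = freqs[np.argmax(psd)]                                    # peak frequency (Hz)
    bandwidth = np.sqrt(np.sum(((freqs - centroid) ** 2) * psd))    # spectral spread (Hz)
    return kurtosis(signal), centroid, peak, bandwidth

# Placeholder superior-inferior accelerometer trace; 4 kHz sampling is assumed.
fs = 4000
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 60 * t) + 0.2 * np.random.default_rng(3).normal(size=t.size)
print(spectral_features(x, fs))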
Carbon nanotube-based field-effect transistors (NTFETs) are ideal sensor devices as they provide rich information regarding carbon nanotube interactions with target analytes and have potential for miniaturization in diverse applications in the medical, safety, environmental, and energy sectors. Herein, we investigate chemical detection with cross-sensitive NTFET sensor arrays comprised of metal nanoparticle-decorated single-walled carbon nanotubes (SWCNTs). By combining analysis of NTFET device characteristics with supervised machine-learning algorithms, we successfully discriminated among five selected purine compounds: adenine, guanine, xanthine, uric acid, and caffeine. Interactions of the purine compounds with metal nanoparticle-decorated SWCNTs were corroborated by density functional theory calculations. Furthermore, by testing a variety of prepared as well as commercial solutions with and without caffeine, our approach accurately discerns the presence of caffeine in 95% of the samples with 48 features using linear discriminant analysis and in 93.4% of the samples with only 11 features using support vector machine analysis. We also performed recursive feature elimination and identified three NTFET parameters, transconductance, threshold voltage, and minimum conductance, as the features most crucial to analyte prediction accuracy.
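A minimal sketch of the recursive feature elimination step, using scikit-learn's RFE with a linear SVM to rank device-characteristic features; the feature matrix, labels, and number of retained features below are placeholders rather than actual NTFET measurements or the paper's settings.

import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

# Placeholder device-characteristic matrix: rows = measurements, columns =
# NTFET parameters (e.g. transconductance, threshold voltage, ...).
rng = np.random.default_rng(4)
X = rng.normal(size=(80, 11))
y = rng.integers(0, 2, size=80)              # e.g. caffeine present / absent

# Recursively eliminate features using the weights of a linear SVM.
rfe = RFE(estimator=SVC(kernel="linear"), n_features_to_select=3)
rfe.fit(X, y)
print(np.flatnonzero(rfe.support_))          # indices of the three retained features
print(rfe.ranking_)                          # 1 = kept, larger = eliminated earlier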