Time-frequency representations of signals obtained by the S-transform can be very sensitive to the presence of α-stable noise. An algorithm for the robust S-transform is introduced. The proposed scheme is based on the L-DFT. Numerical results show significantly enhanced performance of the proposed scheme compared with the standard S-transform.
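The robustness mechanism can be illustrated with a minimal L-estimate DFT sketch (the function name `l_dft` and the symmetric trimming fraction `alpha` are illustrative choices, not the paper's exact formulation): for each frequency bin the per-sample terms are sorted, the extremes are discarded, and the remainder is averaged, so impulsive samples are rejected rather than leaking into every bin.

```python
import numpy as np

def l_dft(x, alpha=0.1):
    """L-estimate DFT: for every frequency bin, sort the real and
    imaginary parts of the per-sample terms x[n]*exp(-j*2*pi*k*n/N)
    separately, discard the alpha fraction of extreme values at each
    end, and average what remains.  Impulsive (alpha-stable) noise
    samples show up as extreme terms and are rejected by the trimming,
    unlike in the ordinary DFT, where they contaminate every bin."""
    N = len(x)
    n = np.arange(N)
    trim = int(alpha * N)
    X = np.zeros(N, dtype=complex)
    for k in range(N):
        terms = x * np.exp(-2j * np.pi * k * n / N)
        re, im = np.sort(terms.real), np.sort(terms.imag)
        if trim > 0:
            re, im = re[trim:-trim], im[trim:-trim]
        # rescale by N so that with alpha = 0 this reduces to the DFT
        X[k] = (re.mean() + 1j * im.mean()) * N
    return X
```

A robust S-transform then follows by replacing the DFT of each windowed signal segment with this L-estimate version.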
Fast Hermite projections have often been used in image-processing procedures such as image database retrieval, projection filtering, and texture analysis. In this paper, we propose an innovative approach for the analysis of one-dimensional biomedical signals that combines the Hermite projection method with time-frequency analysis. In particular, we propose a two-step approach to characterize vibrations of various origins in swallowing accelerometry signals. First, using time-frequency analysis, we obtain the energy distribution of the signal's frequency content over time. Second, using fast Hermite projections, we characterize whether the analyzed time-frequency regions are associated with swallowing or other phenomena (vocalization, noise, bursts, etc.). The numerical analysis of the proposed scheme clearly shows that vibrations of various origins are distinguishable using only a few Hermite functions. These results will form the basis for further analysis of swallowing accelerometry to detect swallowing difficulties.
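The projection step can be sketched with a direct-summation implementation (the paper's fast algorithm is not reproduced here; the grid, the order of six functions, and all names are illustrative assumptions):

```python
import numpy as np

def hermite_functions(order, t):
    """First `order` orthonormal Hermite functions on the grid t,
    generated with the stable three-term recurrence
    psi_m = sqrt(2/m)*t*psi_{m-1} - sqrt((m-1)/m)*psi_{m-2}."""
    psi = np.zeros((order, len(t)))
    psi[0] = np.pi ** -0.25 * np.exp(-t ** 2 / 2)
    if order > 1:
        psi[1] = np.sqrt(2.0) * t * psi[0]
    for m in range(2, order):
        psi[m] = (np.sqrt(2.0 / m) * t * psi[m - 1]
                  - np.sqrt((m - 1) / m) * psi[m - 2])
    return psi

def hermite_coeffs(signal, t, order=6):
    """Project a signal sampled on t onto the first few Hermite
    functions; the short coefficient vector serves as the feature
    used to discriminate between vibration types."""
    psi = hermite_functions(order, t)
    return psi @ signal * (t[1] - t[0])
```

Because the Hermite functions are orthonormal, a handful of coefficients compactly summarizes the shape of each time-frequency region.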
The goal of this Chapter is to review the applications of the Thomson Multitaper analysis (Percival and Walden; 1993b), (Thomson; 1982) for problems encountered in communications (Thomson; 1998; Stoica and Sundin; 1999). In particular, we will focus on issues related to channel modelling, estimation, and prediction. Sum of Sinusoids (SoS) or Sum of Cisoids (SoC) simulators (Patzold; 2002; SCM Editors; 2006) are popular ways of building channel simulators in both the SISO and MIMO cases. However, this approach is not a good option when features of communications systems such as prediction and estimation are to be simulated. Indeed, representing signals as a sum of coherent components with a large prediction horizon (Papoulis; 1991) leads to overly optimistic results. In this Chapter we develop an approach which avoids this difficulty. The proposed simulator combines a representation of the scattering environment advocated in (SCM Editors; 2006; Almers et al.; 2006; Molisch et al.; 2006; Asplund et al.; 2006; Molish; 2004) with the approach for a single-cluster environment used in (Fechtel; 1993; Alcocer et al.; 2005; Kontorovich et al.; 2008), with some important modifications (Yip and Ng; 1997; Xiao et al.; 2005). The problem of estimation and interpolation of a moderately fast fading Rayleigh/Rice channel is important in modern communications. The Wiener filter provides the optimum solution when the channel characteristics are known (van Trees; 2001). However, in real-life applications, basis expansions such as Fourier bases and discrete prolate spheroidal sequences (DPSS) have been adopted for such problems (Zemen and Mecklenbrauker; 2005; Alcocer-Ochoa et al.; 2006). If the bases and the channel under investigation occupy the same band, accurate
Auscultation is a useful procedure for the diagnosis of pulmonary and cardiovascular disorders. Its effectiveness depends on the skills and experience of the clinician. Further issues arise from the fact that heart sounds, for example, have dominant frequencies near the human threshold of hearing and hence can often go undetected (1). Computer-aided sound analysis, on the other hand, allows for rapid, accurate, and reproducible quantification of pathologic conditions, and has therefore been the focus of more recent research (e.g., (1–5)). During computer-aided auscultation, however, lung sounds are often corrupted by intrusive quasiperiodic heart sounds, which alter the temporal and spectral characteristics of the recording. Separating heart and lung sound components is a difficult task, as both signals have overlapping frequency spectra, in particular at frequencies below 100 Hz (6). For lung sound analysis, signal processing strategies based on conventional time, frequency, or time-frequency signal representations have been proposed for heart sound cancelation. Representative strategies include entropy calculation (7) and recurrence time statistics (8) for heart sound detection-and-removal followed by lung sound prediction, adaptive filtering (e.g., (9; 10)), time-frequency spectrogram filtering (11), and time-frequency wavelet filtering (e.g., (12–14)). Subjective assessment, however, has suggested that due to the temporal and spectral overlap between heart and lung sounds, heart sound removal may result in noisy or possibly “non-recognizable” lung sounds (15). Alternatively, for heart sound analysis, blind source extraction based on periodicity detection has recently been proposed for heart sound extraction from breath sound recordings (16); subjective listening tests, however, suggest that the extracted heart sounds are noisy and often unintelligible (17).
In order to benefit fully from computer-aided auscultation, both heart and lung sounds should be extracted or blindly separated from breath sound recordings. To achieve this difficult task, a few methods have been reported in the literature, namely, wavelet filtering (18), independent component analysis (19; 20), and, more recently, modulation domain filtering (21). Wavelet filtering is motivated by the observation that heart sounds contain large components over several wavelet scales, while coefficients associated with lung sounds quickly decrease with increasing scale. Heart and lung sounds are iteratively separated based on an adaptive hard thresholding paradigm: wavelet coefficients at each scale with amplitudes above the threshold are assumed to correspond to heart sounds, and the remaining coefficients are associated with lung sounds. Independent component analysis, in turn, makes use
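The thresholding paradigm behind wavelet filtering can be sketched in a simplified, one-pass form (a plain Haar wavelet and a fixed `k * std` threshold stand in here for the adaptive, iterative scheme of (18); all constants and names are illustrative):

```python
import numpy as np

def haar_dwt(x, levels):
    """Plain orthonormal Haar analysis; len(x) must be divisible by 2**levels."""
    coeffs, approx = [], x.astype(float)
    for _ in range(levels):
        even, odd = approx[0::2], approx[1::2]
        coeffs.append((even - odd) / np.sqrt(2))   # detail coefficients
        approx = (even + odd) / np.sqrt(2)
    return approx, coeffs

def haar_idwt(approx, coeffs):
    """Inverse of haar_dwt."""
    for d in reversed(coeffs):
        up = np.empty(2 * len(d))
        up[0::2] = (approx + d) / np.sqrt(2)
        up[1::2] = (approx - d) / np.sqrt(2)
        approx = up
    return approx

def separate_heart_lung(x, levels=5, k=3.0):
    """At each scale, detail coefficients with magnitude above k times
    that scale's standard deviation are attributed to heart sounds; the
    sub-threshold remainder is attributed to lung sounds.  The final
    low-frequency approximation is assigned to the heart branch."""
    approx, coeffs = haar_dwt(x, levels)
    heart_c, lung_c = [], []
    for d in coeffs:
        mask = np.abs(d) > k * np.std(d)
        heart_c.append(np.where(mask, d, 0.0))
        lung_c.append(np.where(mask, 0.0, d))
    heart = haar_idwt(approx, heart_c)
    lung = haar_idwt(np.zeros_like(approx), lung_c)
    return heart, lung
```

By linearity of the transform, the two outputs sum exactly to the input recording, so nothing is discarded by the split.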
Breath sounds in patients with obstructive sleep apnea are highly dynamic and variable signals. In this paper, we present an adaptive segmentation algorithm for these sounds. The algorithm divides the breath sounds into segments with similar amplitude levels. As the first step, the proposed scheme creates an envelope of the signal that characterizes its long-term amplitude variations. Then, K-means clustering is iteratively applied to detect borders between different segments in the envelope, which are then used to segment and normalize the original signal.
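The two steps above can be sketched as follows (a hedged illustration: the frame length, the use of two amplitude clusters, and the plain per-frame RMS envelope are assumptions for the sketch, while the published algorithm applies the clustering iteratively):

```python
import numpy as np

def rms_envelope(x, win=256):
    """Long-term amplitude envelope: RMS of consecutive frames."""
    n = len(x) // win
    frames = x[:n * win].reshape(n, win)
    return np.sqrt((frames ** 2).mean(axis=1))

def kmeans_1d(v, k=2, iters=50):
    """Plain 1-D K-means on scalar envelope values."""
    centers = np.linspace(v.min(), v.max(), k)
    labels = np.zeros(len(v), dtype=int)
    for _ in range(iters):
        labels = np.argmin(np.abs(v[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = v[labels == j].mean()
    return labels

def segment_borders(x, win=256):
    """Sample indices where the envelope switches amplitude cluster;
    these serve as borders between segments of similar level."""
    labels = kmeans_1d(rms_envelope(x, win))
    return (np.flatnonzero(np.diff(labels)) + 1) * win
```

Each detected segment can then be normalized by its own RMS level before further analysis.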
The reassignment method is a widespread approach for obtaining high-resolution time-frequency representations. Nevertheless, its performance is not always optimal and can deteriorate at low signal-to-noise ratio (SNR) values. To overcome these limitations, a novel method for obtaining high-resolution time-frequency representations is proposed in this paper. The new method employs recently proposed nonparametric snakes to obtain accurate locations of the signal ridges in the time-frequency domain. The results of numerical analysis show that the proposed method achieves significantly higher concentration of signals in the time-frequency domain than the spectrogram and the traditional reassignment method. Furthermore, the new scheme maintains good performance at low SNR values, where the performance of the other two methods degrades significantly. The results indicate that the proposed method may be of significance in applications where accurate estimation of the signal components is required at low SNR values.
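For context, the frequency-correction step of the traditional reassignment method, the baseline against which the snake-based scheme is compared, can be sketched for a single analysis frame (a Hann window; the frame length, bin choice, and function name are illustrative, and the paper's snake-based method is not reproduced here):

```python
import numpy as np

def reassigned_frequency(frame, k):
    """Reassigned frequency (cycles/sample) of STFT bin k for one frame:
    f_hat = k/L - Im(X_dh[k] * conj(X_h[k])) / (2*pi*|X_h[k]|**2),
    where X_h is computed with the Hann window h and X_dh with its
    derivative dh.  For a pure tone this moves the coarse bin frequency
    onto the true signal ridge."""
    L = len(frame)
    n = np.arange(L)
    h = 0.5 - 0.5 * np.cos(2 * np.pi * n / (L - 1))         # Hann window
    dh = np.pi / (L - 1) * np.sin(2 * np.pi * n / (L - 1))  # its derivative
    Xh = np.fft.fft(frame * h)
    Xdh = np.fft.fft(frame * dh)
    corr = np.imag(Xdh[k] * np.conj(Xh[k])) / np.abs(Xh[k]) ** 2
    return k / L - corr / (2 * np.pi)
```

Applying the correction to every time-frequency cell and accumulating the spectrogram energy at the corrected coordinates yields the reassigned representation.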
Dysphagia (swallowing difficulty) is a serious and debilitating condition that often accompanies stroke, acquired brain injury, and neurodegenerative illnesses. Individuals with dysphagia are prone to aspiration (the entry of foreign material into the airway), which directly increases the risk of serious respiratory consequences such as pneumonia. Swallowing accelerometry is a promising noninvasive tool for the detection of aspiration and the evaluation of swallowing. In this paper, dual-axis accelerometry was implemented since the motion of the hyolaryngeal complex occurs in both anterior-posterior and superior-inferior directions during swallowing. Dual-axis cervical accelerometry signals were acquired from 408 healthy subjects during dry, wet, and wet chin tuck swallowing tasks. The proposed segmentation algorithm is based on the idea of sequential fuzzy partitioning of the signal and is well suited for long signals with nonstationary variance. The algorithm was validated with simulated signals with known swallowing locations and a subset of 295 real swallows manually segmented by an experienced speech language pathologist. In both cases, the algorithm extracted individual swallows with over 90% accuracy. The time duration analysis was carried out with respect to gender, body mass index (BMI), and age. Demographic and anthropometric variables influenced the duration of these segmented signals. Male participants exhibited longer swallows than female participants (p=0.05). Older participants and participants with higher BMIs exhibited swallows with significantly longer (p=0.05) duration than younger participants and those with lower BMIs, respectively.
The paper presents two novel applications of Thomson Multitaper Analysis. It is shown how a wideband simulator of a double-mobile MIMO channel can be developed based on a geometrical channel model. It is also shown how a modification of Discrete Prolate Spheroidal Sequences can be used to improve the estimation of sparse channels. A number of other potential applications are also mentioned.
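The DPSS basis-expansion idea behind such channel estimators can be illustrated with a hedged sketch (using SciPy's `scipy.signal.windows.dpss`; the time-bandwidth product, basis size, and test channel below are illustrative assumptions, not values from the paper):

```python
import numpy as np
from scipy.signal.windows import dpss  # discrete prolate spheroidal sequences

def dpss_channel_estimate(h_noisy, time_bandwidth=2.5, n_basis=5):
    """Basis-expansion channel estimation: project noisy channel samples
    onto the first few DPSS (orthonormal sequences maximally concentrated
    in the Doppler band |f| < NW/M) and reconstruct.  Out-of-band noise
    is suppressed while a band-limited fading process is kept almost
    intact."""
    M = len(h_noisy)
    basis = dpss(M, time_bandwidth, n_basis)  # shape (n_basis, M)
    return basis.T @ (basis @ h_noisy)
```

Roughly 2·NW basis functions suffice: taking more re-admits noise, while taking fewer truncates the Doppler band of the channel.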