Spectro-Temporal Analysis of Auscultatory Sounds
Auscultation is a useful procedure for the diagnosis of pulmonary and cardiovascular disorders. Its effectiveness, however, depends on the skills and experience of the clinician. Further issues arise because heart sounds, for example, have dominant frequencies near the human threshold of hearing and can therefore go undetected (1). Computer-aided sound analysis, on the other hand, allows rapid, accurate, and reproducible quantification of pathologic conditions and has therefore been the focus of more recent research (e.g., (1–5)). During computer-aided auscultation, however, lung sounds are often corrupted by intrusive quasiperiodic heart sounds, which alter the temporal and spectral characteristics of the recording. Separating heart and lung sound components is a difficult task, as the two signals have overlapping frequency spectra, particularly below 100 Hz (6).

For lung sound analysis, signal processing strategies based on conventional time, frequency, or time-frequency signal representations have been proposed for heart sound cancelation. Representative strategies include entropy calculation (7) and recurrence time statistics (8) for heart sound detection and removal followed by lung sound prediction, adaptive filtering (e.g., (9; 10)), time-frequency spectrogram filtering (11), and time-frequency wavelet filtering (e.g., (12–14)). Subjective assessment, however, has suggested that, due to the temporal and spectral overlap between heart and lung sounds, heart sound removal may result in noisy or possibly “non-recognizable” lung sounds (15). Alternatively, for heart sound analysis, blind source extraction based on periodicity detection has recently been proposed to extract heart sounds from breath sound recordings (16); subjective listening tests, however, suggest that the extracted heart sounds are noisy and often unintelligible (17).

In order to benefit fully from computer-aided auscultation, both heart and lung sounds should be extracted or blindly separated from breath sound recordings. To achieve this difficult task, a few methods have been reported in the literature, namely wavelet filtering (18), independent component analysis (19; 20), and, more recently, modulation domain filtering (21). The motivation for wavelet filtering lies in the fact that heart sounds contain large components over several wavelet scales, whereas coefficients associated with lung sounds decrease quickly with increasing scale. Heart and lung sounds are iteratively separated using an adaptive hard thresholding paradigm: wavelet coefficients at each scale with amplitudes above the threshold are assumed to correspond to heart sounds, and the remaining coefficients are associated with lung sounds (a minimal sketch of this coefficient-partitioning step is given at the end of this section). Independent component analysis, in turn, makes use
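As a minimal sketch of the per-scale hard-thresholding step described above, the following Python fragment partitions wavelet coefficients into heart and lung contributions. It is a simplified, single-pass illustration rather than the iterative adaptive procedure of (18); the mother wavelet ('db8'), the number of decomposition levels, and the threshold rule (a multiple of each scale's coefficient standard deviation) are assumptions made here for illustration only.

```python
# Sketch of per-scale hard thresholding for heart/lung sound separation.
# Assumptions (not taken from the cited works): 'db8' wavelet, 5 levels,
# and a threshold of k times the standard deviation at each scale.
import numpy as np
import pywt

def separate_heart_lung(breath_sound, wavelet="db8", level=5, k=2.5):
    # Multilevel discrete wavelet decomposition of the recording.
    coeffs = pywt.wavedec(breath_sound, wavelet, level=level)

    heart_coeffs, lung_coeffs = [], []
    for c in coeffs:
        # Scale-adaptive hard threshold (assumed rule: k * std of the scale).
        thr = k * np.std(c)
        mask = np.abs(c) > thr
        # Large-amplitude coefficients are attributed to heart sounds,
        # the remaining coefficients to lung sounds.
        heart_coeffs.append(np.where(mask, c, 0.0))
        lung_coeffs.append(np.where(mask, 0.0, c))

    # Reconstruct each source from its coefficient subset and trim any
    # padding introduced by the transform.
    heart = pywt.waverec(heart_coeffs, wavelet)
    lung = pywt.waverec(lung_coeffs, wavelet)
    return heart[: len(breath_sound)], lung[: len(breath_sound)]
```

In the approach of (18), this partitioning is applied iteratively, with the threshold re-estimated at each pass; the single-pass version above is meant only to illustrate how coefficients above and below the threshold are assigned to the heart and lung sound estimates, respectively.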