Multilevel image thresholding is a challenging digital image processing problem with numerous applications, including image segmentation, image analysis and higher-level image processing. Although threshold estimation based on exhaustive search is a relatively straightforward task, it can be computationally very expensive to evaluate optimal thresholds when the number of threshold levels is large. In this paper, a metaheuristic approach to multilevel thresholding of X-ray images is examined. Specifically, the firefly and bat algorithms are used in conjunction with Kapur's entropy, Tsallis entropy and Otsu's between-class variance criterion to estimate optimal threshold values. The performance of the various image segmentation strategies has been evaluated on a dataset of X-ray images. The simulation results show that the bat algorithm in conjunction with Otsu's objective function offers the best X-ray image segmentation strategy. Out of all the considered strategies, this multilevel thresholding approach to image segmentation produces the highest PSNR and SSIM values as well as fast execution times.
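As an illustration of the objective being optimized, the following is a minimal sketch of Otsu's between-class variance for one candidate set of thresholds over a 256-bin grayscale histogram. The function name is an illustrative assumption, not the paper's code; in the metaheuristic setting, the bat or firefly algorithm would maximize this function over candidate threshold vectors instead of searching exhaustively.

```python
def otsu_objective(hist, thresholds):
    """Between-class variance for a candidate threshold set over a histogram.

    hist       -- list of bin counts (e.g. 256 bins for an 8-bit image)
    thresholds -- candidate thresholds t1 < t2 < ... partitioning the bins
    """
    total = sum(hist)
    probs = [h / total for h in hist]
    mu_total = sum(i * p for i, p in enumerate(probs))
    # Class boundaries: [0, t1), [t1, t2), ..., [tk, 256)
    bounds = [0] + sorted(thresholds) + [len(hist)]
    variance = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = sum(probs[lo:hi])  # class probability
        if w == 0:
            continue
        mu = sum(i * probs[i] for i in range(lo, hi)) / w  # class mean
        variance += w * (mu - mu_total) ** 2
    return variance
```

A metaheuristic treats `thresholds` as the position of a candidate solution (a bat or a firefly) and uses the returned variance as its fitness.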
Digital image processing techniques are commonly employed for food classification in industrial environments. In this paper, we propose the use of supervised learning methods, namely multi-class support vector machines and artificial neural networks, to classify different types of almonds. In the process of defining the feature vectors, the proposed method relies on principal component analysis to identify the most significant shape and color parameters. A comparative analysis of the considered classification algorithms has shown that higher levels of accuracy in almond classification are attained when support vector machines, rather than artificial neural networks, are used as the basis for classification. Moreover, the experimental results have demonstrated that the proposed method exhibits significant levels of robustness and computational efficiency, facilitating its use in real-time applications. In addition, for the purpose of this paper, a dataset of almond images containing various classes of almonds has been formed and made freely available for use by other researchers in this field.
This paper proposes an efficient algorithm for noise level estimation in still images. The images are assumed to be corrupted by additive white Gaussian noise. The proposed method relies on block-based image segmentation and Gaussian filtering to estimate the standard deviation of the Gaussian noise. It employs adaptive image segmentation, where the size of the segmentation blocks is derived from initial estimates of the noise standard deviation. Although the two-stage image segmentation process allows local noise level estimates to be formed from very small image patches, specific measures have been taken to improve the computational efficiency of the proposed method. Image prefiltering is also adaptive, in the sense that the coefficients of the Gaussian filter are evaluated as a function of the initial noise level estimate. The proposed method is designed to reduce the likelihood of noise level underestimation due to intensity clipping. The results obtained on a database of natural images show that the proposed scheme can accurately estimate the noise variance over a wide range of noise levels, from very small to very high. In addition, it has been demonstrated that the proposed image segmentation scheme offers an accurate and consistent estimation of homogeneous image patches.
This paper proposes a fast algorithm for additive white Gaussian noise level estimation from still digital images. The proposed algorithm uses a Laplacian operator to suppress the underlying image signal. In addition, the algorithm performs a non-overlapping block segmentation of images in conjunction with local averaging to obtain local noise level estimates. These local noise level estimates facilitate a variable-block-size image tessellation and adaptive estimation of homogeneous image patches. Thus, the proposed algorithm can be described as a hybrid method, as it adopts some principal characteristics of both filter-based and block-based methods. The performance of the proposed noise estimation algorithm is evaluated on a dataset of natural images. The results show that the proposed algorithm provides consistent performance across different image types and noise levels. In addition, it has been demonstrated that the adaptive nature of the homogeneous block estimation improves the computational efficiency of the algorithm.
This paper reviews the possibility of increasing the efficiency of existing line test solutions for troubleshooting IPTV over xDSL, based on the results of experimental research on a real system under commercial exploitation. At the beginning of the paper, the main weaknesses of the existing troubleshooting tests are described. The rest of the paper lists the physical layer parameters of the xDSL transceiver, followed by an analysis of how they can be used for more efficient measurement of the parameters of copper pairs.
Accurate and fast estimation of noise levels from medical images has numerous applications in medical image processing, including image enhancement, image segmentation and feature extraction. In this paper, a block-based noise level estimation algorithm in the SVD domain is proposed. The proposed algorithm employs non-overlapping block image segmentation to identify homogeneous image regions. Each homogeneous block is used to obtain an independent noise level estimate in the SVD domain. For any particular image, the overall noise level estimate is ascertained by averaging over the set of noise level estimates associated with the homogeneous image blocks. In this paper, the optimal size of the image segmentation blocks is evaluated systematically over a large dataset of X-ray images. The experimental results show that the proposed method offers numerous advantages over an alternative SVD-domain method.
One of the most challenging problems in the field of digital image processing is image denoising. When processing medical images, it is of particular relevance to improve the perceived quality of images while preserving the diagnostically relevant information. This paper investigates the capacity of a neural network framework for medical image denoising. Specifically, the performance of the proposed image denoising method is evaluated on a database of computed tomography images of lungs using various image quality metrics, such as peak signal-to-noise ratio and mean squared error. Image denoising relies on block segmentation of noisy and low-pass filtered images to generate the input and target data for neural network training. This paper investigates how the choice of block size, network architecture and training method affects the denoising performance when an image is degraded with additive Gaussian noise. The paper proposes the use of Kohonen's self-organizing maps for segmentation of the feature space and the use of multiple, finely tuned multi-layer perceptrons to achieve improved denoising performance.
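The block-based preparation of training data described above can be sketched as follows. The 3x3 mean filter stands in for the paper's low-pass filter, and the block size of 4 and all function names are illustrative assumptions, not the authors' implementation.

```python
def mean_filter(img):
    """3x3 mean filter (a simple low-pass stand-in) over a list-of-rows image."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - 1), min(h, y + 2))
                    for xx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) / len(vals)
    return out

def training_pairs(noisy, block=4):
    """Build (input, target) vectors for network training.

    Each non-overlapping block of the noisy image is flattened into an input
    vector; the matching block of the low-pass filtered image is the target.
    """
    target = mean_filter(noisy)
    pairs = []
    for y in range(0, len(noisy) - block + 1, block):
        for x in range(0, len(noisy[0]) - block + 1, block):
            xin = [noisy[y + dy][x + dx] for dy in range(block) for dx in range(block)]
            tout = [target[y + dy][x + dx] for dy in range(block) for dx in range(block)]
            pairs.append((xin, tout))
    return pairs
```

In the framework described in the abstract, a self-organizing map would first cluster these input vectors, and a separate multi-layer perceptron would then be trained per cluster.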
Efficient compression of medical images is needed to decrease storage space and enable efficient image transfer over networks for access to electronic patient records. Since medical images contain diagnostically relevant information, the image compression process must preserve high levels of image fidelity, especially when the images are compressed at low bit rates. This paper investigates the capacity of an artificial neural network framework for medical image compression. Specifically, the performance of the proposed image compression method is evaluated on a database of computed tomography images of lungs, where PSNR and MSE are used as the principal image quality metrics. The compressed image data are derived from the hidden layer outputs, where the artificial neural networks are trained to reconstruct the network input features. The results of image block segmentation are used as the network training features. The paper proposes the use of Kohonen's self-organizing maps for segmentation of the feature space and the use of multiple finely tuned multi-layer perceptrons to achieve improved compression performance. This paper presents a study on how the choice of block size, network architecture and training method affects the compression performance. An attempt is made to optimize the artificial neural network framework for the compression of computed tomography lung images.
Analysis and modeling of voice source waveforms are some of the most challenging fields of speech processing, owing to the fact that voice source signals commonly exhibit complex temporal morphology and contain numerous artifacts of the data collection process. In this paper, we propose a novel, fully automatic source-filter based framework for voice source parameterization, modeling and synthesis. The proposed method is not constrained to idealized glottal waveform approximations, but instead relies on the observed signal to ascertain a non-deterministic and adaptable model of the voice source signal. The proposed signal synthesis algorithm is able to independently account for the temporal and spatial dynamics of consecutive voice source pulses. In a comparative evaluation with the popular Liljencrants-Fant model, it was found that the proposed method has the capacity to represent complex voice source features that cannot be accurately or efficiently represented by a deterministic model. It is demonstrated that the proposed method offers high levels of robustness and accuracy in signal parameterization and reconstruction. Key-Words: Biomedical Signal Processing; Digital Speech Processing; Glottal Flow Modeling
Parameterization and synthesis of electrocardiogram (ECG) recordings are some of the most challenging problems in biomedical signal processing, owing to the fact that ECG signals commonly exhibit complex temporal morphology and contain numerous artifacts of the data collection process. In this paper, we present a fully automatic framework for accurate and robust parameterization and reconstruction of ECG waveforms. The method uses the observed signal to ascertain a non-deterministic model for the ECG signal and employs the Dynamic Time Warping (DTW) algorithm to determine a non-linear temporal relationship between the established ECG model and the individual pulses in the ECG signal. The results of parameterization provide a set of data that accurately describes the morphology of the ECG pulses. The proposed signal synthesis algorithm is able to independently account for the temporal and spatial dynamics of consecutive ECG pulses and provide a faithful reconstruction of ECG signals. Performance evaluation experiments are conducted on a database of 135 one-minute ECG recordings. The percentage root-mean-square difference measure is employed to evaluate the quality of signal reconstruction and to validate the results of signal parameterization.
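At the core of such a framework, DTW itself is a standard dynamic-programming recursion. The sketch below computes the minimal cumulative alignment cost between two one-dimensional sequences; the function name and the absolute-difference local cost are illustrative assumptions, and a full parameterization pipeline would additionally backtrack through the cost matrix to recover the warping path between the ECG model and each pulse.

```python
def dtw_distance(a, b):
    """Dynamic Time Warping: minimal cumulative |a_i - b_j| cost over all
    monotone, boundary-matched alignments of sequences a and b."""
    inf = float("inf")
    n, m = len(a), len(b)
    # cost[i][j] = best cumulative cost of aligning a[:i] with b[:j]
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # a[i-1] repeats
                                 cost[i][j - 1],      # b[j-1] repeats
                                 cost[i - 1][j - 1])  # one-to-one match
    return cost[n][m]
```

Because DTW tolerates non-linear time-axis distortion, two pulses with the same morphology but different durations can still be matched at low cost.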
Processing and classification of electrocardiogram (ECG) recordings are some of the most challenging fields of biomedical signal processing, owing to the fact that ECG signals commonly exhibit complex temporal morphology and contain numerous artifacts of the data collection process. This paper presents a comparative analysis of the classification performance of Artificial Neural Networks and Support Vector Machines, based on feature vectors developed from Filter-Bank processing of the ECG signal. The system is evaluated in the context of Supraventricular Arrhythmia diagnostics. The FIR Filter-Bank decomposes the ECG waveform into various frequency components and enables independent temporal and spectral processing of the ECG signal. The feature vectors are developed as a set of statistical measures that describe the energy distribution in the individual sub-bands. The considered statistical descriptors include mean, variance, skewness and kurtosis. In this paper, a systematic study of the influence of the feature vector choice on diagnostic performance is conducted. An optimal Filter-Bank size is ascertained and the relevance of the individual frequency bands is evaluated. Furthermore, the diagnostic relevance of the statistical descriptors is assessed. The experimental results demonstrate that optimization of the feature vectors, in terms of sub-band selection and statistical descriptor choice, leads to a considerable reduction in feature vector size and to an improvement in the classification accuracy rate. Key-Words: Biomedical signal processing; ECG diagnostics; filter-banks; support vector machines; ANN
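The construction of the statistical feature vector from the sub-band signals can be sketched as follows. The function name is an illustrative assumption, and the biased (population) moment formulas shown are one common convention; the filter-bank decomposition itself is assumed to have been performed upstream.

```python
import math

def subband_features(subbands):
    """Concatenate mean, variance, skewness and kurtosis of each sub-band
    signal (a list of sample lists) into a single feature vector."""
    feats = []
    for s in subbands:
        n = len(s)
        mean = sum(s) / n
        var = sum((v - mean) ** 2 for v in s) / n          # biased variance
        std = math.sqrt(var)
        # Guard against constant sub-bands, where the higher moments degenerate.
        skew = sum((v - mean) ** 3 for v in s) / (n * std ** 3) if std else 0.0
        kurt = sum((v - mean) ** 4 for v in s) / (n * var ** 2) if var else 0.0
        feats.extend([mean, var, skew, kurt])
    return feats
```

Sub-band selection then amounts to dropping the four descriptors of any band judged diagnostically irrelevant, which is how the feature vector size reduction described above is realized.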
This paper presents a comparative study of the temporal structure of the glottal flow derivative signal associated with laryngeal pathology, in relation to an idealized view of voice source realizations as defined by Liljencrants-Fant’s model. Specifically, we endeavor to ascertain the extent to which Liljencrants-Fant’s model can be used to represent the glottal flow derivative estimates obtained via closed-phase pitch-synchronous inverse filtering of recorded speech. The results obtained on six common voice pathology examples show that, due to its limited degrees of freedom, Liljencrants-Fant’s model is only capable of adequately representing the “coarse” glottal pulse structure. Our findings demonstrate that the “fine” structural elements constitute an important aspect of a glottal flow derivative realization, and we have presented evidence that they contain information related to voice individuality. A further inadequacy of Liljencrants-Fant’s model is that its parameters do not always accurately portray significant events in the vocal fold dynamics.
Parametrization and modeling of electrocardiogram (ECG) recordings are some of the most challenging areas of biomedical signal processing, owing to the fact that ECG signals commonly exhibit complex temporal morphology and contain various artifacts of the data collection process. In this paper, we propose a novel, fully automatic framework for highly accurate and robust ECG parametrization and reconstruction. The proposed method facilitates adaptive ECG signal modeling, and as such is not constrained to an opportune combination of mathematical functions. The method relies on the Dynamic Time Warping (DTW) algorithm to establish the temporal relationship between the ECG model and the analyzed ECG pulses and to obtain their parametric description. Performance evaluation experiments conducted on a database of 40 one-minute ECG signal recordings, including examples of Normal Sinus Rhythm R Interval ECG records, Arrhythmia, Supraventricular Arrhythmia and Atrial Fibrillation, have shown that the proposed method is able to consistently produce accurate signal parametrization and reconstruction.