
Publications (77)

S. Omanovic, Admir Midzic, Z. Avdagić, Damir Pozderac, Amel Toroman

Missing values handling in any collected data is one of the first issues that must be resolved before that data can be used. This paper presents an approach to interpolating missing values in PurpleAir particle pollution sensor data, based on the correlation of measurements from an observed location with measurements from its neighboring locations. Results of our experiments with data from five locations in Bosnia & Herzegovina, presented in this paper, show that this approach, which is relatively simple to implement, gives good results. All modeling and experiments were conducted using the KNIME Analytics Platform.
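A minimal sketch of this kind of correlation-based gap filling, written in Python rather than as the authors' KNIME workflow; the column names and the simple linear fit on the best-correlated neighboring station are illustrative assumptions:

import numpy as np
import pandas as pd

def fill_from_neighbors(df: pd.DataFrame, target: str) -> pd.Series:
    """Impute missing values of `target` from the most correlated neighbor column.

    df     -- measurements, one column per location (names are assumed)
    target -- column with gaps to fill
    """
    neighbors = [c for c in df.columns if c != target]
    # Pick the neighbor whose measurements correlate best with the target location.
    corr = df[neighbors].corrwith(df[target]).abs()
    best = corr.idxmax()

    # Fit a simple linear relation target ~ a * best + b on rows where both exist.
    both = df[[target, best]].dropna()
    a, b = np.polyfit(both[best], both[target], deg=1)

    filled = df[target].copy()
    gaps = filled.isna() & df[best].notna()
    filled[gaps] = a * df.loc[gaps, best] + b
    return filled

# Hypothetical usage with assumed location names:
# df = pd.read_csv("purpleair.csv", parse_dates=["time"], index_col="time")
# df["Sarajevo_filled"] = fill_from_neighbors(df[["Sarajevo", "Zenica", "Tuzla"]], "Sarajevo")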

Ingmar Bešić, Z. Avdagić, K. Hodzic

Visual impairments often pose serious restrictions on a visually impaired person, and there is a considerable number of persons, especially among the aging population, who depend on assistive technology to sustain their quality of life. Development and testing of assistive technology for the visually impaired requires gathering information and conducting studies on both healthy and visually impaired individuals in a controlled environment. We propose a test setup for visually impaired persons by creating an RFID-based assistive environment: the Visual Impairment Friendly RFID Room. The test setup can be used to evaluate RFID object localization and its use by visually impaired persons. To a certain extent, every impairment has individual characteristics, as different individuals may respond better to different subsets of visual information. We use a virtual reality prototype both to simulate visual impairment and to map the full visual information onto the subset that a visually impaired person can perceive. Real-time image processing with time-domain color mapping is used to evaluate the virtual reality prototype, targeting color vision deficiency.

Mathematical modelling to compute ground truth from 3D images is an area of research that can strongly benefit from machine learning methods. Deep neural networks (DNNs) are state-of-the-art methods designed for solving these kinds of problems. Convolutional neural networks (CNNs), as one class of DNNs, can meet the special requirements of quantitative analysis, especially when image segmentation is needed. This article presents a system that uses a cascade of CNNs with symmetric blocks of layers in a chain, dedicated to 3D image segmentation of microscopic images of 3D nuclei. The system is designed through eight experiments that differ in the following aspects: the number of training slices and 3D samples for training, the usage of pre-trained CNNs, and the number of slices and 3D samples for validation. CNN parameters are optimized using linear, brute-force, and random combinatorics, followed by voter and median operations. Data augmentation techniques such as reflection, translation, and rotation are used in order to produce a sufficient training set for the CNNs. Optimal CNN parameters are reached by defining 11 standard and two proposed metrics. Finally, benchmarking demonstrates that CNNs improve segmentation accuracy and reliability and increase annotation accuracy, confirming the relevance of CNNs for generating high-throughput mathematical ground truth from 3D images.
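A sketch of the kind of reflection, translation, and rotation augmentation mentioned above, applied to a 2D slice and its mask; the parameter ranges are assumptions, not the settings used in the paper:

import numpy as np
from scipy.ndimage import rotate, shift

def augment_slice(image: np.ndarray, mask: np.ndarray, rng=np.random.default_rng()):
    """Return a randomly reflected, translated, and rotated copy of a 2D slice
    and its segmentation mask, keeping the two aligned."""
    # Reflection
    if rng.random() < 0.5:
        image, mask = np.fliplr(image), np.fliplr(mask)
    if rng.random() < 0.5:
        image, mask = np.flipud(image), np.flipud(mask)
    # Translation (up to ~10 px in each direction, an assumed range)
    dy, dx = rng.integers(-10, 11, size=2)
    image = shift(image, (dy, dx), order=1, mode="reflect")
    mask = shift(mask, (dy, dx), order=0, mode="reflect")
    # Rotation (assumed range of +-15 degrees; nearest-neighbour for the mask)
    angle = rng.uniform(-15, 15)
    image = rotate(image, angle, reshape=False, order=1, mode="reflect")
    mask = rotate(mask, angle, reshape=False, order=0, mode="reflect")
    return image, mask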

Hydropower dam displacement is influenced by various factors (dam ageing, reservoir water level, and air, water, and concrete temperature), which cause complex nonlinear behaviour that is difficult to predict. Object deformation monitoring is a task of geodetic and civil engineers, who use different instruments and methods for measurements. Only geodetic methods have been used for the object movement analysis in this research. Although the whole object is affected by the influencing factors, different parts of the object react differently. Hence, one model cannot precisely describe the behaviour of every part of the object. In this research, a localised approach is presented: two individual models are developed for every point strategically placed on the object, one model for the analysis and prediction in the direction of the X axis and the other for the Y axis. Additionally, the prediction of horizontal dam movement is not performed directly from measured values of the influencing factors, but from predicted values obtained by machine learning and statistical methods. The results of this research show that it is possible to perform accurate short-term time series dam movement prediction using machine learning and statistical methods, and that the only factor limiting the prediction length is the accuracy of the weather forecast.
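A hedged sketch of the localised, per-axis idea: one independent model per monitored point and per coordinate axis, trained on values of the influencing factors. The plain linear model and the feature layout are assumptions for illustration only, not the machine learning and statistical methods compared in the paper:

import numpy as np
from sklearn.linear_model import LinearRegression

# Assumed influencing factors per epoch: reservoir level, air/water/concrete
# temperature, dam age (one row per epoch in X).
def fit_point_models(X: np.ndarray, xy: np.ndarray):
    """Fit two separate models for one monitored point: one for displacement
    along the X axis, one for the Y axis."""
    model_x = LinearRegression().fit(X, xy[:, 0])
    model_y = LinearRegression().fit(X, xy[:, 1])
    return model_x, model_y

def predict_movement(models, forecast_factors: np.ndarray) -> np.ndarray:
    """Short-term prediction from forecast (predicted) values of the factors."""
    model_x, model_y = models
    return np.column_stack([model_x.predict(forecast_factors),
                            model_y.predict(forecast_factors)])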

M. Kafadar, Z. Avdagić, L. Fazlic

There are many challenges in accurately measuring cigarette tar constituents. These include the need for standardized smoke generation methods related to unstable mixtures. In this research, we developed algorithms using a fusion of artificial intelligence methods to predict tar concentration. The outputs of the development are three fuzzy structures optimized with genetic algorithms, resulting in genetic algorithm (GA)-FUZZY, GA-adaptive neuro-fuzzy inference system (ANFIS), and GA-GA-FUZZY algorithms. The proposed algorithms are used for tar prediction in the cigarette production process. The results of the prediction are compared with gas chromatograph (high-performance liquid chromatography (HPLC)) readings.

Visual impairment severely constrains the ability to independently carry out many everyday tasks that we usually do not consider challenging. Although some types of visual impairment can be treated efficiently, there is still a considerable number of visually impaired persons, especially among the aging population, who depend on the help of others or on assistive technology to sustain their quality of life. A visually impaired person cannot perceive the full extent of the surrounding information due to the lack of visual details. However, great progress can be achieved if the surrounding information can be visually transformed into the subset of visual information that the visually impaired person can perceive. To a certain extent, every impairment has individual characteristics, as different individuals may respond better to different subsets of visual information. Thus, any assistive solution aiming to visually transform surrounding information to accommodate a broad range of impairment conditions must be personalized in order to be effective. Virtual reality enables individuals to experience imaginary surroundings by tricking their visual senses, and such virtual surroundings can be personalized to any extent desired. We use virtual reality, image processing, and RFID to create a test setup able to simulate visual impairment and visually transformed surroundings, suitable for visual impairment studies. The test setup enables gathering information and conducting studies on both healthy and visually impaired individuals in a controlled environment, enabling reliable assistive technology development and testing.

A visually impaired person might find it very difficult to locate an object that has been even slightly misplaced from its usual position. Unfortunately, this is a very common situation in a shared environment, where multiple individuals can affect an object's position and where a visually impaired person cannot rely on the object's position remaining unchanged since the last interaction with it. In order to independently localize an object of interest, a visually impaired person must rely on assistive technology. Yet it is very unlikely that any single wearable assistive device will cover the whole range of object localization scenarios and be universally adaptable to a broad range of environments. In this paper we propose an indoor test setup for visually impaired persons by creating an RFID-based assistive environment: the Visual Impairment Friendly RFID Room. The test setup can be used to evaluate RFID object localization and its use by visually impaired persons.

Digital analysis and biomedical image processing have become an important part of modern medicine and biology. Digital pathology is just one of many areas of medicine that is being advanced by ongoing biomedical engineering research and development. It is very important that disciplines such as nucleus detection, image segmentation, and classification become more and more effective, with minimal human intervention in these processes and maximum accuracy and precision. Improved optimization of the parameters of nucleus segmentation methods, based on two levels of voting, is presented in this paper. The first level includes hybrid nucleus segmentation based on seven segmentation algorithms: OTSU, Adaptive Fuzzy c-means, Adaptive K-means, KGB (Kernel Graph Cut), APC (Affinity Propagation Clustering), Multi Modal, and SRM (Statistical Region Merging), with optimization of the algorithms' parameters and an implemented first-level voting structure. The second-level voting structure combines the segmentation results obtained in the first level with results from third-party segmentation tools: ImageJ/Fiji and MIB (Microscopy Image Browser). The final segmented image of a nucleus can serve as a generic ground truth image because it is formed as the result of a consensus of several different segmentation methods and different parameter settings, which guarantees better objectivity of the results. In addition, this approach scales well to 3D-stack image datasets.
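As an illustration of the voting idea only (not the paper's exact implementation): pixel-wise majority voting over binary masks produced by the different segmentation algorithms and parameter settings.

import numpy as np

def majority_vote(masks: list[np.ndarray]) -> np.ndarray:
    """Combine binary segmentation masks of the same shape by pixel-wise majority
    vote: a pixel is foreground in the consensus mask if more than half of the
    individual segmentations mark it as foreground."""
    stack = np.stack([m.astype(bool) for m in masks])
    votes = stack.sum(axis=0)
    return votes > (len(masks) / 2)

# The consensus mask could then enter a second voting level together with masks
# exported from third-party tools such as ImageJ/Fiji or MIB.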

Admir Midzic, Z. Avdagić, S. Omanovic

This research uses artificial intelligence methods to model a computer network intrusion detection system. Primary classification is done using self-organizing maps (SOM) at two levels, while the secondary classification of ambiguous data is done using a Sugeno-type Fuzzy Inference System (FIS). The FIS is created using an Adaptive Neuro-Fuzzy Inference System (ANFIS). The main challenge for this system was to successfully detect attacks that are either unknown or represented by a very small percentage of samples in the training dataset. An improved algorithm for the SOMs in the second layer and for the FIS creation was developed for this purpose. The number of clusters in the second SOM layer is optimized using our improved algorithm to minimize the amount of ambiguous data forwarded to the FIS. The FIS is created using ANFIS built on the ambiguous training data clustered by another SOM (whose size is determined dynamically). The proposed hybrid model was created and tested using the NSL-KDD dataset. For our research, NSL-KDD is especially interesting in terms of class distribution (overlapping). The objectives of this research were: to successfully detect intrusions represented by a small percentage of the total traffic during early detection stages, to successfully deal with overlapping data (separate ambiguous data), to maximize the detection rate (DR), and to minimize the false alarm rate (FAR). With test data, the proposed hybrid model achieved an acceptable DR value of 0.8883 and a FAR value of 0.2415. The objectives were successfully achieved, as presented in comparison with similar research on the NSL-KDD dataset. The proposed model can be used not only in further research related to this domain, but also in other research areas.
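The routing idea, roughly sketched with a single SOM level using the MiniSom library; the actual model in the paper uses two SOM levels, a dynamically sized clustering SOM, and an ANFIS-built FIS for the ambiguous part, none of which is reproduced here:

import numpy as np
from minisom import MiniSom

def route_ambiguous(train_x: np.ndarray, train_y: np.ndarray, som_size: int = 10):
    """Train a SOM, label each node by the classes of the training samples mapped
    to it, and mark nodes with mixed classes as 'ambiguous', to be forwarded to a
    second stage (in the paper, a fuzzy inference system built with ANFIS)."""
    som = MiniSom(som_size, som_size, train_x.shape[1], sigma=1.0, learning_rate=0.5)
    som.train_random(train_x, 5000)

    node_labels = {}  # (i, j) node coordinates -> set of classes seen on that node
    for x, y in zip(train_x, train_y):
        node_labels.setdefault(som.winner(x), set()).add(y)

    ambiguous_nodes = {n for n, classes in node_labels.items() if len(classes) > 1}

    def classify(x):
        node = som.winner(x)
        if node not in node_labels or node in ambiguous_nodes:
            return None  # ambiguous or unseen: forward to the second-stage FIS
        return next(iter(node_labels[node]))

    return classify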

Organizations can improve the efficiency of process execution through correct resource allocation, as well as increase income, improve client satisfaction, and so on. This work presents a novel approach to solving resource allocation problems in business processes, which combines process mining, statistical techniques, and metaheuristic optimization algorithms. In order to obtain more reliable simulation results, we use process mining analysis and statistical techniques to build the simulation model. To find an optimal human resource allocation in business processes, we use an improved differential evolution algorithm with population adaptation. Because a stochastic simulation model is used, noise appears in the output of the model, so the differential evolution algorithm is modified to account for uncertainty in the fitness function. Finally, the model was validated on three different data sets in order to demonstrate the generality of the approach, and it was compared with the standard approach from the literature. The results show that this novel approach gives better solutions than the existing model from the literature.
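A minimal sketch of handling a noisy, stochastic-simulation objective inside differential evolution: average several simulation replications per candidate allocation. SciPy's standard differential_evolution is used here as a stand-in for the paper's improved DE with population adaptation, and the toy objective is an assumption:

import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)

def simulate_cost(allocation: np.ndarray) -> float:
    """Stand-in for one stochastic business-process simulation run: returns a
    noisy cost for a given human-resource allocation."""
    ideal = np.array([4.0, 2.0, 6.0])  # assumed 'true' optimal staffing per role
    return float(np.sum((allocation - ideal) ** 2) + rng.normal(scale=0.5))

def noisy_fitness(allocation: np.ndarray, replications: int = 10) -> float:
    """Average several replications so the simulation noise does not mislead
    the optimizer."""
    return float(np.mean([simulate_cost(allocation) for _ in range(replications)]))

result = differential_evolution(noisy_fitness, bounds=[(1, 10)] * 3, seed=0, maxiter=50)
print(result.x, result.fun)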

The aim of this research is to automate the analysis of the EGFR gene as a whole, and especially the analysis of those exons with clinically identified microdeletion mutations, which are recorded together with non-mutated nucleotides in long chains of a, c, t, g nucleotides and "-" (microdeletion) characters in the NCBI database and other sources. In addition, the developed system can analyze data resulting from EGFR gene DNA sequencing or DNA extraction for a new patient and identify regions of potential microdeletion mutations that clinicians need to develop new
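A simple sketch of the scanning step described above: locating runs of "-" (microdeletion) characters in an exon sequence recorded against the reference; the input format is an assumption.

import re

def find_microdeletions(aligned_exon: str):
    """Return (start, end, length) of every run of '-' characters in an aligned
    exon sequence of a, c, t, g nucleotides, e.g. from an EGFR exon record."""
    return [(m.start(), m.end(), m.end() - m.start())
            for m in re.finditer(r"-+", aligned_exon.lower())]

# Hypothetical fragment:
print(find_microdeletions("atcgtacg------atcgga"))  # [(8, 14, 6)]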
