Evolutionary algorithms have gained widespread recognition as a viable approach to optimization problems in which finding the optimal solution is infeasible, owing to large search spaces and computational limitations. Forecasting personnel radiation exposure is one such problem: radiation exposure poses risks to practitioners as well as patients in healthcare facilities. In this study, we model the problem as a specific time-series instance. Moreover, we investigate the impact of training an adaptive neuro-fuzzy inference system with evolutionary algorithms, namely the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), on the overall performance of forecasting personnel radiation exposure. The results show that GA and PSO can provide effective solutions. On the other hand, they may be highly sensitive to the initial state of the fuzzy inference system, leading to unstable performance. We recommend further experimentation with a combination of other advanced optimization and machine learning methods to ensure the most effective results.
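The sketch below illustrates the general idea of evolutionary training on a toy forecasting task; it is not the study's implementation. A standard global-best PSO tunes the parameters of a simple autoregressive model standing in for the neuro-fuzzy system's consequent layer, and all constants (the synthetic series, swarm size, inertia, and acceleration coefficients) are assumptions made for illustration.

```python
# Minimal sketch: global-best PSO tuning a forecaster's parameters on a
# synthetic series. The linear autoregressive model is an illustrative
# stand-in for the neuro-fuzzy system; all constants are assumptions.
import numpy as np

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20, 200)) + 0.1 * rng.standard_normal(200)

LAGS = 4
X = np.array([series[i:i + LAGS] for i in range(len(series) - LAGS)])
y = series[LAGS:]

def mse(params):
    """Forecasting loss for one candidate parameter vector."""
    pred = X @ params[:LAGS] + params[LAGS]  # linear consequent + bias
    return np.mean((pred - y) ** 2)

# Standard global-best PSO (inertia w, cognitive c1, social c2).
n_particles, dim, iters = 30, LAGS + 1, 100
w, c1, c2 = 0.7, 1.5, 1.5
pos = rng.uniform(-1, 1, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest, pbest_val = pos.copy(), np.array([mse(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([mse(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print(f"best forecasting MSE: {pbest_val.min():.4f}")
```

Because the swarm's starting positions are random, repeated runs can converge to different losses, which mirrors the sensitivity to initialization noted above.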
Early characterization of security requirements helps system designers integrate security aspects into early architectural design. However, distinguishing security-related requirements from other functional and non-functional requirements can be tedious and error-prone. To address this issue, machine learning techniques have proven successful in identifying security requirements. In this paper, we conduct an empirical study evaluating the performance of 22 supervised machine learning classification algorithms and two deep learning approaches in classifying security requirements, using the publicly available SecReq dataset. More specifically, we focus on the robustness of these techniques with respect to the overhead of the pre-processing step. Results show that the Long Short-Term Memory (LSTM) network achieved the best accuracy (84%) among the deep learning approaches, while the Boosted Ensemble achieved the highest accuracy (80%) among the supervised classification algorithms.
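As an illustration of this kind of classification setup (not the authors' pipeline), the sketch below trains a boosted ensemble on TF-IDF features with scikit-learn; the requirement texts and labels are hand-made placeholders standing in for the SecReq data.

```python
# Illustrative sketch only: TF-IDF features feeding a boosted ensemble
# classifier. The tiny requirement/label pairs below are placeholders,
# not the SecReq dataset.
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Toy requirements; 1 = security-related, 0 = other (placeholder data).
texts = [
    "The system shall encrypt all stored user credentials.",
    "The UI shall display the current order status.",
    "All login attempts shall be logged for audit purposes.",
    "The report module shall export invoices as PDF files.",
    "Session tokens shall expire after 15 minutes of inactivity.",
    "Users shall be able to filter products by category.",
] * 5  # repeat so a stratified split is possible
labels = [1, 0, 1, 0, 1, 0] * 5

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.33, stratify=labels, random_state=42
)

clf = make_pipeline(TfidfVectorizer(stop_words="english"),
                    GradientBoostingClassifier(random_state=42))
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

Swapping the vectorizer or classifier in this pipeline is how one would compare many supervised algorithms under a shared pre-processing step.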
IoT devices have a wide spectrum of applications in real-life environments. While these applications vary in the area they cover, finding the deployment scenario in which the devices cover the optimal area is a challenge. In this work, we consider improving an industrial laboratory by transforming it into a smart lab using IoT devices. We analyze the trade-offs between different smart-lab scenarios, focusing on the security and congestion of the network and their effects on overall performance. For the smart-lab case study, we conclude that enabling security features does not significantly affect the performance of the smart lab compared to the benefits that the IoT-integrated devices bring to the overall lab experience, given that the traditional lab had significant time delays.
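The following back-of-the-envelope simulation, built entirely on assumed latency constants, illustrates the kind of trade-off analysis described: a fixed per-message security overhead becomes relatively minor as congestion from additional devices grows.

```python
# Illustrative simulation of the security/congestion trade-off. The
# base latency, crypto overhead, and congestion model are assumptions
# chosen for demonstration, not measurements from the case study.
import random

random.seed(1)

def mean_latency(n_devices, secure, base_ms=5.0, crypto_ms=1.2):
    """Average per-message latency; congestion grows with device count."""
    total = 0.0
    for _ in range(10_000):
        congestion = random.expovariate(1.0) * 0.05 * n_devices
        total += base_ms + congestion + (crypto_ms if secure else 0.0)
    return total / 10_000

for n in (10, 50, 100):
    plain = mean_latency(n, secure=False)
    tls = mean_latency(n, secure=True)
    print(f"{n:3d} devices: plain {plain:6.2f} ms, secured {tls:6.2f} ms "
          f"(+{100 * (tls - plain) / plain:.1f}%)")
```

Under these assumptions, the relative cost of the security feature shrinks as the network gets busier, consistent with the abstract's conclusion.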