
Publications (10)

Rialda Spahic, M. Lundteigen

The growing need for autonomous systems in offshore industries has contributed to the increased use of machine learning methods. These systems promise to improve safety in operations. However, the methods that enable autonomy are susceptible to various failures while interpreting data and making decisions. Several studies have highlighted the lack of research on the reliability and resilience of autonomous systems powered by these standard methods. Recent research provides sets of data interpretation methods. Despite the popularity of machine learning, there is a significant knowledge gap concerning the failures these methods produce. Such failures can lead autonomous systems to make wrong decisions. For autonomous systems, resilience and safety management should be integrated functionality for recovery from risky situations and reporting of incidents. This research proposes an overview of machine learning methods for interpreting sensor data captured by drones operated manually and autonomously. We apply Isolation Forest for anomaly detection and evaluate Decision Tree, Random Forest, kNN, Logistic Regression, SVM, and Naive Bayes for classification. The methods were chosen based on their adequacy and prevalence in comparative research. Comparing the two drone operation modes contributes to understanding the reliability of autonomously collected data. The results provide an evaluation of machine learning methods' performance across sensor data.
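The anomaly-detection step described above can be sketched with scikit-learn's `IsolationForest`; the sensor values below are synthetic placeholders, not the drone data from the study, and the parameter choices are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic stand-in for drone sensor readings (e.g. two sensor channels):
# a dense cluster of normal values plus a few injected outliers.
rng = np.random.default_rng(seed=42)
normal = rng.normal(loc=0.0, scale=1.0, size=(200, 2))
outliers = rng.uniform(low=6.0, high=8.0, size=(5, 2))
X = np.vstack([normal, outliers])

# contamination is the expected fraction of anomalies in the data.
model = IsolationForest(n_estimators=100, contamination=0.025, random_state=0)
labels = model.fit_predict(X)  # +1 = inlier, -1 = anomaly

print("anomalies flagged:", int((labels == -1).sum()))
```

Points far from the main cluster are isolated in few random splits, which is why the injected outliers receive the `-1` label.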

Kanita Karađuzović-Hadžiabdić, Rialda Spahic, Emin Tahirović

Social media has opened the gates for collecting big data that can be used to monitor epidemic trends in real time. We evaluate whether the Watson NLP service can reliably predict outbreaks of infectious diseases such as influenza-like illness (ILI) using Twitter data collected during the main influenza season. Watson's performance is evaluated by computing the Pearson correlation between the number of tweets Watson classified as ILI and the number of ILI occurrences recorded by the traditional epidemic surveillance system of the Centers for Disease Control and Prevention (CDC). The achieved correlation was 0.55. Furthermore, a 12-week discrepancy was found between the peak ILI occurrences predicted by Watson and those in the CDC-reported data. Additionally, we developed a scoring method for ILI prediction from Twitter posts using a simple formula able to predict ILI two weeks ahead of CDC-reported ILI data. The method uses Watson's sentiment and emotion scores together with identified ILI features to analyze influenza-related posts in real time. Because of the high computational cost of Watson's sentiment and emotion analysis, we tested whether a machine learning approach could predict influenza using only the identified ILI keywords as predictors. All three evaluated methods (Random Forest, Logistic Regression, k-NN) achieved overall accuracies of ~68.2% and 97.5%, respectively, when Watson and the developed formula were used as the expert reference. The obtained results suggest that data found within social media can supplement the traditional surveillance of influenza outbreaks with the help of intelligent computations.
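The comparison between Watson-classified tweet counts and CDC surveillance counts rests on the Pearson correlation coefficient, which can be computed directly; the weekly counts below are hypothetical illustrations, not the study's data.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical weekly series: tweets Watson classified as ILI vs. CDC ILI counts.
watson_weekly = [12, 30, 45, 80, 60, 33, 20, 15]
cdc_weekly = [100, 150, 300, 500, 450, 280, 180, 120]

r = pearson(watson_weekly, cdc_weekly)
print(f"Pearson r = {r:.2f}")
```

Two series that rise and fall together, as in this made-up example, yield a coefficient close to 1; the study's reported value of 0.55 indicates a moderate rather than strong agreement.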

Rialda Spahic, V. Hepsø, M. Lundteigen

In the offshore industry, unmanned autonomous systems are expected to have a permanent role in future operations. During offshore operations, an unmanned autonomous system needs definite instructions on evaluating the gathered data so it can make decisions and react in real time when the situation requires it. We rely on video surveillance and sensor measurements to recognize early warning signals of a failing asset during autonomous operation. Missing the warning signals can lead to a catastrophic impact on the environment and significant financial loss. This research helps address the trustworthiness of the algorithms that enable autonomy by capturing the rising risks when machine learning unintentionally fails. Previous studies demonstrate that understanding machine learning algorithms, finding patterns in anomalies, and calibrating trust can promote a system's reliability. Existing approaches focus on improving the machine learning algorithms and understanding the shortcomings in the data collection. However, recollecting the data is often an expensive and extensive task. By transferring knowledge from multiple disciplines, diverse approaches are observed to capture the risk and calibrate trust in autonomous systems. This research proposes a conceptual framework that captures the known risks and creates a safety net around the autonomy-enabling algorithms to improve the reliability of autonomous operations.

27. 4. 2019.
Rialda Spahic, Dzana Basic, Emina Yaman

The idea of chatbots first appeared in the 1960s, but only after more than half a century did the world become ready for their implementation in real life, a result of rapid progress in natural language processing, artificial intelligence, and the global presence of text messaging applications. Today, specialized chatbots exist in different domains, helping organizations handle large volumes of inquiries. The idea of this project was to develop a friendly chatbot with whom you can talk about politics, movies, weather, sport, emotions, and similar everyday things. The friendly chatbot, named Zeka, is a web-based chatbot developed with the help of the Chatterbot library. The chatbot relies on different natural language processing and machine learning algorithms, altered by its developers to increase its performance.
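Chatbots of this retrieval style typically return the stored response whose prompt best matches the user's input. A minimal sketch of that idea, using only the standard library rather than the actual Chatterbot library, and with a tiny made-up corpus in place of Zeka's real training data, might look like:

```python
import difflib

# Tiny stand-in corpus; the real Zeka bot was trained on everyday topics
# (politics, movies, weather, sport, emotions) via the Chatterbot library.
corpus = {
    "hello": "Hi there! I'm Zeka. What would you like to talk about?",
    "how is the weather": "I hear it's sunny today. Do you like warm weather?",
    "do you like movies": "I enjoy talking about movies. What did you watch last?",
}

def respond(user_input, threshold=0.5):
    """Return the response whose stored prompt is most similar to the input."""
    match = difflib.get_close_matches(
        user_input.lower(), list(corpus), n=1, cutoff=threshold
    )
    if match:
        return corpus[match[0]]
    return "I'm not sure about that. Tell me more!"

print(respond("Hello!"))
```

Real libraries add stemming, learning from conversation history, and confidence scoring on top of this basic prompt-matching loop.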

Kanita Karađuzović-Hadžiabdić, Rialda Spahic

We examine a machine learning approach for detecting common class- and method-level code smells (Data Class and God Class; Feature Envy and Long Method). The focus of the work is the selection of a reduced set of features that achieves high classification accuracy. The proposed features may be used by developers to build better-quality software, since the selected features focus on the most critical parts of the code responsible for the creation of common code smells. Using the selected features, we obtained high accuracy for all four code smells: 98.57% for Data Class, 97.86% for God Class, 99.67% for Feature Envy, and 99.76% for Long Method.
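A classifier of the kind evaluated here can be sketched with scikit-learn; the metric features, values, and labels below are illustrative placeholders, not the study's reduced feature set or dataset.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Illustrative per-method feature vectors: [lines_of_code,
# cyclomatic_complexity, number_of_parameters].
# Label 1 = "Long Method" smell, 0 = clean (made-up training examples).
X = np.array([
    [10, 2, 1], [15, 3, 2], [12, 1, 0], [8, 2, 1],          # clean methods
    [120, 18, 6], [200, 25, 8], [150, 20, 5], [95, 15, 7],  # long methods
])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

clf = DecisionTreeClassifier(random_state=0).fit(X, y)

# A short, simple method should classify as clean; a huge one as smelly.
print(clf.predict([[11, 2, 1], [180, 22, 7]]))
```

In practice, the training data would come from methods labeled by smell-detection tools or experts, and feature selection would prune the metric set down to the most discriminative features, as the abstract describes.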
