To bring robots into everyday human life, their capacity for social interaction must increase. One way for robots to acquire social skills is by assigning them the concept of identity. This research focuses on the concept of Explanation Identity within the broader context of robots' roles in society, particularly their ability to interact socially and explain decisions. Explanation Identity refers to the combination of characteristics and approaches robots use to justify their actions to humans. Drawing from different technical and social disciplines, we introduce Explanation Identity as a multidisciplinary concept and discuss its importance in Human-Robot Interaction. Our theoretical framework highlights the necessity for robots to adapt their explanations to the user's context, demonstrating empathy and ethical integrity. This research emphasizes the dynamic nature of robot identity and guides the integration of explanation capabilities in social robots, aiming to improve user engagement and acceptance.
This study scrutinizes five years of Sarajevo's Air Quality Index (AQI) data using three machine learning models, Fourier autoregressive integrated moving average (Fourier ARIMA), Prophet, and long short-term memory (LSTM) networks, to forecast AQI levels. Focusing on several forecast horizons, we evaluate model performance and identify optimal strategies for different temporal granularities. Our research reveals nuanced insights into each model's efficacy, shedding light on their strengths and limitations in predicting AQI across varied timeframes. This research presents a robust framework for the automatic optimization of AQI predictions that selects the most effective models and parameters, emphasizing the influence of temporal granularity on prediction accuracy. These insights hold significant implications for data-driven decision-making in urban air quality management, paving the way for proactive and targeted interventions to improve air quality in Sarajevo and similar urban environments.
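The core idea of picking the best forecaster per temporal granularity can be sketched as a rolling-origin evaluation. The sketch below substitutes two toy baselines (persistence and seasonal-naive) for the paper's Fourier ARIMA, Prophet, and LSTM models; the synthetic series, function names, and selection criterion are all assumptions for illustration, not the study's code.

```python
# Minimal sketch: automatic model selection per forecast horizon via
# rolling-origin evaluation. Toy forecasters stand in for the paper's
# Fourier ARIMA / Prophet / LSTM models; the AQI series is synthetic.
import math

def persistence(history, horizon):
    # Forecast: repeat the last observed value.
    return [history[-1]] * horizon

def seasonal_naive(history, horizon, period=7):
    # Forecast: reuse the value from one seasonal period ago.
    return [history[-period + (h % period)] for h in range(horizon)]

def mae(y_true, y_pred):
    return sum(abs(a - b) for a, b in zip(y_true, y_pred)) / len(y_true)

def select_model(series, horizon, models, train_frac=0.8):
    """Evaluate each model on rolling forecast origins; return best by MAE."""
    split = int(len(series) * train_frac)
    scores = {}
    for name, fn in models.items():
        errs = [mae(series[t:t + horizon], fn(series[:t], horizon))
                for t in range(split, len(series) - horizon)]
        scores[name] = sum(errs) / len(errs)
    return min(scores, key=scores.get), scores

# Synthetic AQI-like series: weekly cycle plus a slow upward trend.
series = [50 + 10 * math.sin(2 * math.pi * t / 7) + 0.05 * t for t in range(200)]
best, scores = select_model(series, horizon=7,
                            models={"persistence": persistence,
                                    "seasonal_naive": seasonal_naive})
print(best, scores)
```

On this weekly-cyclic series the seasonal-naive baseline wins at a 7-day horizon, illustrating how the best model depends on the temporal granularity being forecast.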
The choices made by autonomous robots in social settings bear consequences for humans and their expectations of robot behavior. Explanations can serve to alleviate detrimental impacts on humans and improve their comprehension of robot decisions. We model the process of explanation generation for robot navigation as an automated planning problem, considering different possible explanation attributes. Our visual and textual explanations of a robot's navigation are influenced by the robot's personality. Moreover, they account for different contextual, environmental, and spatial characteristics. We present the results of a user study demonstrating that users are more satisfied with multimodal than unimodal explanations. Additionally, our findings reveal low user satisfaction with explanations from a robot with extreme personality traits. Finally, we discuss potential future research directions and their associated constraints. Our work advocates for fostering socially adept and safe autonomous robot navigation.
The decisions made by autonomous robots hold substantial influence over how humans perceive their behavior. One way to alleviate the potential negative impressions such decisions may leave and to enhance human comprehension of them is through explanation. We introduce visual and textual explanations integrated into robot navigation, considering the surrounding environmental context. To gauge the effectiveness of our approach, we conducted a comprehensive user study assessing user satisfaction across different forms of explanation representation. Our empirical findings reveal a notable discrepancy in user satisfaction, with significantly higher levels observed for multimodal explanations than for those relying solely on unimodal representations.
The identification of bacterial colonies is crucial in microbiology, as it helps identify specific categories of bacteria. Careful examination of colony morphology plays a central role in microbiology laboratories for the identification of microorganisms. Quantifying bacterial colonies on culture plates is a necessary task in clinical microbiology laboratories, but it can be time-consuming and susceptible to inaccuracies. Therefore, there is a need for an automated system that is both dependable and cost-effective. Advances in deep learning have improved such processes by providing high accuracy with negligible error. This research proposes an automated technique to extract bacterial colonies using SegNet, a semantic segmentation network. The segmented colonies are then counted with a blob counter to accomplish the colony-counting task. Furthermore, to improve the performance of the segmentation network, the network weights are optimized using a swarm optimizer. The proposed methodology is both cost-effective and time-efficient, while also providing better accuracy and precise colony counts, eliminating the human errors involved in traditional colony-counting techniques. Evaluations were carried out on three distinct datasets: Microorganism, DIBaS, and a tailored dataset. The results show that the proposed framework attained an accuracy of 88.32% with the utilization of an optimizer, surpassing other conventional methodologies.
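The counting stage described above can be illustrated in isolation. In this minimal sketch, a hand-made binary mask stands in for SegNet's segmentation output, and colonies are counted as 4-connected components; the mask values are invented and this is not the paper's implementation.

```python
# Minimal sketch of blob counting on a segmentation mask: count
# 4-connected components of 1s via breadth-first flood fill.
from collections import deque

def count_blobs(mask):
    """Count 4-connected components of 1s in a binary grid."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] == 1 and not seen[r][c]:
                count += 1                      # found a new blob
                queue = deque([(r, c)])
                seen[r][c] = True
                while queue:                    # flood-fill the whole blob
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] == 1 and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return count

# Toy "segmented plate" with three separate colonies.
mask = [
    [1, 1, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [0, 0, 1, 0, 0],
]
print(count_blobs(mask))  # → 3
```

In practice a library routine such as SciPy's connected-component labeling would replace the hand-rolled flood fill; the sketch just makes the counting logic explicit.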
Navigation is a must-have skill for any mobile robot. A core challenge in navigation is the need to account for the large number of possible configurations of environments and navigation contexts. We argue that a mobile robot should be able to explain its navigational choices, making its decisions understandable to humans. In this paper, we briefly present our approach to explaining a robot's navigational decisions through visual and textual explanations. We propose a user study to test the understandability and simplicity of the robot's explanations and outline our further research agenda.
Epilepsy is a life-threatening neurological disorder in which a person suffers from recurrent seizures. A seizure is a sudden burst of abnormal electrical activity in the brain. The most widely used method for diagnosing epilepsy is analysing electroencephalogram (EEG) signals collected from the patient's scalp. EEG data are commonly used for seizure detection: if recurrent seizure patterns are detected in the input EEG dataset, this can indicate the presence of epilepsy. Manual inspection of seizure signals in EEG data is a laborious process, so an automated system is crucial to help neurologists identify seizures. In this paper, an automated seizure detection method is presented using deep learning, specifically pre-trained convolutional neural network (CNN) architectures. The freely available EEG dataset from the Temple University Hospital (TUH) database is used for the study. The pre-trained CNN networks VGGNet and ResNet are used to classify seizure activity from non-seizure activity. CNNs are highly effective at learning features of the input data. A very large dataset from TUH is provided as input to the multiple layers of the CNN model, and the same data are fed to the VGGNet and ResNet models. The results of the CNN, VGGNet, and ResNet models are assessed using the performance metrics accuracy, AUC, precision, and recall. All three models performed very well compared to state-of-the-art works in the literature. In comparison, VGGNet performed slightly better, achieving 97% accuracy, 96% AUC, 97% precision, and 79% recall.
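The four reported metrics (accuracy, AUC, precision, recall) can be computed from scratch. The sketch below does so for an invented set of seizure/non-seizure labels and model scores; it illustrates only the evaluation step, not the CNN models themselves.

```python
# Minimal sketch: the evaluation metrics used in the study, computed
# from scratch on invented labels/scores (seizure = 1, non-seizure = 0).

def accuracy(y, yhat):
    return sum(int(a == b) for a, b in zip(y, yhat)) / len(y)

def precision(y, yhat):
    tp = sum(1 for a, b in zip(y, yhat) if a == 1 and b == 1)
    fp = sum(1 for a, b in zip(y, yhat) if a == 0 and b == 1)
    return tp / (tp + fp) if tp + fp else 0.0

def recall(y, yhat):
    tp = sum(1 for a, b in zip(y, yhat) if a == 1 and b == 1)
    fn = sum(1 for a, b in zip(y, yhat) if a == 1 and b == 0)
    return tp / (tp + fn) if tp + fn else 0.0

def auc(y, scores):
    """AUC as the probability a positive outranks a negative (ties = 0.5)."""
    pos = [s for s, t in zip(scores, y) if t == 1]
    neg = [s for s, t in zip(scores, y) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y = [1, 1, 1, 0, 0, 0]                    # invented ground truth
scores = [0.9, 0.8, 0.4, 0.6, 0.3, 0.1]   # invented model scores
yhat = [int(s >= 0.5) for s in scores]    # threshold at 0.5

print(accuracy(y, yhat), precision(y, yhat), recall(y, yhat), auc(y, scores))
```

The rank-based AUC definition here matches the standard Mann-Whitney formulation, which is what library implementations such as scikit-learn's `roc_auc_score` compute.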
The integration of technology in education has become indispensable in acquiring new skills, knowledge, and competencies. This paper addresses the issue of analyzing and predicting the learning behavior of Computer Science students. Specifically, we present a dataset of compiler errors made by students during the first semester of an Introduction to Programming course where they learn the C programming language. We approach the problem of predicting the number of student errors as a missing data imputation problem, utilizing several prediction methods, including Singular Value Decomposition, Polynomial Regression via Latent Tensor Reconstruction, a Neural Network-based method, and Gradient Boosting. Our experimental results demonstrate high accuracy in predicting student learning behaviors over time, which can be leveraged to enhance personalized learning for individual students.
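The idea of casting error prediction as missing-data imputation can be sketched with a minimal low-rank matrix completion. The example below uses a rank-1 alternating-least-squares fit in pure Python rather than the paper's SVD, tensor, neural network, or gradient boosting methods, and the students-by-weeks matrix is invented for illustration.

```python
# Minimal sketch: low-rank imputation of missing entries (None) in a
# students-by-weeks error-count matrix, via a rank-1 ALS fit X ~ u v^T
# over the observed entries. Data and setup are invented.

def rank1_impute(X, iters=50):
    """Fit observed entries of X with u v^T, then fill in the gaps."""
    rows, cols = len(X), len(X[0])
    u = [1.0] * rows
    v = [1.0] * cols
    for _ in range(iters):
        for i in range(rows):  # least-squares update of each row factor
            num = sum(X[i][j] * v[j] for j in range(cols) if X[i][j] is not None)
            den = sum(v[j] ** 2 for j in range(cols) if X[i][j] is not None)
            u[i] = num / den if den else 0.0
        for j in range(cols):  # least-squares update of each column factor
            num = sum(X[i][j] * u[i] for i in range(rows) if X[i][j] is not None)
            den = sum(u[i] ** 2 for i in range(rows) if X[i][j] is not None)
            v[j] = num / den if den else 0.0
    return [[X[i][j] if X[i][j] is not None else u[i] * v[j]
             for j in range(cols)] for i in range(rows)]

# Toy matrix of weekly error counts, one row per student; the underlying
# complete matrix is rank 1 (outer product of [1, 2, 3] and [4, 2, 1]).
X = [[4.0, 2.0, 1.0],
     [8.0, None, 2.0],
     [12.0, 6.0, None]]
filled = rank1_impute(X)
print(round(filled[1][1], 2), round(filled[2][2], 2))
```

Because the toy data is exactly rank 1, the imputed entries converge to the true missing values (4 and 3); real error-count matrices would need a higher rank and regularization.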
Artificial intelligence techniques are now widely used for medical purposes. One of the crucial applications is cancer detection. Due to the sensitivity of such applications, medical workers and patients interacting with the system must receive reliable, transparent, and explainable output. Therefore, this paper examines the interpretability and explainability of the Logistic Regression Model (LRM) for breast cancer detection. We analyze the accuracy and transparency of the LRM. Additionally, we propose an NLP-based interface with a model interpretability summary and a contrastive explanation for users. Together with textual explanations, we provide a visual aid to help medical practitioners better understand the decision-making process.
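One simple form of logistic-regression transparency is reporting each feature's signed contribution to the logit. The sketch below illustrates this with invented weights, bias, and feature names; it is not the paper's model or interface.

```python
# Minimal sketch: explaining a logistic regression prediction by ranking
# per-feature contributions w_i * x_i to the logit. Weights, bias, and
# feature names below are invented for illustration.
import math

def predict_proba(weights, bias, x):
    z = bias + sum(w * xi for w, xi in zip(weights, x))
    return 1.0 / (1.0 + math.exp(-z))        # sigmoid

def explain(weights, bias, x, names):
    """Return the probability and features ranked by |contribution|."""
    contribs = sorted(zip(names, (w * xi for w, xi in zip(weights, x))),
                      key=lambda t: -abs(t[1]))
    lines = [f"{name}: {c:+.2f}" for name, c in contribs]
    return predict_proba(weights, bias, x), lines

# Invented standardized features and weights (hypothetical names).
names = ["mean_radius", "mean_texture", "mean_smoothness"]
weights = [1.8, 0.6, -0.4]
x = [1.2, -0.5, 0.3]                         # one standardized sample

p, lines = explain(weights, bias=-0.2, x=x, names=names)
print(f"P(positive) = {p:.2f}")
for line in lines:
    print(line)
```

Because logistic regression's logit is a linear sum, these per-feature contributions are exact rather than approximated, which is the basis of the model's interpretability.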
For users to trust planning algorithms, they must be able to understand the planner's outputs and the reasons for each action selection. This output tends not to be user-friendly, often consisting of sequences of parametrised actions or task networks, which may be impractical for non-expert users who find natural language descriptions easier to read. In this paper, we propose PlanVerb, a domain- and planner-independent method for the verbalization of task plans, based on semantic tagging of actions and predicates. Our method can generate natural language descriptions of plans, including causal explanations. The verbalized plans can be summarized by compressing the actions that act on the same parameters. We further extend the concept of verbalization space, previously applied to robot navigation, and apply it to planning to generate different kinds of plan descriptions for different user requirements. Our method can handle PDDL and RDDL domains, provided they are tagged accordingly. Our user survey evaluation shows that users can read our automatically generated plan descriptions and that the explanations help them answer questions about the plan.
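The plan-compression idea (merging consecutive actions that act on the same parameters into one sentence) can be sketched as follows. The action tuples and sentence template are invented, and this toy is not the PlanVerb implementation.

```python
# Minimal sketch: summarize a plan by merging consecutive actions that
# share an action name and object, verbalizing each group as one
# sentence. Action names and the template are invented.

def verbalize(plan):
    """plan: list of (action, object, place) tuples, in execution order."""
    groups = []
    for action, obj, place in plan:
        if groups and groups[-1][0] == (action, obj):
            groups[-1][1].append(place)          # extend the current group
        else:
            groups.append([(action, obj), [place]])  # start a new group
    return " ".join(f"{action} {obj} at {', '.join(places)}."
                    for (action, obj), places in groups)

plan = [("pick-up", "box", "room1"),
        ("stack", "box", "shelf1"),
        ("stack", "box", "shelf2"),
        ("stack", "box", "shelf3")]
print(verbalize(plan))
# prints: pick-up box at room1. stack box at shelf1, shelf2, shelf3.
```

Four plan steps collapse into two sentences; a real verbalizer would add causal clauses and proper surface realization on top of this grouping.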
The continued development of robots has enabled their wider usage in human surroundings. Robots are increasingly trusted to make important decisions with potentially critical outcomes. Therefore, it is essential to consider the ethical principles under which robots operate. In this paper, we examine how contrastive and non-contrastive explanations can be used in understanding the ethics of robot action plans. We build upon an existing ethical framework to allow users to make suggestions about plans and receive automatically generated contrastive explanations. Results of a user study indicate that the generated explanations help humans to understand the ethical principles that underlie a robot's plan.
The scientific discipline of Computer Vision (CV) is a fast-developing branch of Machine Learning (ML). It addresses various tasks important for robotics, medicine, autonomous driving, surveillance, security, and scene understanding. The development of sensor technologies has enabled the wide usage of 3D sensors and has therefore increased the interest of the CV research community in creating methods for 3D sensor data. This paper outlines seven CV tasks with 3D point cloud data, state-of-the-art techniques, and datasets. Additionally, we identify key challenges.