The decisions made by autonomous robots strongly influence how humans perceive their behavior. One way to alleviate potential negative impressions of such decisions and to enhance human comprehension of them is through explanation. We introduce visual and textual explanations integrated into robot navigation that take the surrounding environmental context into account. To gauge the effectiveness of our approach, we conducted a comprehensive user study assessing user satisfaction across different forms of explanation representation. Our empirical findings reveal a notable discrepancy in user satisfaction: explanations presented in a multimodal format are rated significantly higher than those relying solely on unimodal representations.
The identification of bacterial colonies is crucial in microbiology, as it helps to identify specific categories of bacteria, and careful examination of colony morphology plays a central role in microbiology laboratories when identifying microorganisms. Quantifying bacterial colonies on culture plates is a necessary task in clinical microbiology laboratories, but it is time-consuming and susceptible to inaccuracies; an automated system that is both dependable and cost-effective is therefore needed. Advances in deep learning have improved such processes by providing high accuracy with a negligible amount of error. This research proposes an automated technique that extracts bacterial colonies using SegNet, a semantic segmentation network. The segmented colonies are then counted with a blob counter to accomplish the colony-counting task. Furthermore, to improve the performance of the segmentation network, the network weights are optimized using a swarm optimizer. The proposed methodology is both cost-effective and time-efficient, while also providing better accuracy and precise colony counts, eliminating the human errors involved in traditional colony-counting techniques. The experimental assessments were carried out on three distinct datasets: the Microorganism dataset, DIBaS, and a tailored dataset. The results show that the proposed framework attained an accuracy of 88.32% when the optimizer is used, surpassing conventional methodologies.
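As an illustration of the counting stage, the sketch below assumes a binary colony mask has already been produced by a segmentation network such as SegNet and counts colonies as connected components with OpenCV; the paper's exact blob counter, area thresholds, and swarm-optimized weights are not reproduced here.

```python
# Minimal sketch of the counting stage: given a binary colony mask produced by
# a segmentation network (e.g. SegNet), count colonies as connected components.
# The blob-counting details and the min_area threshold are assumptions.
import cv2
import numpy as np

def count_colonies(mask: np.ndarray, min_area: int = 20) -> int:
    """Count blobs in a binary mask (values 0/255), ignoring tiny specks."""
    num_labels, _, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    # Label 0 is the background; filter the remaining components by area.
    areas = stats[1:, cv2.CC_STAT_AREA]
    return int(np.sum(areas >= min_area))

if __name__ == "__main__":
    mask = cv2.imread("predicted_mask.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
    _, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)
    print("Colony count:", count_colonies(mask))
```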
Navigation is an essential skill for any mobile robot. A core challenge in navigation is the need to account for a wide variety of possible environment configurations and navigation contexts. We claim that a mobile robot should be able to explain its navigational choices, making its decisions understandable to humans. In this paper, we briefly present our approach to explaining a robot's navigational decisions through visual and textual explanations. We propose a user study to test the understandability and simplicity of the robot's explanations and outline our further research agenda.
Epilepsy is a life-threatening neurological disorder in which a person suffers from recurrent seizures, sudden bursts of abnormal electrical activity in the brain. The most widely used method for diagnosing epilepsy is analysing electroencephalogram (EEG) signals collected from the patient's scalp. EEG data are commonly used for seizure detection: if recurrent seizure signals are detected in the EEG recordings, this indicates the presence of epilepsy. Manual inspection of seizure signals in EEG data is a laborious process, so an automated system is crucial to help neurologists identify seizures. In this paper, an automated seizure detection method is presented using deep learning, specifically pre-trained convolutional neural network (CNN) architectures. A freely available EEG dataset from the Temple University Hospital (TUH) database is used for the study. The pre-trained networks VGGNet and ResNet are used to classify seizure activity from non-seizure activity. CNNs are highly effective at learning features of the input data. The large TUH dataset is provided as input to a multi-layer CNN model, and the same data are fed to the VGGNet and ResNet models. The results of the CNN, VGGNet, and ResNet models are assessed using accuracy, AUC, precision, and recall. All three models performed very well compared with state-of-the-art works in the literature, with VGGNet performing slightly better, achieving 97% accuracy, 96% AUC, 97% precision, and 79% recall.
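As a rough illustration of the transfer-learning setup, the sketch below adapts a pre-trained VGG16 to a binary seizure/non-seizure task in PyTorch. It assumes EEG segments have already been converted into 3-channel, image-like inputs (for example, spectrograms); the paper's actual preprocessing, hyperparameters, and ResNet variant are not reproduced.

```python
# Minimal transfer-learning sketch (PyTorch/torchvision): adapting a pre-trained
# VGG16 to a binary seizure vs. non-seizure task. Assumes EEG segments have
# already been converted to 3-channel image-like tensors (e.g. spectrograms).
import torch
import torch.nn as nn
from torchvision import models

def build_seizure_classifier(num_classes: int = 2) -> nn.Module:
    model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
    for param in model.features.parameters():            # freeze the convolutional backbone
        param.requires_grad = False
    model.classifier[6] = nn.Linear(4096, num_classes)   # replace the final layer
    return model

model = build_seizure_classifier()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-4)

# One illustrative training step on a dummy batch of 8 "EEG images".
x = torch.randn(8, 3, 224, 224)
y = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```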
The integration of technology in education has become indispensable in acquiring new skills, knowledge, and competencies. This paper addresses the problem of analyzing and predicting the learning behavior of Computer Science students. Specifically, we present a dataset of compiler errors made by students during the first semester of an Introduction to Programming course in which they learn the C programming language. We approach the prediction of the number of student errors as a missing data imputation problem, utilizing several prediction methods, including Singular Value Decomposition, Polynomial Regression via Latent Tensor Reconstruction, a Neural Network-based method, and Gradient Boosting. Our experimental results demonstrate high accuracy in predicting student learning behaviors over time, which can be leveraged to enhance personalized learning for individual students.
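The sketch below illustrates one of the simpler imputation ideas, an iterative rank-k SVD approximation applied to a hypothetical students-by-weeks matrix of error counts; it is not the exact SVD, tensor-reconstruction, or boosting formulation used in the paper.

```python
# Minimal sketch of SVD-based missing-data imputation for a students-by-weeks
# matrix of compiler-error counts. This is iterative hard-thresholded SVD
# (rank-k approximation); the matrix below is a toy example.
import numpy as np

def svd_impute(X: np.ndarray, rank: int = 2, n_iter: int = 50) -> np.ndarray:
    """X contains np.nan where the error count is unknown."""
    mask = ~np.isnan(X)
    filled = np.where(mask, X, np.nanmean(X))   # initialise gaps with the global mean
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]
        filled = np.where(mask, X, low_rank)    # keep observed entries fixed
    return filled

# Toy example: 4 students, 5 weeks, two unknown counts.
X = np.array([[5, 4, 3, 2, 1],
              [6, 5, np.nan, 3, 2],
              [2, 2, 1, 1, 0],
              [7, np.nan, 5, 4, 3]], dtype=float)
print(np.round(svd_impute(X), 1))
```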
Artificial Intelligence techniques are widely used for medical purposes nowadays. One of the crucial applications is cancer detection. Due to the sensitivity of such applications, medical workers and patients interacting with the system must receive reliable, transparent, and explainable output. Therefore, this paper examines the interpretability and explainability of the Logistic Regression Model (LRM) for breast cancer detection. We analyze the accuracy and transparency of the LRM. Additionally, we propose an NLP-based interface with a model interpretability summary and a contrastive explanation for users. Together with textual explanations, we provide a visual aid for medical practitioners to better understand the decision-making process.
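As a minimal illustration, the sketch below trains a logistic regression model on the scikit-learn breast cancer dataset and derives a short textual explanation from the largest per-feature contributions to the logit; the paper's NLP interface and contrastive explanations are considerably richer than this.

```python
# Minimal sketch: a logistic regression model for breast cancer detection with a
# simple coefficient-based textual explanation. Only illustrates the idea; it is
# not the interface or explanation method described in the paper.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(data.data, data.target)

def explain(sample: np.ndarray, top_k: int = 3) -> str:
    lr = model.named_steps["logisticregression"]
    scaled = model.named_steps["standardscaler"].transform(sample.reshape(1, -1))[0]
    contrib = lr.coef_[0] * scaled                      # per-feature contribution to the logit
    top = np.argsort(np.abs(contrib))[::-1][:top_k]
    pred = data.target_names[model.predict(sample.reshape(1, -1))[0]]
    parts = [f"{data.feature_names[i]} ({contrib[i]:+.2f})" for i in top]
    return f"Predicted '{pred}'; most influential features: " + ", ".join(parts)

print(explain(data.data[0]))
```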
In acute lymphoblastic leukaemia (ALL), the bone marrow produces an excess of immature white blood cells. Over 6500 cases of ALL are diagnosed each year, and the trend continues upward. Technological advancements in AI and big data analytics help doctors and radiologists make accurate and efficient clinical decisions. The proposed method consists of two core steps: segmentation and classification based on quantum convolutional networks. A three-dimensional U-Net with 70 layers, trained with optimal hyperparameters, is proposed and achieves a Dice score of 0.98. A four-qubit quantum transfer learning model is proposed for classifying different types of blood cells. The accuracies achieved are 0.99 on blast cells, 0.99 on basophils, 0.98 on eosinophils, 0.97 on neutrophils, 0.99 on lymphocytes, and 0.96 on monocytes. The proposed classification model provides 0.99 average accuracy.
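As a rough sketch of the classification head, the code below builds a four-qubit variational circuit in PennyLane of the kind typically used in quantum transfer learning, where features from a pre-trained network are encoded into qubit rotations; the paper's actual circuit design, feature embedding, and 3D U-Net segmentation are not reproduced.

```python
# Minimal sketch of a four-qubit variational circuit of the kind used as the
# "quantum head" in quantum transfer learning (PennyLane). The embedding and
# entangler layers here are generic choices, not the paper's exact circuit.
import pennylane as qml
import numpy as np

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def quantum_head(features, weights):
    # Encode 4 classical features (e.g. from a pre-trained CNN) into qubit rotations.
    qml.AngleEmbedding(features, wires=range(n_qubits))
    # Trainable entangling layers acting as the classifier.
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

weights = np.random.uniform(0, np.pi, size=(2, n_qubits))   # 2 entangling layers
features = np.array([0.1, 0.5, 0.9, 0.3])
print(quantum_head(features, weights))  # 4 expectation values, fed to a final linear layer
```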
For users to trust planning algorithms, they must be able to understand the planner's outputs and the reasons for each action selection. However, planner output tends not to be user-friendly, often consisting of sequences of parametrised actions or task networks, which may not be practical for non-expert users who find natural language descriptions easier to read. In this paper, we propose PlanVerb, a domain- and planner-independent method for the verbalization of task plans, based on semantic tagging of actions and predicates. Our method can generate natural language descriptions of plans, including causal explanations. The verbalized plans can be summarized by compressing the actions that act on the same parameters. We further extend the concept of verbalization space, previously applied to robot navigation, and apply it to planning to generate different kinds of plan descriptions for different user requirements. Our method can deal with PDDL and RDDL domains, provided that they are tagged accordingly. Our user survey evaluation shows that users can read our automatically generated plan descriptions and that the explanations help them answer questions about the plan.
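As a toy illustration of the verbalization idea, the sketch below maps parametrised actions to sentence templates; the action names and templates are hypothetical, and PlanVerb's semantic tagging, causal explanations, and plan compression go well beyond this.

```python
# Tiny illustration of plan verbalization: mapping parametrised actions to
# sentence templates. The action names and templates are hypothetical examples.
templates = {
    "move": "The robot moves from {0} to {1}.",
    "pick": "It picks up the {0} at {1}.",
    "place": "It places the {0} on the {1}.",
}

plan = [("move", "room1", "kitchen"),
        ("pick", "cup", "kitchen"),
        ("move", "kitchen", "room2"),
        ("place", "cup", "table")]

def verbalize(plan):
    return " ".join(templates[name].format(*args) for name, *args in plan)

print(verbalize(plan))
```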
The continued development of robots has enabled their wider use in human surroundings, and robots are increasingly trusted to make important decisions with potentially critical outcomes. It is therefore essential to consider the ethical principles under which robots operate. In this paper we examine how contrastive and non-contrastive explanations can be used to understand the ethics of robot action plans. We build upon an existing ethical framework to allow users to make suggestions about plans and receive automatically generated contrastive explanations. Results of a user study indicate that the generated explanations help humans to understand the ethical principles that underlie a robot's plan.
The scientific discipline of Computer Vision (CV) is a fast-developing branch of Machine Learning (ML). It addresses various tasks important for robotics, medicine, autonomous driving, surveillance, security, and scene understanding. The development of sensor technologies has enabled the wide use of 3D sensors and has thereby increased the interest of the CV research community in methods for 3D sensor data. This paper outlines seven CV tasks on 3D point cloud data, along with state-of-the-art techniques and datasets. Additionally, we identify key challenges.
Motion planning is a hard problem that can often overwhelm both users and designers, owing to the difficulty of understanding the optimality of a solution or the reasons why a planner fails to find any solution. Inspired by recent work in machine learning and task planning, in this paper we are guided by a vision of developing motion planners that can provide reasons for their output, thus potentially contributing to better user interfaces, debugging tools, and algorithm trustworthiness. Towards this end, we propose a preliminary taxonomy and a set of important considerations for the design of explainable motion planners, based on the analysis of a comprehensive user study of motion planning experts. We identify the kinds of things that need to be explained by motion planners ("explanation objects"), types of explanation, and several procedures required to arrive at explanations. We also elaborate on a set of qualifications and design considerations that should be taken into account when designing explainable methods. These insights help bring the vision of explainable motion planners closer to reality and can serve as a resource for researchers and developers interested in designing such technology.