Amar Halilovic

Universität Ulm

Research Field: Robotics (Computer science)

In robotics, ensuring that autonomous systems are comprehensible and accountable to users is essential for effective human-robot interaction. This paper introduces a novel approach that integrates user-centered design principles directly into the core of robot path planning. We propose a probabilistic framework for the automated planning of explanations for robot navigation, in which the explanation preferences of different users are modeled probabilistically to account for the stochasticity of real-world human-robot interaction and to tailor how the robot communicates its decisions and actions to humans. This approach aims to enhance the transparency of robot path planning and to adapt to diverse user explanation needs by anticipating the types of explanations that will satisfy individual users.
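
To make the idea concrete, here is a minimal sketch in Python, assuming a small discrete model: user preference types get a categorical prior, each candidate explanation is described by attributes, and the planner selects the candidate with the highest expected satisfaction. All attribute names, user types, and numbers are illustrative assumptions, not the paper's implementation.

    # Minimal sketch (illustrative, not the paper's implementation):
    # pick the explanation with the highest expected user satisfaction
    # under a categorical prior over user preference types.

    # Candidate explanations, described by attributes (hypothetical).
    CANDIDATES = [
        {"modality": "textual", "detail": "low"},
        {"modality": "textual", "detail": "high"},
        {"modality": "multimodal", "detail": "high"},
    ]

    # Prior over user types; in practice this would be estimated
    # from interaction data.
    USER_TYPE_PRIOR = {"novice": 0.6, "expert": 0.4}

    # Assumed satisfaction of each user type with each candidate.
    SATISFACTION = {
        ("novice", "textual", "low"): 0.3,
        ("novice", "textual", "high"): 0.5,
        ("novice", "multimodal", "high"): 0.9,
        ("expert", "textual", "low"): 0.7,
        ("expert", "textual", "high"): 0.6,
        ("expert", "multimodal", "high"): 0.5,
    }

    def expected_satisfaction(c):
        """Average satisfaction over the user-type distribution."""
        return sum(p * SATISFACTION[(u, c["modality"], c["detail"])]
                   for u, p in USER_TYPE_PRIOR.items())

    best = max(CANDIDATES, key=expected_satisfaction)
    print(best, round(expected_satisfaction(best), 2))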

To bring robots into human everyday life, their capacity for social interaction must increase. One way for robots to acquire social skills is by assigning them the concept of identity. This research focuses on the concept of Explanation Identity within the broader context of robots' roles in society, particularly their ability to interact socially and explain decisions. Explanation Identity refers to the combination of characteristics and approaches robots use to justify their actions to humans. Drawing from different technical and social disciplines, we introduce Explanation Identity as a multidisciplinary concept and discuss its importance in Human-Robot Interaction. Our theoretical framework highlights the necessity for robots to adapt their explanations to the user's context, demonstrating empathy and ethical integrity. This research emphasizes the dynamic nature of robot identity and guides the integration of explanation capabilities in social robots, aiming to improve user engagement and acceptance.

The choices made by autonomous robots in social settings have consequences for humans and for their expectations of robot behavior. Explanations can mitigate negative impacts on humans and improve their comprehension of robot decisions. We model the process of explanation generation for robot navigation as an automated planning problem that considers different possible explanation attributes. Our visual and textual explanations of a robot's navigation are influenced by the robot's personality; moreover, they account for different contextual, environmental, and spatial characteristics. We present the results of a user study demonstrating that users are more satisfied with multimodal than with unimodal explanations. Additionally, our findings reveal low user satisfaction with the explanations of a robot with extreme personality traits. Finally, we discuss potential future research directions and their associated limitations. Our work advocates socially adept and safe autonomous robot navigation.
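
As a rough illustration of casting explanation generation as planning over explanation attributes, the sketch below enumerates attribute assignments, filters them by contextual constraints, and scores them with a simple personality weight. The attributes, constraints, and scoring function are hypothetical stand-ins, not the paper's model.

    from itertools import product

    # Illustrative sketch: explanation generation as search over
    # explanation attributes, filtered by context and scored by a
    # personality parameter. All names and rules are assumptions.

    ATTRIBUTES = {
        "modality": ["visual", "textual", "multimodal"],
        "timing": ["before_action", "after_action"],
        "duration": ["short", "long"],
    }

    def feasible(plan, context):
        # Assumed constraint: textual-only output is dropped in a
        # noisy environment.
        if context["noisy"] and plan["modality"] == "textual":
            return False
        # Assumed constraint: under time pressure, keep it short.
        if context["time_pressure"] and plan["duration"] == "long":
            return False
        return True

    def score(plan, personality):
        # An extroverted robot is assumed to favor richer, longer
        # explanations; an introverted one the opposite.
        s = 0.0
        if plan["modality"] == "multimodal":
            s += personality["extraversion"]
        if plan["duration"] == "long":
            s += personality["extraversion"] - 0.5
        return s

    context = {"noisy": True, "time_pressure": False}
    personality = {"extraversion": 0.8}

    plans = [dict(zip(ATTRIBUTES, v)) for v in product(*ATTRIBUTES.values())]
    best = max((p for p in plans if feasible(p, context)),
               key=lambda p: score(p, personality))
    print(best)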

Amar Halilovic, Vanchha Chandrayan, Senka Krivic

The decisions made by autonomous robots hold substantial influence over how humans perceive their behavior. One way to alleviate potential negative impressions of such decisions and to enhance human comprehension of them is through explanation. We introduce visual and textual explanations integrated into robot navigation, taking the surrounding environmental context into account. To gauge the effectiveness of our approach, we conducted a comprehensive user study assessing user satisfaction across different forms of explanation representation. Our empirical findings reveal significantly higher user satisfaction with explanations in a multimodal format than with those relying solely on unimodal representations.

Navigation is a must-have skill for any mobile robot. A core challenge in navigation is the need to account for the large number of possible configurations of the environment and navigation contexts. We argue that a mobile robot should be able to explain its navigational choices, making its decisions understandable to humans. In this paper, we briefly present our approach to explaining a robot's navigational decisions through visual and textual explanations. We propose a user study to test the understandability and simplicity of the robot's explanations and outline our further research agenda.

Amar Halilovic, F. Lindner

With the rise in the number of robots in our daily lives, human-robot encounters will become more frequent. To improve human-robot interaction (HRI), people will require explanations of robots' actions, especially if they do something unexpected. Our focus is on robot navigation, where we explain why robots make specific navigational choices. Building on methods from the area of Explainable Artificial Intelligence (XAI), we employ a semantic map and techniques from the area of Qualitative Spatial Reasoning (QSR) to enrich visual explanations with knowledge-level spatial information. We outline how a robot can generate visual and textual explanations simultaneously and test our approach in simulation.
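
The following simplified sketch illustrates the QSR idea: deriving a qualitative direction and proximity relation between the robot and a labeled object from a semantic map, then verbalizing it as a textual explanation. The relation definitions, thresholds, and poses are assumptions for illustration, not the paper's calculus.

    import math

    # Simplified QSR illustration: qualitative direction and distance
    # of a labeled map object relative to the robot, then verbalized.
    # Relation definitions and thresholds are assumptions.

    def qualitative_relation(robot_pose, obj_pos, near_threshold=1.5):
        """Return (direction, proximity) of an object w.r.t. the robot.

        robot_pose: (x, y, heading_radians); obj_pos: (x, y).
        """
        dx = obj_pos[0] - robot_pose[0]
        dy = obj_pos[1] - robot_pose[1]
        # Bearing of the object relative to the robot's heading,
        # wrapped to [-pi, pi].
        bearing = math.atan2(dy, dx) - robot_pose[2]
        bearing = math.atan2(math.sin(bearing), math.cos(bearing))
        if abs(bearing) < math.pi / 6:
            direction = "in front of"
        elif bearing > 0:
            direction = "to the left of"
        else:
            direction = "to the right of"
        proximity = "near" if math.hypot(dx, dy) < near_threshold else "far from"
        return direction, proximity

    # Hypothetical semantic-map entry: a table at (1.0, 0.8).
    direction, proximity = qualitative_relation((0.0, 0.0, 0.0), (1.0, 0.8))
    print(f"I am replanning because a table is {direction} me and {proximity} me.")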

Amar Halilovic, Nedim Zaimovic, Hamid Reza Feyzmahdavian, T. Seceleanu

The greater the number of devices on a network, the higher the load on the network, the greater the chance of a collision occurring, and the longer it takes to transmit a message. The size of the load can be determined by measuring network occupancy; hence, it is desirable to minimize the latter. In this paper, we present an approach for minimizing network occupancy by optimizing the packing process while satisfying multiple constraints. We formulate the minimization problem as a bin packing problem and implement a modification of the Best-Fit Decreasing algorithm to find the optimal solution. The approach considers grouping signals that are sent to different destinations in the same package. The analysis is done on a medium-sized plant model, and different topologies are tested. The results show that the proposed solution lowers the network occupancy compared to a reference case.
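
For reference, the textbook Best-Fit Decreasing baseline looks roughly like the sketch below; the paper's modified variant (including grouping signals for different destinations in one package under multiple constraints) is not reproduced here. Signal sizes and packet capacity are illustrative.

    # Textbook Best-Fit Decreasing: sort items by size (descending),
    # then place each into the open bin with the least remaining
    # space that still fits it, opening a new bin otherwise.

    def best_fit_decreasing(sizes, capacity):
        """Pack item sizes into bins of `capacity`; return the bins."""
        bins = []   # packed item sizes per bin
        free = []   # remaining capacity per bin
        for size in sorted(sizes, reverse=True):
            candidates = [i for i, f in enumerate(free) if f >= size]
            if candidates:
                # Best fit: smallest remaining space that still fits.
                i = min(candidates, key=lambda i: free[i])
                bins[i].append(size)
                free[i] -= size
            else:
                bins.append([size])
                free.append(capacity - size)
        return bins

    # Example: signal payload sizes in bytes, 100-byte packet payload.
    print(best_fit_decreasing([70, 40, 30, 20, 20, 10], capacity=100))
    # -> [[70, 30], [40, 20, 20, 10]]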

...
...
...
