Publications (44)

M. Brandão, Gerard Canal, Senka Krivic, P. Luff, A. Coles

Motion planning is a hard problem that can often overwhelm both users and designers, due to the difficulty in understanding the optimality of a solution, or the reasons for a planner failing to find any solution. Inspired by recent work in machine learning and task planning, in this paper we are guided by a vision of developing motion planners that can provide reasons for their output—thus potentially contributing to better user interfaces, debugging tools, and algorithm trustworthiness. Towards this end, we propose a preliminary taxonomy and a set of important considerations for the design of explainable motion planners, based on the analysis of a comprehensive user study of motion planning experts. We identify the kinds of things that need to be explained by motion planners ("explanation objects"), types of explanation, and several procedures required to arrive at explanations. We also elaborate on a set of qualifications and design considerations that should be taken into account when designing explainable methods. These insights contribute to bringing the vision of explainable motion planners closer to reality, and can serve as a resource for researchers and developers interested in designing such technology.

M. Brandão, Gerard Canal, Senka Krivic, D. Magazzeni

Recent research in AI ethics has put forth explainability as an essential principle for AI algorithms. However, it is still unclear how this is to be implemented in practice for specific classes of algorithms—such as motion planners. In this paper we unpack the concept of explanation in the context of motion planning, introducing a new taxonomy of kinds and purposes of explanations in this context. We focus not only on explanations of failure (previously addressed in motion planning literature) but also on contrastive explanations—which explain why a trajectory A was returned by a planner, instead of a different trajectory B expected by the user. We develop two explainable motion planners, one based on optimization, the other on sampling, which are capable of answering failure and contrastive questions. We use simulation experiments and a user study to motivate a technical and social research agenda.
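
As a rough illustration of the contrastive re-plan-and-compare idea (not the paper's optimization- or sampling-based planners), the toy Python sketch below answers a question of the form "why does the trajectory not pass through B?" on a grid map by planning with and without a user-chosen waypoint and comparing the resulting costs; the grid, start, goal, and waypoint are all made up.

```python
import heapq

def shortest_path(grid, start, goal):
    """Plain Dijkstra on a 4-connected grid; cells with value 1 are obstacles.
    Returns (cost, path) or (inf, None) if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    frontier = [(0, start)]
    best = {start: 0}
    parent = {start: None}
    while frontier:
        cost, cell = heapq.heappop(frontier)
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return cost, path[::-1]
        if cost > best[cell]:
            continue  # stale queue entry
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and cost + 1 < best.get(nxt, float("inf")):
                best[nxt] = cost + 1
                parent[nxt] = cell
                heapq.heappush(frontier, (cost + 1, nxt))
    return float("inf"), None

def why_not_through(grid, start, goal, waypoint):
    """Contrastive query: why does the returned path not go through `waypoint`?
    Answered by re-planning with the waypoint enforced and comparing costs."""
    cost_a, _ = shortest_path(grid, start, goal)
    c1, p1 = shortest_path(grid, start, waypoint)
    c2, p2 = shortest_path(grid, waypoint, goal)
    if p1 is None or p2 is None:
        return f"No feasible trajectory passes through {waypoint}."
    cost_b = c1 + c2
    return (f"The returned trajectory costs {cost_a}; forcing it through "
            f"{waypoint} raises the cost to {cost_b} (+{cost_b - cost_a}).")

# Made-up 4x4 world: 1 = obstacle.
grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 0]]
print(why_not_through(grid, start=(0, 0), goal=(3, 3), waypoint=(2, 2)))
```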

Benjamin Krarup, Senka Krivic, D. Magazzeni, D. Long, Michael Cashmore, David E. Smith

In automated planning, the need for explanations arises when there is a mismatch between a proposed plan and the user’s expectation. We frame Explainable AI Planning as an iterative plan exploration process, in which the user asks a succession of contrastive questions that lead to the generation and solution of hypothetical planning problems that are restrictions of the original problem. The object of the exploration is for the user to understand the constraints that govern the original plan and, ultimately, to arrive at a satisfactory plan. We present the results of a user study that demonstrates that when users ask questions about plans, those questions are usually contrastive, i.e. “why A rather than B?”. We use the data from this study to construct a taxonomy of user questions that often arise during plan exploration. Our approach to iterative plan exploration is a process of successive model restriction. Each contrastive user question imposes a set of constraints on the planning problem, leading to the construction of a new hypothetical planning problem as a restriction of the original. Solving this restricted problem results in a plan that can be compared with the original plan, admitting a contrastive explanation. We formally define model-based compilations in PDDL2.1 for each type of constraint derived from a contrastive user question in the taxonomy, and empirically evaluate the compilations in terms of computational complexity. The compilations were implemented as part of an explanation framework supporting iterative model restriction. We demonstrate its benefits in a second user study.
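
A minimal, planner-agnostic sketch (not the authors' PDDL2.1 compilations) of the iterative model-restriction loop described above: each contrastive question is treated as an opaque constraint, the restricted hypothetical problem is re-solved by whatever planner the caller supplies, and the original and constrained plans are shown side by side. The `PlanningProblem` class and the `solve` callable are illustrative stand-ins, not part of any real planning library.

```python
from dataclasses import dataclass, field
from typing import Callable, List

Plan = List[str]

@dataclass
class PlanningProblem:
    """Abstract stand-in for a planning model; constraints are opaque strings here."""
    domain: str
    constraints: List[str] = field(default_factory=list)

    def restrict(self, constraint: str) -> "PlanningProblem":
        # A contrastive question ("why A rather than B?") compiles into a
        # constraint, yielding a restriction of the original problem.
        return PlanningProblem(self.domain, self.constraints + [constraint])

def explore(problem: PlanningProblem,
            questions: List[str],
            solve: Callable[[PlanningProblem], Plan]) -> Plan:
    """Iterative plan exploration: each user question restricts the model,
    the restricted problem is re-solved, and the two plans are compared."""
    plan = solve(problem)
    for q in questions:
        problem = problem.restrict(q)      # hypothetical planning problem
        hypothetical = solve(problem)      # plan satisfying the new constraint
        print(f"Q: {q}\n  original plan:    {plan}\n  constrained plan: {hypothetical}")
        plan = hypothetical                # the user may iterate from here
    return plan
```

Any function that maps a `PlanningProblem` to an action sequence can be passed as `solve`, e.g. a thin wrapper that writes the constraints into a problem file and calls an external planner.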

Gerard Canal, Senka Krivic, P. Luff, A. Coles

To increase user trust in planning algorithms, users must be able to understand the output of the planner while getting some notion of the underlying reasons for the action selection. The output of task planners has not traditionally been user-friendly, often consisting of sequences of parametrised actions or task networks, which may not be practical for lay and non-expert users who may find it easier to read natural language descriptions. In this paper, we propose PlanVerb, a domain- and planner-independent method for the verbalization of task plans based on semantic tagging of the actions and predicates. Our method can generate natural language descriptions of plans, including explanations of causality between actions. The verbalized plans can be summarized by compressing the actions that act on the same parameters. We further extend the concept of verbalization space, previously applied to robot navigation, and apply it to planning to generate different kinds of plan descriptions depending on the needs or preferences of the user. Our method can deal with PDDL and RDDL domains, provided that they are tagged accordingly. We evaluate our results with a user survey that shows that users can read our automatically generated plan descriptions, and are able to successfully answer questions about the plan. We believe methods like the one we propose can be used to foster trust in planning algorithms in a wide range of domains and applications.
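
A toy illustration (not PlanVerb itself) of verbalizing a plan from per-action templates and compressing repeated actions. The real method derives its sentences from semantic tags on the domain's actions and predicates and summarizes over shared parameters; the templates, action names, and the grouping rule below are invented for the example.

```python
from itertools import groupby

# Illustrative verb templates keyed by action name; a stand-in for the
# sentences a tagged PDDL/RDDL domain would yield.
TEMPLATES = {
    "move":  "the robot moves from {0} to {1}",
    "pick":  "the robot picks up {0} at {1}",
    "place": "the robot places {0} at {1}",
}

def verbalize(plan):
    """Turn a list of (action, params) pairs into sentences, compressing
    consecutive steps with the same action name (a crude stand-in for
    parameter-based plan summarisation)."""
    sentences = []
    for name, group in groupby(plan, key=lambda step: step[0]):
        steps = list(group)
        if len(steps) == 1:
            sentences.append(TEMPLATES[name].format(*steps[0][1]))
        else:
            objs = ", ".join(s[1][0] for s in steps)
            sentences.append(f"the robot repeats '{name}' for {objs}")
    return "First, " + ". Then ".join(sentences) + "."

plan = [("move", ("kitchen", "pantry")),
        ("pick", ("cup", "pantry")),
        ("pick", ("plate", "pantry")),
        ("move", ("pantry", "kitchen")),
        ("place", ("cup", "kitchen"))]
print(verbalize(plan))
```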

Senka Krivic, Michael Cashmore, D. Magazzeni, S. Szedmák, J. Piater

We present a novel approach for decreasing state uncertainty in planning prior to solving the planning problem. This is done by making predictions about the state based on currently known information, using machine learning techniques. For domains where uncertainty is high, we define an active learning process for identifying which information, once sensed, will best improve the accuracy of predictions. We demonstrate that an agent is able to solve problems with uncertainties in the state with less planning effort compared to standard planning techniques. Moreover, agents can solve problems for which they could not find valid plans without using predictions. Experimental results also demonstrate that using our active learning process for identifying information to be sensed leads to gathering information that improves the prediction process.
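
The Python sketch below illustrates the general idea under simplifying assumptions that are not from the paper: unknown facts are filled in from a learned model's predicted probabilities so planning can begin, and the fact chosen for sensing is simply the one the model is least certain about (maximum predictive entropy), a crude stand-in for the paper's active learning criterion. The facts and probabilities are made up.

```python
import math
from typing import Dict

def entropy(p: float) -> float:
    """Binary entropy of the predicted probability that a fact holds."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def complete_state(known: Dict[str, bool],
                   predictions: Dict[str, float],
                   threshold: float = 0.5) -> Dict[str, bool]:
    """Fill unknown facts with predicted truth values so planning can start
    before everything has been sensed (predictions come from any learned model)."""
    state = dict(known)
    for fact, p in predictions.items():
        state.setdefault(fact, p >= threshold)
    return state

def next_fact_to_sense(predictions: Dict[str, float]) -> str:
    """Active-sensing heuristic: pick the fact with maximum predictive entropy,
    i.e. the one the model is least sure about."""
    return max(predictions, key=lambda f: entropy(predictions[f]))

# Toy example with invented facts and probabilities.
known = {"door_a_open": True}
predictions = {"door_b_open": 0.92, "box_on_table": 0.55, "light_on": 0.70}
print(complete_state(known, predictions))
print("sense next:", next_fact_to_sense(predictions))
```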

Gerard Canal, R. Borgo, A. Coles, Archie Drake, D. Huynh, P. Keller, Senka Krivic, Paul Luff et al.

Benjamin Krarup, Senka Krivic, F. Lindner, D. Long

The development of robotics and AI agents has enabled their wider usage in human surroundings. AI agents are increasingly trusted to make important decisions with potentially critical outcomes. It is therefore essential to consider the ethical consequences of the decisions made by these systems. In this paper, we present how contrastive explanations can be used for comparing the ethics of plans. We build upon an existing ethical framework to allow users to suggest changes to plans and receive contrastive explanations.

Michael Cashmore, Anna Collins, Benjamin Krarup, Senka Krivic, D. Magazzeni, David Smith

Explainable AI is an important area of research within which Explainable Planning is an emerging topic. In this paper, we argue that Explainable Planning can be designed as a service -- that is, as a wrapper around an existing planning system that utilises the existing planner to assist in answering contrastive questions. We introduce a prototype framework to facilitate this, along with some examples of how a planner can be used to address certain types of contrastive questions. We discuss the main advantages and limitations of such an approach, and we identify open questions for Explainable Planning as a service that point to several possible research directions.

Gerard Canal, Michael Cashmore, Senka Krivic, G. Alenyà, D. Magazzeni, C. Torras

Michael Cashmore, Anna Collins, Benjamin Krarup, Senka Krivic, D. Magazzeni, David E. Smith

Explainable AI is an important area of research within which Explainable Planning is an emerging topic. In this paper, we argue that Explainable Planning can be designed as a service – that is, as a wrapper around an existing planning system that utilises the existing planner to assist in answering contrastive questions. We introduce a framework to facilitate this, along with some examples of how a planner can be used to address certain types of contrastive questions. We discuss the main advantages and limitations of such an approach, and we identify open questions for Explainable Planning as a service that point to several possible research directions.

Senka Krivic, J. Piater

Pushing is a common task in robotic scenarios. In real-world environments, robots need to manipulate various unknown objects without previous experience. We propose a data-driven approach for learning local inverse models of robot-object interaction for push manipulation. The robot makes observations of the object behaviour on the fly and adapts its movement direction. The proposed model is probabilistic, and we update it using maximum a posteriori (MAP) estimation. We test our method by pushing objects with a holonomic mobile robot base. Validation of results over a diverse object set demonstrates a high degree of robustness and a high success rate in pushing objects towards a fixed target and along a path compared to previous methods. Moreover, based on learned inverse models, the robot can learn object properties and distinguish between different object behaviours when they are pushed from different sides.
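
A minimal sketch of the flavour of model described above, under a drastically simplified setting that is not the paper's: the local inverse model is reduced to a single angular offset between the commanded push direction and the observed object motion, with a Gaussian prior and Gaussian observation noise so the MAP update has a closed form.

```python
import math

class LocalInverseModel:
    """Toy probabilistic local inverse model for pushing: one angular offset
    (radians) between commanded push direction and observed object motion,
    updated by a conjugate Gaussian (MAP) rule. Illustrative only; the
    paper's model is richer than a single scalar."""

    def __init__(self, prior_mean=0.0, prior_var=1.0, obs_var=0.1):
        self.mean = prior_mean      # current MAP estimate of the offset
        self.var = prior_var        # posterior variance of the offset
        self.obs_var = obs_var      # assumed observation noise variance

    def update(self, push_dir: float, observed_motion_dir: float) -> None:
        """MAP update from one on-the-fly observation of object behaviour."""
        offset = self._wrap(observed_motion_dir - push_dir)
        gain = self.var / (self.var + self.obs_var)
        self.mean = self._wrap(self.mean + gain * self._wrap(offset - self.mean))
        self.var = (1 - gain) * self.var

    def push_direction_for(self, desired_motion_dir: float) -> float:
        """Inverse model: direction to push so the object moves as desired."""
        return self._wrap(desired_motion_dir - self.mean)

    @staticmethod
    def _wrap(angle: float) -> float:
        # Keep angles in (-pi, pi].
        return math.atan2(math.sin(angle), math.cos(angle))

model = LocalInverseModel()
model.update(push_dir=0.0, observed_motion_dir=0.3)        # object drifted left
print(model.push_direction_for(desired_motion_dir=0.0))    # push right to compensate
```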
