Jakob Karalus, Amar Halilovic, Felix Lindner

Explanations In, Explanations Out: Human-in-the-Loop Social Navigation Learning

Social navigation is a desirable capability of every mobile robot that operates in a human-populated environment. A core challenge is the need to account for a vast number of possible configurations of social spaces, interaction contexts, and individual preferences. We assert that a mobile robot should be able to explain its navigational choices to humans in terms of the social aspects of a situation. This way, humans can challenge the robot's decisions and give corrective feedback. In our approach, explanations are first-class citizens employed both as inputs and as outputs. As inputs, explanations enhance human feedback ("Robot, you should not go this way, because ..."). Our preliminary results indicate that allowing humans to formulate explanations as feedback can speed up training. As outputs, explanations make navigational decisions transparent to humans. This way, humans can verify that the learnt model incorporates the intended social norm. We show how explanation-generation methods known from the explainable AI (XAI) community can be adopted for this task. We sketch the project in its early stage and point out planned research directions.
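The idea that explanation-augmented feedback can speed up training can be illustrated with a minimal sketch. The abstract does not specify the learning algorithm, so everything below is an assumption for illustration: hypothetical grid states tagged with social features, and a simple value-update rule in which a feedback explanation (e.g. "because personal space") lets a single correction generalise to all states sharing the named feature, instead of penalising only the one state the human observed.

```python
# Hypothetical mapping from navigation states (grid cells) to social
# features; the feature names and update rule are illustrative
# assumptions, not the authors' method.
FEATURES = {
    (0, 1): {"personal_space"},   # cell close to a person
    (1, 1): {"personal_space"},   # another cell close to a person
    (2, 0): set(),                # socially neutral cell
}

def apply_feedback(values, state, penalty, explanation=None):
    """Penalise a state; if the human's feedback carries an explanation
    naming a social feature, generalise the penalty to every state that
    shares that feature."""
    targets = [state]
    if explanation is not None:
        targets = [s for s, feats in FEATURES.items() if explanation in feats]
    for s in targets:
        values[s] = values.get(s, 0.0) + penalty
    return values

# Plain feedback touches only the observed state ...
v_plain = apply_feedback({}, (0, 1), -1.0)

# ... while explanation-augmented feedback ("because personal space")
# propagates the same correction to every state with that feature,
# so fewer human interventions are needed.
v_expl = apply_feedback({}, (0, 1), -1.0, explanation="personal_space")
```

Under these assumptions, one explained correction updates two states instead of one, which is the intuition behind the reported training speed-up.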
