16. 3. 2026.

Simulating Preference-Aware Robot Explanations: A Probabilistic Perspective on When and How to Explain

Robots increasingly provide explanations to support transparency in Human-Robot Interaction (HRI), yet users differ widely in how much explanation they prefer and when it is appropriate. We present a lightweight simulation framework in which a robot selects among explanation policies that range from providing no explanation, through norm-based and preference-based policies, to a Bayesian Adaptive (BA) policy that learns user preferences online while respecting normative expectations. Using synthetic user archetypes, we evaluate how these policies trade off utility, alignment, explanation cost, and regret. Results show that BA consistently achieves low regret for individual users while maintaining strong utility and alignment across diverse user archetypes. These findings motivate preference-aware, uncertainty-driven explanation mechanisms for robust, adaptive robot communication in heterogeneous HRI settings.
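The abstract does not specify the BA policy's internals, but the core idea of learning an explanation preference online can be sketched with standard tools. The toy below is a hypothetical illustration, not the paper's implementation: it maintains a Beta posterior over a synthetic user's probability of appreciating an explanation, decides whether to explain via Thompson sampling, and reports average regret against an oracle that knows the true preference. The archetype names and the 0.5 decision threshold are assumptions for the sketch.

```python
import random


class BayesianAdaptivePolicy:
    """Toy Bayesian Adaptive explanation policy (illustrative sketch only).

    Maintains a Beta(alpha, beta) posterior over the probability that the
    current user appreciates an explanation, and updates it from feedback.
    """

    def __init__(self):
        self.alpha = 1.0  # prior pseudo-count: positive reactions to explaining
        self.beta = 1.0   # prior pseudo-count: negative reactions to explaining

    def decide(self, rng):
        # Thompson sampling: explain iff a posterior draw exceeds 0.5.
        return rng.betavariate(self.alpha, self.beta) > 0.5

    def update(self, explained, liked):
        # Preference feedback is only observed on steps where we explained.
        if explained:
            if liked:
                self.alpha += 1.0
            else:
                self.beta += 1.0


def simulate(pref, n_steps=500, seed=0):
    """Simulate one synthetic user archetype whose true probability of
    appreciating an explanation is `pref`; return average per-step regret
    versus an oracle that knows `pref`."""
    rng = random.Random(seed)
    policy = BayesianAdaptivePolicy()
    oracle_explains = pref > 0.5
    regret = 0.0
    for _ in range(n_steps):
        explain = policy.decide(rng)
        liked = rng.random() < pref
        # Utility is 1 when the action matches what the user wanted this step.
        utility = 1.0 if explain == liked else 0.0
        oracle_utility = 1.0 if oracle_explains == liked else 0.0
        regret += oracle_utility - utility
        policy.update(explain, liked)
    return regret / n_steps


if __name__ == "__main__":
    # Hypothetical archetypes: explanation-averse, mixed, explanation-seeking.
    for name, pref in [("averse", 0.1), ("mixed", 0.5), ("seeking", 0.9)]:
        print(f"{name}: avg regret {simulate(pref):.3f}")
```

For users with strong preferences either way, the posterior concentrates after a handful of interactions, so the policy's per-step regret decays toward zero, which is the qualitative behavior the abstract reports for BA.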
