AI agents such as robots are often unaware of their own competence in varying situations. Yet it is critically important to know whether the AI can be trusted in the current situation. We propose a method that enables a robot to assess the competence of its own AI model.

To assess its competence in the current situation, the robot asks the human for feedback on how it is performing. The robot can then generalize this feedback to other, related situations. If the robot knows it is incompetent, it can switch to alternative ways of performing the task, such as asking the human for assistance. A minimal sketch of this idea is given below.
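
The sketch below is not the published method; it only illustrates the general idea under stated assumptions: human feedback is stored per situation, generalized to related situations by feature similarity, and the robot asks for assistance when its estimated competence falls below a threshold. All class names, features, and parameter values are hypothetical.

```python
# Illustrative sketch (not the published method): competence self-assessment
# from human feedback, generalized to related situations by feature similarity.
import math

class CompetenceAssessor:
    def __init__(self, bandwidth=1.0, threshold=0.6):
        self.records = []           # list of (situation_features, feedback in [0, 1])
        self.bandwidth = bandwidth  # controls how far feedback generalizes
        self.threshold = threshold  # below this, the robot asks for assistance

    def add_feedback(self, situation, feedback):
        """Store human feedback (1.0 = performed well, 0.0 = failed) for a situation."""
        self.records.append((situation, feedback))

    def estimate(self, situation):
        """Similarity-weighted average of past feedback; None if no evidence applies."""
        weights, total = 0.0, 0.0
        for past, feedback in self.records:
            dist = math.dist(situation, past)
            w = math.exp(-(dist ** 2) / (2 * self.bandwidth ** 2))
            weights += w
            total += w * feedback
        return total / weights if weights > 0 else None

    def act(self, situation):
        """Perform the task autonomously only if estimated competence is high enough."""
        competence = self.estimate(situation)
        if competence is None or competence < self.threshold:
            return "ask human for assistance"
        return "perform task autonomously"

# Example: feedback from bright, uncluttered situations generalizes to a similar
# situation, while a very different (dark, cluttered) one triggers a request for help.
assessor = CompetenceAssessor()
assessor.add_feedback((0.9, 0.1), 1.0)   # hypothetical features (lighting, clutter)
assessor.add_feedback((0.8, 0.2), 1.0)
assessor.add_feedback((0.2, 0.9), 0.0)
print(assessor.act((0.85, 0.15)))        # -> perform task autonomously
print(assessor.act((0.15, 0.95)))        # -> ask human for assistance
```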

This is joint work with A.G. (Albert) Huizing, M.A. (Mark) Neerincx, and the Controllable AI team of TNO’s research program Hybrid AI. It was published at the ACM/IEEE International Conference on Human-Robot Interaction (HRI) 2020 (see Publications).