Explainable AI

Causal explanation of autonomous vehicle decision-making

Artificial Intelligence (AI) shows promise for perception and planning tasks in autonomous driving (AD) due to its superior performance compared to conventional methods. However, the opacity of these AI systems exacerbates the existing challenge of assuring the safety of AD. One way to mitigate this challenge is to use explainable AI (XAI) techniques.

To this end, this project aims to develop novel methods to explain AI-driven decisions in AD to various stakeholders in intelligible language. So far, we have achieved the following milestones in explaining Monte Carlo Tree Search (MCTS)-based decision-making:

1. A human-centric method for generating causal explanations in natural language for autonomous vehicle motion planning [paper][video]

This work aims to explain IGP2, which uses reinforcement learning for AD decision-making. Information gathered during the MCTS search process is used to build a Bayesian network model. Based on this model, causal relationships relevant to a user's query are extracted and then converted into natural language.
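
To give a flavour of this pipeline, the following is a minimal, hypothetical sketch of how statistics collected from MCTS rollouts can be turned into conditional probabilities and contrastive "why" sentences. The variable names (`action`, `collision`, `reaches_goal`), the rollout format, and the query logic are illustrative assumptions, not the actual IGP2 or Bayesian network implementation used in the paper.

```python
# Hypothetical sketch: from MCTS rollout statistics to a simple
# conditional-probability model and a contrastive natural-language answer.
from collections import defaultdict

# Each record is one MCTS rollout: the macro action taken at the root and
# discretised outcome variables observed along the simulated trajectory.
rollouts = [
    {"action": "change_lane_left", "collision": False, "reaches_goal": True},
    {"action": "change_lane_left", "collision": False, "reaches_goal": True},
    {"action": "continue",         "collision": True,  "reaches_goal": False},
    {"action": "continue",         "collision": False, "reaches_goal": True},
]

def conditional_prob(records, target, value, given):
    """Estimate P(target = value | given) by counting over rollouts."""
    matching = [r for r in records if all(r[k] == v for k, v in given.items())]
    if not matching:
        return 0.0
    return sum(r[target] == value for r in matching) / len(matching)

def explain_action(records, chosen_action, outcome="reaches_goal"):
    """Contrast the chosen action against each alternative on an outcome variable."""
    actions = {r["action"] for r in records}
    sentences = []
    for other in actions - {chosen_action}:
        p_chosen = conditional_prob(records, outcome, True, {"action": chosen_action})
        p_other = conditional_prob(records, outcome, True, {"action": other})
        sentences.append(
            f"'{chosen_action}' was preferred over '{other}' because "
            f"P({outcome}) is {p_chosen:.2f} compared with {p_other:.2f}."
        )
    return sentences

for sentence in explain_action(rollouts, "change_lane_left"):
    print(sentence)
```

In the paper, the Bayesian network captures richer dependencies between actions, predicted agent behaviour, and outcomes; the sketch above only illustrates the general idea of answering a contrastive query from search statistics.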

2. Causal Social Explanations for Stochastic Sequential Multi-Agent Decision-Making [paper]

Because the previous method does not capture low-level causal relationships, we extend it by defining causal attributions and using logistic regression to identify the salient causes, producing more natural explanations.
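
As a rough illustration of the salient-cause selection step, the sketch below fits a logistic regression over candidate causal attributions and keeps the attributions with the largest coefficient magnitudes. The feature names, data, and the top-2 cutoff are made-up assumptions for illustration; the paper defines causal attributions over the multi-agent decision problem rather than hand-labelled binary features.

```python
# Hypothetical sketch: selecting salient causes with logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Rows: sampled scenarios; columns: candidate causal attributions
# (illustrative names only).
feature_names = ["oncoming_slows", "gap_left_lane", "light_red"]
X = np.array([
    [1, 1, 0],
    [0, 1, 0],
    [1, 0, 1],
    [0, 0, 1],
    [1, 1, 0],
    [0, 0, 0],
])
# 1 if the planner chose the action being explained in that scenario, else 0.
y = np.array([1, 1, 0, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# Rank attributions by coefficient magnitude and keep the strongest ones
# as the "salient causes" to verbalise in the explanation.
weights = model.coef_[0]
ranked = sorted(zip(feature_names, weights), key=lambda kv: abs(kv[1]), reverse=True)
salient_causes = [name for name, w in ranked[:2]]
print("Salient causes:", salient_causes)
```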

To evaluate our method's performance, we also conducted a user study. The collected data can be found here if you are interested in human explanations for AD decision-making. Our work is also featured at FIVE AI.