Explainable Artificial Intelligence (XAI) is a broad field of Artificial Intelligence (AI) that concerns the ability of AI systems not only to make decisions but also to explain those decisions.
As AI becomes ever more embedded in the decision-making that affects our lives, we face a dangerous situation: we are no longer able to determine why crucial decisions are being made, verify that they are being made for the right reasons, or have any confidence in our ability to detect and correct errors.
Our work in XAI covers three vital areas of this problem.
- The definition of explanation itself. We are developing novel semantics to map the explanatory requirements of different applications and users to the explanatory capabilities of different AI techniques. A minimal sketch of one such requirement-to-capability mapping appears after this list.
- Explaining Machine Learned Policies for Goal Reasoning. As systems of multiple intelligent agents make collaborative decisions about which goals to pursue, it becomes necessary to perform Goal Reasoning to decide how goals progress through the Goal Lifecycle and are developed and assigned to different agents (a simplified lifecycle sketch appears after this list). We are working with the US Naval Research Laboratory and the University of New South Wales on Machine Learning (ML) techniques that can perform this reasoning and, at the same time, explain how they make their decisions. Since 2017, the consortium has received $180,000 USD in funding to support this work.
- Extracting practical, informative explanations from real-world decision trees. Decision trees are often touted as being “transparent” or “explainable”. Yes, they can express their decisions as rules, but how informative are these basic, rule-based explanations? We are developing theoretical techniques and code infrastructure to extract, curate and present a wide variety of explanations from decision trees that go beyond simply dumping out their rules (the baseline rule-extraction sketch after this list illustrates the starting point). Some of these techniques also apply to other modelling approaches, including certain types of deep neural networks.
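The sketch below illustrates the kind of requirement-to-capability mapping described in the first item. It is a minimal, hypothetical example: the user roles, requirement labels, technique names and subset-matching rule are assumptions chosen for illustration, not the semantics under development.

```python
# Hypothetical requirement-to-capability matching; names and matching rule are
# illustrative only, not the project's actual semantics.
REQUIREMENTS = {
    "clinician": {"counterfactual", "feature-importance"},
    "auditor": {"rule-trace", "provenance"},
}
CAPABILITIES = {
    "decision tree": {"rule-trace", "feature-importance", "counterfactual"},
    "deep neural network": {"feature-importance"},
}

def suitable_techniques(user: str) -> list[str]:
    """Return techniques whose explanatory capabilities cover the user's requirements."""
    needed = REQUIREMENTS[user]
    return [t for t, caps in CAPABILITIES.items() if needed <= caps]

print(suitable_techniques("clinician"))   # ['decision tree']
print(suitable_techniques("auditor"))     # [] -- no technique covers provenance here
```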
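The next sketch relates to the Goal Reasoning work. It assumes a simplified, linear set of lifecycle stages and a naive agent-assignment rule; the actual Goal Lifecycle and the machine-learned policies used in the project are richer. The point it illustrates is recording a human-readable reason alongside every stage transition, so the decision and its explanation travel together.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical, simplified lifecycle stages; the project's actual Goal Lifecycle differs.
STAGES = ["FORMULATED", "SELECTED", "EXPANDED", "COMMITTED", "DISPATCHED", "FINISHED"]

@dataclass
class Goal:
    name: str
    stage: str = "FORMULATED"
    assigned_agent: Optional[str] = None
    trace: List[str] = field(default_factory=list)   # human-readable explanation trace

def advance(goal: Goal, agents: List[str], reason: str) -> None:
    """Move a goal to its next lifecycle stage, recording why the transition happened."""
    idx = STAGES.index(goal.stage)
    if idx + 1 >= len(STAGES):
        return                                        # already finished
    next_stage = STAGES[idx + 1]
    if next_stage == "DISPATCHED":
        goal.assigned_agent = agents[0]               # naive assignment, for illustration only
        reason += f"; assigned to {goal.assigned_agent}"
    goal.trace.append(f"{goal.stage} -> {next_stage}: {reason}")
    goal.stage = next_stage

g = Goal("survey-area-7")
advance(g, ["uav-1", "uav-2"], "highest estimated value among pending goals")
advance(g, ["uav-1", "uav-2"], "a feasible plan exists within the resource budget")
print("\n".join(g.trace))
```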
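Finally, a sketch of the baseline rule-based explanation that the decision-tree work aims to go beyond: walking the decision path of a scikit-learn tree for a single instance and printing the threshold tests it passed. The dataset and model parameters are arbitrary choices for illustration, not the project's infrastructure.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(iris.data, iris.target)

x = iris.data[50:51]                       # a single instance to explain
node_path = clf.decision_path(x)           # sparse matrix of the nodes this instance visits
leaf = clf.apply(x)[0]
feature, threshold = clf.tree_.feature, clf.tree_.threshold

print("Rule for this prediction:")
for node in node_path.indices:
    if node == leaf:
        continue                           # leaf nodes carry no test
    f = feature[node]
    op = "<=" if x[0, f] <= threshold[node] else ">"
    print(f"  {iris.feature_names[f]} {op} {threshold[node]:.2f}")
print("=> predicted class:", iris.target_names[clf.predict(x)[0]])
```

This prints exactly the kind of basic rule dump the bullet above questions: it states which thresholds were crossed, but says nothing about how close the instance was to a different outcome or which conditions mattered most.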