

Explainable Artificial Intelligence (XAI)

Explainable Artificial Intelligence (XAI) is a broad field of Artificial Intelligence (AI) that concerns the ability of AI systems not only to make decisions but also to explain those decisions.

As AI becomes ever more embedded in the decision-making that affects our lives, we risk a dangerous situation in which we can no longer determine why crucial decisions are being made, verify that they are being made for the right reasons, or have any confidence in our ability to detect and correct errors effectively.

Our work in XAI addresses three vital aspects of this problem:

  • The definition of explanation itself. We are developing novel semantics to map the explanatory requirements of different applications and users onto the explanatory capabilities of different AI techniques.
  • Explaining Machine Learned Policies for Goal Reasoning. As systems of multiple intelligent agents make collaborative decisions about which goals to pursue, Goal Reasoning is needed to decide how goals progress through the Goal Lifecycle and are developed and assigned to different agents. We are working with the US Naval Research Laboratory and the University of New South Wales on Machine Learning (ML) techniques that can perform this reasoning and, at the same time, explain how they make their decisions (the first sketch after this list illustrates the idea). Since 2017, the consortium has received US$180,000 in funding to support this work.
  • Extracting practical, informative explanations from real-world decision trees. Decision trees are often touted as being “transparent” or “explainable”. Yes, they can express their decisions as rules, but how informative are these basic, rule-based explanations? We are developing theoretical techniques and code infrastructure to extract, curate and present a wide variety of explanations from decision trees that go beyond simply dumping out their rules (the second sketch after this list illustrates the starting point). Some of these techniques also apply to other kinds of models, including certain types of deep neural networks.
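
The first sketch below is purely illustrative and not the project's actual model: the lifecycle stage names follow those commonly used in the goal-reasoning literature, and the goal features, weights and helper names are hypothetical. It shows, in miniature, how a transparent scoring policy (standing in for a machine-learned one) can both choose a Goal Lifecycle transition for a goal and report the rationale behind that choice.

    # Illustrative sketch only: stage names follow the goal-reasoning literature
    # (FORMULATED, SELECTED, EXPANDED, COMMITTED, DISPATCHED, EVALUATED,
    # FINISHED, DROPPED); the features, weights and helpers are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Goal:
        name: str
        expected_value: float   # estimated payoff of achieving the goal
        resource_cost: float    # agents/time the goal would consume
        risk: float             # estimated chance of failure right now

    def select_and_explain(goals, budget):
        """Pick a goal to move from FORMULATED to SELECTED and explain why."""
        # A linear scoring policy stands in for the learned policy; because it is
        # linear, every feature's contribution to the decision can be reported.
        weights = {"expected_value": 1.0, "resource_cost": -0.6, "risk": -0.8}
        def score(g):
            return {k: w * getattr(g, k) for k, w in weights.items()}
        best = max(goals, key=lambda g: sum(score(g).values()))
        if best.resource_cost > budget:
            return None, f"No goal selected: '{best.name}' exceeds the budget of {budget}."
        parts = ", ".join(f"{k} contributes {v:+.2f}" for k, v in score(best).items())
        return best, f"'{best.name}' moves to SELECTED because {parts}."

    goals = [Goal("survey-area-A", 5.0, 2.0, 0.3), Goal("recharge", 2.0, 0.5, 0.1)]
    print(select_and_explain(goals, budget=3.0)[1])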
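
The second sketch is a minimal illustration, again not the group's actual infrastructure, of going one small step beyond dumping a tree's rules: it traces a single instance through a fitted scikit-learn decision tree, reads off the rule that instance satisfied, and also reports how far each feature value sat from its split threshold, so a user can see which conditions were marginal.

    # A minimal sketch of per-instance explanation from a fitted scikit-learn
    # decision tree: the rule the instance followed, plus an (unnormalised)
    # margin showing how close each feature value was to its split threshold.
    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier

    data = load_iris()
    clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

    def explain_instance(clf, x, feature_names, class_names):
        """Trace one sample through the tree and return the rule it satisfied."""
        x = np.asarray(x).reshape(1, -1)
        visited = clf.decision_path(x).indices      # nodes the sample passes through
        leaf_id = clf.apply(x)[0]                   # the leaf it ends up in
        feature, threshold = clf.tree_.feature, clf.tree_.threshold
        conditions = []
        for node_id in visited:
            if node_id == leaf_id:                  # leaves carry no split condition
                continue
            f, t = feature[node_id], threshold[node_id]
            value = x[0, f]
            op = "<=" if value <= t else ">"
            conditions.append(f"{feature_names[f]} = {value:.2f} {op} {t:.2f} "
                              f"(margin {abs(value - t):.2f})")
        prediction = class_names[clf.predict(x)[0]]
        return f"Predicted '{prediction}' because: " + "; ".join(conditions)

    print(explain_instance(clf, data.data[0], data.feature_names, data.target_names))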