Special Issue on Trust in AI for Springer KI – Künstliche Intelligenz – The Intelligent Robots Group at Curtin University

Special Issue on Trust in AI: Requirements and Capabilities
Springer KI – Künstliche Intelligenz

Please visit for the latest version of this CFP!

This special issue on Trust in AI is dedicated to improving understanding, among AI practitioners, users and broader society, of the trust requirements of various applications and the trust capabilities of AI-based systems.

AI-based systems are becoming increasingly pervasive in modern society. Their influence extends from movie recommendations to mine optimisation, from border security to financial securities, from driver assistance to medical assessments. All of these applications bring with them different requirements in relation to, and definitions of, trust.

We welcome high quality submissions on issues of Trust in AI systems including, but not limited to:
* Trust in their level of performance – End to End Real World Performance Testing for AI Systems.
* Trust that they’re making decisions for the correct reasons – Explainable AI (XAI).
* Trust in our ability to correct errors – Repairable AI and Lifelong Learning.
* Trust in their integrity – Cyber Security challenges faced by AI systems.
* Trust in our ability to oversee them – Oversight and Anomaly Detection for AI in Complex Tasks and Environments.
* Trust definitions, requirements and capabilities – Semantics and Definitions for Requirements Analysis around Trust in AI.

Applications of interest include, but are not limited to:
* Technologies for assisting human drivers, operators and other controllers.
* Collaborative and autonomous robots.
* Service, entertainment and other robots in broader society.
* Industrial optimisation.
* Medical sensing, smart sensors and diagnosis.
* Applied image and sensor understanding, recognition and analysis.
* Recommendation, diagnosis and decision support systems.
* Safety, security and mission critical systems across cyber, cyberphysical and physical domains.
* Big data analytics.
* AI in finance, trading and assignment of resources.

This special issue of KI accepts original contributions in the following forms (in KI format):
* Short and full length technical and survey papers (up to 7 pages).
* Reports on research projects (4-6 pages).
* Discussion pieces (2-6 pages).
* Dissertation abstracts (up to 4 pages).
* Conference/workshop reports (1 page).
* Industry papers (AI Market) (2-4 pages).
See Instructions for Authors (below) for the requirements for each of these formats.

Important Dates:
* 15 June 2018, deadline for submissions (EXTENDED).
* 20 July 2018, notification of acceptance.
* 15 Aug 2018, final version due.

Important links:
* Instructions for Authors.
* Specific instructions for the different contributions.
* Previous issues.
* Paper submission (use the category “SI: Trust in AI”).

Special Issue Guest Editors:
* Raymond Sheh, Curtin University
* Claude Sammut, The University of New South Wales
* Mihai Lazarescu, Curtin University

For comments, suggestions or requests please send email to Raymond Sheh <>.

We look forward to your contribution to this special issue!


– Raymond, Claude and Mihai