Date: October 1, 2023
Location: The workshop will be held in person at the ECAI 2023 conference in Kraków, Poland.
Room: A-1-13 in the Faculty of Physics, Astronomy and Applied Computer Science of the Jagiellonian University

Call for Papers

Welcome to the Joint Workshops on XAI Methods, Challenges and Applications (XAI^3), where we aim to discuss opportunities for a new generation of explainable AI (XAI) methods that are reliable, robust, and trustworthy. Explainability of AI models and systems is crucial for humans to trust and use intelligent systems, yet the utility of current XAI methods in high-risk domains such as healthcare and industry remains severely limited. The workshop comprises three tracks — medical, industry, and future challenges — exploring how to create useful XAI methods for medical applications, integrate explainability into highly automated industrial processes, and evaluate current and future XAI methods. We welcome contributions from researchers in academia and industry, primarily from a technical and application point of view, but also from ethical and sociological perspectives. Join us in discussing the latest developments in XAI and their practical applications at the 26th European Conference on Artificial Intelligence (ECAI 2023) in Kraków, Poland.

Tracks and topics

Towards Explainable AI 2.0 (XAI2.0)
Chair: Przemysław Biecek
  • Emerging challenges in explainable AI towards XAI 2.0
  • Evaluation and limitations of current XAI methods
  • Trade-off between model-agnostic and model-specific explainability
  • Adversarial attacks and defenses in XAI
  • Privacy, leakage of sensitive information, fairness and bias
  • Human-centered XAI through visualization, active learning, model improvement and debugging
  • XAI beyond classification and regression, e.g., in unsupervised learning, image segmentation, survival analysis

Explainable AI for Medical Applications (XAIM)
Chair: Neo Christopher Chung
  • Theory and application of XAI for medical imaging and other medical applications
  • Uncertainty estimation of AI models using medical data
  • Multimodal learning, e.g., PET/CT, healthcare records, genomics, and other heterogeneous datasets
  • Clinical cases, evaluation, and software of XAI for medicine
  • Fairness, bias, and transparency in medical AI models
  • Human-computer interaction (HCI) and human-in-the-loop (HITL) approaches in medicine
  • Inherently interpretable models in supervised, unsupervised and semi-supervised learning for biology and medicine

XAI for Industry 4.0 & 5.0 (XAI4I)
Chair: Sławomir Nowaczyk
  • Ethical considerations in industrial deployment of AI
  • AI transparency and accountability in smart factories
  • Explainable systems fusing various sources of industrial information
  • XAI in performance and efficiency of industrial systems
  • Prediction of maintenance, product, and process quality
  • Data and information fusion in the industrial XAI context
  • Applications in manufacturing systems, production processes, energy, power, and transport systems


Paper submission deadline: July 18, 2023
Decision notification: August 15, 2023
Camera-ready due: September 11, 2023

All times Anywhere on Earth (AoE), UTC-12

Submission instructions

Submissions should be anonymised and follow the LNCS template, available as downloadable LaTeX and Word templates as well as an online Overleaf template, at

We welcome two types of papers:

The page limit does not include references and supplementary material.

Submissions can be made at