Date: October 1, 2023
Location: The workshop will be held in person at the ECAI 2023 conference in Kraków, Poland.
Room: A-1-13 in the Faculty of Physics, Astronomy and Applied Computer Science of the Jagiellonian University.

Invited Speakers [Abstracts]

Concha Bielza
Technical University of Madrid
Pedro Larrañaga
Technical University of Madrid
Issam El Naqa
Moffitt Cancer Center
Christin Seifert
University of Marburg


Schedule

The workshop schedule below is given in Central European Time (CET).

09:00 - 09:10 Opening remarks
  [Session 1]
09:10 - 10:00 Invited talk by Christin Seifert: Can we trust XAI? Current status and challenges of evaluating XAI methods. [slides]
10:00 - 10:15 Paper: SHAP values from a Practical Perspective of a Social Scientist.
Łukasz Borowiecki.
10:15 - 10:30 Paper: Evaluation of Local Model-agnostic Explanations: Taxonomy and Limitations.
Amir Hossein Akhavan Rahnama.
10:30 - 11:00 Coffee break
  [Session 2]
11:00 - 11:50 Invited Talk by Concha Bielza & Pedro Larrañaga: Explanation Capabilities of Bayesian Networks in Dynamic Industrial Domains. [slides]
11:50 - 12:05 Paper: Explainable Anomaly Detection in Industrial Streams.
Jakub Jakubowski, Przemysław Stanisz, Szymon Bobek, Grzegorz Nalepa.
12:05 - 12:20 Paper: Clash of the Explainers: Argumentation for Context-Appropriate Explanations.
Leila Methnani, Andreas Theodorou, Virginia Dignum.
12:30 - 13:30 Lunch break
  [Session 3]
13:30 - 14:20 Invited Talk by Issam El Naqa: Towards Trustworthy AI for Clinical Oncology. [slides]
14:20 - 14:35 Paper: From Data to Insights: Fusing Explainable AI and Epidemiological Thinking to Enhance Primary Care Practices.
Awais Ashfaq, Slawomir Nowaczyk.
14:35 - 14:50 Paper: EcoShap: Save Computations by Only Calculating Shapley Values for Relevant Features.
Parisa Jamshidi, Slawomir Nowaczyk, Mahmoud Rahat.
15:00 - 15:30 Coffee break
  [Session 4]
15:30 - 15:45 Paper: Towards Explainable Deep Domain Adaptation.
Szymon Bobek, Zahra Taghiyarrenani, Sepideh Pashami, Slawomir Nowaczyk, Grzegorz Nalepa.
15:45 - 16:00 Paper: Interpretability benchmark for spatial alignment of prototypical parts.
Mikołaj Sacha, Bartosz Jura, Dawid Rymarczyk, Łukasz Struski, Jacek Tabor, Bartosz Zieliński.
16:00 - 16:15 Paper: Evaluation of Human-Understandability of Global Model Explanations using Decision Tree.
Adarsa Sivaprasad, Ehud Reiter.

Call for Papers

Welcome to the Joint Workshops on XAI Methods, Challenges and Applications (XAI^3), where we aim to discuss opportunities for a new generation of explainable AI (XAI) methods that are reliable, robust, and trustworthy. Explainability of AI models and systems is crucial for humans to trust and use intelligent systems, yet the utility of current XAI methods in high-risk applications such as healthcare and industry has been severely limited.

The workshop has three tracks: medical, industry, and future challenges. Across them, we will explore the challenges and opportunities in creating useful XAI methods for medical applications, integrating explainability into highly automated industrial processes, and evaluating current and future XAI methods. We welcome contributions from researchers in academia and industry, primarily from a technical and application point of view, but also from ethical and sociological perspectives. Join us in discussing the latest developments in XAI and their practical applications at the 26th European Conference on Artificial Intelligence (ECAI 2023) in Kraków, Poland.

Tracks and topics

Towards Explainable AI 2.0 (XAI2.0)
Chair: Przemysław Biecek
  • Emerging challenges in explainable AI towards XAI 2.0
  • Evaluation and limitations of current XAI methods
  • Trade-off between model-agnostic and model-specific explainability
  • Adversarial attacks and defenses in XAI
  • Privacy, leakage of sensitive information, fairness and bias
  • Human-centered XAI through visualization, active learning, model improvement and debugging
  • XAI beyond classification and regression, e.g. in unsupervised learning, image segmentation, survival analysis

Explainable AI for Medical Applications (XAIM)
Chair: Neo Christopher Chung
  • Theory and application of XAI for medical imaging and other medical applications
  • Uncertainty estimation of AI models using medical data
  • Multimodal learning, e.g., PET/CT, healthcare records, genomics, and other heterogeneous datasets
  • Clinical cases, evaluation, and software of XAI for medicine
  • Fairness, bias, and transparency in medical AI models
  • Human-computer interaction (HCI) and human-in-the-loop (HITL) approaches in medicine
  • Inherently interpretable models in supervised, unsupervised and semi-supervised learning for biology and medicine

XAI for Industry 4.0 & 5.0 (XAI4I)
Chair: Sławomir Nowaczyk
  • Ethical considerations in industrial deployment of AI
  • AI transparency and accountability in smart factories
  • Explainable systems fusing various sources of industrial information
  • XAI in performance and efficiency of industrial systems
  • Prediction of maintenance, product, and process quality
  • Data and information fusion in the industrial XAI context
  • Applications in manufacturing systems, production processes, energy, power, and transport systems


Important dates

Paper submission deadline: July 18, 2023
Decision notification: August 15, 2023
Camera-ready due: September 11, 2023

All times are Anywhere on Earth (AoE), UTC-12.

Submission instructions

Submissions should be anonymised and follow the LNCS template, available as downloadable LaTeX and Word files as well as an online Overleaf template.

We welcome two types of papers:

The page limit does not include references and supplementary material.

Submissions can be made at


Organizers
Hubert Baniecki
University of Warsaw
Przemysław Biecek
Warsaw University of Technology
Albert Bifet
Szymon Bobek
Jagiellonian University
Lennart Brocki
University of Warsaw
Giuseppe Casalicchio
Ludwig Maximilian University of Munich
Neo Christopher Chung
University of Warsaw
Joao Gama
University of Porto
Mathieu Hatt
Grzegorz J. Nalepa
Jagiellonian University
Sławomir Nowaczyk
Halmstad University
Panagiotis Papadimitroulas
Sepideh Pashami
Halmstad University
Rita P. Ribeiro
University of Porto
Dawid Rymarczyk
Jagiellonian University
Jacek Tabor
Jagiellonian University
Bruno Veloso
University of Porto
Bartosz Zieliński
Jagiellonian University

Co-organized and supported by

MI2.AI