Date: October 1, 2023
Location: The workshop will be held in person at the ECAI 2023 conference in Kraków, Poland.
Room: A-1-13 in the Faculty of Physics, Astronomy and Applied Computer Science of the Jagiellonian University.


This is the workshop’s schedule. All times are given in Central European Time (CET).
09:00 - 09:10 Opening remarks
  [Session 1]
09:10 - 10:00 Invited talk by Christin Seifert: Can we trust XAI? Current status and challenges of evaluating XAI methods. [slides]
10:00 - 10:15 Paper: SHAP values from a Practical Perspective of a Social Scientist.
Łukasz Borowiecki.
10:15 - 10:30 Paper: Evaluation of Local Model-agnostic Explanations: Taxonomy and Limitations.
Amir Hossein Akhavan Rahnama.
10:30 - 11:00 Coffee break
  [Session 2]
11:00 - 11:50 Invited Talk by Concha Bielza & Pedro Larrañaga: Explanation Capabilities of Bayesian Networks in Dynamic Industrial Domains. [slides]
11:50 - 12:05 Paper: Explainable Anomaly Detection in Industrial Streams.
Jakub Jakubowski, Przemysław Stanisz, Szymon Bobek, Grzegorz Nalepa.
12:05 - 12:20 Paper: Clash of the Explainers: Argumentation for Context-Appropriate Explanations.
Leila Methnani, Andreas Theodorou, Virginia Dignum.
12:30 - 13:30 Lunch break
  [Session 3]
13:30 - 14:20 Invited Talk by Issam El Naqa: Towards Trustworthy AI for Clinical Oncology. [slides]
14:20 - 14:35 Paper: From Data to Insights: Fusing Explainable AI and Epidemiological Thinking to Enhance Primary Care Practices.
Awais Ashfaq, Slawomir Nowaczyk.
14:35 - 14:50 Paper: EcoShap: Save Computations by Only Calculating Shapley Values for Relevant Features.
Parisa Jamshidi, Slawomir Nowaczyk, Mahmoud Rahat.
15:00 - 15:30 Coffee break
  [Session 4]
15:30 - 15:45 Paper: Towards Explainable Deep Domain Adaptation.
Szymon Bobek, Zahra Taghiyarrenani, Sepideh Pashami, Slawomir Nowaczyk, Grzegorz Nalepa.
15:45 - 16:00 Paper: Interpretability benchmark for spatial alignment of prototypical parts.
Mikołaj Sacha, Bartosz Jura, Dawid Rymarczyk, Łukasz Struski, Jacek Tabor, Bartosz Zieliński.
16:00 - 16:15 Paper: Evaluation of Human-Understandability of Global Model Explanations using Decision Tree.
Adarsa Sivaprasad, Ehud Reiter.