Date: October 1, 2023
Location: The workshop will be held in person at the ECAI 2023 conference in Kraków, Poland.
Room: A-1-13 in the Faculty of Physics, Astronomy and Applied Computer Science of the Jagiellonian University.

Invited Speakers

Concha Bielza
Technical University of Madrid
Pedro Larrañaga
Technical University of Madrid

Explanation Capabilities of Bayesian Networks in Dynamic Industrial Domains [slides]

This talk will describe how Bayesian network models can provide natural explanations in industrial temporal domains. After a brief introduction to Bayesian networks in static settings, discrete-time versions for temporal domains will be presented, including dynamic Bayesian networks and the popular hidden Markov models. The more recent continuous-time Bayesian networks and their supervised classification counterparts, in both uni- and multi-dimensional settings, will be explained. How Bayesian networks can be used in dynamic clustering will also be covered. In all cases, real examples from industry will illustrate the versatile capability of Bayesian networks to intrinsically explain the model as a whole, predictions (reasoning), instances (evidence), and decisions. This is the so-called XBN framework, which encourages efficient communication with end users and supports understanding of how and why certain predictions were made, yielding new industrial insights.
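
As a concrete flavour of the temporal models mentioned in the abstract, the sketch below runs the forward (filtering) algorithm of a hidden Markov model on a toy industrial scenario. All probabilities, state names, and observation labels are made-up illustration values, not taken from the talk.

```python
# Minimal HMM filtering sketch: a machine is either "ok" or "degraded"
# and emits sensor readings "low" (0) or "high" (1). Parameters are invented.
import numpy as np

# P(state) one step before the first observation: [ok, degraded]
prior = np.array([0.9, 0.1])

# P(state_t | state_{t-1}): transition model
transition = np.array([[0.95, 0.05],   # ok -> ok, ok -> degraded
                       [0.10, 0.90]])  # degraded -> ok, degraded -> degraded

# P(obs_t | state_t): emission model, columns = [low, high]
emission = np.array([[0.8, 0.2],   # "ok" emits mostly "low"
                     [0.3, 0.7]])  # "degraded" emits mostly "high"

def forward_filter(observations):
    """Return P(state_t | obs_1..t) for each time step t (forward algorithm)."""
    belief = prior.copy()
    beliefs = []
    for obs in observations:
        belief = belief @ transition        # predict one step ahead
        belief = belief * emission[:, obs]  # weight by observation likelihood
        belief = belief / belief.sum()      # renormalise to a distribution
        beliefs.append(belief)
    return np.array(beliefs)

# A run of "high" readings progressively shifts belief towards "degraded".
print(forward_filter([1, 1, 1]))
```

The filtered beliefs themselves are one source of the "reasoning" explanations the abstract mentions: each update shows which observation moved the posterior and by how much.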

Issam El Naqa
Moffitt Cancer Center

Towards Trustworthy AI for Clinical Oncology [slides]

Artificial intelligence (AI) and machine learning (ML) algorithms are currently transforming biomedical research, especially in the context of cancer research and clinical care. Despite their anticipated potential, their application in oncology and healthcare has been limited in scope, with fewer than 5% of major healthcare providers implementing any form of AI/ML solution. This is partly attributed to concerns that AI/ML-driven technologies, rather than reducing healthcare disparities, would exacerbate existing racial and gender inequities due to inherent bias and a lack of prediction transparency. In this talk, we will present different approaches for detecting and mitigating such bias in AI/ML algorithms. We will further show examples of applying these approaches in oncology, from our work and others', and discuss their implications for the future of AI/ML.
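
For readers unfamiliar with the kind of bias check the abstract refers to, the sketch below computes one widely used diagnostic, the disparate impact ratio between two demographic groups. The data is synthetic and the 0.80 flag threshold is the commonly cited "four-fifths rule"; neither comes from the talk itself.

```python
# Minimal bias-detection sketch: compare positive prediction rates across a
# protected attribute. Predictions and group labels below are synthetic.
import numpy as np

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])   # model decisions
group  = np.array(["A", "A", "A", "B", "B", "B", "A", "B", "A", "B"])

def positive_rate(predictions, groups, g):
    """Fraction of group g that received a positive prediction."""
    mask = groups == g
    return predictions[mask].mean()

rate_a = positive_rate(y_pred, group, "A")
rate_b = positive_rate(y_pred, group, "B")
disparate_impact = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"P(pred=1 | A) = {rate_a:.2f}, P(pred=1 | B) = {rate_b:.2f}")
print(f"disparate impact ratio = {disparate_impact:.2f} (flag if < 0.80)")
```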

Christin Seifert
University of Marburg

Can we trust XAI? Current status and challenges of evaluating XAI methods [slides]

The XAI community develops methods to make black-box models more transparent, with transparency catering to multiple stakeholders. In the case of post-hoc, local XAI methods, the user-facing output is a prediction and its accompanying explanation. The user should refer to the explanation to understand why a prediction was (not) made. But how can we evaluate whether an explanation is correct, understandable, useful to the user, and truthful to the model? And how can we compare multiple XAI methods in a harmonised way to measure and ensure scientific progress? In this talk, I will discuss different evaluation approaches, methods, and metrics that have emerged, mostly bottom-up, in the XAI community. I will then revisit other scientific fields, such as machine learning and information retrieval, and their de facto evaluation standards. Finally, I will present obstacles and challenges on the way to a unified evaluation framework for XAI.
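
One family of the evaluation approaches the abstract alludes to measures faithfulness functionally: occlude the features an explanation ranks highest and watch how the model's confidence responds. The sketch below illustrates this "deletion" test with a made-up linear model and a gradient-times-input-style attribution; both are stand-ins for illustration, not methods from the talk.

```python
# Minimal "deletion" faithfulness sketch: occlude the k most important
# features (according to the explanation) and record the model's confidence.
# The model, input, and attribution below are all invented stand-ins.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=10)              # stand-in "model": a linear scorer

def model_confidence(x):
    """Sigmoid of a linear score, as a stand-in for P(class | x)."""
    return 1.0 / (1.0 + np.exp(-weights @ x))

x = rng.normal(size=10)                    # one input instance
attribution = weights * x                  # e.g. a gradient*input attribution

def deletion_curve(x, attribution, steps=10):
    """Confidence after occluding the k most important features, k = 0..steps."""
    order = np.argsort(-np.abs(attribution))   # most important first
    curve = []
    for k in range(steps + 1):
        occluded = x.copy()
        occluded[order[:k]] = 0.0              # occlusion baseline: zero
        curve.append(model_confidence(occluded))
    return curve

# A faithful explanation should make this curve drop quickly.
print([f"{c:.2f}" for c in deletion_curve(x, attribution)])
```

The area under this curve is one of the harmonised metrics that would let different XAI methods be compared on equal footing, which is exactly the kind of standardisation the talk discusses.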