Goal. The goal of the project group is to practically apply and experimentally compare concrete explainable AI (XAI) techniques for predictive process monitoring. As a participant, you will acquire fundamental knowledge of, and learn about recent developments in, XAI and Business Process Management (BPM). Moreover, you will gain practical experience and insights by applying XAI techniques to predictive process monitoring problems on concrete real-world benchmark data sets (e.g., from transport, logistics, and finance).

Background. Predictive process monitoring attempts to answer the question "what will happen, and when?" during the execution of a business process. It thereby helps process managers decide whether to proactively adapt a running process. Such adaptation can help prevent problems and mitigate the impact of those that do occur, for example by dynamically re-planning a running process instance.

Recently, sophisticated prediction models, such as random forests and deep neural networks (e.g., LSTMs), have increasingly been applied to predictive process monitoring, because they consistently achieve better prediction accuracy than simpler, interpretable models. While achieving high prediction accuracy, the inner logic and causal relations of these models are not intrinsically understandable or interpretable by process managers. This makes it difficult for process managers to evaluate whether they can rely on and trust the predictions. This is not a problem of BPM alone: in recent years, research on explainable artificial intelligence (XAI) has attracted renewed and growing interest due to the broader use of artificial intelligence systems in real-world applications. To increase the interpretability of these sophisticated models, researchers have started to investigate applying XAI techniques to predictive process monitoring.
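To give a flavor of what "applying an XAI technique" can mean in this setting, the following is a minimal, self-contained sketch of one model-agnostic technique, permutation feature importance, applied to a stand-in black-box predictor on synthetic process-instance features. The feature names, the toy data, and the hand-written `predict` function are illustrative assumptions, not part of the project description; in the project you would instead explain a trained model (e.g., a random forest or LSTM) on real event-log benchmarks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic features for 500 running process instances (illustrative only):
# col 0: elapsed time, col 1: number of rework loops, col 2: resource workload
X = rng.normal(size=(500, 3))
# Ground-truth outcome ("will the instance be delayed?"), mainly driven by col 1
y = (2.0 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(scale=0.1, size=500)) > 0

def predict(X):
    # Stand-in for a trained black-box predictor; any model with a
    # predict() interface could be plugged in here instead.
    return (2.0 * X[:, 1] + 0.3 * X[:, 2]) > 0

def permutation_importance(predict, X, y, n_repeats=10, seed=1):
    """Importance of feature j = mean drop in accuracy when column j is shuffled."""
    rng = np.random.default_rng(seed)
    baseline = np.mean(predict(X) == y)
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # break the feature-outcome link for column j
            drops.append(baseline - np.mean(predict(Xp) == y))
        importances.append(float(np.mean(drops)))
    return importances

imp = permutation_importance(predict, X, y)
```

Because the technique only queries `predict`, it treats the model as a black box, which is exactly why such methods are attractive for opaque models like deep neural networks: here, shuffling the unused "elapsed time" column leaves accuracy unchanged, while shuffling "rework loops" degrades it sharply, revealing which features the predictions depend on.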