PyData Eindhoven 2024

Explainable AI in the LIME-light
07-11, 10:00–10:30 (Europe/Amsterdam), If (1.1)

LIME, a model-agnostic explanation framework, illuminates the path to local explainability, primarily for classification models. Delving into the theory underpinning LIME, we explore diverse use cases and its adaptability across various scenarios. Through practical examples, we showcase the breadth of applications for LIME. By the presentation's conclusion, you'll have gained insights into leveraging LIME to clarify the logic behind individual predictions, leading to more accessible explanations.


Although AI toolkits have simplified model implementation, understanding and interpreting these models remain challenging. With regulatory frameworks like the EU AI Act emphasizing explainability, the need for tools like LIME is paramount.

This presentation will provide an in-depth overview of LIME (Local Interpretable Model-agnostic Explanations), highlighting its utility in facilitating model comprehension. No prior expertise is assumed. Beginning with an explanation of LIME's theory and its practical implementation in Python, we'll then delve into diverse classification scenarios to showcase LIME's effectiveness. Additionally, we'll explore how the original LIME framework has been extended to handle time series data.
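The recipe behind LIME can be summarized in four steps: perturb the instance, query the black-box model on the perturbations, weight each sample by its proximity to the instance, and fit a weighted linear surrogate whose coefficients serve as the local explanation. As a taste of what the talk covers, here is a minimal, self-contained sketch of that idea in plain Python. All names here (including the stand-in `black_box_predict`) are hypothetical; in practice you would use the `lime` package rather than rolling your own.

```python
import math
import random

def black_box_predict(x):
    # Stand-in for any opaque classifier: returns the probability of class 1.
    # (Hypothetical model; in practice this is your trained model's predict_proba.)
    return 1.0 / (1.0 + math.exp(-(3.0 * x[0] - 2.0 * x[1])))

def solve(A, b):
    # Gauss-Jordan elimination with partial pivoting for a small k x k system.
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def explain_instance(x0, predict, n_samples=500, kernel_width=0.75, seed=0):
    rng = random.Random(seed)
    X, y, w = [], [], []
    for _ in range(n_samples):
        z = [xi + rng.gauss(0, 1) for xi in x0]            # 1. perturb
        y.append(predict(z))                               # 2. query the model
        d = math.dist(z, x0)
        w.append(math.exp(-(d ** 2) / kernel_width ** 2))  # 3. proximity kernel
        X.append([1.0] + z)                                # intercept column
    # 4. weighted least squares via normal equations: (X^T W X) beta = X^T W y
    k = len(x0) + 1
    A = [[sum(w[i] * X[i][r] * X[i][c] for i in range(n_samples))
          for c in range(k)] for r in range(k)]
    b = [sum(w[i] * X[i][r] * y[i] for i in range(n_samples)) for r in range(k)]
    return solve(A, b)  # [intercept, coef_feature_0, coef_feature_1]

coefs = explain_instance([0.5, -0.5], black_box_predict)
print(coefs)  # feature 0 pushes the prediction up, feature 1 pushes it down
```

The surrogate's coefficients recover the local behaviour of the hidden model: positive for the first feature, negative for the second. The real `lime` package adds interpretable feature representations, feature selection, and explainers for text, images, and tabular data on top of this core loop.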


Prior Knowledge Expected

Familiarity with Python and classification models is helpful but not required

For the past three years I have been working as a Data Science consultant at Pipple. Since Pipple is active in a variety of sectors, I have had the opportunity to work on many different projects. What I have discovered is that the explainability of the machine learning models used was a critical topic in all of these projects. Fortunately, frameworks like LIME have emerged to provide this much-needed explainability. I am excited to discuss LIME at the 2024 PyData Eindhoven conference.