LIME is a popular explainable AI (XAI) method. It is a local, model-agnostic method: it can explain the individual predictions of any machine learning model by fitting a simple surrogate model around the black-box model's prediction for a single instance. We will:
Explain the algorithm used by LIME to get local interpretations.
Discuss in detail some of the choices you need to make at these steps, including the number of features, how to weight the sampled instances using the kernel width, and which surrogate model to use (see the sketch after this list).
At first, these choices may seem like a good thing, but they lead to the method's biggest weakness: we can manipulate them to produce contradictory interpretations.
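For a concrete starting point, here is a minimal sketch of these choices using the lime package on tabular data. The dataset, model, and parameter values are illustrative assumptions, not taken from the video.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Illustrative black-box model (any model with predict_proba works)
data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# kernel_width sets how strongly sampled points are down-weighted by
# distance from the instance; None uses the default 0.75 * sqrt(n_features)
explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
    kernel_width=None,
)

# Fit a local linear surrogate and keep the top 5 features
exp = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=5
)
print(exp.as_list())

Re-running explain_instance with a different kernel_width can rank the features differently for the same instance, which is exactly the sensitivity discussed above.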
🚀 Free Course 🚀
Sign up here: mailchi.mp/40909011987b/signup
XAI course: adataodyssey.com/courses/xai-with-python/
SHAP course: adataodyssey.com/courses/shap-with-python/
🚀 Companion article with link to code (no-paywall link): 🚀
medium.com/data-science/a-deep-dive-on-lime-for-lo…
🚀 Useful playlists 🚀
XAI: • Explainable AI (XAI)
SHAP: • SHAP
Algorithm fairness: • Algorithm Fairness
🚀 Get in touch 🚀
Medium: conorosullyds.medium.com/
Threads: www.threads.net/@conorosullyds
Twitter: twitter.com/conorosullyDS
Website: adataodyssey.com/
🚀 Chapters 🚀
00:00 Introduction
01:48 LIME example
02:32 The LIME algorithm
04:07 Algorit