What is LIME?
LIME stands for Local Interpretable Model-agnostic Explanations. It originates from the paper “‘Why Should I Trust You?’: Explaining the Predictions of Any Classifier” by Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin.
LIME is a technique that approximates the predictions of any black-box ML model with a local, interpretable surrogate model, producing an explanation for each individual prediction. It can interpret the prediction score that any classifier assigns to a single instance, and it is available as a library for both Python and R.
LIME provides local interpretability: it helps determine which feature changes will impact a prediction. Its output is a list of explanations giving each feature's contribution to the prediction. It is compatible with many classifiers and works with text, image, and tabular data.
How does LIME help?
For humans to trust AI systems, models must be explainable to their users. Interpretability helps us understand models and identify potential issues such as bias and information leakage.
LIME provides a generic framework for interpreting black-box models. It helps answer the ‘why’ behind ML model predictions, and finding out which features affect a model's decisions requires minimal effort.
LIME aligns with what humans want to know when observing ML outcomes: why a specific prediction was made, and which variables drove the decision.
Key Features of LIME
LIME explains a black-box model by approximating its behavior with a locally linear model, which helps explain why a single data point was classified as a specific class.
LIME is model-independent: it treats the model as a black box and does not access its internals to make interpretations. Unlike model-specific approaches, it therefore works with a wide range of models.
LIME generates its output as a list of explanations giving the contribution of each feature to the prediction for a data sample. This local interpretability helps determine which feature changes impact the prediction.
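The core idea behind these local, linear explanations can be sketched from scratch: perturb the instance, weight the perturbed samples by proximity, and fit a weighted linear surrogate whose coefficients serve as per-feature contributions. All names, the sampling scheme, and the kernel width below are illustrative assumptions, not the lime library's actual internals.

```python
# From-scratch sketch of LIME's core loop for tabular data.
# Illustrative assumptions throughout: Gaussian perturbations,
# an exponential proximity kernel, and a ridge surrogate.
import numpy as np
from sklearn.linear_model import Ridge

def local_explanation(predict_fn, x, n_samples=5000, kernel_width=0.75, seed=0):
    rng = np.random.default_rng(seed)
    # 1. Perturb: sample points in the neighborhood of instance x.
    Z = x + rng.normal(scale=1.0, size=(n_samples, x.shape[0]))
    # 2. Query the black-box model (only its outputs are needed).
    y = predict_fn(Z)
    # 3. Weight samples by proximity to x (exponential kernel).
    d2 = ((Z - x) ** 2).sum(axis=1)
    w = np.exp(-d2 / kernel_width ** 2)
    # 4. Fit an interpretable (linear) surrogate on weighted samples.
    surrogate = Ridge(alpha=1.0).fit(Z, y, sample_weight=w)
    # Coefficients approximate each feature's local contribution.
    return surrogate.coef_

# Toy black box: class-1 probability driven mostly by feature 0.
black_box = lambda Z: 1 / (1 + np.exp(-(3 * Z[:, 0] - 0.5 * Z[:, 1])))
contributions = local_explanation(black_box, np.array([0.2, -0.1]))
print(contributions)
```

On this toy model the surrogate's coefficient for feature 0 dominates in magnitude, matching the black box's locally stronger dependence on that feature.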
One of the desirable properties of a model explainer is local fidelity, or being locally faithful: the explanation must correspond to how the model behaves in the vicinity of the instance being predicted.
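The cited paper formalizes this trade-off between local fidelity and interpretability as an optimization problem:

```latex
\xi(x) = \underset{g \in G}{\operatorname{argmin}} \; \mathcal{L}(f, g, \pi_x) + \Omega(g)
```

Here $f$ is the black-box model, $g$ an interpretable model drawn from a class $G$ (e.g., linear models), $\pi_x$ a proximity kernel defining the locality around instance $x$, $\mathcal{L}$ a measure of how unfaithful $g$ is to $f$ in that locality, and $\Omega(g)$ a penalty on the complexity of the explanation.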
While LIME explains individual predictions, a model should ideally also be interpretable in its entirety. This global perspective was an important criterion for LIME's developers, who address it in the paper by selecting a representative set of individual explanations (submodular pick, or SP-LIME).