LIME

Modeling

Released: Aug 2016
Documentation
License: BSD-2-Clause
GitHub open issues: 66
GitHub stars: 10,435
GitHub last commit: 30 Jul
Stack Overflow questions: 102

What is LIME?

LIME stands for Local Interpretable Model-agnostic Explanations. It is based on the paper “‘Why Should I Trust You?’: Explaining the Predictions of Any Classifier” by Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin.

LIME is a technique that approximates a black-box ML model with a local, interpretable model in order to explain each individual prediction. It can derive interpretations for single prediction scores generated by any classifier, and the library is available for both Python and R.

LIME brings local interpretability and helps determine which feature changes will impact a prediction. Its output is a list of explanations, each giving the contribution of one feature to the prediction. It is compatible with many classifiers and works with text, image, and tabular data.
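
A minimal sketch of this in Python, using the lime package with a scikit-learn classifier (the iris dataset and random forest are illustrative choices, not requirements of LIME):

```python
# Minimal sketch: explain one prediction of a scikit-learn classifier.
# The dataset and model are illustrative; LIME only needs a function
# that maps inputs to class probabilities.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    training_data=data.data,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)

# LIME perturbs the instance and fits a local linear model around it.
explanation = explainer.explain_instance(
    data.data[0],             # the single instance to explain
    model.predict_proba,      # black-box probability function
    num_features=4,
    top_labels=1,             # explain the most probable class
)
label = explanation.top_labels[0]
print(explanation.as_list(label=label))  # [(feature condition, weight), ...]
```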

How does LIME help?

To trust AI systems, models must be explainable to human users. AI interpretability helps understand models and identify any potential issues such as bias and information leakage.

LIME provides a generic framework for interpreting black-box models. It helps users understand the ‘why’ behind ML model predictions, and finding out which features affect a model's decisions requires minimal effort.

LIME aligns with what humans are interested in when observing ML outcomes. It helps you answer why a specific prediction was made and which variables caused the decision. 

Key Features of LIME

Local

LIME explains a black-box model by approximating its behavior with a linear model in the local neighborhood of a prediction. This helps explain why a single data point was classified as a specific class.
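
The core idea can be sketched in a few lines. This is a conceptual illustration under simplified assumptions (Gaussian perturbations, a fixed exponential kernel), not the library's actual implementation, which additionally discretizes and selects features:

```python
# Conceptual sketch of LIME's local approximation: perturb the
# instance, weight perturbations by proximity, and fit a weighted
# linear surrogate whose coefficients act as the explanation.
import numpy as np
from sklearn.linear_model import Ridge

def local_surrogate(instance, predict_proba, class_idx,
                    n_samples=5000, kernel_width=0.75, seed=0):
    rng = np.random.default_rng(seed)
    # Sample points in a neighborhood of the instance.
    perturbed = instance + rng.normal(0.0, 1.0, size=(n_samples, instance.size))
    target = predict_proba(perturbed)[:, class_idx]
    # Exponential kernel: closer samples count more in the fit.
    dist = np.linalg.norm(perturbed - instance, axis=1)
    weights = np.exp(-(dist ** 2) / kernel_width ** 2)
    # The coefficients of the weighted linear fit explain the
    # model's local behavior around the instance.
    surrogate = Ridge(alpha=1.0).fit(perturbed, target, sample_weight=weights)
    return surrogate.coef_
```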

Model-agnostic

LIME is model-independent and treats the model as a black box: it does not access the internals of any model to make interpretations. As a result, it works with a far wider range of models than model-specific approaches.
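
In practice this means the explainer only ever calls a probability function. Continuing the iris sketch above (data and explainer already defined there), the same explain_instance call works unchanged for any model that exposes predict_proba:

```python
# Model-agnosticism in practice: swap the underlying model freely.
# Reuses `data` and `explainer` from the earlier iris sketch.
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

svm = SVC(probability=True).fit(data.data, data.target)
mlp = MLPClassifier(max_iter=2000).fit(data.data, data.target)

for predict_fn in (svm.predict_proba, mlp.predict_proba):
    exp = explainer.explain_instance(data.data[0], predict_fn,
                                     num_features=4, top_labels=1)
    print(exp.as_list(label=exp.top_labels[0]))
```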

Local interpretability 

LIME generates its output as a list of explanations presenting the contribution of each feature to the prediction for a data sample. Such local interpretability helps determine which feature changes drive the prediction.
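
The as_list() output from the earlier sketch has roughly this shape; the conditions and weights below are made up for illustration, not real output:

```python
# Illustrative (made-up) values: each entry pairs a feature condition
# with its signed contribution to the predicted class probability.
[("petal width (cm) <= 0.30", 0.21),
 ("petal length (cm) <= 1.60", 0.18),
 ("sepal width (cm) > 3.30", 0.03),
 ("sepal length (cm) <= 5.10", -0.01)]
```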

Local fidelity 

One of the desirable properties of a model explainer is local fidelity, or being locally faithful: the explanation must correspond to how the model behaves in the vicinity of the instance being predicted.
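
The lime package reports the quality of the local fit on the explanation object from the earlier sketch, exposed as a score attribute (the R² of the weighted linear surrogate) in current releases; treat the exact attribute name as version-dependent:

```python
# Rough check of local fidelity: R^2 of the weighted surrogate fit
# (attribute name may differ across lime versions).
print(explanation.score)
```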

Global perspective

While explaining individual predictions, the model should also be interpretable in its entirety. This global perspective was an explicit design criterion for the LIME authors, who proposed SP-LIME, a method that picks a representative set of individual explanations to summarize the model as a whole.
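
The package ships an implementation of this idea as SP-LIME (submodular pick), which selects a small, diverse set of explanations that together cover the model's global behavior. A sketch reusing the iris objects from the first example, with illustrative parameter values:

```python
# SP-LIME: pick a few diverse explanations that together summarize
# the model. Reuses `data`, `model`, `explainer` from above.
from lime import submodular_pick

sp = submodular_pick.SubmodularPick(
    explainer, data.data, model.predict_proba,
    sample_size=20,          # instances to explain
    num_features=4,
    num_exps_desired=5,      # explanations to keep
)
for exp in sp.sp_explanations:
    print(exp.as_list(label=exp.available_labels()[0]))
```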

Companies using LIME

SAS
neptune.ai
Anaconda
Analyttica
