
SHAP

Released: May 2018 • Documentation • License: MIT License

GitHub stars: 15,100
GitHub open issues: 1,233
GitHub last commit: 4 Dec
Stack Overflow questions: 238

What is SHAP?

SHAP (SHapley Additive exPlanations) is a game-theoretic approach to explaining the output of any machine learning model. It explains a specific prediction by computing the contribution of each feature to that prediction.

SHAP also provides visualization tools that make ML model outcomes understandable to users. It supports models built with libraries such as scikit-learn, Keras, PyTorch, TensorFlow, PySpark, and more.

SHAP applies the classic Shapley values from cooperative game theory, and their extensions, to connect optimal credit allocation with local explanations.

For a given set of players, cooperative game theory describes how to fairly distribute the payoff among players working in coordination. In SHAP, the players correspond to individual features, and the payoff corresponds to a prediction.
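The idea can be made concrete with a small sketch. The function below computes exact Shapley values by enumerating every coalition of "players" (features); the toy additive payoff function and the feature names are hypothetical, chosen only to illustrate the fair-distribution property (real SHAP explainers approximate this computation efficiently for actual models):

```python
from itertools import combinations
from math import factorial

def shapley_values(features, payoff):
    """Exact Shapley values by enumerating all coalitions.

    features: list of player/feature names
    payoff:   function mapping a frozenset of features to a number
              (the "prediction" using only those features)
    """
    n = len(features)
    values = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            for coalition in combinations(others, k):
                s = frozenset(coalition)
                # Shapley weight for a coalition of size k
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                # Marginal contribution of f to this coalition
                total += weight * (payoff(s | {f}) - payoff(s))
        values[f] = total
    return values

# Hypothetical toy "model": an additive payoff over three features
base = 10.0
contrib = {"age": 4.0, "income": 2.0, "city": -1.0}
payoff = lambda s: base + sum(contrib[f] for f in s)

phi = shapley_values(list(contrib), payoff)
```

For an additive model like this one, each feature's Shapley value equals its own contribution, and the values always sum to the difference between the full prediction and the baseline — the "efficiency" property that makes the distribution fair.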


How Does SHAP Help?

SHAP helps answer the 'why' behind ML predictions and shows how your model arrives at a given output. It brings explainability to model predictions by computing the contribution of each feature to the prediction.

The SHAP library helps data scientists debug models by showing how a specific prediction was made. It also makes ML outcomes more understandable to users with little background in ML.

Key Features of SHAP

Proven efficiency

SHAP explains ML model predictions through the fair distribution of each feature's contribution to the prediction. The library is built on Shapley values, which have a profound theoretical foundation in computational game theory.

Global or local comparison

SHAP offers the flexibility to compare feature contributions and importance globally, across the whole dataset, and also lets you narrow the analysis from the global dataset to a specific subset of interest.
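A minimal sketch of how local explanations roll up into a global view, assuming we already have per-instance attributions (the feature names and numbers below are hypothetical): global importance is commonly taken as the mean absolute local attribution per feature, and the same computation can be restricted to any subset of interest.

```python
# Hypothetical per-instance attributions, one dict per prediction
local_attributions = [
    {"age": 3.0, "income": -1.0, "city": 0.5},
    {"age": -2.0, "income": 2.0, "city": 0.1},
    {"age": 4.0, "income": 0.5, "city": -0.2},
]

def global_importance(attributions):
    """Global importance as the mean absolute local attribution per feature."""
    features = attributions[0].keys()
    n = len(attributions)
    return {f: sum(abs(a[f]) for a in attributions) / n for f in features}

# Importance over the whole dataset
overall = global_importance(local_attributions)

# The same comparison restricted to a subset of interest (here, rows 0 and 2)
subset = [local_attributions[i] for i in (0, 2)]
focused = global_importance(subset)
```

Changing only the slice of instances fed into the aggregation is what lets the same local attributions answer both dataset-wide and subgroup-specific questions.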

Unified methods

SHAP unifies several existing explanation methods, including LIME, Shapley sampling values, QII, DeepLIFT, Shapley regression values, and Tree Interpreter.

Consistency

Consistency guarantees that if a model changes so that a feature's (simplified input's) contribution increases or stays the same, that feature's attribution does not decrease. The attributions of the remaining inputs adjust accordingly to preserve this property.

Reliability

SHAP offers good reliability: a slight change in the input does not change the attributions drastically or trigger a domino effect.

Companies using SHAP

IBM
DataRobot
Databricks

