What is SHAP?
SHapley Additive exPlanations (SHAP) is a game-theoretic approach to explaining the output of any machine learning model. It explains a specific prediction by computing the contribution of each feature to that prediction.
SHAP also provides visualization tools that make ML model outcomes understandable to users. It supports models built with libraries such as scikit-learn, Keras, PyTorch, TensorFlow, PySpark, and more.
SHAP applies classic Shapley values from game theory, and their extensions, to connect optimal credit allocation with local explanations. Shapley values come from cooperative game theory, which specifies how to fairly distribute a payoff among a set of players working in coordination. In SHAP, the players correspond to the model's features, and the payoff corresponds to a prediction.
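The fair-distribution idea can be made concrete with a small, library-free sketch: a Shapley value averages each player's marginal contribution over every order in which the players could join the coalition. The three-player payoff numbers below are made up for illustration.

```python
from itertools import permutations

def shapley_values(players, v):
    """Average each player's marginal contribution over all join orders."""
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = v(frozenset(coalition))
            coalition.add(p)
            totals[p] += v(frozenset(coalition)) - before
    return {p: totals[p] / len(orders) for p in players}

# Hypothetical 3-player game: the payoff of each possible coalition.
payoffs = {
    frozenset(): 0, frozenset("A"): 10, frozenset("B"): 20,
    frozenset("C"): 30, frozenset("AB"): 40, frozenset("AC"): 50,
    frozenset("BC"): 60, frozenset("ABC"): 90,
}
phi = shapley_values("ABC", payoffs.get)
# phi == {'A': 20.0, 'B': 30.0, 'C': 40.0}
```

Note that the attributions always sum to the grand-coalition payoff (90 here); in SHAP terms, the feature attributions sum to the model's prediction relative to a baseline.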
How Does SHAP Help?
SHAP helps you understand the 'why' behind ML predictions and how your model arrives at a particular output. It brings explainability to model predictions by computing the contribution of each feature to the prediction.
The SHAP library helps data scientists debug models by showing how a specific prediction was made, and it makes ML outcomes more understandable to users with little background in ML.
Key Features of SHAP
SHAP explains ML model predictions by fairly distributing the prediction's "gain" among the features. The library is built on Shapley values, which have a deep theoretical foundation in cooperative game theory.
Global or local comparison
SHAP offers the flexibility to compare feature contributions and importance globally, across the whole dataset, or locally, by restricting the analysis to a specific subset (or single prediction) of interest.
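The global/local distinction can be sketched without the library by using the known closed form for a linear model with independent features, where feature i's Shapley value at x is w_i * (x_i - mean_i); the weights and data below are made up:

```python
import numpy as np

w = np.array([2.0, -1.0, 0.5])            # toy linear model f(x) = w @ x
X = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, -1.0],
              [2.0, 0.0, 1.0]])
base = X.mean(axis=0)                     # baseline: the average input

# Local view: one row of attributions per individual prediction.
local = (X - base) * w

# Global view: aggregate locals, e.g. mean absolute attribution per feature.
global_importance = np.abs(local).mean(axis=0)
```

Each local row sums to that prediction's deviation from the baseline prediction, while the global summary ranks features over the whole dataset (or any subset you slice out).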
SHAP unifies several earlier attribution methods, including LIME, Shapley sampling values, QII, DeepLIFT, Shapley regression values, and Tree Interpreter.
Consistency is the property that if a model changes so that a feature's contribution increases or stays the same (regardless of the other features), that feature's attribution never decreases. The remaining features' attributions adjust accordingly to preserve this property.
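A tiny two-feature illustration of consistency, with made-up coalition payoffs: in model v_new, feature A's marginal contribution grows in every coalition, so its Shapley attribution cannot go down.

```python
def phi_A(v):
    # With two players, A's Shapley value averages its two marginal
    # contributions: joining alone, and joining after B.
    return 0.5 * ((v["A"] - v[""]) + (v["AB"] - v["B"]))

v_old = {"": 0, "A": 10, "B": 20, "AB": 40}
v_new = {"": 0, "A": 15, "B": 20, "AB": 50}   # A's marginals only grew

assert phi_A(v_new) >= phi_A(v_old)   # consistency: attribution didn't drop
```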
This gives SHAP good reliability: a slight change in the input does not drastically change the attributions or set off a domino effect across features.