AI Observability • 9 min read
Explained: Global, Local and Cohort Explainability

This blog focuses on why explainability matters and how Censius solves the explainability problem.

By Harshil Patel

Machine learning algorithms and AI systems are now widely employed across fields, and data is used practically everywhere to solve problems and assist humans. Progress in deep learning, along with a steady stream of new and innovative ways to use data, has played a significant role in this success. But that progress has also made these systems so intricate that even AI professionals struggle to comprehend them, which is why models are often referred to as black boxes.

Machine learning techniques are used in industries ranging from banking to healthcare, and their applications stretch from simple conveniences to life-altering decisions. Machine learning developers therefore need to understand what happens behind their algorithms. In this post, we will look at what Explainable Artificial Intelligence (XAI) is and why it is important.

What is XAI (Explainable Artificial Intelligence)?

Explainable AI is a branch of study focused on ML interpretability techniques, with the goal of better understanding machine learning model predictions and explaining them in human terms.

Explainable AI aims to improve our comprehension of machine learning models: why do they make the decisions they do? The following are the three most important aspects of model explainability:

  • Transparency
  • The ability to question
  • Ease of understanding

Why is Explainability Important?

Most AI systems cannot lay out the "if/then" reasoning behind a particular ML prediction. This lack of transparency can lead to erroneous judgments, mistrust, and refusal to adopt AI. AI explainability improves governance by bridging the gap between technical and non-technical teams, allowing both to better understand initiatives.

  • We can tell whether a model has any bias once we understand it well. For example, a model trained only on data from the American population may not generalize to other populations.
  • It also helps determine whether a model is fit for use in the real world.
The need for Explainable AI or XAI | Source

Benefits of XAI:

Transparency

When a model makes a bad or erroneous judgment, it is critical to understand the factors that led to that conclusion, as well as who is responsible for the failure, so that similar issues can be avoided in the future. With XAI, data science teams can give enterprises more control over their AI tools.

Improves Performance

Explainable AI aids in the systematic monitoring and management of models to enhance business outcomes. You may review and improve the model's performance regularly.

Trust 

Trust plays an important role in data-sensitive industries like healthcare, autonomous driving, and banking. XAI assists all major stakeholders in interpreting the model. It increases models' trustworthiness.

Reduced Risk

XAI helps you keep your AI models simple and understandable, and it helps address regulatory, compliance, risk, and other requirements. Combined with automated decision-making tools, it saves time and money.

Explainable AI Examples

XAI use cases | Image by author

Insurance

Insurance is a multibillion-dollar industry, and even a minor human error can result in significant financial losses. AI can assist companies in assessing risk, detecting fraud, and reducing human error throughout the application process.

XAI can help insurance companies in many ways:

Claims Management: False claims, claim rejection without sufficient information, and other issues can negatively influence a company's image. Companies can manage their claims and improve customer satisfaction by using XAI. XAI not only assists organizations in understanding claims but also assists them in detecting false claims.

Customer Retention: Retaining existing clients is one of the most cost-effective strategies. XAI can forecast customer attrition and provide reasons for it. It can help you in understanding customer behavior.

Pricing: Insurance premiums are determined by a variety of factors. Customers may better comprehend insurance price changes and make educated decisions about their needs with the help of XAI. It can assist you in determining pricing for various customers or in introducing new plans for specific user categories.

Healthcare

Artificial intelligence is applied in a variety of medical sectors. Healthcare is a discipline that demands extreme precision, so AI can only be used in specific ways and in particular areas.

Improvement in Medicines: XAI has the potential to augment human capabilities for designing novel bioactive compounds with desired properties. It assists doctors and researchers in developing new drugs and advancing ongoing studies.

Prediction of medical problems: AI is widely used in healthcare to detect various health issues and suggest how to prevent them. XAI increases transparency and trust in those predictions, so doctors can make decisions based on more trustworthy information.

Explainable AI methods

Before we get into ways to interpret a model, let's take a quick look at the differences between white-box (glass-box) models and black-box models:

Comparison between black-box models and white-box models


Your machine learning model can be interpreted in a variety of ways. Let’s discuss a few Explainable AI methods:

Categorization in XAI | Image by author

Global Explainability

Global explainability helps identify the features that are most responsible for a model's output overall. It shows what role a particular feature plays in the model's final decisions or predictions.

It is also used to understand how a model is "learning", by tracking how much a particular feature contributes to the model's decision-making over time.

Non-data-science teams mainly use global explainability to determine which data variables are responsible for the decisions the model makes.
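
To make the idea concrete, here is a minimal sketch (independent of Censius) that derives a global feature ranking with scikit-learn's permutation importance. The dataset, model, and number of repeats are illustrative assumptions, not a prescribed setup.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative data and model; any fitted estimator works the same way.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how much
# the held-out score drops -- a simple global view of which features the
# model relies on most across the whole dataset.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.4f}")
```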

Global vs. Local Explainability | Source

Let’s discuss a few Global Explainability methods:

PDP (Partial Dependence Plot)

A PDP gives a global visual depiction of how one or two features affect the model's predicted outcome, averaging out the effect of the remaining features. It shows whether the relationship between the target and the chosen feature is linear, monotonic, or more complex.

PDP example | Image Source
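
As a rough sketch of how a PDP is produced in practice, the snippet below uses scikit-learn's PartialDependenceDisplay on an illustrative regression setup; the dataset and the feature names "bmi" and "bp" are assumptions for the example.

```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

# Illustrative data and model for the example.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Partial dependence: vary "bmi" and "bp" over a grid and average the
# model's predictions over all other features at each grid point.
PartialDependenceDisplay.from_estimator(model, X, features=["bmi", "bp"])
plt.show()
```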

ICE (Individual Conditional Expectation)

ICE is the local counterpart of PDP: the PDP is simply the average of the ICE curves. ICE plots are often easier to read because they reveal heterogeneous relationships that the averaged PDP can hide. While a PDP can show the joint effect of two features, ICE explains only one feature at a time.

ICE generates one curve per data instance, showing how that instance's predicted outcome changes as the chosen feature varies while the other feature values are held constant.

ICE Example | Image Source
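
Continuing the same illustrative setup used for the PDP sketch above, ICE curves come from the same scikit-learn display object with a different kind argument; kind="both" overlays the per-instance curves with their average (the PDP).

```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

# Same illustrative data and model as in the PDP sketch.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# One thin ICE curve per instance plus their average (the PDP) as a thick
# line, for a single feature at a time.
PartialDependenceDisplay.from_estimator(model, X, features=["bmi"], kind="both")
plt.show()
```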

Beyond these methods, you can also employ platforms such as the Censius AI Observability Platform. The Censius Platform hides the complexity of these techniques while interpreting global explainability: it uses SHAP values to interpret the impact of features on global predictions and instantly displays the top features, with no algorithms to configure or code. These features can be further analyzed for changes in distribution and outlier values. Finally, there is an option to detect causal relationships between data drift and global performance at the end of the Explain tab on Censius.

The top section of the explainability screen on Censius

These features make it possible to instantly understand the root cause behind the red flags detected by Censius monitors, in a no-code environment. The end result is faster recovery from model issues and minimal impact on end users.

Cohort Explainability

This method is mostly employed during the model development phase, notably during model validation. This critical phase determines how well the model generalizes before deployment, allowing you to test its correctness on new or unseen data.

If accuracy drops for a particular subset of the data, cohort explainability helps organizations identify the particular variables that may be driving that decline. Model accuracy can be measured and compared across different cohorts, or subsets of data, using this method.

Cohort explainability may be extremely helpful to a model owner in explaining why a model isn't doing as well for a portion of its inputs. It can assist you in detecting bias in your model and identifying areas where your datasets may need to be strengthened.
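
A minimal sketch of the idea behind cohort analysis: slice the evaluation data into cohorts and compare a metric per slice. The cohort column ("age_group"), label column, and prediction column are hypothetical placeholders you would replace with your own.

```python
import pandas as pd
from sklearn.metrics import accuracy_score

def accuracy_by_cohort(df: pd.DataFrame, cohort_col: str,
                       label_col: str, pred_col: str) -> pd.Series:
    """Compute accuracy separately for each cohort (subset) of the data."""
    return df.groupby(cohort_col).apply(
        lambda g: accuracy_score(g[label_col], g[pred_col])
    )

# Hypothetical usage: df holds ground-truth labels, model predictions, and a
# cohort column such as "age_group"; a cohort scoring far below the rest is
# the slice to investigate further with feature-level explanations.
# print(accuracy_by_cohort(df, "age_group", "label", "prediction"))
```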

Local Explainability

Local explainability explains how a machine learning model makes individual predictions. It is useful for determining which individual features influenced a specific decision. For models used in regulated industries, local explainability is critical: organizations may be audited or required to justify why the model made a particular business decision. Local explainability answers questions like:

  • Why did the model predict this particular outcome?
  • What was the impact of this particular feature value on the prediction?
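
To illustrate the concept, here is a minimal sketch using a transparent linear model, where each feature's contribution to a single prediction can be read directly from the coefficients; the dataset and model are illustrative assumptions. Black-box models need dedicated methods such as LIME and SHAP, discussed next.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Illustrative data and an inherently interpretable model.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
scaler = StandardScaler().fit(X)
model = LogisticRegression(max_iter=5000).fit(scaler.transform(X), y)

# Local explanation for one prediction: each feature's contribution to the
# log-odds is its standardized value times its learned coefficient.
i = 0  # the single prediction we want to explain
contributions = scaler.transform(X)[i] * model.coef_[0]
for j in np.argsort(np.abs(contributions))[::-1][:5]:
    print(f"{X.columns[j]}: {contributions[j]:+.3f}")
```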

Let’s discuss a few Local explainability methods:

 

LIME (Local Interpretable Model-Agnostic Explanations)

LIME is a local interpretation approach that uses local surrogate models to approximate the predictions of the underlying black-box model. It creates a fresh dataset of perturbed samples around the data point of interest and trains an interpretable surrogate model on it; how that dataset is generated depends on the type of data.

  • Works on any black-box model.
  • Does not need access to the model's internals.
  • Works with a variety of data types.
  • Explanations can be checked against domain knowledge to build confidence.
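
Below is a minimal usage sketch with the lime package on tabular data; the dataset and black-box model are illustrative, and you would substitute your own training data and predict function.

```python
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Illustrative data and black-box model.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# LIME perturbs the chosen row, queries the black-box model on the perturbed
# samples, and fits a weighted linear surrogate model around that point.
explainer = LimeTabularExplainer(
    X_train,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)
explanation = explainer.explain_instance(X_test[0], model.predict_proba, num_features=5)
print(explanation.as_list())  # top features and their local weights
```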

Advantages

LIME is a suitable choice for examining individual predictions, and it is one of the few interpretation approaches that can handle tabular, text, and image data.

Disadvantages

The explanations can be unstable: two data points that are very close to each other may receive significantly different explanations.

Read more about LIME

SHAP (SHapley Additive exPlanations)

SHAP is a game-theoretic approach to explaining the output of any machine learning model. It fairly distributes a prediction's value across the individual input features, based on Shapley values from cooperative game theory.

  • Provides highly reliable attributions.
  • One of the most popular explainability methods.
  • Combines local explanations with efficient, principled credit allocation.
  • Feature contributions add up to the difference between the prediction and the average prediction.
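
A minimal sketch with the shap package for a tree-based model; the dataset and regressor are illustrative assumptions, and other model types would typically go through shap.Explainer or KernelExplainer instead.

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor

# Illustrative data and tree-based model.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles;
# each value is one feature's contribution to one prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # one row of contributions per instance

# Local view: contributions for a single prediction (they sum to the
# prediction minus the expected value).
print(dict(zip(X.columns, shap_values[0].round(3))))

# Global view: aggregate the same values across the whole dataset.
shap.summary_plot(shap_values, X)
```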

Advantages

The SHAP approach rests on a solid theoretical foundation (Shapley values), which makes the resulting explanations easy to justify. It provides consistent and reliable results.

Disadvantages

The downside of SHAP is that it can take a long time to compute, because the model must be evaluated on many variants of the instance of interest in which subsets of feature values are replaced by sampled values. That sampling can also produce unrealistic data instances when features are correlated, which can only be avoided by generating replacement instances that resemble genuine data.

Read more about SHAP

Dealing with Explainability using the Censius AI Observability Platform

AI explainability tools like the Censius AI Observability Platform can help you uncover blind spots in your model's decision-making so that you can retrain and repair your models. With Censius, users can apply any of the explainability types mentioned above (global, cohort, or local) to understand and explain model predictions.

For example, a sensitive data segment such as “credit transactions after midnight” can be closely monitored for anomalies, bias, drift, or performance dips. Based on the issues flagged by the monitors, Censius Explainability can instantly point out the most impactful features in that cohort, track and uncover outliers, and find causal relationships between data drift and model performance. Combining all the results from the above analysis, it is possible to quickly identify the root cause of model issues in the cohort and resolve them at the earliest.

With Censius, you can:

  • Discover the "why" underlying model decisions
  • Enhance the performance of your predictions
  • Detect unwanted bias and fix models
  • Ensure models stay compliant

This helps organizations maintain control over their models' decision-making by letting them examine model outputs that exceed a predetermined threshold or baseline.

See how Censius can help you achieve AI explainability: Schedule a customized demo 

Future of Explainable AI

Explainable AI Trend | Image Source

Explainable AI (XAI) enables companies to be transparent in their deployment of AI technologies, increasing consumer trust and acceptance of AI in general. XAI brings together developers and business executives to improve the efficiency of applications.

Insights from XAI can also be used to enhance the dataset, improving the model's accuracy. They make it easier to determine whether the model has any bias and, if it does, to analyze and fix it quickly.

“The rise and advancements in Explainable AI technology will be the next frontier of AI. When used in a range of new sectors, they will become more agile, adaptable, and intelligent." - CEO, Beyond AI.

Summing it up

As confidence grows, explainable AI will aid in increasing model adoption, enabling models to have a greater influence on research and decision-making. Explainability must be designed from the beginning and integrated throughout the full ML lifecycle; it cannot be an afterthought. AI explainability simplifies the interpretation of complicated models while preserving transparency and efficiency.

Learn how Censius helps you achieve AI explainability.
