AI Observability 101: A Complete Guide You Are Looking For

An in-depth guide that simplifies AI observability concepts with many examples, helping you understand AI observability beyond traditional ML monitoring.

By 
Mugdha Somani

Despite great excitement around AI, its application in industry is constrained by several factors, and failures have to be expected. Whether it's Microsoft's Tay, Amazon's recruiting tool, or a facial recognition system defeated by 3D-printed masks, AI system failures are not new.

Given the high stakes for both businesses and society, it's crucial to have a guiding system that helps users understand AI, alerts them to possible mishaps, and helps fix AI systems when they break. The need for such a guiding system led to a new term in the AI landscape - AI observability.

We have compiled this comprehensive guide to help you understand the various concepts around AI observability. 

Let’s begin.

What is AI Observability?

AI observability brings a modern, holistic approach to deriving insights into an ML model's data, behavior, and performance across its lifecycle.

Observability brings a proactive approach to detecting ML pipeline issues before they cause failures. It fosters people's trust in ML systems, since they can better understand how a particular prediction was made, and helps keep model predictions consistent with human expectations.

AI observability captures insights into an ML model's performance across its lifecycle and empowers users to find and analyze the root cause behind issues, so they can build performant and responsible models. Such deep analysis of model behavior helps you detect what went wrong, the severity of the issue, its implications, and the best strategy to overcome it, enabling the team to boost model performance through timely iterations.

How is AI Observability Achieved in Practice?  

You can bring AI observability in-house by building your own solutions or assembling MLOps components. However, this approach is challenged by maintenance overheads, slow system adoption, and talent acquisition hurdles. Complete AI observability platforms help beat these challenges with key benefits that include:

  • Ease of tool adoption
  • Faster, less effortful processes
  • A sustainable partnership with the vendor
  • Reliable support from experts

Advanced AI observability platforms facilitate automated monitoring of ML pipelines, data, and models. These platforms help identify and fix ML pipeline issues quickly. The platforms also simplify monitoring thousands of deployed ML models simultaneously. 


Observability vs. Monitoring - A Quick Comparison

ML monitoring and AI observability differ in scope, although they share the same intention. Let’s see how.

ChristopherGS defines AI observability as a superset of monitoring and testing in his ebook ‘Monitoring Machine Learning Models in Production’. It empowers teams with root cause analysis capabilities to fix issues faster. Observability depicts a bigger picture with testing, validation, explainability, and preparedness for unpredictable failure modes.

The following comparative points might help you understand the exact difference between monitoring and observability.

Image: ML model monitoring vs. AI observability

What Could Go Wrong with Models?

You built your model, tested it, and deployed it to production. But deployment is not the last step in ML development.

You have to keep a close eye on your ML model's performance: models degrade over time, and if you overlook the degradation, your ML deployments will at some point start adding negative business value.

Let's check what can happen with your deployed models and what needs your attention to avoid consequent losses.

Model drift

Model drift is the degradation of the model’s predictive performance due to changes in the digital environment and resulting changes in variables like data and concepts.

Model drift reduces the accuracy of predictions generated by the production model compared to its performance during training. It is classified into the following categories:

  • Concept drift: ML models are built to discover patterns in historical data and predict future behavior from those patterns. These patterns are concepts that characterize the relationships between variables. When those relationships change in the real world, the patterns the model learned become invalid and its predictive ability suffers; this is "concept drift."
  • Data drift: Data drift is observed when the statistical properties of the independent variables change, such as shifts in feature distributions. It can result from real-world changes, changes in the data collection system, or dynamic noise in the data (a minimal drift check is sketched below).
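As a simple illustration, a two-sample Kolmogorov-Smirnov test can flag data drift by comparing a feature's training distribution against recent production values. This is a minimal sketch using scipy; the synthetic data and the 0.05 threshold are assumptions for illustration, not universal settings.

```python
import numpy as np
from scipy.stats import ks_2samp

# Illustrative data: the production feature has drifted away from training
rng = np.random.default_rng(seed=42)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5000)
prod_feature = rng.normal(loc=0.5, scale=1.2, size=1000)

# Two-sample KS test: a small p-value means the two distributions differ
statistic, p_value = ks_2samp(train_feature, prod_feature)
if p_value < 0.05:  # the significance threshold is a project-specific choice
    print(f"Possible data drift (KS={statistic:.3f}, p={p_value:.4f})")
else:
    print("No significant drift detected")
```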

Model drift examples 

A model trained in 2020 to classify spam emails might degrade in 2022 due to drifts and upstream data changes.

Data drift might result from changes in the statistical properties of the input features, so the distributions the model sees in production no longer match the ones it learned from.

On the other hand, changes in consumer priorities (what we like today may not be our choice tomorrow) might cause concept drift.

Data quality issues 

Monitoring data quality is the first line of defense, since prediction quality depends heavily on the data provided to the model. Common data quality issues include the following.

Issues with preprocessing pipeline

If the streaming data pipeline contains multiple data sources, a change in one or more data sources can cause a breakage in the preprocessing pipeline. The following conditions can affect your model’s data quality.

  • Wrong source: A data pipeline points to an older version of tables due to an unresolved version conflict.
  • No access: Permissions are not updated to access the data's new location.
  • Database updates: The database is updated to a new version with specific changes. For example, spaces in column names are replaced with underscores, changing the naming convention.

Alteration in source data schema

Data quality issues can even arise from valid changes made to the data at the source. Although the preprocessing pipeline works fine after the changes, the model fails to produce correct responses.

Consider a schema change such as renaming an existing feature column and adding a new column to capture new data. These changes can break the model unless it is updated to map the relationship between the new column and the existing ones (a minimal schema check is sketched below).
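One inexpensive guardrail is to validate each incoming batch against the schema the model was trained on before scoring. Below is a minimal sketch with pandas; the column names and dtypes are hypothetical stand-ins for your real schema.

```python
import pandas as pd

# Hypothetical schema the model was trained on
EXPECTED_SCHEMA = {"customer_id": "int64", "annual_income": "float64", "loan_amount": "float64"}

def validate_batch(df: pd.DataFrame) -> list[str]:
    """Return a list of schema problems in an incoming batch (empty list means OK)."""
    problems = []
    missing = set(EXPECTED_SCHEMA) - set(df.columns)
    extra = set(df.columns) - set(EXPECTED_SCHEMA)
    if missing:
        problems.append(f"missing columns: {sorted(missing)}")
    if extra:
        problems.append(f"unexpected new columns: {sorted(extra)}")
    for col, dtype in EXPECTED_SCHEMA.items():
        if col in df.columns and str(df[col].dtype) != dtype:
            problems.append(f"{col}: expected {dtype}, got {df[col].dtype}")
    return problems

# A renamed column slips in upstream: the check flags it before the model sees it
batch = pd.DataFrame({"customer_id": [1, 2], "annual_income": [52000.0, 67000.0],
                      "loan amount": [10000.0, 25000.0]})
print(validate_batch(batch))
```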

Data loss at the source

Data loss or corruption at the source poses a big threat to the data integrity of your ML systems. It can be due to hardware malfunction, broken sensors, or similar reasons.

Sometimes the problem goes unnoticed, and the ML system keeps accepting data from a corrupted source. For example, a fractured sensor may consistently return its last reading to the ML system. Such silent failure modes pose a high risk to your ML models (a simple stuck-sensor check is sketched below).
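A cheap heuristic for the stuck-sensor case is to alert when a source repeats the exact same reading for an implausibly long window. This is a minimal sketch; the window size of 50 is an arbitrary assumption that depends on how variable the signal normally is.

```python
def is_stuck(readings: list[float], window: int = 50) -> bool:
    """Flag a sensor that has returned the identical value for the last `window` readings."""
    if len(readings) < window:
        return False
    recent = readings[-window:]
    return len(set(recent)) == 1  # no variation at all in the window

# A healthy signal varies; a fractured sensor repeats its last reading
healthy = [20.1, 20.3, 19.8] * 20
stuck = [20.1, 20.3] + [19.8] * 50
print(is_stuck(healthy), is_stuck(stuck))  # False True
```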


Outliers

An outlier is an instance in a dataset that lies far from the rest of the data points, i.e., a value vastly larger or smaller than the remaining values. Detecting and fixing outliers is crucial, as they can adversely affect the statistical analysis and training process of ML models.

Monitoring outliers is business-critical as well as challenging. If a data point is recognized as an outlier, you can anticipate that the model's accuracy might drop on it. Timely and accurate outlier detection also helps you plan human intervention to re-evaluate model predictions, and can help avoid losses such as those from deliberate manipulation of ML systems.

In data science, all outliers are classified into one of the following three types. 

  • Global outliers or point anomalies: A data point is classified as a global outlier if its value is far outside the entirety of the remaining data set. 
  • Contextual or conditional outlier: If an individual instance deviates in a specific context or condition (but not otherwise), it is classified as a contextual outlier. These are difficult to spot and require some additional background information. 
  • Collective outliers: If a collection or subset of data points is totally different compared to the complete data set, it is recognized as a collective outlier.
Image: Data drift as compared to outliers

Check this simple example to understand how an outlier affects summary statistics.

One dataset contains an outlier value of 81, and the other does not. Notice the vast difference in standard deviation and other statistical parameters.
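A quick sketch of that effect; the non-outlier values below are made up for illustration, with 81 as the single outlier.

```python
import numpy as np

# Hypothetical dataset: values cluster around 10, plus one outlier of 81
without_outlier = np.array([8, 9, 10, 10, 11, 12, 9, 10, 11, 10])
with_outlier = np.append(without_outlier, 81)

for name, data in [("without outlier", without_outlier), ("with outlier", with_outlier)]:
    print(f"{name}: mean={data.mean():.2f}, std={data.std():.2f}")
# The single value of 81 dramatically inflates both the mean and the standard deviation
```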



Detecting outliers

Below are some of the techniques used to detect outliers (the first two are sketched in code after this list):

  • Z-scores, boxplots, and the interquartile range (IQR), useful for larger datasets
  • Statistical distance tests on events to detect out-of-distribution issues
  • Analysis of model features to understand which features your model is sensitive to
  • Distribution tests comparing feature distributions in training and production
  • Unsupervised learning methods that categorize model inputs and predictions, helping discover cohorts of anomalous examples and predictions
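Here is a minimal sketch of the z-score and IQR rules; the synthetic data, the 3-sigma threshold, and the 1.5 multiplier are conventional but adjustable assumptions.

```python
import numpy as np

def zscore_outliers(data: np.ndarray, threshold: float = 3.0) -> np.ndarray:
    """Flag points more than `threshold` standard deviations from the mean."""
    z = (data - data.mean()) / data.std()
    return data[np.abs(z) > threshold]

def iqr_outliers(data: np.ndarray, k: float = 1.5) -> np.ndarray:
    """Flag points outside [Q1 - k*IQR, Q3 + k*IQR] (the boxplot rule)."""
    q1, q3 = np.percentile(data, [25, 75])
    iqr = q3 - q1
    return data[(data < q1 - k * iqr) | (data > q3 + k * iqr)]

rng = np.random.default_rng(0)
data = np.append(rng.normal(10, 2, size=1000), [81])  # one injected outlier
print(zscore_outliers(data))  # the 81 stands out under both rules
print(iqr_outliers(data))
```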


Training-serving skew

Training-serving skew occurs when the ML model’s performance during training considerably differs from performance during serving. It occurs due to a discrepancy between data handling in training and serving pipelines, or a change in the data between training and serving.

Typical reasons for data skew include the following (a simple parity check is sketched after this list):

  • Incorrect design of the training data
  • A feature that is unavailable at serving time, was removed, or was re-created by combining other features
  • A data dependency on an external system that changes how it produces data without communicating the change in advance
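One way to catch pipeline-induced skew is a parity test: run the same raw records through the training-time and serving-time feature pipelines and compare the outputs. This is a minimal sketch; both transform functions are hypothetical stand-ins for your real pipelines.

```python
import numpy as np

# Hypothetical stand-ins for the two feature pipelines
def training_features(raw: dict) -> np.ndarray:
    return np.array([raw["income"] / 1000.0, raw["age"]])

def serving_features(raw: dict) -> np.ndarray:
    return np.array([raw["income"] / 1000.0, raw["age"]])  # should match training exactly

def check_parity(raw_records: list[dict], atol: float = 1e-8) -> bool:
    """Return True if both pipelines produce identical features for every record."""
    return all(
        np.allclose(training_features(r), serving_features(r), atol=atol)
        for r in raw_records
    )

records = [{"income": 52000, "age": 31}, {"income": 67000, "age": 45}]
print(check_parity(records))  # False would indicate training-serving skew
```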


How does AI Observability Solve Model Performance Issues?

The accuracy of deployed ML models depends on several components like data, features, and the model itself. For the ML model to perform, these components must be well organized and work in the expected sequence. The performance of ML models depends on underlying data, sudden concept changes, procedures used to handle pipelines, and several other factors. 

Consider an example:

A lending institution deployed an ML model for credit risk approval. It worked well for a few months. But after assessing loan rejection complaints from genuine clients, the team noticed that the model was disapproving loans for applicants in the 50K-70K USD income range. Segmented analysis confirmed that the model was penalizing this salary group.

When the operations team checked with the data scientists, they looked into the training data and discovered that it contained few examples in this income range.

In other words, the model was trained on data that failed to adequately represent the 50K-70K segment. The team had to append new data and retrain the model for accurate predictions across all income ranges (a minimal segmented evaluation is sketched below).
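In a case like this, a simple segmented evaluation surfaces the gap before customers complain. Below is a minimal sketch with pandas; the column names, income bands, and data are all hypothetical.

```python
import pandas as pd

# Hypothetical evaluation data: model predictions joined with ground-truth outcomes
df = pd.DataFrame({
    "income": [45000, 55000, 62000, 68000, 75000, 90000, 52000, 80000],
    "approved_pred": [1, 0, 0, 0, 1, 1, 0, 1],
    "approved_true": [1, 1, 1, 0, 1, 1, 1, 1],
})

# Bucket applicants by income and compute per-cohort accuracy
df["income_band"] = pd.cut(df["income"], bins=[0, 50_000, 70_000, 200_000],
                           labels=["<50K", "50K-70K", ">70K"])
accuracy = (df["approved_pred"] == df["approved_true"]).groupby(df["income_band"]).mean()
print(accuracy)  # a dip in the 50K-70K band points at the underrepresented segment
```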

This is where AI observability steps in: it helps you understand such issues and plan the best remedial tweaks. In the above example, observability could have surfaced the training-data gap before it became a big threat. Timely drift detection, root cause analysis of specific cohorts, and continuous monitoring of model performance help solve most issues in time.

With an AI observability tool, you can make more confident decisions by combining multiple outcomes. For example, AI observability can help you dig deep into drift issues and understand what is causing them, so you can identify problems at their earliest stages and tackle them before they turn into a major mishap.

A mere monitoring tool, on the other hand, will only notify you of the drift. You still have to analyze the issue on your own to identify and rectify the core problem, which in the real world might take days. Leaving your users hanging for that long hampers your credibility and your relationships with customers.

What is AI Explainability?

AI explainability brings a set of processes that allows humans to understand, analyze, comprehend, and trust the results produced by ML algorithms. 

Explainability helps comprehend models' predictions, expected impact, and potential biases that might affect your model’s performance. It fosters people’s trust in AI systems.

As your ML system scales, it becomes challenging to introspect and understand why a model delivered a specific prediction. Thankfully, explainability helps here.

Machine learning model behavior can be explained using the following approaches.

Global explainability 

It helps identify the features that are most responsible for the model's output, i.e., what part a particular feature plays in the model's final decision or prediction. It is also used to understand how a model is "learning," by tracking how the influence of a particular feature on the model's decisions changes over time.

Cohort explainability

It helps you break down your model's decision-making over cohorts or segments of your data. It enables you to analyze anomalous behavior towards a section of your data and discover biases in your models, and it helps you identify performance gaps between well-performing and poorly performing cohorts.

To ensure consistency in your model performance, you will have to maintain coherence in its decision-making throughout different cohorts of your data. This helps in better generalization of your models and avoids errors like overfitting, where your model relies heavily on information that is too specific to the data it was trained on.

Local explainability 

It is more individualistic as compared to global and cohort explainability. It helps you single out a particular decision of your model and analyze what parameters led to that decision.

Techniques and tools for explainability 

ELI5

A Python package that helps debug and visualize classifiers and explain their predictions. It supports many ML frameworks, including scikit-learn, Keras, XGBoost, and LightGBM.

SHAP

SHapley Additive exPlanations (SHAP) is a game-theoretic approach to explaining the output of any ML model. It applies classical Shapley values from game theory, and their extensions, to ensure optimal credit allocation with local explanations.
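A minimal sketch of SHAP on a tree-based model follows; it assumes the shap and xgboost packages are installed, and the dataset and model are placeholders, while the shap calls shown are the library's standard API.

```python
import shap
import xgboost
from sklearn.datasets import fetch_california_housing

# Train a small illustrative model (any tree-based model works with TreeExplainer)
X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = xgboost.XGBRegressor(n_estimators=100).fit(X, y)

# Compute Shapley values: one additive contribution per feature, per prediction
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global view: rank features by their overall impact on the model's output
shap.summary_plot(shap_values, X)

# Local view: the per-feature contributions that make up a single prediction
print(dict(zip(X.columns, shap_values[0])))
```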

LIME

Local Interpretable Model-agnostic Explanations (LIME) is a visualization technique that explains individual predictions, even for irregular inputs. It approximates the black-box ML model with a local, interpretable model around each prediction it explains.
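A minimal sketch of LIME on a tabular classifier; the model and data are placeholders, while the lime calls are the package's documented API.

```python
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Train a small illustrative classifier
data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# Build a tabular explainer from the training data
explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)

# Approximate the black box locally around one instance and explain its prediction
explanation = explainer.explain_instance(data.data[0], model.predict_proba, num_features=4)
print(explanation.as_list())  # (feature condition, weight) pairs for this prediction
```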

Class Activation Maps (CAMs) 

Image-specific explainability techniques that highlight the regions of an input image most responsible for a model's prediction.

Skater and AIX360

Open-source toolkits for explainable AI (XAI).

AI Observability for Every Role

AI observability is the essence of modern, AI-enabled business success. It benefits all associated stakeholders with actionable insights into ML systems. Here is how it helps the different roles on a data science team.

Data scientists

  • Track the performance of all deployed models from a central location
  • Get deeper insights into model behavior
  • Make better-informed model retraining decisions
  • Automate experiment tracking, freeing data scientists to focus on core tasks

Data engineers 

  • Track the performance of all deployed models
  • Experiment with datasets
  • Address data pipeline issues faster
  • Monitor entire data pipelines

Model risk compliance team

  • Mitigate model failure risks faster 
  • Meet legal, ethical, and social compliance needs for ML projects
  • Optimize model retraining and other corrective actions

DevOps

  • Build seamless deployment pipelines for performant ML models
  • Align IT monitoring practices with the MLOps strategy
  • Drive deeper insights into the entire ML pipeline and model behavior

Software engineers 

  • Automate pipeline monitoring
  • Use prediction monitoring to keep teams productive
  • Work in parallel with other teams

Are You on the AI Observability Bandwagon?

The next challenge is getting AI observability ingrained in your model development. You have two options here: buy or build an observability solution.

Having built an AI observability platform that addresses all the challenges mentioned above, we recommend buying an observability solution, for the following reasons:

  • Automated monitoring of the entire ML pipeline
  • Proactive identification of model flaws
  • Easy root cause identification
  • Clear insight into model decisions
  • Faster trust-building with end users
  • Model health tracking with a fully customizable dashboard
  • Plug-and-play support, easy configuration, and integrations

Censius is one such complete AI observability platform: it helps you track your model performance and ML pipelines, and detect issues before they cause a mess. You can set up monitors for drift detection, training-serving skew, model activity, performance metrics, and more.

Our 5-stage AI observability framework brings end-to-end observability to your ML applications. It comprises the following steps:

  • Training: Log your training dataset as a baseline for monitoring
  • Validation: Log your validation dataset and check for different performance concerns
  • Deployment: Release the candidate model and use the configurable Censius dashboard to assess results
  • Observe: Let monitors alert users in real time about possible issues
  • Rectify: Dive deep into problematic segments and use explanations to fix issues

Censius helps mitigate model failure risks, optimize performance, and troubleshoot issues faster.

Feel free to sign up for a demo to experience best-in-class ML model observability.
