How To Validate Data For ML Models In Production

This blog will familiarize you with two critical parts of the machine learning pipeline -- “model validation” and “data validation” -- and how to perform them in your ML journey.

By Neetika Khandelwal

As a machine learning practitioner, handling the various components of the machine learning pipeline becomes routine. To get reliable results from the final model, the pipeline steps must stay in sync with one another, i.e., the output of one component should not adversely affect the next. This is achieved through regular checks, validation, and maintenance of your model.

Our primary focus in this blog will be on the model validation and data validation steps. No matter how powerful a machine learning or deep learning model is, it can never do what we want it to do unless its data is validated correctly. Model validation, on the other hand, lets you confirm that your trained model is giving you the expected output on the test data. Both components are vital in determining your model's performance before and after deployment and, for the same reason, cannot be neglected.

What is Model Validation?

Difference between model development, model validation and model monitoring

A common blunder when assessing a model's accuracy is to produce predictions on the training data and then compare those predictions to that same data's target values. The model's predictions should, in theory, be close to reality on data the model has not seen. Model validation is the process used to get to this position: once model training is completed, the trained model is evaluated with a testing data set. The testing data may come from a held-out portion of the original dataset or from a separate source entirely; in either case, it is data the model has not trained on, which is what lets validation measure a trained model's ability to generalize.

Model validation is a complex process that is difficult to categorize or classify in a way that applies to all models. It is part of the machine learning governance system and is essential for ensuring that models are sound. It also detects and analyzes potential constraints and assumptions and their consequences.

Basic steps of model validation include (a minimal sketch follows this list):

  • checking if the model inputs are clean
  • ensuring that the dataset on which a model is trained is representative of the data on which the model is going to be executed 
  • analyzing the model performance
  • analyzing the model's stability 
  • analyzing the robustness of the calibration procedure
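
Below is a minimal sketch of what the first few checks might look like in practice, assuming a trained scikit-learn classifier and pandas DataFrames for the training and held-out data; the `label` column name and the frames themselves are hypothetical.

```python
import pandas as pd
from sklearn.metrics import accuracy_score

def basic_validation_checks(model, train_df, test_df, target="label"):
    X_train = train_df.drop(columns=[target])
    X_test = test_df.drop(columns=[target])

    # 1. Model inputs are clean: no missing values, no unexpected columns.
    assert not X_test.isnull().any().any(), "test inputs contain missing values"
    assert list(X_test.columns) == list(X_train.columns), "column mismatch"

    # 2. Crude representativeness check: compare per-feature means.
    mean_shift = (X_test.mean() - X_train.mean()).abs()
    print("Per-feature mean shift:\n", mean_shift)

    # 3. Performance on held-out data.
    accuracy = accuracy_score(test_df[target], model.predict(X_test))
    print(f"Held-out accuracy: {accuracy:.3f}")
    return accuracy
```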

Importance of Model Validation

Model validation ensures the effectiveness and correctness of a trained model. If the process is skipped, your model may perform poorly and cause significant losses, so it is not wise to rely on its predictions. In sensitive sectors such as healthcare, self-driving automobiles, and stock analysis and prediction, an incorrect judgment by the machine in a real-life forecast can have severe, even fatal, consequences.

There are several model validation procedures, but the two most common are in-time and out-of-time validation. For in-time validation, a portion of the development dataset is held aside, and the model is tested on data from the same time segment as the one on which it was constructed. For out-of-time validation, a dataset is collected from a different time segment, and the model is tested on this chunk to judge how it reacts to unseen data.
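
As an illustration, here is a minimal sketch of the two splits, assuming a pandas DataFrame `df` with a `date` column; the cutoff date and column names are hypothetical.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Development window vs. a later, unseen time segment (hypothetical cutoff).
development = df[df["date"] < "2023-01-01"]
out_of_time = df[df["date"] >= "2023-01-01"]

# In-time validation: hold out a slice of the development window itself.
train_df, in_time_holdout = train_test_split(
    development, test_size=0.2, random_state=42
)

# The model is trained on train_df, then scored separately on
# in_time_holdout (in-time) and on out_of_time (out-of-time).
```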

As a result, if a model has been thoroughly validated, the developers can be confident in its performance. A model that has gone through the validation procedure is deemed fit to act in future scenarios. Validation also gives a better ROI by preventing poor future performance, which is why it should be valued considerably more than it usually is.

Build a Trustworthy AI Model using Data Validation

Data validation entails evaluating the accuracy and quality of the source data. It ensures that rare anomalies, or anomalies that appear only in incremental data, are not overlooked. The new incoming data in the serving layer can change for many reasons, such as code modifications that introduce problems in the serving data ingestion component.

The following points demonstrate how data validation helps in building an AI model:

  • Data mistakes are easy to spot in the early stages
  • With better data, the quality of your AI model improves
  • Because the datasets obtained and used in processing are clean and accurate, data validation promotes cost-effectiveness by saving engineers time and money when debugging errors
  • In model development, there is a shift toward data-centric approaches
  • It guarantees that data gathered from various sources, whether structured or unstructured, meets the business requirement
  • Improved data accuracy can lead to increased profitability in the long term
  • It increases AI model integration and interoperability with most procedures

Benefits of Model Validation

If the model is validated correctly, it will be able to perform well on unseen data, which is the ultimate goal of any machine learning model. This process helps developers be confident in the performance of their model.

Additionally, stress testing procedures are included in the model validation process. This helps ensure that the version of a model sent into production has already been thoroughly tested for stress scenarios and will not fail when disaster strikes.

So, model validation determines the trustworthiness of the learned model. The resultant model after the validation process is apt to act robustly in future circumstances. Model validation also reduces costs, uncovers more errors, improves scalability and flexibility, and improves model quality.

How do you Validate Data for Machine Learning?

  • To analyze how your model will behave in the real world, split your dataset into training, validation, and testing data. The training set is used for model training, the validation set for hyperparameter tuning and model selection, and the testing set for evaluation on unseen data. Creating multiple samples and splits of the dataset lets you measure the actual model performance. The number of samples and the model used determine the split ratio (a combined sketch of the split, de-duplication, and drift checks from this list follows below).
  • Duplicate samples can emerge in both training and testing sets for various reasons. Therefore it's critical to spot and eliminate them. Before splitting the data, it is best to remove duplicates, check for partial copies, sort by different columns, and review the resulting data.
  • If your model was trained on a clean dataset, it's critical to create a dataset that closely resembles real-world data and use it to evaluate your model. This is especially significant when the dataset and the production data are not from the same source. To perform this validation, compare the structure of a real-world data point to the structure of your training data.
  • One of the most common causes of model accuracy degradation over time is data drift. It essentially means that the data distribution changes over time and differs from the training data distribution. So, to determine drift, you can train your model on past data and then evaluate it on current data; if the results deviate significantly from the historical data, you're dealing with data drift.
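
Here is a combined sketch of the checks above, assuming pandas DataFrames `df` (historical, labelled data) and `current_df` (a recent production batch) with a numeric `feature_1` column; all names and thresholds are hypothetical.

```python
import pandas as pd
from scipy.stats import ks_2samp
from sklearn.model_selection import train_test_split

# Remove exact duplicate rows before splitting.
df = df.drop_duplicates()

# 60 / 20 / 20 split into training, validation, and test sets.
train_df, rest = train_test_split(df, test_size=0.4, random_state=42)
val_df, test_df = train_test_split(rest, test_size=0.5, random_state=42)

# Simple drift check: compare one feature's training distribution with the
# same feature in the current production batch (two-sample KS test).
statistic, p_value = ks_2samp(train_df["feature_1"], current_df["feature_1"])
if p_value < 0.05:
    print(f"Possible data drift in feature_1 (p-value = {p_value:.4f})")
```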

Approaches to Validate a Machine Learning Model in Production 

Poor data quality might cause system crashes or a slow degradation of model performance. It occurs as a result of changes in real-world data. Consequently, after the model is put into production, it is critical to retrain it after a certain amount of time.

The impact may be small in the early stages, but in later stages it can be far more severe and become difficult to debug and isolate.

Model validation should be performed after model testing and before deployment. If you make any modifications to your machine learning model after it has been deployed, you should re-validate it. Further, validation should be performed regularly after deployment, for example once a year, as part of the monitoring process.

Here is the list of the popular approaches to validate your machine learning model in production:

K-fold Cross-Validation Method

K-fold cross-validation method

One of the most extensively used methodologies among data scientists is K-fold cross-validation. This method helps detect overfitting, i.e., cases where the model picks up variations in the training data and learns them as concepts. Because every observation is used for both training and validation across the folds, the data is partitioned more effectively, and you'll be able to use your data to create a more generalized model.

The process includes only one parameter, k, which specifies the number of groups into which a given data sample should be divided. It's a popular strategy because it's straightforward to grasp and generally produces less biased results.

For your data sample, the k value must be carefully chosen. A k value that is incorrectly chosen may give an inaccurate impression of the model's skill. 
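
A minimal K-fold cross-validation sketch with scikit-learn; the iris dataset and logistic regression model are placeholders for your own data and estimator.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

# k = 5: the sample is divided into 5 groups, each used once for validation.
kfold = KFold(n_splits=5, shuffle=True, random_state=42)
scores = cross_val_score(model, X, y, cv=kfold)

print("Per-fold accuracy:", scores)
print("Mean accuracy:", scores.mean())
```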

Leave-one-out cross-validation method

LOOCV is a type of cross-validation in which a single observation is held out for validation while the model is trained on all of the remaining records; this is repeated so that every observation is held out exactly once. The model is evaluated on each held-out observation, and the mean of all the individual evaluations is then used to compute the final result. However, there are certain disadvantages to employing this method.

Using LOOCV can be computationally expensive, especially if the data set is enormous and the model takes a long time to learn. Another issue with LOOCV is that you provide the model with practically all of the training data and only a single observation to evaluate, leading to significant variance or overfitting.
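
A LOOCV sketch along the same lines; because the model is refit once per observation, this is only practical for small datasets.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

# One fit and one evaluation per observation in the dataset.
scores = cross_val_score(model, X, y, cv=LeaveOneOut())

print("Number of fits:", len(scores))
print("Mean accuracy:", scores.mean())
```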

Random subsampling validation method

Random subsampling validation method

Random subsampling, commonly known as Monte Carlo cross-validation, is a technique that divides the data into random subsets, with the user determining the size of the subsets. The training and testing sets are split multiple times, resulting in multiple data sets being picked at random, and this random division can be repeated as many times as desired.

The accuracy obtained from each partition is averaged, and the model's error rate is the average of each iteration's error rate. The advantage of the random subsampling method is that it can be repeated an arbitrary number of times.
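
A random-subsampling (Monte Carlo) sketch using scikit-learn's ShuffleSplit; the number of repetitions and the subset size are arbitrary choices here.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import ShuffleSplit, cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

# 10 independent random partitions, each holding out 25% of the data.
splitter = ShuffleSplit(n_splits=10, test_size=0.25, random_state=42)
scores = cross_val_score(model, X, y, cv=splitter)

print("Mean accuracy over random splits:", scores.mean())
```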

Time-series cross-validation

The time-series cross-validation method, also known as the forward-chaining approach or rolling cross-validation, divides data into train and validation according to time for time-series datasets. The procedure is as follows:

  • Start with a small subset of data for training
  • Predict for later data points
  • Verify the accuracy of the projected data points

The same forecasted data points are then incorporated into the next training dataset, and additional data points are predicted. In short, the validation data of one iteration becomes part of the training data of the next. For problems involving time series, the order of the data is critical.
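
A forward-chaining sketch using scikit-learn's TimeSeriesSplit on a synthetic, time-ordered array; it only illustrates how the training window grows while validation always lies in the future.

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

X = np.arange(20).reshape(-1, 1)  # 20 time-ordered observations
tscv = TimeSeriesSplit(n_splits=4)

for fold, (train_idx, val_idx) in enumerate(tscv.split(X)):
    print(f"Fold {fold}: train={train_idx.tolist()} validate={val_idx.tolist()}")
```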

Stratified k-fold cross-validation

Stratified k-fold cross-validation

Stratified k-fold cross-validation is often beneficial when the data is imbalanced and the dataset is on the small side. It is designed for classification problems where each fold should preserve the same class ratio as the entire dataset, rather than being split entirely at random. The dataset is divided into k groups or folds such that the proportion of each target class label is roughly the same in every fold. This eliminates the possibility of one class being overrepresented in the validation or training data.
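
A stratified k-fold sketch on a synthetic imbalanced dataset; each fold preserves the roughly 90/10 class ratio of the full dataset.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Synthetic imbalanced dataset: roughly 90% of samples in one class.
X, y = make_classification(n_samples=500, weights=[0.9, 0.1], random_state=42)
model = LogisticRegression(max_iter=1000)

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
scores = cross_val_score(model, X, y, cv=skf)

print("Per-fold accuracy:", scores)
```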

Data Validation Components 

The data validation stage has three main components:

  • Data analyzer - it computes statistics over the new batch of data.
  • Data validator - it checks the properties of the data against a schema.
  • Model unit tester - it checks for errors in the training code using synthetic data generated through the schema. 

Now let's discuss the data validation framework in some detail.

One goal in designing the data validator is to avoid an overfitted schema. When verifying a new batch of data, an overfitted schema is more likely to generate false alerts, which raises the cognitive load for on-call engineers, lowers their trust in the system, and may even lead them to turn off data validation entirely.

The data validator component tries to detect errors as early as possible in the pipeline to avoid training on bad data. To do this scalably and efficiently, it relies on the per-batch data statistics produced by the upstream data analyzer module. The data validator verifies each batch of data by comparing it to the schema; any discrepancy is flagged as an anomaly, and the on-call engineer is notified for further investigation.

Model unit testing differs from other types of testing in that it validates the training code's ability to handle the range of data it may encounter.

Google Research developed a similar method, adapted for machine learning using ideas from data management systems. This method first codifies the expected statistics of correct data and then performs data validation using these anticipated statistics and a user-defined validation schema. The framework allows the user to check a single batch of incremental data for anomalies, detect substantial changes between batches of incremental training data, and find assumptions in the training code that aren't reflected in the data.
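
One open-source implementation of this idea is TensorFlow Data Validation (TFDV). The sketch below shows the typical statistics-schema-anomalies workflow, assuming `train_df` and `new_batch_df` are pandas DataFrames; treat it as a sketch rather than a full production pipeline.

```python
import tensorflow_data_validation as tfdv

# Data analyzer: compute statistics over the training batch.
train_stats = tfdv.generate_statistics_from_dataframe(train_df)

# Codify the expected statistics as a schema (can be curated by hand).
schema = tfdv.infer_schema(statistics=train_stats)

# Data validator: check a new batch of serving data against the schema.
new_stats = tfdv.generate_statistics_from_dataframe(new_batch_df)
anomalies = tfdv.validate_statistics(statistics=new_stats, schema=schema)

# Any discrepancy is surfaced as an anomaly for the on-call to review.
tfdv.display_anomalies(anomalies)
```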

Importance of Model Monitoring 

Many challenges may arise when your machine learning model meets the real world, either immediately or after some time in production.

  • Many machine learning models are trained on clean, hand-crafted datasets. When these models are applied to real-world data, the mismatch causes them to perform poorly. Even a tiny mismatch in the format of the data provided to your machine learning model can significantly impact its performance.
  • Data in the actual world changes all the time, which might alter the distribution of data provided to your model and the accuracy of the target forecast. As a result, the data you used to train your model loses relevance over time. Your model may get stale, and performance may suffer as a result.
  • The data pipeline is generally complicated, and the data format may vary over time, with fields being renamed, categories being added or split, and so on. Any such adjustment can have a significant impact on your model's performance.
  • It's also possible that your model isn't getting the traffic you expect, or the model's latency is so high that the forecasts aren't being considered.

Companies regularly monitor their machine learning models to avoid such disastrous consequences.

Model monitoring allows you to keep track of changes in performance. It also aids in your understanding of how to successfully debug if something goes wrong. The most basic way to track the shift is to evaluate performance against real-world data regularly.
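
At its simplest, that regular evaluation can look like the sketch below, assuming labelled production data arrives periodically; the baseline accuracy and alert threshold are hypothetical.

```python
from sklearn.metrics import accuracy_score

BASELINE_ACCURACY = 0.90  # accuracy measured during model validation
ALERT_DROP = 0.05         # tolerated degradation before raising an alert

def check_recent_performance(model, recent_X, recent_y):
    """Score the model on a recent, labelled batch and alert on degradation."""
    accuracy = accuracy_score(recent_y, model.predict(recent_X))
    if accuracy < BASELINE_ACCURACY - ALERT_DROP:
        print(f"ALERT: accuracy dropped to {accuracy:.3f}")
    return accuracy
```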

Additionally, to make the distinction between model validation and model monitoring clear, here are some points on how they differ.

  • Model validation happens in sequence with model construction. If a model fails to function as expected during validation, it is returned to the development stage. This is a one-time procedure carried out right after the model is developed. Model monitoring, on the other hand, begins once a model has entered the production stage and is a never-ending process. Every model has a specific monitoring frequency, determined and analyzed to ensure that it is working as expected and that its results are reliable.
  • Another aspect is that the model validation stage emphasizes statistical measures that help you understand the model's performance and behavior, whereas model monitoring focuses on both statistical and business indicators to build confidence in the relevance and reliability of a given model.

Conclusion

In this blog, you came across several concepts related to the validation process in ML. You should now have a clear idea of how vital model validation and data validation are, and how they help you achieve a trustworthy AI model. It is critical to examine the metrics that indicate a model's effectiveness and the story behind them. AI practitioners need to ensure the quality of their models, which requires embedding AI system testing, validation, quality assurance, and compliance into the machine learning pipeline.
