How To Ensure Data Privacy And Compliance In Model Monitoring?

This article looks at how you can handle sensitive data and maintain data privacy.

By Harshil Patel

Machine learning might not be as magical as it seems. In fact, machine learning processes produce a lot of data that can reveal personal information about you. Machine learning also has limitations that make it vulnerable to errors and biased outcomes.

To overcome these challenges, machine learning can incorporate human judgment to improve the quality of its inferences. Still, there are tradeoffs with this approach that must be considered before taking action. Privacy is one such tradeoff: acquiring and storing people's personal data to improve the accuracy of an application's insights opens up a lot of ethical considerations. In this article, we will evaluate the use of machine learning in relation to its privacy aspects and how you can ensure data privacy in AI/ML.

Data Privacy and Machine Learning 

Many machine learning problems require access to private data, and the issue is that machine learning models tend to memorize their training data even when they are not overfitting.

Anonymization has been how we balance data utility with privacy protection for over 30 years. If your data is present, but I am unaware that it belongs to you, your privacy is protected. So why isn't anonymization enough? Whenever we perform statistical analysis, we tend to leak information about individual records, even when we claim to be learning only aggregate properties of the data set.

We often overlook the privacy aspect of AI and machine learning. Data privacy is a major concern when training and testing machine learning models, especially those that use sensitive data to learn and infer. 

Learn More: What is model monitoring

Ensuring Data Privacy

Machine learning models require massive datasets to 'learn' from to achieve reasonable accuracy levels. However, the information we provide them is highly sensitive and intimate. As a result, we must figure out how to harness the potential of AI while maintaining data privacy. 

Before getting into tools and techniques, let's discuss a few key components that you should keep in mind to preserve privacy. 

  • Training Data Privacy: The assurance that a malicious party will not reverse-engineer the training data
  • Input Privacy: The guarantee that no one else, including the developer, will be able to see the information given by a user
  • Output Privacy: The guarantee that a model's output is exclusively accessible to the user whose data is being inferred on
  • Model Privacy: The confidence that the model will not be stolen by a malicious individual or group

Learn more about each of these key pillars in detail.

Key components to ensure privacy | Image Source

Let’s see a few techniques and tools you can use to preserve data privacy while training machine learning models. 

Differential Privacy

Differential privacy analyzes the privacy guarantees of any mechanism that accesses data and produces some output. It is among the most effective privacy-preserving machine learning techniques.

To simplify, let's say we are training a model on a medical dataset to predict whether a particular patient has cancer. The trained model reports a confidence of 0.55 that a patient named John has cancer.

Now, let's say we added John's own record to the dataset and retrained the same model. If the model now reports a confidence of 0.57 (Case A), that doesn't necessarily reveal anything about John. However, if the model had jumped to a confidence of 0.80 (Case B) after we added John's data, the prediction clearly depends on John's record.

In Case B, the machine learning model's prediction has leaked information: whenever an outcome becomes much more likely or unlikely after adding or removing a single data record, the model reveals something about that record.

Despite its protective power, differential privacy is typically compatible with, and often even beneficial to, effective data analysis. It also protects against overfitting, extending its advantages beyond data security.

Differential privacy | Example explanation | Image by author

The main goal of differential privacy is to bound the privacy loss. DP gives a mathematically verifiable assurance of privacy protection against various privacy challenges.
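
To make this concrete, here is a minimal sketch of the classic Laplace mechanism applied to a counting query. The dataset, predicate, and epsilon value are hypothetical, and a production system would use a vetted DP library rather than hand-rolled noise:

```python
import numpy as np

def dp_count(data, predicate, epsilon):
    """Differentially private count of records matching a predicate.

    A counting query has sensitivity 1: adding or removing one person's
    record changes the true count by at most 1. Adding Laplace noise with
    scale 1/epsilon therefore satisfies epsilon-differential privacy.
    """
    true_count = sum(1 for record in data if predicate(record))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical medical dataset: roughly 1 in 7 patients has a diagnosis.
patients = [{"id": i, "has_cancer": i % 7 == 0} for i in range(1000)]

# The released count is close to the truth, but no single patient's
# presence or absence shifts it by a reliably detectable amount.
print(dp_count(patients, lambda p: p["has_cancer"], epsilon=0.5))
```

A smaller epsilon means more noise and stronger privacy; the privacy budget mentioned in the pros below is simply an accounting of how much epsilon has been spent across queries.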

Pros:

  • Data remains on a remote machine
  • Detailed budgeting for privacy

Cons:

  • The data is secure, but the model is at risk
  • What if we need to perform a computation on data from numerous sources?

Learn More about Differential Privacy

Remote Execution

Remote execution involves testing your work while offloading heavy processing to a server. It's very helpful for creating and testing analytics. In simple terms, it allows you to send commands (for example, R commands from ML Server) to a remote session running on another Machine Learning Server instance.

It's the ability to use PyTorch or Python-based processing on computers to which you don't have direct access. You must first import 'syft' and 'torch', then use the 'TorchHook', which extends Torch with privacy-preserving machine learning techniques. - reference
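
As an illustration, here is a minimal sketch using the legacy PySyft 0.2-style API (the hook and worker classes shown here have changed in later PySyft releases, so treat this as a sketch of the idea rather than current usage):

```python
import torch
import syft as sy

hook = sy.TorchHook(torch)                    # extend torch with remote-tensor support
worker = sy.VirtualWorker(hook, id="remote")  # simulated remote machine

x = torch.tensor([1, 2, 3, 4, 5])
x_ptr = x.send(worker)   # the data now lives on the worker; we hold a pointer
y_ptr = x_ptr + x_ptr    # this computation executes on the remote worker

print(y_ptr)             # a PointerTensor: a reference, not the raw values
print(y_ptr.get())       # .get() moves the result back, and is the risky part
```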

There are numerous approaches to facilitate remote execution:

  • From the command line, in console programs
  • From code, by calling the APIs

Pros:

  • Data remains on a remote machine.

Cons:

  • Without viewing the data, how can we perform effective data science?

Search and Example Data

This functionality lets you perform effective data science on data you don't see. Suppose you want to conduct some type of analysis. The user can first search for a dataset, and references to this remote data can be supplied together with metadata that specifies the schema, how the data was gathered, its distribution, and so on.
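
Continuing the legacy PySyft 0.2-style sketch from above (the tagging and search calls shown are from that older API and may differ in current releases), a data owner can tag and describe a dataset so that a data scientist can discover it without seeing the raw values:

```python
import torch
import syft as sy

hook = sy.TorchHook(torch)
hospital = sy.VirtualWorker(hook, id="hospital")

# The data owner tags and describes the dataset, then hosts it remotely.
data = (
    torch.tensor([[1.0, 2.0], [3.0, 4.0]])
    .tag("#radiology", "#train")
    .describe("Chest X-ray features; 2 samples, 2 features.")
)
data_ptr = data.send(hospital)

# The data scientist searches by tag and works with references plus
# metadata, such as the description, rather than the underlying values.
results = hospital.search(["#radiology"])
print(results[0].description)
```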

Pros:

  • Data is stored on a remote machine
  • We can do feature engineering against the sample data

Cons:

  • Someone can steal data using PointerTensor.get()

Secure Multi-Party Computation

Secure multi-party computation (SMPC) means that multiple parties can jointly compute a function over their private inputs without revealing those inputs to one another. In this technique, we encrypt a value by splitting it into shares distributed among shareholders, so the original value remains unknown and concealed (a minimal sketch of this sharing scheme follows the list below).

Pros:

  • Data remains on a remote machine
  • Formal, detailed budgeting for privacy
  • The model can be encrypted during training
  • We can perform computations across many data owners
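
Here is a minimal sketch of additive secret sharing, the building block behind this idea. The field modulus and party count are arbitrary choices for illustration; real SMPC protocols, as implemented in libraries like PySyft or CrypTen, involve considerably more machinery:

```python
import random

Q = 2**61 - 1  # field modulus; an arbitrary large prime for this sketch

def share(secret, n_parties=3):
    """Split a secret into n additive shares that sum to it modulo Q."""
    shares = [random.randrange(Q) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % Q)
    return shares

def reconstruct(shares):
    return sum(shares) % Q

# Each party's input is split into shares; any single share looks random.
alice_shares = share(5000)
bob_shares = share(7000)

# Addition works share-wise, so the sum is computed without anyone
# ever seeing the other party's raw input.
sum_shares = [(a + b) % Q for a, b in zip(alice_shares, bob_shares)]
print(reconstruct(sum_shares))  # 12000
```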

Dealing With Sensitive Data

Companies that operate with sensitive data must adhere to various regulations and standards, like PCI DSS certification, ISO 27001, GDPR (General Data Protection Regulation), and many more. Let's look at a few things to keep in mind when working with sensitive data.

Data Access

Machine learning requires collaboration between teams from several disciplines. To carry out their job, machine learning engineers and data scientists need as much freedom as possible without jeopardizing consumer security and privacy.

They must be able to access production data such as images, view it, and conduct experiments under careful supervision. They must be able to do exploratory data analysis, rapid prototyping, and visual evaluation of images while maintaining strict access control, audit logging, and physical security requirements in the environment in which they operate.

  • Access control - Restrict data access to those who require it (see the sketch after this list)
  • Audit logging - All access to datasets and information is documented, allowing you to see who accessed a file, when, and from where
  • Exploratory data analysis - To understand data modality, ML engineers and data scientists must be able to quickly compute fundamental statistical properties of the input data stream
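
Here is a minimal sketch of the first two ideas combined: a hypothetical allow-list plus a decorator that writes an audit record for every dataset access. Names like ALLOWED_USERS and load_images are invented for illustration:

```python
import functools
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("audit")

ALLOWED_USERS = {"ml_engineer_1", "data_scientist_2"}  # hypothetical ACL

def audited(dataset_name):
    """Decorator that logs who accessed which dataset, and when."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(user, *args, **kwargs):
            audit_log.info("user=%s dataset=%s time=%s", user, dataset_name,
                           datetime.now(timezone.utc).isoformat())
            return func(user, *args, **kwargs)
        return wrapper
    return decorator

@audited("production_images")
def load_images(user):
    if user not in ALLOWED_USERS:
        raise PermissionError(f"{user} may not read production_images")
    return ["img_001.png", "img_002.png"]  # stand-in for the real data store

load_images("ml_engineer_1")  # allowed, and recorded in the audit log
```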

Dataset Governance

To train and evaluate machine learning models, datasets must be curated from the disorganized data lake that holds all data from the incoming data streams.

  • Cryptography - Encrypting all data in transit and at rest is an unquestionable need. Nobody wants to learn that decommissioned data volumes contain their personal information or that this information was transferred across the internet unencrypted.
  • Dataset retention - Organizations frequently work with derived data in machine learning, which may contain personally identifiable information (PII). A fixed retention period for derived data is required to prevent the risk of unknowingly keeping hidden PII.
  • Consent - Organizations often don't take consent seriously enough. Only data for which the customer has given their agreement should be used for machine learning, and only such data should be allowed into the environment where training jobs are carried out (a small sketch of this kind of filter follows the list).
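
As a toy illustration of the retention and consent rules, here is a hypothetical filter that admits a record into the training environment only if consent was given and the record is inside its retention window. The field names and the 90-day period are assumptions for the sketch:

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=90)  # hypothetical retention period for derived data

def eligible_for_training(record, now):
    """Admit only consented records that are inside the retention window."""
    return record["consent"] and (now - record["created_at"]) <= RETENTION

now = datetime(2024, 3, 15, tzinfo=timezone.utc)
records = [
    {"id": 1, "consent": True,  "created_at": datetime(2024, 2, 1, tzinfo=timezone.utc)},
    {"id": 2, "consent": False, "created_at": datetime(2024, 3, 1, tzinfo=timezone.utc)},
    {"id": 3, "consent": True,  "created_at": datetime(2023, 6, 1, tzinfo=timezone.utc)},
]

training_set = [r for r in records if eligible_for_training(r, now)]
print([r["id"] for r in training_set])  # [1]: consented and within 90 days
```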

Model Productionalization

After a model has been trained, the ML life cycle does not stop. Issues that are not apparent when dealing with a single static dataset can surface when running models in production systems. These must be considered when developing ML infrastructure.

Reproducibility - The ability to easily retrain a model with new dataset versions is critical to offering a high-quality service.

With Censius, you can have peace of mind knowing that your whole machine learning pipeline is being monitored. Censius provides a number of tools for analyzing and improving data and machine learning models:

  • Detect and study prediction biases 
  • Improve model performance for specific cohorts 
  • Maintain compliance with industry standards

Online monitoring - Attackers always attempt to trick the system when it handles sensitive workflows like identity verification. To detect concept drift, you need to keep a close eye on your production models.

With the Censius AI Observability Platform, you can monitor parameters like performance, traffic, data quality, drift, and a lot more. Drift monitors observe the distribution of statistical properties of the streaming input data and the output data, allowing the user to see how data properties change and how those changes affect model performance.
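
Platforms like Censius handle this out of the box, but for intuition, here is a minimal sketch of one common drift check: a two-sample Kolmogorov-Smirnov test on a single feature, using SciPy. The data, feature, and significance threshold are all hypothetical:

```python
import numpy as np
from scipy.stats import ks_2samp

def drift_detected(reference, production, alpha=0.05):
    """Flag drift when the KS test rejects 'same distribution' at level alpha."""
    statistic, p_value = ks_2samp(reference, production)
    return p_value < alpha, statistic

rng = np.random.default_rng(0)
reference = rng.normal(loc=0.0, scale=1.0, size=5000)   # training-time feature
production = rng.normal(loc=0.4, scale=1.0, size=5000)  # shifted live traffic

drifted, stat = drift_detected(reference, production)
print(drifted, round(stat, 3))  # True: the live distribution has shifted
```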

Recommended Reading: How to monitor machine learning models in production

What’s Next

When it comes to using PII data to train machine learning models and solve complex machine learning problems, ML developers must exercise extreme caution. With automated and continuous monitoring, Censius helps you scale reliable models while redirecting your team’s efforts towards more strategic tasks. Censius ensures that security is a vital consideration in all company initiatives at the technical, physical, and operational levels.

Censius provides an option to create specific, access-controlled environments where authorized users can work on raw and feature data. And lastly, for organizations that want to keep their data in-house, we can provide controls to help them meet their specific security and privacy requirements. In this article, we saw how you can deal with sensitive data and ensure data privacy in machine learning. Hope you liked the article.

You can start observing your models by requesting a tailored demo of the Censius AI Observability Platform.
