ML models usually carry a fair amount of complexity and sophistication, and that complexity permeates the entire MLOps process. From the conceptualization of an idea to the final model deployment, the vast array of changes a model goes through makes judging the end product much more difficult.
Therefore, as a general framework, it is much better to have all the stages of the deployment process easily accessible from the get-go. Not only does that make the entire process smoother, but it also streamlines one of the most critical components of MLOps. This brings us to Cerebrium, an AWS SageMaker alternative that provides all the features you need to build an ML product quickly.
Censius and Cerebrium Partnership to Streamline your MLOps
Through this partnership, Censius and Cerebrium enable users not just to deploy an ML product quickly but also to observe and explain its decisions with minimal manual intervention. Let’s dive deeper and dissect how this partnership provides the means to build world-class machine learning projects.
Model Deployment & Testing with Cerebrium
Cerebrium helps companies build machine learning solutions in a fraction of the time by abstracting away much of the complexity and mundane infrastructure setup. It lets users deploy machine learning models to serverless GPUs and handles automatic model versioning, A/B testing, and much more with just a few lines of code. You can continually run experiments with updated model parameters and test model outcomes.
Model Monitoring & Explainability with Censius
On the other hand, the Censius AI Observability Platform is a one-stop solution for all post-deployment needs of ML models. Practitioners can scale their models by equipping them with features that meet current trustworthy-ML standards. Some of the many use cases of Censius are automating AI monitoring, making models trustworthy by explaining predictions, reducing the time taken to resolve issues, and understanding the functioning and logic behind a decision. With the Explainability suite, you can also open up the black box and build better, fairer models.
Hit the Ground Running: How to Integrate Censius with Cerebrium?
Getting your models integrated with Cerebrium is pretty straightforward.
We will now demonstrate how to deploy and monitor a Scikit-Learn classifier in simple steps.
Laying the Base
First off, you need an active Cerebrium account. Cerebrium offers a free tier, which you can sign up for here.
Once you have logged in, create a new project for your model. Then open your new project and copy the API key; this key will be used to deploy the model from your local machine.
Code it!
This part is all about your local machine and your code, which could be either a notebook or a simple .py script.
In your development environment, install the Cerebrium package with the following command:
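Assuming the SDK is published on PyPI under the name cerebrium, the install looks like this:

```
pip install cerebrium
```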
For this example, we will build a Scikit-Learn classifier trained on the Iris dataset. Cerebrium also allows you to deploy PyTorch, ONNX, and other model types.
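Here is a minimal training sketch using a RandomForestClassifier; any Scikit-Learn estimator would work the same way:

```
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Load the Iris dataset and hold out a small test split
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Train a simple random forest classifier
clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)
print(f"Test accuracy: {clf.score(X_test, y_test):.3f}")
```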
Once you have built the model, save it as a pickle file using the code fragment given below.
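For example (the file name iris_classifier.pkl is just a placeholder we reuse throughout the rest of this walkthrough):

```
import pickle

# Serialise the trained classifier; Cerebrium will deploy this pickle file
with open("iris_classifier.pkl", "wb") as f:
    pickle.dump(clf, f)
```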
The Wait is Over
This is the part where we introduce the single line of code to deploy your model.
(Here you need to replace API_KEY with your project API key)
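A minimal sketch based on the legacy Cerebrium Python SDK; the deploy signature and the model_type constant are assumptions, so double-check them against the Cerebrium docs:

```
from cerebrium import deploy, model_type

# Deploy the pickled classifier in a single call.
# Replace API_KEY with the key copied from your Cerebrium project dashboard.
endpoint = deploy((model_type.SKLEARN, "iris_classifier.pkl"), "iris-classifier", "API_KEY")
```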
If we want to add logging to our deployment, we instead need to declare a Conduit instance for the classifier.
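A sketch of what that declaration could look like; the Conduit constructor arguments are assumptions taken from the legacy Cerebrium SDK, so verify them in the docs:

```
from cerebrium import Conduit, model_type

# A Conduit wraps the model (and, later, any loggers) ahead of deployment
conduit = Conduit(
    "iris-classifier",                              # deployment name
    "API_KEY",                                      # your Cerebrium project API key
    [(model_type.SKLEARN, "iris_classifier.pkl")],  # model type and pickle path
)
```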
Note: To deploy an XGBoost classifier, declaration of the Conduit instance will look like this:
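For instance (again, the exact model_type constant is an assumption):

```
# Only the model type and pickle file change for an XGBoost classifier
conduit = Conduit(
    "xgb-classifier",
    "API_KEY",
    [(model_type.XGBOOST_CLASSIFIER, "xgb_classifier.pkl")],
)
```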
Your newly deployed model can be run by firing a POST request like this:
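A sketch using the requests library; the endpoint URL and header format are placeholders, so copy the real values shown on your Cerebrium dashboard after deployment:

```
import requests

# Use the URL returned by deploy() or shown on the Cerebrium dashboard
response = requests.post(
    endpoint,  # e.g. "https://.../iris-classifier/predict" (placeholder)
    headers={"Authorization": "API_KEY", "Content-Type": "application/json"},
    json=[[5.1, 3.5, 1.4, 0.2]],  # one Iris sample: sepal and petal measurements
)
print(response.json())
```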
If you navigate back to the project dashboard and click on the name of the deployed model, you will see that an API call was made. From your dashboard, you can monitor your model, roll back to previous versions and see the traffic.
To know more about Cerebrium's functionality or their pre-built models, you can check out their docs here.
Firing Up the Censius AI Observability Platform
Time and again, we have talked about why an AI observability platform is the need of the hour and how it leads you to robust, reliable models.
Now that you have deployed the Iris classifier, it is time to start monitoring it using the Censius AI Observability Platform.
Before we begin, you need to sign up for an account with the Censius AI Observability Platform and create your project, or open an existing project if you are already a user.
Returning to your Python script, begin with the necessary imports:
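The exact import paths depend on your installed SDK versions; the names below are assumptions, so cross-check them against the Censius and Cerebrium documentation:

```
# Assumed import paths for the Cerebrium/Censius logging integration;
# adjust to match the modules exposed by your installed versions.
from cerebrium import Conduit, model_type
from cerebrium.logging import logging_platform
```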
Next, declare the classifier features and the target class:
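For the Iris classifier, that could look like this (the names themselves are arbitrary; they are what Censius will use to interpret each logged prediction):

```
# Feature names and target used by Censius to interpret each logged prediction
features = ["sepal_length", "sepal_width", "petal_length", "petal_width"]
targets = ["iris_class"]
```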
The following code fragment authenticates you with your Censius API key and the identifiers specific to your Censius project:
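A sketch with placeholder values; copy the real key and identifiers from your Censius project settings:

```
# Placeholders -- replace with the values from your Censius project
CENSIUS_API_KEY = "YOUR_CENSIUS_API_KEY"
CENSIUS_TENANT_ID = "YOUR_TENANT_ID"
CENSIUS_PROJECT_ID = "YOUR_PROJECT_ID"
```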
Add the Censius monitor as a performance logger to the Conduit instance:
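A sketch of the logger registration; the add_logger parameter names below are assumptions based on the Cerebrium logging integration, so verify them against the docs:

```
# Parameter names are assumptions; check the Cerebrium/Censius integration docs
registration = conduit.add_logger(
    platform=logging_platform.CENSIUS,
    platform_authentication={
        "api_key": CENSIUS_API_KEY,
        "tenant_id": CENSIUS_TENANT_ID,
    },
    features=features,
    targets=targets,
    platform_args={
        "project_id": CENSIUS_PROJECT_ID,
        "model_version": "iris-demov1",
    },
)
```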
Time to check if the monitor setup was a success:
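Assuming the registration call above returns a status tuple, printing it is a quick check:

```
# Print whatever the logger registration returned to confirm the setup
print(registration)
```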
This line should output a message like ("Registered new model version iris-demov1 with Censius411.", True).
Let us now fire an event which should cause the creation of a log item on the Censius monitor:
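One way to do this is to run a prediction through the conduit locally; the conduit.run call below is an assumption, and a request to the deployed endpoint would trigger the same logging:

```
# Assumed local-run helper; running a prediction through the conduit should
# make the attached Censius logger create a log item for that event
conduit.run([[5.1, 3.5, 1.4, 0.2]])
```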
This line should output the message “Successfully created log item”.
Finally, deploy the Conduit instance to get the endpoint for your model:
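Assuming the Conduit exposes a deploy method analogous to the single-line deploy shown earlier:

```
# Deploy the conduit and keep the returned REST endpoint for the model
endpoint = conduit.deploy()
print(endpoint)
```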
Back on the Censius AI Observability Platform, you can check your project and model versions to see the logged events.
Wrapping Up
What is better than quick and easy deployment?
Answer: Quick and easy AI observability of your deployed models.
Integrating Cerebrium and Censius into your machine learning projects ensures that deployment and AI monitoring are simultaneous, streamlined processes. With Censius, you can quickly dive into any problem and find root causes to continually improve your models in production. And to get the best of both worlds, Cerebrium ensures that your team can focus its energy on building world-class machine learning projects.
Explore how Censius helps you monitor, analyze and explain your ML models
Explore Platform