
Censius & Cerebrium Partnership: Deploy & Monitor ML Models under one roof

Get the best of ML deployment and AI Observability with the Censius and Cerebrium partnership.

By Censius Team

ML models usually carry a certain amount of complexity and sophistication that permeates the entire MLOps process. From the conceptualization of an idea to the final model deployment, the model goes through such a vast array of changes that judging the end product becomes much more difficult.

Therefore, as a general framework, it is much better to have all the stages of the deployment process easily accessible from the get-go. Not only does this make the entire process smoother, it also streamlines one of the most critical components of MLOps. This brings us to Cerebrium, an AWS SageMaker alternative that provides all the features you need to build an ML product quickly.

Censius and Cerebrium Partnership to Streamline your MLOps

Through this partnership, Censius and Cerebrium enable users not just to deploy the ML product quickly but also to observe and explain its decisions without any intervention. Let’s dive deeper and dissect how this partnership provides the means to build world-class machine-learning projects.  

Model Deployment & Testing with Cerebrium

Cerebrium helps companies build machine learning solutions in a fraction of the time by abstracting away a lot of the complexity and mundane infrastructure setup. They allow users to deploy machine learning models to serverless GPUs, handle automatic model versioning, A/B testing, and much more with just a few lines of code. You can run experiments constantly on updated model parameters and test model outcomes continuously.

Model Monitoring & Explainability with Censius

On the other hand, the Censius AI Observability Platform is a one-stop solution for all post-deployment needs of ML models. Practitioners can scale their models by equipping them with capabilities that meet current Trustworthy-ML standards. Among the many use cases of Censius are automating AI monitoring, making models trustworthy by explaining predictions, reducing the time taken to resolve issues, and understanding the functioning and logic behind a decision. With the Explainability suite, you can also open up the black box and build better, more ethical models.

Hit the Ground Running: How to Integrate Censius with Cerebrium?

Getting your models integrated with Cerebrium is pretty straightforward. 

We will now demonstrate how to deploy and monitor a Scikit-Learn classifier in simple steps.

Laying the Base

First off, you need to have an active Cerebrium account. Cerebrium offers a free tier, which you can sign up for here.

Once you have logged in, create a new project for your model. Then visit your new Project and copy the API key. This key will be used to deploy the model from your local machine.

The first look of your project on Cerebrium console. Source: Cerebrium

Code it!

This part is all about your local machine and your code, which could be either a notebook or a simple .py script.

In your development environment, install the Cerebrium package with the following command:

pip install cerebrium

For this example, we will build a Scikit-Learn classifier trained on the Iris dataset. Cerebrium also allows you to deploy PyTorch, ONNX, and other model types.
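For completeness, here is a minimal training sketch. The choice of a RandomForestClassifier is our assumption; the walkthrough only requires a trained Scikit-Learn model stored in a variable named rf, which the save step below refers to.

import pickle

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Load the Iris dataset and keep a small test split to sanity-check the model
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Train a simple random forest; `rf` is the variable used in the save step below
rf = RandomForestClassifier(n_estimators=100, random_state=42)
rf.fit(X_train, y_train)
print("Test accuracy:", rf.score(X_test, y_test))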

Once you have built the model, save it as a pickle file using the code fragment given below:

# Save to pickle
with open("iris.pkl", "wb") as f:
    pickle.dump(rf, f)    # rf is the trained model

The Wait is Over

This is the part where we introduce the single line of code to deploy your model.

from cerebrium import Conduit, deploy, model_type

endpoint = deploy((model_type.SKLEARN, "iris.pkl"), "sk-test-model", "API_KEY")

(Here you need to replace API_KEY with your project API key)

If you want to add logging to your deployment, you need to declare a Conduit instance for your classifier.

name = "iris-demo"
c = Conduit(
    name,
    API_KEY,    # your project API key
    [(model_type.SKLEARN, "iris.pkl")]
)

Note: To deploy an XGBoost classifier, the declaration of the Conduit instance would look like this:

c = Conduit(
    name,
    API_KEY,
    [(model_type.XGBOOST_CLASSIFIER, "tests/cache/xgb.json")]
)

Your newly deployed model can be run by firing a POST request like this:

curl --location --request POST 'ENDPOINT' \
--header 'Authorization: API_KEY' \
--header 'Content-Type: text/plain' \
--data-raw '[[5.1, 3.5, 1.4, 0.2]]'
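If you would rather stay in Python, a rough equivalent of the same call using the requests library looks like this (ENDPOINT and API_KEY are the same placeholders as in the curl command above):

import requests

# ENDPOINT is the URL returned by deploy(); API_KEY is your project API key
response = requests.post(
    "ENDPOINT",
    headers={"Authorization": "API_KEY", "Content-Type": "text/plain"},
    data="[[5.1, 3.5, 1.4, 0.2]]",
)
print(response.status_code, response.text)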

If you navigate back to the project dashboard and click on the name of the deployed model, you will see that an API call was made. From your dashboard, you can monitor your model, roll back to previous versions and see the traffic.

To know more about Cerebrium's functionality or their pre-built models, you can check out their docs here.

Firing Up Censius AI Observability

Time and again, we have talked about why an AI observability platform is the need of the hour and how it leads you to robust, reliable models.

Now that you have deployed the Iris classifier, it is time to start monitoring it using the Censius AI Observability Platform. 

Before we begin, you need to sign up for an account on the Censius AI Observability Platform and create your project, or open your existing project if you are already a user.

A view of how your monitored ML project would look on the Censius AI Observability Platform. Source: Censius Inc

Returning to your Python script, begin with the necessary imports:

from cerebrium import logging_platform
from censius import ModelType, Dataset
from uuid import uuid4

Next, declare the classifier features and the target class:

features = ["sepal_length", "sepal_width", "petal_length", "petal_width"]
targets = ["target"]

The following code fragment authenticates you using your Censius API key and identifiers specific to your Censius project:

platform_authentication = {
    "api_key": API_KEY,
    "tenant_id": YOUR_CENSIUS_ID
}

platform_args = {
    "project_id": YOUR_CENSIUS_PROJECT_ID,
    "model_type": ModelType.BINARY_CLASSIFICATION,
    "training_info": {
        "method": Dataset.ID,
        "id": YOUR_CENSIUS_DATASET_ID
    },
}

Add the Censius monitor as a performance logger to the Conduit instance:

c.add_logger(logging_platform.CENSIUS, platform_authentication, features, targets, platform_args)
c.setup_loggers()

Time to check whether the monitor setup was a success:

c.loggers["Censius"].check_ready("v1", name)

This line should output a message like ("Registered new model version iris-demov1 with Censius.", True).

Let us now fire an event that should create a log item on the Censius monitor:

prediction_id = str(uuid4().hex)    # unique ID for this prediction event
response = c.loggers["Censius"].log([0, 0, 0, 0], [0.99, 0.01], prediction_id)
print(response)

This line should output the message “Successfully created log item”
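In a real workflow you would log the feature values actually sent to the model along with its predicted probabilities rather than dummy values. Here is a minimal sketch reusing the same log(features, prediction, id) call; the values are purely illustrative, and the length of the probability list must match the model type declared in platform_args:

sample = [5.1, 3.5, 1.4, 0.2]     # sepal_length, sepal_width, petal_length, petal_width
predicted_probs = [0.97, 0.03]    # illustrative model output probabilities
c.loggers["Censius"].log(sample, predicted_probs, str(uuid4().hex))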

Deploy the Conduit instance to get the endpoint for your model:

endpoint = c.deploy()

Back on the Censius AI Observability Platform, you can check your project and model versions to see the log events.

The different model versions being monitored at the Censius AI Observability Platform. Source: Censius Inc

Wrapping Up

What is better than quick and easy deployment? 

Answer: Quick and easy AI observability of your deployed models.

Integrating Cerebrium and Censius into your machine-learning projects ensures that deployment and AI monitoring are simultaneous, streamlined processes. With Censius, you can quickly dive into any problem, find its root cause, and continually improve your models in production. And to truly get the best of both worlds, Cerebrium ensures that your team can focus their energy on building world-class machine learning projects.
