The Five Key Pillars of MLOps and the Cost of Not Building Them

To build ML at scale, companies need to operate multiple ML solutions in parallel. Here, we discuss the primary pillars of MLOps and the cost of not building them.

By Samadrita Ghosh

Just two to three years ago, it was common to find organizations experimenting with proof-of-concept machine learning (ML) projects. Fast forward to today, and AI/ML features are continuously going live in production, especially as add-ons to countless contemporary as well as traditional solutions.

Instead of being a good-to-have technology that offered a competitive edge, AI has evolved into a must-have capability thanks to industry-wide adoption. The transformation into an AI-based ecosystem might seem sudden, but the early changes were gradual and affected only a handful of industries; once that foundation was set, the growth became exponential. Today, smart features are encouraged even in traditional products and have become an attractive USP. Therefore, not adopting ML at scale could directly affect how clients and investors perceive a company, and indirectly hurt the bottom line.

What is MLOps?

To build ML at scale, companies need to operate multiple ML solutions in parallel. This feat is only possible if there is proper governance and a standardized system to do the heavy lifting. 

This is where MLOps comes into the picture, helping development and deployment teams rapidly sync their workflows so that a considerable number of high-quality projects move into production.

Through MLOps standardization, teams not only cooperate better and faster but also work through streamlined guidelines that save operational costs and allow reinvestment in advanced capabilities.

The Five Key Pillars of MLOps

The most overlooked action is inaction. Teams focus heavily on ongoing tasks and miss the opportunities that come from adding new capabilities. Over time, these lost opportunities stack up into a loss of competitive edge, especially when the rest of the market is adopting new ways of optimizing its production pipelines.

Such competitive capabilities are like a structure that is only as strong as its weakest pillar: skipping even a single pillar can lead to a breakdown, especially while scaling the project. Here, we discuss the primary pillars of MLOps and the cost of not building them:

1. Automation and CI/CD Pipelines

ML at scale refers to the ability of an organization to produce and maintain several ML models simultaneously. Even if the number of customer solutions being managed is as low as ten, each of those ten ML models faces the inevitable outcome of model drift: the models gradually become sub-optimal, primarily due to changes in the data.

Any model can fail at any point in time, and it is impossible to deploy ten teams to create, monitor and maintain the ten solutions separately. This is where automation comes in and helps organizations to manage not just ten but hundreds of ML solutions running in parallel.

Each ML-enabled organization goes through different levels of automation, starting from manually handling the solutions and ending at the ultimate stage of CI/CD pipeline automation. CI/CD stands for continuous integration and continuous delivery (or deployment), meaning it links and automates the end-to-end pipeline, including the build, test, training, production, and monitoring stages.
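
To make the idea concrete, below is a minimal Python sketch of the kind of retraining job such a pipeline could run on a schedule. The data source, accuracy figures, and promotion step are illustrative assumptions, not any particular tool's API.

```python
# Minimal sketch of a scheduled retraining job in an automated pipeline.
# The data source, thresholds, and promotion step are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

CURRENT_PROD_ACCURACY = 0.90   # score of the model currently in production (assumed)

def load_fresh_data():
    # Stand-in for pulling the latest labeled production data
    return make_classification(n_samples=2000, n_features=20, random_state=0)

def retrain_and_maybe_promote(min_gain=0.01):
    X, y = load_fresh_data()
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    candidate = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)   # build + train stage
    score = accuracy_score(y_te, candidate.predict(X_te))           # test stage
    if score >= CURRENT_PROD_ACCURACY + min_gain:
        print(f"Promoting candidate model (accuracy={score:.3f})")  # deploy stage
    else:
        print(f"Keeping current model (candidate accuracy={score:.3f})")

if __name__ == "__main__":
    retrain_and_maybe_promote()
```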

Cost of not automating: When an organization chooses to stay dormant by continuing with manual management of ML pipelines, it primarily bears the following costs:

  • Suboptimal resource usage - higher costs for hardware, cloud services such as data storage, and, most importantly, the workforce.
  • Higher downtime and low-quality fixes - fixing model drift manually takes significant time, which can tarnish the customer experience. An automated re-run with the freshest data gives the developer ample time to come up with a more robust, longer-lasting fix.
  • Low volume - it is a myth that there is a tradeoff between quality and quantity. Large-scale organizations like Google, Facebook, Netflix, and Uber produce a huge number of state-of-the-art solutions. With manual management, organizations block their potential to serve a larger client base with ML solutions or even ML add-ons.


2. Feature Store

Yes, every ML solution is unique because the data is typically unique. While developers can never lift a solution designed for one client and apply it wholesale to another, they can definitely borrow from the existing solution. This is where the Feature Store comes in.

The AI/ML community widely acknowledges that data processing, which includes working with features, takes up 70-80% of the time required to create a solution. But why create features from scratch when you can reuse them from existing solutions? For example, consider a marketing solution that recommends attractive words relevant to the customer's browsing behavior. While creating another marketing solution that recommends products to customers, the same features used in the previous solution can be highly effective.

The Feature Store forms an integral part of MLOps by storing features and their metadata in a central, easily accessible repository. It is constantly updated and grows with the number of solutions; in fact, a new ML solution can even benefit from one built years before it.
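
As a rough illustration, here is a minimal in-memory sketch of the feature store idea; the class names, feature names, and columns are assumptions for illustration, and real feature stores (such as Feast) add offline/online storage, point-in-time correctness, and serving infrastructure on top of this.

```python
# Minimal in-memory sketch of a feature store: feature definitions are
# registered once, with metadata, and reused across solutions.
# All names and columns below are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Callable, Dict, List
import pandas as pd

@dataclass
class FeatureDefinition:
    name: str
    transform: Callable[[pd.DataFrame], pd.Series]  # how the feature is computed
    description: str = ""
    owner: str = ""

@dataclass
class FeatureStore:
    _features: Dict[str, FeatureDefinition] = field(default_factory=dict)

    def register(self, feature: FeatureDefinition) -> None:
        self._features[feature.name] = feature

    def get_features(self, names: List[str], raw: pd.DataFrame) -> pd.DataFrame:
        # Compute the requested features from raw data using the stored definitions
        return pd.DataFrame({n: self._features[n].transform(raw) for n in names})

# Feature built for the first marketing solution ...
store = FeatureStore()
store.register(FeatureDefinition(
    name="user_session_count",
    transform=lambda df: df.groupby("user_id")["session_id"].transform("nunique"),
    description="Number of distinct browsing sessions per user",
    owner="marketing-ml",
))
# ... and reused, definition and metadata included, by the next solution:
# features = store.get_features(["user_session_count"], raw_events_df)
```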

Cost of not implementing a feature store: 

  • Time and effort - setting up a feature store can be time-consuming unless an external service provider is involved. However, once it is set up, it saves a great deal of effort and time. Without a feature store, developers have to start working on the data from scratch, with no ready insight even from similar solutions.
  • Unstable change management - ML teams tend to be dynamic, since ML as a corporate technology is barely a decade old. People move within teams and change organizations in search of different experiences. A central feature store teeming with metadata that records the journey of each feature in detail ensures that project handovers are unaffected.
  • Poor-quality solutions - feature stores guard ML teams against human error. Even extremely thorough teams can miss feature ideas that would massively help performance. A log of previously used features, their impact on different solutions, their logic, their pipeline journey, and more offers added insight that makes the current solution longer-lasting and higher-performing.


3. Versioning and Reproducibility

Versioning both models and data is critical for reproducing ML experiments. While reproducibility helps teams revisit past experiments and determine the best fit, versioned data has the notable property of being immutable. Together, they prevent corrupted versions of past data from leading current experiments astray.
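
As a sketch of the idea, the snippet below records an immutable content hash for the data and model used in each run, so that an experiment can later be matched to its exact inputs. The file paths and manifest format are illustrative assumptions; dedicated tools such as DVC or MLflow handle this far more robustly.

```python
# Minimal sketch of versioning data and models by content hash, so that a run
# can later be reproduced against the exact same inputs. Paths are illustrative.
import hashlib
import json
import time
from pathlib import Path

def file_hash(path: str) -> str:
    """Immutable identifier for a file: the SHA-256 hash of its contents."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def record_run(data_path: str, model_path: str, params: dict,
               manifest_path: str = "runs.jsonl") -> dict:
    # Append one line per experiment run to a simple manifest file
    run = {
        "timestamp": time.time(),
        "data_version": file_hash(data_path),
        "model_version": file_hash(model_path),
        "params": params,
    }
    with open(manifest_path, "a") as f:
        f.write(json.dumps(run) + "\n")
    return run

# record_run("train.csv", "model.pkl", {"lr": 0.01, "max_depth": 6})
```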

Cost of not versioning: Developers who have been part of the ML journey for the last three to five years can vouch that versioning was never an integral part of the proof-of-concept ML solutions organizations initially undertook. The result was a lot of incoherent manual documentation, and sometimes not even that. Teams had to recreate versions of data and models that were usually not identical to the originals, even if only off by decimals. Surprisingly, many organizations could still be using the same manual approach out of continuity and comfort, eventually compromising the productivity of the process.


4. Testing and Monitoring

Testing

Testing ML solutions can be divided into three segments: testing/validating the incoming data and features, testing the models, and testing the ML framework. The first two are self-explanatory, while the third covers testing the performance, resource utilization, and compliance of the ML infrastructure.
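
A minimal pytest-style sketch of the first two segments might look like the following; the file path, column names, thresholds, and fixtures are illustrative assumptions rather than a prescribed test suite.

```python
# Minimal pytest-style sketch: validate incoming data/features and hold the
# model to a fixed quality bar. Paths, columns, and thresholds are assumptions.
import pandas as pd
from sklearn.metrics import accuracy_score

def test_incoming_data_schema_and_ranges():
    batch = pd.read_csv("incoming_batch.csv")               # assumed data location
    assert {"age", "income", "label"} <= set(batch.columns)  # expected schema
    assert batch["age"].between(0, 120).all()                # no impossible values
    assert batch["income"].notna().mean() > 0.95             # tolerate only a few missing values

def test_model_meets_quality_bar(trained_model, holdout):    # assumed pytest fixtures
    X, y = holdout
    assert accuracy_score(y, trained_model.predict(X)) >= 0.85  # bar agreed with the business team
```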

Cost of ineffective testing: Testing needs a standard bar for the ML solution to be measured against, which is why developers devise metrics that fit the business use case. The cost of not selecting the right testing metric can be rolling back the entire solution and beginning from scratch. It is best to consult the business teams and the client before finalizing any data or model testing metric.

Monitoring

The monitoring stage kicks in once the solution is finalized and live in production. It might seem relatively easy to monitor a set of metrics, but in practice the monitoring conditions keep changing due to shifting dependencies, such as alterations in data sources or versions. Monitoring has to be continuous and quick so that whenever performance drops below a set threshold, the teams involved are promptly alerted. As discussed earlier, an automated model-correction pipeline is the best way to avoid lagging solutions and downtime.
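
As a rough sketch, threshold-based monitoring of both model performance and incoming data could look like the snippet below; the thresholds, the alert hook, and the simple mean-shift drift check are illustrative assumptions, not a prescribed method.

```python
# Minimal sketch of continuous monitoring: alert when live performance drops
# below a threshold, and watch incoming-data statistics, which are much faster
# to check than model metrics. Thresholds and the alert hook are assumptions.
import numpy as np

PERF_THRESHOLD = 0.85        # agreed minimum live accuracy (assumed)
DRIFT_Z_THRESHOLD = 3.0      # flag features whose mean shifts by > 3 standard errors

def send_alert(message: str) -> None:
    print(f"[ALERT] {message}")   # stand-in for a paging/Slack/email integration

def check_performance(live_accuracy: float) -> None:
    if live_accuracy < PERF_THRESHOLD:
        send_alert(f"Accuracy {live_accuracy:.3f} below {PERF_THRESHOLD}")

def check_input_drift(train_col: np.ndarray, live_col: np.ndarray, name: str) -> None:
    # Crude mean-shift test on a single incoming feature column
    se = train_col.std() / np.sqrt(len(live_col)) + 1e-12
    z = abs(live_col.mean() - train_col.mean()) / se
    if z > DRIFT_Z_THRESHOLD:
        send_alert(f"Feature '{name}' drifted (z={z:.1f})")
```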

Cost of ineffective monitoring: Since most organizations already monitor their solutions periodically, the emphasis is on effective monitoring rather than on whether to monitor at all. Some monitoring techniques are quick, but for models that are inherently slow in both training and production, the alert system can be slowed down too. It is therefore best practice to monitor not just model performance metrics but also the behavior of the incoming data, which is much faster to check and offers a more complete view. Similarly, if the solution's operational vitals are not monitored alongside model metrics, the solution is exposed to other, quieter damage.


5. Operational metrics

Even though they are a fairly simple addition, operational metrics are among the most critical measures of the solution-development process. For example, metrics like prediction processing time, deployment frequency, and retraining frequency are vital for identifying and eliminating red flags in the team's usual workflow.
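
A minimal sketch of capturing these operational metrics is shown below; the in-memory storage and metric names are illustrative assumptions, and a real setup would push them to a metrics backend.

```python
# Minimal sketch of tracking operational metrics: prediction processing time,
# deployment frequency, and retraining frequency. Storage and names are assumptions.
import time
from collections import defaultdict

metrics = defaultdict(list)

def timed_predict(model, features):
    # Record how long each prediction takes
    start = time.perf_counter()
    prediction = model.predict([features])
    metrics["prediction_seconds"].append(time.perf_counter() - start)
    return prediction

def record_event(name: str) -> None:
    # e.g. record_event("deployment"), record_event("retraining")
    metrics[name].append(time.time())

def events_per_week(name: str) -> float:
    # Rough frequency of deployments/retrainings over the observed window
    stamps = metrics[name]
    if len(stamps) < 2:
        return float(len(stamps))
    weeks = (stamps[-1] - stamps[0]) / (7 * 24 * 3600)
    return len(stamps) / max(weeks, 1e-9)
```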

Cost of not analyzing operational metrics: It is common not to pay much heed to operational metrics, since many other aspects of the solution-building process take precedence. However, in the long run, the key to progressively improving efficiency lies with operational metrics. Without them, teams can be expected to remain stuck in a repetitive process with limited potential.

Recommended List: An awesome collection of everything about MLOps by Dr. Larysa Visengeriyeva


Build vs. Buy: Cost-effective way of building the five Pillars

Of the five main aspects of MLOps discussed, some are relatively easy to implement, like setting up an automated and effective testing module, while others require significant effort, like setting up a feature store.

It is not uncommon for ML teams to specialize in certain aspects rather than across end-to-end MLOps. In cases where a module falls beyond the organization's core competence, it is worth comparing the cost of building it internally against buying an external solution.

While looking at external vendor options, in addition to cost (and hidden costs), some key aspects to examine are security, customization options, and reviews from long-term clients.

PS: Are you struggling with maintaining observability in your MLOps pipeline? Look no further than Censius AI's observability tool! Our tool is designed to help you easily monitor and track the performance of your models, ensuring that you catch any issues before they become major problems. With Censius AI, you can easily visualize the performance of your models, monitor data drift, and receive alerts when anomalies are detected.

Don't let observability be a bottleneck in your MLOps pipeline - try Censius AI's observability tool today!
