Is the ML Lifecycle Problem-Free?

Why is deploying a model so much harder than building it? Let's break it down.

By Sanya Sinha

In a world where 65% of operational companies vouch for machine learning for predictive analysis and performance reporting, there is no denying that the application of machine learning has been a roaring success. Its implementations range from movie recommendations to robust business-oriented predictions and decision-making. Organizations worldwide are leveraging ML to boost performance and get the most out of the business landscape.

However, the application of machine learning is not without its share of risks, challenges, and adversities. At a high level, the ML lifecycle can be broken down into a three-step process:

  1. Data preparation
  2. Model development, training, and testing
  3. Model deployment

Model deployment is the most challenging phase of the ML lifecycle. The model is deployed in the production environment to solve real-world problems. The transient dynamics of the production environment are bound to degrade the model's quality and functionality. Therefore, identifying the challenges and developing solutions are critical. 


Challenges in Model Deployment

Organizations deploy machine learning models to extract business insights and value that would otherwise be unattainable. However, a series of obstacles must be overcome before a model functions smoothly in production.

The Need for Crisp and Eloquent Code

The Problem

Data scientists unanimously agree that development code must be clean and adaptable to the demands of the problem statement. Dhruvi Goyal, a data scientist at Delhivery, advises writing code in a ‘deployable fashion while using appropriate data structures to optimize the overall time and space.’ However, a significant hurdle data scientists face is how brittle development code becomes when it is moved to structurally different production pipelines. Models built on unoptimized code often fail in production and can lead to system and statistical failures.

The Solution

To ensure alignment between data scientists, DevOps professionals, and software engineers, the model must be tested in a production-like setting before being deployed, to evaluate its accuracy and its compatibility with the production framework.

Friction between teams can be reduced, and cooperation fostered, by demarcating the specific domain in which each team investigates outliers and discrepancies. This way, data scientists are responsible for fixing bugs and statistical issues related to the model, whereas the software and DevOps teams are accountable for challenges in the production landscape. Each team answers for its own domain while working towards a common goal, much like the principle of ‘bounded context.’ Model serving tools like Seldon Core and model monitoring platforms like Censius have proven remarkably effective at dealing with code degradation.
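To make this concrete, here is a minimal sketch of a pre-deployment smoke test in Python. The file paths, feature names, and accuracy threshold are illustrative assumptions rather than part of any specific platform; the point is simply to run the packaged model against a small production-like sample before promoting it.

```python
# A minimal pre-deployment smoke test: load the packaged model and verify
# that it produces valid predictions on a held-out sample before promotion.
# Paths, feature names, and the accuracy threshold are illustrative.
import joblib
import numpy as np
import pandas as pd
from sklearn.metrics import accuracy_score

MODEL_PATH = "artifacts/model.joblib"          # hypothetical packaged artifact
HOLDOUT_PATH = "artifacts/holdout_sample.csv"  # small production-like sample
EXPECTED_FEATURES = ["age", "tenure", "monthly_spend"]  # assumed schema
MIN_ACCURACY = 0.80                            # assumed acceptance threshold


def smoke_test() -> None:
    model = joblib.load(MODEL_PATH)
    holdout = pd.read_csv(HOLDOUT_PATH)

    # Fail fast if the production payload no longer matches the training schema.
    missing = set(EXPECTED_FEATURES) - set(holdout.columns)
    assert not missing, f"Missing features in holdout data: {missing}"

    preds = model.predict(holdout[EXPECTED_FEATURES])

    # Predictions must be well-formed: one per input row, no NaNs (assuming numeric output).
    assert len(preds) == len(holdout), "Row count mismatch between input and output"
    assert not np.isnan(preds).any(), "Model produced NaN predictions"

    # Accuracy on the labelled holdout must clear the acceptance bar.
    acc = accuracy_score(holdout["label"], preds)
    assert acc >= MIN_ACCURACY, f"Accuracy {acc:.2f} below threshold {MIN_ACCURACY}"


if __name__ == "__main__":
    smoke_test()
    print("Smoke test passed: model is ready for promotion")
```

A check like this can run in CI, so a model that breaks on the production schema never reaches the deployment stage.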


The Drifting Data Dilemma

The Problem

The datasets used to train a model are immensely vulnerable to paradigm shifts and fluctuations. A concrete example is how COVID-19 drastically affected market trends, customer inclinations, and shopping archetypes. Companies retailored their mission statements to recoup the losses incurred during the pandemic. Luxury fragrance brands, including LVMH and Christian Dior, shifted production lines from perfumes to hand sanitizers. Hand sanitizer companies witnessed a skyrocketing 600% increase in sales in 2020 alone. Competition also spiked in the proliferating domain of personal sanitation, as both local distilleries and multinational brands began manufacturing hand sanitizers to meet the colossal demand. Beyond the surge in demand for sanitizers, brick-and-mortar outlets lost ground to online stores: the pandemic contributed $105 billion to the US e-commerce industry.

Figure: COVID-19 induced challenges in e-commerce

Consequently, these unprecedented swings in market trends and consumer demand render ML models trained on pre-COVID datasets invalid in the present scenario. How can these data and concept drifts be handled to keep business predictions and insights accurate?

The Solution

Model monitoring and AI observability platforms are the one-stop solution for drifting data. Kumar Shivam, a data scientist at Morningstar, says that monitoring and observability platforms help with both problem detection and repair. “As a practitioner working in the financial domain, input data feature distribution changes are a constant issue. We have to use well-deployed monitoring tools to update the practitioner on the drifts in the distributions over time. Aided by the information given out by the evaluation store, we have been able to tweak our models for sustained performances,” said Kumar. The advent of MLOps has been a boon for production, and AI observability tools like Censius help detect drifts in data and fix them.
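As an illustration of what such a drift check does under the hood, the sketch below compares training-time and live feature distributions with a two-sample Kolmogorov-Smirnov test. The file paths and significance threshold are assumptions made for the example; a dedicated observability platform like Censius surfaces equivalent signals automatically and continuously.

```python
# A minimal data drift check for tabular features: compare the live feature
# distribution against the training reference with a two-sample
# Kolmogorov-Smirnov test. Thresholds and file paths are illustrative.
import pandas as pd
from scipy.stats import ks_2samp

P_VALUE_THRESHOLD = 0.01  # assumed significance level for flagging drift


def detect_drift(reference: pd.DataFrame, live: pd.DataFrame) -> dict:
    """Return per-feature drift results for numeric columns both frames share."""
    report = {}
    for col in reference.select_dtypes("number").columns.intersection(live.columns):
        stat, p_value = ks_2samp(reference[col].dropna(), live[col].dropna())
        report[col] = {
            "ks_stat": stat,
            "p_value": p_value,
            "drifted": p_value < P_VALUE_THRESHOLD,
        }
    return report


# Example: pre-COVID training data vs. a recent window of production traffic.
reference = pd.read_csv("data/train_2019.csv")    # hypothetical reference set
live = pd.read_csv("data/live_last_30_days.csv")  # hypothetical live window
for feature, result in detect_drift(reference, live).items():
    if result["drifted"]:
        print(f"Drift detected in '{feature}' (p={result['p_value']:.4f})")
```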

Volatile Parameters and Dysfunctional Input Schema

The Problem

Not only are a dataset's contents volatile, but its parameters and underlying schema are also susceptible to the dynamics of the production landscape. To deal with any fluctuation in the input schema, the entire model has to be retrained from scratch and updated. Consider object-detection models that detect traces of disease in X-ray images: every time a new strain of a chest infection is discovered, the model has to be retrained from scratch to accommodate the newer strain as an object.

The Solution

Drifting data, volatile parameters, and a dysfunctional input schema are all problems that MLOps can solve. Automating the ML pipeline ensures that models are retrained and updated whenever a new feature is introduced. Goyal says, “One needs to understand why a particular type of result is being given by the model, and must not be just used as a black box, as then one can improve that or be answerable for the model-related questions.” Thus, model monitoring and observability platforms can simplify identifying changes in the input schema.
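One lightweight way to catch such schema changes before they break a model is an explicit schema check at the start of the pipeline. The sketch below is illustrative: the schema file, column names, and batch path are assumptions, and in a real pipeline a failed check would raise an alert and kick off retraining rather than print a message.

```python
# A minimal sketch of guarding the input schema, assuming a stored JSON
# description of the features the model was trained on. If incoming data
# adds, drops, or retypes a column, the check flags it instead of letting
# the model silently serve bad predictions.
import json
import pandas as pd

SCHEMA_PATH = "artifacts/input_schema.json"  # hypothetical, e.g. {"age": "int64", ...}


def validate_schema(batch: pd.DataFrame) -> list[str]:
    """Return a list of human-readable schema violations (empty means OK)."""
    with open(SCHEMA_PATH) as f:
        expected = json.load(f)

    problems = []
    for column, dtype in expected.items():
        if column not in batch.columns:
            problems.append(f"missing column '{column}'")
        elif str(batch[column].dtype) != dtype:
            problems.append(f"column '{column}' is {batch[column].dtype}, expected {dtype}")
    for column in set(batch.columns) - set(expected):
        problems.append(f"unexpected new column '{column}'")
    return problems


batch = pd.read_parquet("data/incoming_batch.parquet")  # hypothetical live batch
violations = validate_schema(batch)
if violations:
    # In a real pipeline this would trigger an alert and a retraining run.
    print("Schema drift detected:", "; ".join(violations))
```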

Infrastructure Inadequacy

The Problem

Developing ML models with a fragmented toolkit can lead to collaboration problems. If members of different teams all use different tools, keeping the models consistent becomes impossible and model quality suffers badly. The use of multiple libraries, frameworks, and toolkits, along with the risks of memory shortages and runtime dependencies in solving real-world problems, remains a significant risk for models in production.

The Solution

Devising an open format for handling widely varied toolkits is one way to amalgamate all the efforts into a single, operable application. Workflow management tools like Prefect can codify good workflow practices and make it easy to share models across teams. This way, discrepancies caused by a diversity of toolkits are largely nullified, and shipping uniformly developed models becomes easier and yields better outcomes.
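As a rough illustration of what such a shared workflow looks like, here is a minimal sketch assuming Prefect 2.x, with placeholder task bodies standing in for the real data preparation, training, and validation steps.

```python
# A minimal Prefect sketch (assuming Prefect 2.x) that strings the data
# preparation, training, and validation steps into one shared workflow,
# so every team runs the same pipeline regardless of their local toolkit.
# The task bodies are placeholders, not a real training job.
from prefect import flow, task


@task
def prepare_data() -> str:
    # Placeholder: pull and clean the training data, return its location.
    return "data/train.parquet"


@task
def train_model(data_path: str) -> str:
    # Placeholder: fit the model and persist it, return the artifact path.
    return "artifacts/model.joblib"


@task
def validate_model(model_path: str) -> bool:
    # Placeholder: run the pre-deployment smoke test against the artifact.
    return True


@flow(name="shared-training-pipeline")
def training_pipeline():
    data_path = prepare_data()
    model_path = train_model(data_path)
    if validate_model(model_path):
        print(f"Model {model_path} approved for deployment")


if __name__ == "__main__":
    training_pipeline()
```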

Sizing and Scaling 

The Problem

Size can be a significant threat to ML pipelines, whether it comes from large datasets or intricately complex NLP models. Loading such models onto servers with limited memory and computation capability exposes infrastructural inadequacy. Moreover, to accommodate the rapid influx of requests in a dynamic production environment, models often need to scale to handle an ever-expanding pool of client requests. On-site infrastructure is often insufficient for models of such enormous size and with such demands for scalability.

The Solution

Model serving is a state-of-the-art technique for deploying a model as a service. The model can serve clients offline, on the edge, or through a microservice. One of the most effective model serving methods is deploying the model on a Kubernetes cluster: a managed Kubernetes service removes the need for on-premises computation and storage capabilities and provides a solid foundation for efficient model deployment. Model serving tools like Seldon Core have revolutionized the production landscape by giving data scientists minimal downtime, robust architecture, and efficient deployment.

Figure: Different deployment options in machine learning
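As a simple illustration of the microservice option, the sketch below wraps a model in a small FastAPI service. The endpoint, payload fields, and model path are assumptions made for this example; in practice a Kubernetes-native tool like Seldon Core wraps an equivalent predictor and adds scaling, rollouts, and monitoring hooks on top.

```python
# A minimal model-serving microservice sketch using FastAPI. The route,
# payload fields, and model path are illustrative assumptions.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="churn-model")             # hypothetical model name
model = joblib.load("artifacts/model.joblib")  # hypothetical packaged artifact


class PredictionRequest(BaseModel):
    age: float
    tenure: float
    monthly_spend: float


@app.post("/predict")
def predict(request: PredictionRequest) -> dict:
    # Assemble the feature row in the order the model was trained on.
    features = [[request.age, request.tenure, request.monthly_spend]]
    prediction = model.predict(features)[0]
    return {"prediction": float(prediction)}

# Run locally (assuming this file is serve.py):
#   uvicorn serve:app --host 0.0.0.0 --port 8080
# The same app can be containerized and deployed on a Kubernetes cluster.
```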


Conclusion

MLOps workflows have been introduced to deal with the modern-day challenges of model deployment: the demand for clean, deployable code, drifting data, volatile parameters, infrastructural inadequacy, and scalability. Tools such as Seldon, Censius, and BentoML have transformed the deployment process and help users take AI observability, model monitoring, and model serving to the next level.
