This MLOps wiki provides a set of easy-to-understand explainers to help anyone understand what the different MLOps terms mean, why they are important, and how they are managed in the ML lifecycle.
Data Integrity
Data integrity refers to reliable and accurate data with consistency and context for confident decisions and improved business agility.
Data Pipeline
A data pipeline includes a set of processes and tools to combine, automate, compute, and transform data from a source to a destination.
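A minimal sketch of the idea above: a pipeline can be modeled as an ordered list of plain functions, each transforming the output of the previous step on the way from source to destination. The step names and record fields here are hypothetical, chosen only for illustration.

```python
from functools import reduce

def run_pipeline(data, steps):
    """Apply each step to the data in order, source -> destination."""
    return reduce(lambda d, step: step(d), steps, data)

# Hypothetical steps: combine two raw sources, clean, then compute a field.
def combine(records):
    return records["source_a"] + records["source_b"]

def clean(rows):
    # Drop incomplete rows before further computation.
    return [r for r in rows if r.get("value") is not None]

def compute(rows):
    # Derive a new field from the cleaned rows.
    return [{**r, "doubled": r["value"] * 2} for r in rows]

source = {
    "source_a": [{"value": 1}, {"value": None}],
    "source_b": [{"value": 3}],
}
result = run_pipeline(source, [combine, clean, compute])
```

Real pipelines add scheduling, retries, and persistence around the same shape.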
Data Segments
Data segments help effectively analyze data by dividing it and grouping similar data together according to the chosen parameter or filter.
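Segmentation as described above amounts to grouping records by a chosen parameter. A small sketch, with a hypothetical set of user records segmented by region:

```python
from collections import defaultdict

def segment(records, key):
    """Group records into segments by the chosen parameter (key)."""
    groups = defaultdict(list)
    for record in records:
        groups[record[key]].append(record)
    return dict(groups)

# Hypothetical user records, segmented by region.
users = [
    {"name": "a", "region": "EU"},
    {"name": "b", "region": "US"},
    {"name": "c", "region": "EU"},
]
segments = segment(users, "region")
```

Model metrics can then be computed per segment instead of only in aggregate.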
Concept Drift
Concept drift is a type of model drift observed when the statistical properties of the target (dependent) variable to be predicted change over time.
Data Drift
Data drift is a shift in the distribution of input features between training and serving data, while the relationship between the inputs and the target remains unchanged.
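One simple, assumed heuristic for spotting the distribution shift described above: compare a feature's serving mean against its training mean, measured in training standard deviations, and flag drift past a chosen threshold. The values and the 2.0 threshold are illustrative, not a standard.

```python
import statistics

def mean_shift(train_values, serve_values):
    """Standardized shift of a feature's mean between training and serving.

    A toy drift signal: how many training standard deviations the
    serving mean has moved away from the training mean.
    """
    mu = statistics.mean(train_values)
    sigma = statistics.stdev(train_values)
    return abs(statistics.mean(serve_values) - mu) / sigma

train = [10.0, 11.0, 9.5, 10.5, 10.2]     # feature values at training time
stable = [10.1, 10.4, 9.9, 10.3, 10.0]    # serving values, no drift
shifted = [14.0, 15.0, 13.5, 14.5, 14.2]  # serving values, drifted

drift_detected = mean_shift(train, shifted) > 2.0
false_alarm = mean_shift(train, stable) > 2.0
```

Production systems typically use distributional tests (e.g. Kolmogorov-Smirnov) rather than a single mean comparison.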
ML Monitors
ML monitors are tools that help check model performance, identify prediction quality issues, and generate real-time alerts to inform users.
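At its core, the alerting behavior described above is a threshold check over a tracked metric. A minimal sketch, assuming a hypothetical accuracy metric and threshold:

```python
def check_metric(name, value, threshold):
    """Return a reading with an alert flag when the metric drops below threshold."""
    return {"metric": name, "value": value, "alert": value < threshold}

# Hypothetical production readings for a model's accuracy.
degraded = check_metric("accuracy", 0.78, threshold=0.85)  # should alert
healthy = check_metric("accuracy", 0.91, threshold=0.85)   # should not
```

Real monitors evaluate many such checks continuously and route the alerts to on-call channels.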
Model Explainability
Model behavior is explained through global or local interpretations that relate predictions to input features, targets, and learnings from data segments.
ML Fairness
ML fairness is the model’s quality or state of being fair or impartial; it relates to harms of allocation and harms of quality of service.
CI/CD for Machine Learning
CI/CD of ML pipelines enables teams to build source code, run tests, and deploy automated pipelines for continuous delivery and training.
Data And Model Versioning
Versioning is the process of uniquely naming successive iterations of data and models used at different stages of ML development to track all changes.
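One common way to name iterations uniquely, sketched here under the assumption of JSON-serializable artifacts, is content addressing: derive the version string from a hash of the artifact itself, so identical content always gets the same name and any change produces a new one.

```python
import hashlib
import json

def version_id(artifact):
    """Derive a deterministic version name from an artifact's content.

    A sketch of content-addressed versioning: identical data or model
    parameters always map to the same short version string.
    """
    payload = json.dumps(artifact, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()[:12]

model_v1 = {"weights": [0.1, 0.2], "bias": 0.05}
model_v2 = {"weights": [0.1, 0.25], "bias": 0.05}

# Same content (different key order) -> same version; changed weights -> new version.
same = version_id(model_v1) == version_id({"bias": 0.05, "weights": [0.1, 0.2]})
changed = version_id(model_v1) != version_id(model_v2)
```

Dedicated tools (e.g. Git-based model registries) layer metadata and lineage on top of this basic idea.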
Feature Store
A feature store is a central repository for features: a feature transformation is defined once, and its values are computed and served consistently to models.
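The "define once, serve everywhere" idea above can be sketched as a tiny registry of named transformations; the `age_bucket` feature and its rule are hypothetical examples, not a real feature-store API.

```python
class FeatureStore:
    """A toy feature store: register a transformation once, serve values anywhere."""

    def __init__(self):
        self._transforms = {}

    def register(self, name, fn):
        # Store the transformation under a stable feature name.
        self._transforms[name] = fn

    def get(self, name, raw):
        """Compute the named feature from a raw record on demand."""
        return self._transforms[name](raw)

store = FeatureStore()
# Define the transformation once...
store.register("age_bucket", lambda r: "adult" if r["age"] >= 18 else "minor")
# ...then compute consistent values for any model that needs them.
feature = store.get("age_bucket", {"age": 34})
```

Production feature stores add offline/online storage, backfills, and point-in-time correctness on top of this registry pattern.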
ML Diagnostics
ML diagnostics include tests to identify potential issues and apply improvements at different stages of the ML development process.
ML Reproducibility
ML reproducibility is the ability to replicate an ML workflow previously carried out and produce the same results as the original work.
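A basic precondition for replicating a workflow is controlling its randomness. A minimal sketch: rerun a stochastic step with a fixed seed and get identical output.

```python
import random

def sample_run(seed, n=5):
    """Rerun a stochastic step with a fixed seed to reproduce its output."""
    rng = random.Random(seed)  # isolated, seeded generator
    return [rng.randint(0, 100) for _ in range(n)]

first = sample_run(seed=42)
second = sample_run(seed=42)  # identical seed -> identical "experiment"
reproduced = first == second
```

Full reproducibility also requires pinning code, data, and environment versions, not just seeds.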
ML Scalability
ML scalability is the ability to scale ML models to handle massive data sets and perform many computations in a cost-effective and time-saving way.
ML Stack
An ML stack is a reference model listing all infrastructural components required to build, deploy, and scale machine learning systems.
ML Governance
ML governance is an internal framework to control the processes for ML development, implement and track activities, and assign roles.
MLOps Tools
MLOps tools simplify the complex ML development process and enable maintainability and auditability of ML experiments.
ModelOps
ModelOps is a holistic approach to enable the smooth operationalization of ML models to deliver expected business value to an enterprise.
MLOps
MLOps defines practices that unify ML development, streamline continuous delivery of models, and enable collaboration between teams.
Machine Learning
Machine learning allows computers to learn and act like humans without being explicitly programmed and improves their learning over time.
Machine Learning Lifecycle
The ML lifecycle defines sequential steps involved in data science projects that carry equal importance and are executed in a cyclic order.
Machine Learning Model
An ML model is an object trained over a set of data to learn from it, identify patterns, and infer over data that has not been seen before.
ML Monitoring
ML monitoring is the practice of tracking the performance of ML models in production to identify potential issues in ML pipelines.
AI Audit
AI audit defines a piloted audit program to educate C-suite leaders, expose risks involved, and accordingly develop safeguard controls.
AI Black Box
Black box AI refers to using ML models that are not explainable by looking at their parameters, making retracing ML outputs difficult.
AI Explainability
AI explainability refers to a set of processes that allow humans to comprehend and trust the results produced by ML algorithms.
AI Observability
AI observability is a holistic approach to deriving insights into a model’s behavior, data, and performance across its lifecycle.
Bias In Machine Learning
ML bias is the effect of erroneous assumptions in the process, or prejudices in the data, that produce systematically prejudiced results.
Ethical AI
Ethical AI defines guidelines for individual rights, behavior manipulation, privacy, and non-discrimination to ensure legitimate use of AI.