What is Seldon Core?
Seldon Core is an open-source platform for deploying ML models and experiments on Kubernetes, in the cloud or on-premises. It can serve models built with any open-source or commercial model-building framework. It leverages powerful Kubernetes features such as custom resource definitions (CRDs) to manage model graphs, and integrates with CI/CD tools to scale and update deployments.
Seldon Core supports scaling to thousands of production models with advanced capabilities such as request logging, outlier detectors, canaries, advanced metrics, and A/B tests.
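To give a flavor of what an outlier detector adds to a deployment, here is a minimal pure-Python sketch of z-score outlier flagging. Seldon's production detectors typically come from the Alibi Detect library; this toy class and its threshold are purely illustrative.

```python
import statistics

class ZScoreOutlierDetector:
    """Toy outlier detector: flags values far from the training mean.
    A stand-in for the Alibi Detect detectors Seldon deploys alongside models."""

    def __init__(self, training_values, threshold=3.0):
        # "Fit" on training data by recording its mean and standard deviation.
        self.mean = statistics.mean(training_values)
        self.std = statistics.stdev(training_values)
        self.threshold = threshold

    def is_outlier(self, value):
        # Flag the value if it lies more than `threshold` standard
        # deviations from the training mean.
        z = abs(value - self.mean) / self.std
        return z > self.threshold

detector = ZScoreOutlierDetector([9.0, 10.0, 11.0, 10.0, 9.5, 10.5])
print(detector.is_outlier(10.2))  # in-distribution -> False
print(detector.is_outlier(50.0))  # far outside training range -> True
```

In Seldon Core such a detector runs as its own component next to the model, tagging suspicious requests rather than blocking them.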
How Does Seldon Core Help?
Seldon Core converts ML models, packaged with its language wrappers or pre-packaged inference servers, into production-ready REST/gRPC microservices. It supports deployment patterns such as A/B tests, canary rollouts, and multi-armed bandits.
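With the Python language wrapper, turning a model into a microservice only requires a class exposing a predict method; the seldon-core-microservice CLI then serves it over REST/gRPC. A minimal sketch, where the class name and the hard-coded "model" are stand-ins for real artifacts:

```python
# MyModel.py -- minimal model following the Seldon Core Python-wrapper
# convention: any class with a predict() method can be served.
# Running `seldon-core-microservice MyModel` would expose it as a
# REST/gRPC microservice.

class MyModel:
    def __init__(self):
        # Load real model artifacts here; a fixed weight stands in
        # for an actual trained model.
        self.weight = 2.0

    def predict(self, X, features_names=None):
        # X is a batch of feature rows; return one prediction row per input.
        return [[x * self.weight for x in row] for row in X]

model = MyModel()
print(model.predict([[1.0, 2.0]]))  # -> [[2.0, 4.0]]
```

Seldon builds this into a container image, and the same class convention extends to optional methods for metrics or metadata.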
Seldon comes with an alerting system to flag issues while monitoring models in production. It also integrates model explainers such as Alibi, which provides high-quality implementations of black-box, white-box, local, and global explanation methods for regression and classification models.
Seldon Core supports robust ML deployments through rich inference graphs composed at runtime from transformers, routers, predictors, and combiners.
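The graph idea can be illustrated in plain Python: a transformer feeds two predictors whose outputs a combiner averages. All function names and logic here are illustrative; in Seldon Core each node runs as its own containerized microservice and the graph is declared in a SeldonDeployment resource, not in application code.

```python
# Illustrative, in-process sketch of a Seldon inference graph:
# transformer -> (predictor A, predictor B) -> combiner.

def transform_input(X):
    # Transformer node: normalize each feature row to unit sum.
    return [[v / sum(row) for v in row] for row in X]

def predictor_a(X):
    # Predictor node A: score each row by its max feature.
    return [max(row) for row in X]

def predictor_b(X):
    # Predictor node B: score each row by its mean feature.
    return [sum(row) / len(row) for row in X]

def combiner(preds_a, preds_b):
    # Combiner node: average the two predictors' outputs (a simple ensemble).
    return [(a + b) / 2 for a, b in zip(preds_a, preds_b)]

def graph(X):
    # Wire the nodes together, as the SeldonDeployment graph spec would.
    Xt = transform_input(X)
    return combiner(predictor_a(Xt), predictor_b(Xt))

print(graph([[1.0, 3.0]]))  # -> [0.625]
```

The value of the real system is that each node scales, logs, and is monitored independently while Seldon handles the plumbing between them.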
Seldon allows containerizing ML models with custom servers, language wrappers, or pre-packaged inference servers. It offers a secure, robust, and reliable system for deploying machine learning models at scale and at speed.
Key Features of Seldon Core
Seldon Core is built on Kubernetes and runs on any cloud or on-premises.
Agnostic and independent
Seldon Core is framework agnostic and supports leading ML libraries, languages, and toolkits. It is tested on Azure AKS, AWS EKS, Google GKE, DigitalOcean, OpenShift, and Alicloud.
Rich inference graphs
Seldon supports advanced deployments with runtime inference graphs powered by ensembles, transformers, routers, and predictors.
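A router node decides which child of the graph receives each request; Seldon's Python-wrapper convention for routers is a class with a route method returning the child's index. A minimal canary-style sketch, where the class name and the 10% split are illustrative:

```python
import random

class CanaryRouter:
    """Illustrative Seldon-style router: route() returns the index of the
    child node to forward the request to (child 0 = stable model,
    child 1 = canary)."""

    def __init__(self, canary_fraction=0.1, rng=None):
        self.canary_fraction = canary_fraction
        # Injectable RNG makes the routing reproducible in tests.
        self.rng = rng or random.Random()

    def route(self, X, features_names=None):
        # Send roughly `canary_fraction` of traffic to the canary.
        return 1 if self.rng.random() < self.canary_fraction else 0

router = CanaryRouter(canary_fraction=0.1, rng=random.Random(0))
choices = [router.route([[0.0]]) for _ in range(1000)]
print(sum(choices))  # number of requests routed to the canary, roughly 100
```

A multi-armed bandit router would follow the same shape but update its routing policy from downstream feedback instead of using a fixed split.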
Full auditability
Seldon ensures full auditability of model input-output requests through logging integration backed by Elasticsearch. Metadata provenance makes it possible to trace each model back to its training system, data, and metrics.
Advanced metrics
Seldon provides customizable, advanced metrics with integrations for Prometheus and Grafana.
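Custom metrics can be emitted straight from a wrapped model: the Python-wrapper convention is an optional metrics method returning a list of metric dictionaries (COUNTER, GAUGE, or TIMER), which Seldon exposes for Prometheus to scrape and Grafana to chart. A sketch, with the metric keys chosen for illustration:

```python
class ModelWithMetrics:
    """Model exposing custom metrics via the Seldon Python-wrapper
    convention: metrics() returns a list of metric dicts that Seldon
    surfaces on its Prometheus endpoint."""

    def predict(self, X, features_names=None):
        # Record something worth reporting, then run inference.
        self._last_batch_size = len(X)
        return X  # identity "model"; a stand-in for real inference

    def metrics(self):
        # COUNTER values are accumulated; GAUGE values are set as-is.
        return [
            {"type": "COUNTER", "key": "predict_requests_total", "value": 1},
            {"type": "GAUGE", "key": "last_batch_size",
             "value": getattr(self, "_last_batch_size", 0)},
        ]

model = ModelWithMetrics()
model.predict([[1, 2], [3, 4]])
print(model.metrics())
```

Because the metrics ride along with each prediction, per-model dashboards need no extra instrumentation beyond the Prometheus scrape Seldon already provides.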
Distributed tracing
Seldon supports OpenTracing to trace API calls through Seldon Core. By default it uses Jaeger for distributed tracing, producing insights into latency and performance across microservice hops.