Seldon Core
Model Serving

Released: Jan 2018  •  Documentation  •  License: Apache-2.0

GitHub open issues: 324
GitHub stars: 2609
GitHub last commit: 20 Oct
Stack Overflow questions: 10

What is Seldon Core?

Seldon Core is an open-source platform that accelerates the deployment of ML models and experiments on Kubernetes, in the cloud or on-premises. It serves models built with any open-source or commercial model-building framework, and it leverages Kubernetes features such as custom resource definitions to manage model graphs, integrating with CI/CD tools to scale and update deployments.
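As a rough sketch of what this looks like in practice, the snippet below creates a SeldonDeployment custom resource with the official Kubernetes Python client. The namespace, deployment name, and gs:// model URI are placeholders, and it assumes the pre-packaged scikit-learn inference server.

```python
# Minimal sketch: create a SeldonDeployment custom resource with the
# Kubernetes Python client. Names and the model URI are placeholders.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

seldon_deployment = {
    "apiVersion": "machinelearning.seldon.io/v1",
    "kind": "SeldonDeployment",
    "metadata": {"name": "iris-model", "namespace": "seldon"},
    "spec": {
        "name": "iris",
        "predictors": [
            {
                "name": "default",
                "replicas": 1,
                "graph": {
                    "name": "classifier",
                    "implementation": "SKLEARN_SERVER",        # pre-packaged inference server
                    "modelUri": "gs://my-bucket/sklearn/iris",  # hypothetical model location
                },
            }
        ],
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="machinelearning.seldon.io",
    version="v1",
    namespace="seldon",
    plural="seldondeployments",
    body=seldon_deployment,
)
```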

Seldon Core supports scaling to thousands of production models with advanced capabilities such as request logging, outlier detectors, canaries, advanced metrics, A/B tests, and more.
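A canary rollout, for example, can be expressed as two predictors that share traffic. The fragment below is a sketch of what the predictors section of the manifest above might look like; the 75/25 split, names, and model URIs are illustrative only.

```python
# Sketch of a canary rollout: two predictors splitting traffic 75/25.
# This list would replace the "predictors" field in the manifest above.
canary_predictors = [
    {
        "name": "main",
        "replicas": 3,
        "traffic": 75,
        "graph": {
            "name": "classifier",
            "implementation": "SKLEARN_SERVER",
            "modelUri": "gs://my-bucket/sklearn/iris-v1",  # current model
        },
    },
    {
        "name": "canary",
        "replicas": 1,
        "traffic": 25,
        "graph": {
            "name": "classifier",
            "implementation": "SKLEARN_SERVER",
            "modelUri": "gs://my-bucket/sklearn/iris-v2",  # candidate model
        },
    },
]
```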


How Does Seldon Core Help?

Seldon Core converts ML models and language wrappers into production REST/gRPC microservices. It supports deployment patterns such as A/B tests, canary rollouts, and multi-armed bandits.
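For models wrapped with the Python language wrapper, Seldon expects a class that exposes a predict method, which it then serves as a REST/gRPC microservice. A minimal sketch, assuming a scikit-learn classifier serialized with joblib at a hypothetical path inside the container image:

```python
# Minimal sketch of a Seldon Python language wrapper. The class is packaged
# into a container (e.g. with s2i) and served as a REST/gRPC microservice.
import joblib


class IrisClassifier:
    def __init__(self):
        # Hypothetical path baked into the container image.
        self.model = joblib.load("/app/model.joblib")

    def predict(self, X, features_names=None):
        # X arrives as a numpy array built from the request payload.
        return self.model.predict_proba(X)
```

Once deployed, the service orchestrator typically accepts prediction requests at an HTTP endpoint such as /api/v1.0/predictions with a JSON payload like {"data": {"ndarray": [[5.1, 3.5, 1.4, 0.2]]}}; the exact path and payload schema depend on the protocol configured.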

Seldon comes with an alerting system to flag issues while monitoring models in production. It also integrates model explainers such as Alibi, which provide high-quality implementations of black-box, white-box, local, and global explanation methods for regression and classification models.
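As an illustration of the black-box route, the sketch below uses Alibi's AnchorTabular explainer. The trained model, training data, and feature names are assumed to exist already, and the exact attributes of the returned explanation object vary across Alibi versions.

```python
# Rough sketch of a black-box Alibi anchor explanation for a tabular classifier.
# `model`, `feature_names`, `X_train`, and `X_test` are assumed to exist.
from alibi.explainers import AnchorTabular

explainer = AnchorTabular(model.predict, feature_names=feature_names)
explainer.fit(X_train)                       # sample the training distribution
explanation = explainer.explain(X_test[0])   # explain a single prediction
print(explanation.anchor)                    # human-readable rule for the prediction
```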

Seldon Core supports rich runtime inference graphs composed of transformers, routers, predictors, and combiners.

Seldon lets you containerize ML models using custom servers, language wrappers, or pre-packaged inference servers, offering a secure, robust, and reliable path to deploying machine learning models at scale and at speed.



Key Features of Seldon Core

Runs anywhere

Seldon Core is built on Kubernetes and runs on any cloud or on-premises.

Agnostic and independent

Seldon Core is framework agnostic and supports top ML libraries, languages, and toolkits. It is tested on Azure AKS, AWS EKS, Google GKE, DigitalOcean, OpenShift, and Alicloud.

Rich inference graphs

Seldon supports advanced deployments with runtime inference graphs powered by ensembles, transformers, routers, and predictors. 
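Such a graph is declared in the graph field of a predictor, where each node names a component and its role. A minimal sketch of a transformer feeding a model, with component names as placeholders; each step maps to its own container declared under componentSpecs.

```python
# Sketch of a runtime inference graph: a feature transformer feeding a model.
# Component names are placeholders; other node types include ROUTER and COMBINER.
inference_graph = {
    "name": "feature-transformer",
    "type": "TRANSFORMER",
    "children": [
        {
            "name": "classifier",
            "type": "MODEL",
            "children": [],
        }
    ],
}
```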

Auditability

Seldon provides full auditability of model input-output requests through Elasticsearch and logging integration. Metadata provenance makes it possible to trace each model back to its corresponding training system, data, and metrics.

Advanced metrics

Seldon provides advanced, customizable metrics with integration to Prometheus and Grafana.
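Beyond the built-in metrics, a Python-wrapped model can expose custom metrics through a metrics method that Seldon scrapes into Prometheus. A small sketch, with metric keys chosen purely for illustration:

```python
# Sketch of custom metrics exposed by a Python-wrapped model; Seldon collects
# these alongside its built-in Prometheus metrics. Keys are illustrative.
class ModelWithMetrics:
    def predict(self, X, features_names=None):
        self.last_batch_size = len(X)
        return X  # placeholder "model" that echoes its input

    def metrics(self):
        return [
            {"type": "COUNTER", "key": "predict_calls_total", "value": 1},
            {"type": "GAUGE", "key": "last_batch_size",
             "value": getattr(self, "last_batch_size", 0)},
        ]
```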

Distributed tracing

Seldon supports OpenTracing for tracing API calls through Seldon Core. By default, it uses Jaeger for distributed tracing, producing insights on latency and per-hop microservice performance.
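Tracing is typically switched on per component through environment variables in the deployment spec. The fragment below is only a sketch under that assumption; the variable names and the Jaeger agent address are placeholders and may differ across Seldon Core versions.

```python
# Hedged sketch: enabling Jaeger tracing on a wrapped model container via
# environment variables in the componentSpecs section of a SeldonDeployment.
# Variable names and values are assumptions and may vary between versions.
component_specs = [
    {
        "spec": {
            "containers": [
                {
                    "name": "classifier",
                    "env": [
                        {"name": "TRACING", "value": "1"},
                        {"name": "JAEGER_AGENT_HOST", "value": "jaeger-agent.monitoring"},
                        {"name": "JAEGER_AGENT_PORT", "value": "5775"},
                    ],
                }
            ]
        }
    }
]
```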

Companies Using Seldon Core

Ambassador  •  Google  •  IBM  •  Oracle

