Hydrosphere Serving is an open-source cluster for deploying your machine learning models in production. It is a collection of Dockerized services that can run anywhere you can run Docker or Kubernetes – any cloud or on-premises.
- Language- & Framework-agnostic Deployment. No matter which programming language or libraries were used to develop a model, you can still deploy it with Hydrosphere. Python, R, Julia, Scala/Spark, custom binaries, TensorFlow, PyTorch, etc. are all supported.
- Rich Interfaces. Hydrosphere Serving automatically exposes HTTP, gRPC, and Kafka interfaces for your served models.
- Open Source. Enjoy the support of our community and contributors.
- Model Version Control. Version your models and pipelines as they are deployed. Explore how metrics change between model versions and roll back to a previous version if needed.
- Traffic Split. Split your production traffic between model versions to perform an A/B test or canary deployment and compare how your model versions perform.
- Traffic Shadowing. Mirror production traffic to several model versions to examine how they behave on the same data.
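As a rough illustration of the HTTP interface mentioned above, a client simply sends a JSON request body to a served model's endpoint. This is a minimal sketch only: the endpoint URL, path, and input field name below are assumptions for illustration, not the actual Hydrosphere API.

```python
import json

# Hypothetical endpoint for a served model; the host and path are assumed
# for illustration and are not the real Hydrosphere routing scheme.
endpoint = "http://localhost/gateway/application/my-model"

# A single feature vector packaged as a JSON request body.
payload = {"inputs": [[5.1, 3.5, 1.4, 0.2]]}
body = json.dumps(payload)

# In a live cluster, one would POST `body` to `endpoint`, e.g. with the
# `requests` library: requests.post(endpoint, data=body)
print(body)
```

The same model would also be reachable over gRPC or consumed from a Kafka topic, with the payload encoded accordingly.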
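Conceptually, a traffic split like the canary deployment described above routes each incoming request to a model version with a configured probability. The sketch below illustrates the idea only; the version names and weights are made up and this is not the Hydrosphere configuration format.

```python
import random

# Assumed 90/10 canary split between two model versions (illustrative names).
weights = {"model:v1": 0.9, "model:v2": 0.1}

def pick_version(rng: random.Random) -> str:
    """Choose a model version with probability proportional to its weight."""
    versions, probs = zip(*weights.items())
    return rng.choices(versions, weights=probs, k=1)[0]

# Simulate routing 10,000 requests; counts should land near a 9:1 ratio.
rng = random.Random(0)
counts = {v: 0 for v in weights}
for _ in range(10_000):
    counts[pick_version(rng)] += 1
print(counts)
```

Traffic shadowing differs only in that every request is sent to all shadowed versions, but only the primary version's response is returned to the caller.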