A model is a machine learning model or a processing function that consumes provided inputs and produces predictions or transformations. Each model is a collection of its own versions. Every time you upload or re-upload a model, a new version is created and added to that collection. At the lowest level, a model version is represented as a Docker image built from the model binaries. This means that during the build stage the model version is frozen and can no longer change. Each collection is identified by the model's name.
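
The collection-of-versions idea above can be sketched in a few lines of Python (an illustrative model of the concept, not Hydrosphere's actual API; the registry and model names are made up):

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)      # frozen: once built, a version can no longer change
class ModelVersion:
    number: int
    docker_image: str        # e.g. "registry/my-model:3"

@dataclass
class Model:
    name: str                # the collection is identified by the model's name
    versions: list = field(default_factory=list)

    def upload(self) -> ModelVersion:
        """Each (re-)upload appends a new version; old ones are never mutated."""
        number = len(self.versions) + 1
        version = ModelVersion(number, f"registry/{self.name}:{number}")
        self.versions.append(version)
        return version

model = Model("my-model")
model.upload()               # creates version 1
model.upload()               # version 2 joins the same collection
```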

When you upload a model to Hydrosphere, roughly the following steps are executed:

  1. The CLI uploads the model binaries to the platform;
  2. The Manager builds a new Docker image from the uploaded binaries and pushes the image to the configured Docker registry;
  3. The built image is added to the model’s collection under an incremented version number.
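
An upload is typically driven by a model definition file placed next to the binaries. Below is a minimal sketch of such a file; the exact field names and runtime image are assumptions based on common Hydrosphere examples and may differ in your version:

```yaml
kind: Model
name: my-model                 # the collection this version will be added to
runtime: hydrosphere/serving-runtime-python-3.7:latest
payload:                       # the binaries the CLI uploads in step 1
  - src/
  - requirements.txt
contract:                      # inputs/outputs; required when inference is "manual"
  name: infer
  inputs:
    features:
      shape: [-1, 2]
      type: double
  outputs:
    prediction:
      shape: scalar
      type: int64
```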


Models can be written using a variety of modern machine learning frameworks: you can implement your model with TensorFlow graph computations, or build it with scikit-learn, PyTorch, Keras, Fastai, MXNet, Spark ML/MLlib, etc. Hydrosphere can often understand your model automatically, thanks to the metadata that some frameworks save alongside it, but this is not always the case. Refer to the table below for the supported frameworks and how well their contracts can be inferred.

| Framework | Status | Inferring | Commentary |
| --- | --- | --- | --- |
| TensorFlow | maintained | 100% | TensorFlow saves all the needed metadata with SavedModelBuilder, so generated contracts will be very accurate. |
| Spark | partly | 50% | Spark saves some metadata, but it is insufficient, so contract inference may be inaccurate. For example: 1) there is not enough notation on how the model's shape is formed (i.e. [30, 40] might be a 30x40 or a 40x30 matrix); 2) types do not always coincide with what Serving knows, etc. |
| MXNet | manual | 0% | MXNet has its own export mechanism, but it does not contain any metadata related to types and shapes. Serve the model as a Python model. |
| SkLearn | manual | 0% | Exported models do not provide the required metadata. Serve the model as a Python model. |
| Theano | manual | 0% | Exported models do not provide the required metadata. Serve the model as a Python model. |
| ONNX | manual | 80% | Serving is currently able to read ONNX proto files, but due to the lack of support from other frameworks (PyTorch, TensorFlow, etc.) ONNX models cannot be run in the implemented runtimes. |

maintained - no self-written contracts or model definitions need to be provided;
partly - complicated models will likely fail inference;
manual - self-written contracts and model definitions must be provided.

If the inferring percentage is high, you can omit providing a model definition; otherwise you should provide one. Learn more about writing custom model definitions here.
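
For frameworks marked manual, the model is served as a Python model: a script exposing an inference function whose name and signature match the contract. The sketch below is a hypothetical example; the file name, the function name, and the stand-in model are assumptions, and in a real model you would load your exported binaries (e.g. unpickle a scikit-learn model) instead of the stub:

```python
# func_main.py - illustrative entry point for a Python-runtime model.
# A real model would load its exported binaries once at start-up, e.g.:
#   with open("/model/files/model.pkl", "rb") as f:
#       model = pickle.load(f)
# Here a trivial stand-in with a predict() method keeps the sketch runnable.

class _StubModel:
    def predict(self, rows):
        # pretend classifier: positive feature sum -> class 1, otherwise 0
        return [1 if sum(r) > 0 else 0 for r in rows]

model = _StubModel()

def infer(features):
    """Inference entry point; name and signature must match the contract."""
    prediction = model.predict([features])[0]
    return {"prediction": int(prediction)}
```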