A model is a machine learning model or a processing function that consumes provided inputs and produces predictions or transformations. Each model is a collection of its own versions: every time you upload or re-upload a model, a new version is created and added to the collection. At the lowest level, a model version is represented as a Docker image built from the model binaries. This essentially means that at the build stage the model version gets frozen and can no longer change. Each collection is identified by the model’s name.

When you upload a model to Serving, roughly the following steps are executed:

  1. The CLI uploads the model binaries to Serving;
  2. Serving builds a new Docker image from the uploaded binaries and saves it in the configured Docker registry;
  3. The built image is assigned to the model’s collection with an increased version number.

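The versioning semantics described above can be sketched in plain Python. Note that the names here (`ModelCollection`, `ModelVersion`, `upload`) are purely illustrative and are not part of the Serving API: the point is that each upload yields a new, immutable version whose Docker image tag is derived from the collection name and an auto-incremented version number.

```python
# Illustrative sketch of Serving's versioning semantics. These class and
# method names are hypothetical, not part of the actual Serving API.
from dataclasses import dataclass, field
from typing import List


@dataclass(frozen=True)
class ModelVersion:
    """A frozen model version backed by an immutable Docker image."""
    number: int
    image: str  # e.g. "registry.example.com/my-model:3"


@dataclass
class ModelCollection:
    """A model: a named collection of its own versions."""
    name: str
    versions: List[ModelVersion] = field(default_factory=list)

    def upload(self, registry: str) -> ModelVersion:
        # Every (re-)upload builds a new image and bumps the version;
        # existing versions are never mutated.
        number = len(self.versions) + 1
        image = f"{registry}/{self.name}:{number}"
        version = ModelVersion(number=number, image=image)
        self.versions.append(version)
        return version


model = ModelCollection(name="my-model")
v1 = model.upload("registry.example.com")
v2 = model.upload("registry.example.com")
```

Re-uploading the same binaries still produces a fresh version and image tag, which is what makes a built version safe to deploy: it can never change underneath you.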

Models can be written using a variety of modern machine learning frameworks: TensorFlow graph computations, the scikit-learn package, PyTorch, Keras, fastai, MXNet, Spark ML/MLlib, etc. Serving can understand your models depending on which framework you use. This is possible thanks to the metadata that frameworks save with the model, but that is not always the case. Refer to the table below, which lists the frameworks and how well their models can be inferred. If the inference percentage is high, you can omit providing contracts; otherwise [you should]({{site.baseurl}}{%link how-to/}).

| Framework | Status | Inference | Commentary |
|---|---|---|---|
| TensorFlow | maintained | 100% | TensorFlow saves all the needed metadata with SavedModelBuilder, so generated contracts will be very accurate. |
| Spark | partly | 50% | Spark saves metadata, but it is insufficient and contract inference may be inaccurate. For example: 1) there is not enough notation on how the shape of the model is formed (i.e. [30, 40] might be a 30x40 or a 40x30 matrix); 2) types do not always coincide with what Serving knows, etc. |
| MXNet | manual | 0% | MXNet has its own export mechanism, but it does not contain any metadata related to types and shapes. Serve the model as a Python model. |
| SkLearn | manual | 0% | Exported models do not provide the required metadata. Serve the model as a Python model. |
| Theano | manual | 0% | Exported models do not provide the required metadata. Serve the model as a Python model. |
| ONNX | manual | 80% | Currently Serving is able to read ONNX proto files, but due to the lack of support from other frameworks (PyTorch, TensorFlow, etc.) ONNX models cannot be run in the implemented runtimes. |

maintained - No need to provide self-written contracts or model definitions;
partly - Complicated models will likely fail inference;
manual - You need to provide self-written contracts and model definitions.
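The shape ambiguity noted for Spark in the table above can be demonstrated with a short pure-Python sketch (the `reshape` helper is illustrative): the same flat buffer plus the shape metadata "[2, 3]" admits two different matrix readings, and nothing in the metadata says which one the exporting framework meant.

```python
def reshape(flat, rows, cols):
    """Interpret a flat list row-by-row as a rows x cols matrix."""
    assert len(flat) == rows * cols
    return [flat[r * cols:(r + 1) * cols] for r in range(rows)]


flat = [0, 1, 2, 3, 4, 5]

# Both interpretations consume exactly the same six values, yet produce
# different matrices -- without a convention, shape metadata alone
# cannot disambiguate them.
as_2x3 = reshape(flat, 2, 3)  # [[0, 1, 2], [3, 4, 5]]
as_3x2 = reshape(flat, 3, 2)  # [[0, 1], [2, 3], [4, 5]]
```

This is exactly why a partially inferred contract may need manual correction: the metadata is present, but underspecified.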