Installation

You can set up an ML Lambda instance in two different ways:

  1. Setup on Docker Environment
  2. Setup on Kubernetes Cluster

ML Lambda works along with the CLI tool. Make sure you’ve installed it as well.



Docker

Prerequisites

The Docker installation is the same in both cases; the only difference is where you obtain the source files.


From Releases

Download and unpack the latest release from the releases page.
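As a sketch (the release tag 2.x.x below is a placeholder; pick an actual tag from the releases page):

```shell
# Download a release archive (replace 2.x.x with a real release tag)
curl -LO https://github.com/Hydrospheredata/hydro-serving/archive/2.x.x.tar.gz
# Unpack the archive and enter the unpacked directory
tar -xzf 2.x.x.tar.gz
cd hydro-serving-2.x.x/
```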


From Source

Clone serving repository.

$ git clone https://github.com/Hydrospheredata/hydro-serving

Once you’ve obtained the ML Lambda files, you can deploy one of two versions:

  1. The lightweight version lets you manage your models continuously, version them, create applications that use your models, and deploy all of this into production. To do that, execute the following:

     $ cd hydro-serving/
     $ docker-compose up -d 
    
  2. The integrations version extends the lightweight version and additionally integrates Kafka, Grafana, and InfluxDB.

     $ cd hydro-serving/integrations/
     $ docker-compose up -d
    

To check that everything is working, open http://localhost/. By default, the UI is available on port 80.
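A quick check from the command line might look like this (assuming curl is available and the UI is on the default port 80):

```shell
# Expect HTTP 200 from the UI once the containers are up
curl -s -o /dev/null -w "%{http_code}\n" http://localhost/
# List the containers started by docker-compose and their state
docker-compose ps
```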



Kubernetes

Pre-built Helm charts are available for installing and maintaining ML Lambda on Kubernetes clusters.

Prerequisites

Installation can be performed in a few ways.


From Repo

Add the ML Lambda chart repository to Helm.

$ helm repo add ml-lambda https://hydrospheredata.github.io/hydro-serving-helm/

Install the chart from the repository into the cluster.

$ helm install --name ml-lambda ml-lambda/serving
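To verify the release, a sketch using standard Helm and kubectl commands (pod names will vary between releases):

```shell
# Show the status of the ml-lambda release
helm status ml-lambda
# Check that the serving pods have reached the Running state
kubectl get pods
```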

From Release

Choose a release from the releases page and install it as usual.

$ helm install --name ml-lambda https://github.com/Hydrospheredata/hydro-serving-helm/releases/download/0.1.15/serving-0.1.15.tgz

From Source

Clone the repository.

$ git clone https://github.com/Hydrospheredata/hydro-serving-helm.git
$ cd hydro-serving-helm

Build dependencies.

$ helm dependency build serving

Install the chart.

$ helm install --name ml-lambda serving

To check that everything is working, open http://localhost/. By default, the UI is available on port 80.
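If the cluster is remote, the UI can be reached by port-forwarding instead. A sketch (the `<ui-service>` name is a placeholder; look it up with the first command):

```shell
# Find the UI service name created by the chart
kubectl get svc
# Forward local port 8080 to the UI service's port 80
# (replace <ui-service> with the actual service name)
kubectl port-forward svc/<ui-service> 8080:80
```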

For more information about configuring the ml-lambda release, refer to the chart’s repository.



CLI

Prerequisites

To install the CLI tool, run:

$ pip install hs
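To confirm the tool landed on your PATH, a quick check (assuming the package installs an `hs` executable, as its name suggests):

```shell
# Print the CLI's help text to confirm the installation
hs --help
```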

What’s Next?