Seldon Core comes installed with Kubeflow. The Seldon documentation site provides full documentation for running Seldon inference.
If you have a saved model in a PersistentVolume (PV), Google Cloud Storage bucket, or Amazon S3 bucket, you can use one of the prepackaged model servers provided by Seldon.
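A minimal sketch of such a deployment, using the prepackaged scikit-learn server (the deployment name and `modelUri` path are illustrative placeholders, not values from this guide):

```yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: sklearn-iris                          # illustrative name
  namespace: my-namespace
spec:
  predictors:
  - name: default
    replicas: 1
    graph:
      name: classifier
      implementation: SKLEARN_SERVER          # prepackaged scikit-learn model server
      modelUri: gs://my-bucket/sklearn/iris   # illustrative GCS path to the saved model
```

Once deployed, predictions can typically be sent through the Istio ingress at a path of the form `/seldon/my-namespace/sklearn-iris/api/v1.0/predictions`.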
Seldon also provides language-specific model wrappers so you can package your own inference code and run it in Seldon.
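As a sketch, a model wrapped this way is deployed by pointing a SeldonDeployment at the resulting container image (the image name below is a hypothetical placeholder):

```yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: my-wrapped-model                      # hypothetical name
  namespace: my-namespace
spec:
  predictors:
  - name: default
    componentSpecs:
    - spec:
        containers:
        - name: classifier
          image: registry.example.com/my-model:0.1   # image built with a Seldon language wrapper
    graph:
      name: classifier                        # must match the container name above
      type: MODEL
```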
You need to ensure the namespace where your models will be served has:

- An Istio gateway named `kubeflow-gateway`
- The label `serving.kubeflow.org/inferenceservice=enabled`
The following example applies the required label to the namespace `my-namespace` for serving:

```shell
kubectl label namespace my-namespace serving.kubeflow.org/inferenceservice=enabled
```
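One way to confirm the label took effect:

```shell
kubectl get namespace my-namespace --show-labels
```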
Create a gateway called `kubeflow-gateway` in namespace `my-namespace`:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: kubeflow-gateway
  namespace: my-namespace
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - '*'
    port:
      name: http
      number: 80
      protocol: HTTP
```
Save the above resource to a file and apply it with `kubectl`.
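For example, assuming the manifest was saved as `gateway.yaml` (the filename is illustrative):

```shell
kubectl apply -f gateway.yaml

# Confirm the gateway was created in the serving namespace
kubectl get gateway -n my-namespace
```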
The Kubeflow Seldon E2E Pipeline shows how to build reusable components for an ML pipeline.
Seldon provides a large set of example notebooks showing how to run inference code for a wide range of machine learning toolkits.