Beta: This Kubeflow component has beta status. See the Kubeflow versioning policies. The Kubeflow team is interested in your feedback about the usability of the feature.
KFServing enables serverless inferencing on Kubernetes and provides performant, high-abstraction interfaces for common machine learning (ML) frameworks like TensorFlow, XGBoost, scikit-learn, PyTorch, and ONNX to solve production model serving use cases.
You can use KFServing to do the following:
- Provide a Kubernetes Custom Resource Definition for serving ML models on arbitrary frameworks (a minimal example follows this list).
- Encapsulate the complexity of autoscaling, networking, health checking, and server configuration to bring cutting-edge serving features like GPU autoscaling, scale-to-zero, and canary rollouts to your ML deployments.
- Enable a simple, pluggable, and complete story for your production ML inference server by providing prediction, pre-processing, post-processing, and explainability out of the box.
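As a minimal sketch of that Custom Resource Definition in action, the manifest below follows the TensorFlow flowers sample from the kfserving repository; it uses the v1alpha2 API that ships with the KFServing v0.4 release described later on this page.

```yaml
apiVersion: serving.kubeflow.org/v1alpha2
kind: InferenceService
metadata:
  name: flowers-sample
spec:
  default:
    predictor:
      tensorflow:
        # Model artifacts are pulled from this storage location at startup.
        storageUri: "gs://kfserving-samples/models/tensorflow/flowers"
```

Applying this manifest with kubectl deploys a TensorFlow model server, and predictions are served on the /v1/models/flowers-sample:predict path once the service is ready.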
Strong community contributions help KFServing grow. We have a Technical Steering Committee driven by Bloomberg, IBM Cloud, Seldon, Amazon Web Services (AWS), and NVIDIA. Browse the KFServing GitHub repo to give us feedback!
Install with Kubeflow
KFServing works with Kubeflow 1.2. Kustomize installation files are located in the manifests repo. Check the examples running KFServing on Istio/Dex in the kubeflow/kfserving repository. For installation on major cloud providers with Kubeflow, follow their installation docs.
Kubeflow 1.2 includes KFServing v0.4.1. The focus of this release has been on enabling KFServing on OpenShift, along with additional features: a batcher module that runs as a sidecar, the Triton Inference Server renaming and integrations, an upgrade of the Alibi explainer to 0.4.0, an update of the logger to the CloudEvents V1 protocol, and support for customized URL paths on the data plane. Additionally, the minimum Istio version is now v1.3.1. More details can be found in the KFServing release notes.
Examples
- Deploy models with out-of-the-box model servers
- Deploy models with custom model servers
- Deploy models on GPU
- Autoscaling and Rollouts (a canary rollout sketch follows this list)
- Model explainability and outlier detection
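As a hedged illustration of the rollouts item above, the v1alpha2 InferenceService spec used by KFServing v0.4 supports splitting traffic between a default and a canary predictor; the service name and storage URIs below are placeholders.

```yaml
apiVersion: serving.kubeflow.org/v1alpha2
kind: InferenceService
metadata:
  name: my-model        # placeholder name
spec:
  default:
    predictor:
      tensorflow:
        # Current production model version (placeholder path).
        storageUri: "gs://my-bucket/models/my-model/v1"
  # Route 10% of traffic to the canary predictor below.
  canaryTrafficPercent: 10
  canary:
    predictor:
      tensorflow:
        # Candidate model version (placeholder path).
        storageUri: "gs://my-bucket/models/my-model/v2"
```

If the canary behaves well, it can be promoted by moving its spec into default and removing the canary section, at which point all traffic flows to the updated default predictor.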
We frequently add examples to our GitHub repo.
- Join our working group for meeting invitations and discussion.
- Read the docs.
- API docs.
- Debugging guide.
- KFServing 101 slides.
- Kubecon Introducing KFServing.
- Kubecon Advanced KFServing.
- NVIDIA GTC Accelerate and Autoscale Deep Learning Inference on GPUs.
Prerequisites
Knative Serving (v0.11.2+), Istio (v1.3.1+), and Cert Manager (v0.12.0+) should be available on your Kubernetes cluster. To install these KFServing prerequisites, refer to the README section.
KFServing SDK
Install the SDK from PyPI:

```shell
pip install kfserving
```
Follow the example(s) to use the KFServing SDK to create, patch, roll out, and delete a KFServing instance.
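As a hedged sketch of that workflow, the following is adapted from the KFServing v0.4 SDK samples; the namespace, service name, and storage URI are illustrative and should be replaced with your own values.

```python
from kubernetes import client

from kfserving import (KFServingClient, V1alpha2EndpointSpec,
                       V1alpha2InferenceService, V1alpha2InferenceServiceSpec,
                       V1alpha2PredictorSpec, V1alpha2TensorflowSpec, constants)

# Predictor backed by a TensorFlow model in object storage (sample model path).
default_endpoint = V1alpha2EndpointSpec(
    predictor=V1alpha2PredictorSpec(
        tensorflow=V1alpha2TensorflowSpec(
            storage_uri='gs://kfserving-samples/models/tensorflow/flowers')))

isvc = V1alpha2InferenceService(
    api_version=constants.KFSERVING_GROUP + '/' + constants.KFSERVING_VERSION,
    kind=constants.KFSERVING_KIND,
    metadata=client.V1ObjectMeta(name='flowers-sample', namespace='kubeflow'),
    spec=V1alpha2InferenceServiceSpec(default=default_endpoint))

kfserving_client = KFServingClient()
kfserving_client.create(isvc)  # deploy the InferenceService

# Block until the InferenceService reports ready, then clean up.
kfserving_client.get('flowers-sample', namespace='kubeflow',
                     watch=True, timeout_seconds=120)
kfserving_client.delete('flowers-sample', namespace='kubeflow')
```

The same client also exposes patch and replace methods for updating a running InferenceService; see the SDK examples in the repository for the full rollout workflow.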