Overview of Kubeflow Fairing
Kubeflow Fairing streamlines the process of building, training, and deploying machine learning (ML) models in a hybrid cloud environment. By using Kubeflow Fairing and adding a few lines of code, you can run your ML training job locally or in the cloud, directly from Python code or a Jupyter notebook. After your training job is complete, you can use Kubeflow Fairing to deploy your trained model as a prediction endpoint.
Follow a quickstart guide to learn how to get started running training jobs remotely with Kubeflow Fairing:
- Learn how to train and deploy a model on Google Cloud Platform (GCP) from a local notebook.
- Learn how to train and deploy a model on GCP from a notebook hosted on Kubeflow.
- Learn how to train and deploy a model on GCP from a notebook hosted on Google AI Platform Notebooks.
What is Kubeflow Fairing?
Kubeflow Fairing packages your Jupyter notebook, Python function, or Python file as a Docker image, then deploys and runs the training job on Kubeflow or AI Platform. After your training job is complete, you can use Kubeflow Fairing to deploy your trained model as a prediction endpoint on Kubeflow.
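For example, wrapping an ordinary Python training function is a matter of a few configuration calls. The sketch below is illustrative, not a complete setup: it assumes the `kubeflow-fairing` package is installed and that you have access to a container registry and a Kubeflow cluster, and the registry name `gcr.io/my-project` is a placeholder you would replace with your own.

```python
def train():
    """Ordinary training code. Fairing packages this function, its
    file, and its dependencies into a Docker image and runs it as a
    job on the cluster."""
    print("training...")

try:
    # Requires the kubeflow-fairing package and cluster credentials.
    from kubeflow import fairing

    # 'append' builds the image by appending your code to a base image;
    # the registry value below is a placeholder for your own registry.
    fairing.config.set_builder(
        "append", registry="gcr.io/my-project", base_image="python:3.7")
    fairing.config.set_deployer("job")  # run as a Kubernetes Job

    remote_train = fairing.config.fn(train)
    remote_train()  # builds, pushes, and runs train() remotely
except ImportError:
    # Without fairing installed, the same function still runs locally.
    train()
```

The key idea is that `train()` itself is unchanged: the same function runs locally during development and remotely once Fairing is configured.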
The following are the goals of the Kubeflow Fairing project:
- Easily package ML training jobs: Enable ML practitioners to easily package their ML model training code, and their code’s dependencies, as a Docker image.
- Easily train ML models in a hybrid cloud environment: Provide a high-level API for training ML models to make it easy to run training jobs in the cloud, without needing to understand the underlying infrastructure.
- Streamline the process of deploying a trained model: Make it easy for ML practitioners to deploy trained ML models to a hybrid cloud environment.
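The deployment goal can be sketched in the same style. This is a hedged example, not a definitive recipe: `MyModel`, the `model.joblib` input file, and the GKE backend choice are all assumptions, and it requires the `kubeflow-fairing` package plus a configured cluster to actually create an endpoint.

```python
class MyModel:
    """Hypothetical serving class; Fairing wraps it behind an HTTP
    prediction endpoint."""

    def __init__(self):
        # A real implementation would load the trained model here,
        # e.g. from the input_files packaged into the image.
        self.model = None

    def predict(self, X, feature_names=None):
        # Placeholder prediction logic for illustration only.
        return [0] * len(X)

try:
    # Requires the kubeflow-fairing package and cluster credentials.
    from kubeflow.fairing import PredictionEndpoint
    from kubeflow.fairing.backends import KubeflowGKEBackend

    endpoint = PredictionEndpoint(
        MyModel,
        input_files=["model.joblib"],  # placeholder trained-model file
        backend=KubeflowGKEBackend())
    endpoint.create()  # builds the image and deploys the endpoint
except ImportError:
    pass  # fairing not installed; nothing to deploy in this sketch
```

The serving class only needs a `predict` method; Fairing handles packaging it and exposing it as a service on the cluster.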