Experiment with the Pipelines Samples
You can learn how to build and deploy pipelines by running the samples provided in the Kubeflow Pipelines repository or by walking through a Jupyter notebook that describes the process.
Compiling the samples on the command line
This section shows you how to compile the Kubeflow Pipelines samples and deploy them using the Kubeflow Pipelines UI.
Before you start
Set up your environment:
- Clone or download the Kubeflow Pipelines samples.
- Install the Kubeflow Pipelines SDK.
Activate your Python 3 environment if you haven’t done so already:
source activate <YOUR-PYTHON-ENVIRONMENT-NAME>
For example:
source activate mlpipeline
Choose and compile a pipeline
Examine the pipeline samples that you downloaded and choose one to work with. The sequential.py sample pipeline is a good one to start with.
Each pipeline is defined as a Python program. Before you can submit a pipeline to the Kubeflow Pipelines service, you must compile the pipeline to an intermediate representation. The intermediate representation takes the form of a YAML file compressed into a .tar.gz file.
Use the dsl-compile command to compile the pipeline that you chose:
dsl-compile --py [path/to/python/file] --output [path/to/output/tar.gz]
For example, to compile the sequential.py sample pipeline:
export DIR=[YOUR PIPELINES REPO DIRECTORY]/samples/basic
dsl-compile --py $DIR/sequential.py --output $DIR/sequential.tar.gz
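The compiled output is simply a gzip-compressed tarball containing the pipeline's YAML definition. As a quick sanity check, you can list the archive's contents with Python's standard library. This is a sketch: the file name sequential.tar.gz matches the example above, but the member name inside the archive (often pipeline.yaml) may vary with the SDK version.

```python
import tarfile

def list_pipeline_archive(path):
    """Return the member names inside a compiled pipeline .tar.gz."""
    with tarfile.open(path, "r:gz") as tar:
        return tar.getnames()

# After compiling the sample above, you could run:
#   list_pipeline_archive("sequential.tar.gz")  # e.g. ['pipeline.yaml']
```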
Deploy the pipeline
Upload the generated .tar.gz file through the Kubeflow Pipelines UI. See the guide to getting started with the UI.
Building a pipeline in a Jupyter notebook
You can choose to build your pipeline in a Jupyter notebook. The sample notebooks walk you through the process.
It’s easiest to use the JupyterHub that is installed in the same cluster as the Kubeflow Pipelines system.
Note: The notebook samples don’t work on Jupyter notebooks outside the same cluster, because the Python library communicates with the Kubeflow Pipelines system through in-cluster service names.
Follow these steps to start a notebook:
1. Deploy Kubeflow and open the pipelines dashboard.
2. Click Notebooks in the left-hand menu. If this is the first time you’ve visited JupyterHub, you need to sign in. You can use any username, and you can leave the password blank.
3. Click the Spawn button to create a new instance. After a few minutes, the Jupyter UI opens. You can switch to the JupyterLab UI by changing the URL path to /lab.
4. Download the sample notebooks from https://github.com/kubeflow/pipelines/tree/master/samples/notebooks.
5. Upload the notebooks from the Jupyter UI: in Jupyter, go to the tree view and find the upload button in the top right-hand area of the screen.
6. Open one of the uploaded notebooks.
7. Make sure the notebook kernel is set to Python 3. The kernel name appears at the top right-hand corner of the Jupyter notebook view.
8. Follow the instructions in the notebook.
The following notebooks are available:
KubeFlow pipeline using TFX OSS components: This notebook demonstrates how to build a machine learning pipeline based on TensorFlow Extended (TFX) components. The pipeline includes a TFDV step to infer the schema, a TFT preprocessor, a TensorFlow trainer, a TFMA analyzer, and a model deployer that deploys the trained model to tf-serving in the same cluster. The notebook also demonstrates how to build a component based on Python 3 inside the notebook, including how to build a Docker container.
Lightweight Python components: This notebook demonstrates how to build simple Python components based on Python 3 and use them in a pipeline with fast iterations. If you use this technique, you don’t need to build a Docker container when you build a component. Note that the container image may not be self-contained, because the source code is not built into the container.
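To illustrate the idea, a lightweight component starts as an ordinary Python function; the SDK (for example, kfp.components.func_to_container_op in older SDK versions) then wraps it into a pipeline step without requiring a custom Docker image. A minimal sketch, where the function name and logic are hypothetical:

```python
def add(a: float, b: float) -> float:
    """A plain Python function that could become a lightweight component."""
    return a + b

# In a notebook you would wrap it, e.g. (older kfp SDK API):
#   import kfp.components as comp
#   add_op = comp.func_to_container_op(add)
# and then call add_op(...) inside a pipeline function.

print(add(2.0, 3.0))  # prints 5.0
```

Because the component is just a function, you can test its logic locally before wiring it into a pipeline.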