Introduction to the Pipelines SDK
The Kubeflow Pipelines SDK provides a set of Python packages that you can use to specify and run your machine learning (ML) workflows. A pipeline is a description of an ML workflow, including all of the components that make up the steps in the workflow and how the components interact with each other.
Note: The SDK documentation here refers to Kubeflow Pipelines with Argo, which is the default. If you are running Kubeflow Pipelines with Tekton instead, please follow the Kubeflow Pipelines SDK for Tekton documentation.
The Kubeflow Pipelines SDK includes the following packages:
kfp.compiler includes classes and methods for compiling pipeline Python DSL into a workflow YAML spec. Methods in this package include, but are not limited to, the following:
kfp.compiler.Compiler.compile compiles your Python DSL code into a single static configuration (in YAML format) that the Kubeflow Pipelines service can process. The Kubeflow Pipelines service converts the static configuration into a set of Kubernetes resources for execution.
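For example, a one-step pipeline can be defined and compiled as follows. This is a minimal sketch; the pipeline name, container image, and output file name are illustrative rather than taken from the documentation above.

```python
# A minimal sketch: define a one-step pipeline and compile it to a static
# YAML workflow spec that the Kubeflow Pipelines service can process.
import kfp
from kfp import dsl

@dsl.pipeline(name='Hello pipeline', description='A one-step example pipeline.')
def hello_pipeline():
    # ContainerOp defines a pipeline task backed by a container image.
    dsl.ContainerOp(
        name='echo',
        image='alpine:3.6',
        command=['sh', '-c'],
        arguments=['echo "hello world"'],
    )

# Compile the Python DSL into the static configuration (YAML).
kfp.compiler.Compiler().compile(hello_pipeline, 'hello_pipeline.yaml')
```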
kfp.components includes classes and methods for interacting with pipeline components. Methods in this package include, but are not limited to, the following:
kfp.components.func_to_container_op converts a Python function to a pipeline component and returns a factory function. You can then call the factory function to construct an instance of a pipeline task (ContainerOp) that runs the original function in a container.
kfp.components.load_component_from_file loads a pipeline component from a file and returns a factory function. You can then call the factory function to construct an instance of a pipeline task (ContainerOp) that runs the component container image.
kfp.components.load_component_from_url loads a pipeline component from a URL and returns a factory function. You can then call the factory function to construct an instance of a pipeline task (ContainerOp) that runs the component container image.
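The sketch below shows the three ways of obtaining a task factory; the file path and URL are hypothetical placeholders, not real component locations.

```python
from kfp import components

# 1. From a Python function: the SDK wraps the function in a component.
def add(a: float, b: float) -> float:
    return a + b

add_op = components.func_to_container_op(add)

# 2. From a local component specification file (path is hypothetical).
# train_op = components.load_component_from_file('components/train/component.yaml')

# 3. From a URL pointing at a component.yaml file (URL is hypothetical).
# eval_op = components.load_component_from_url('https://example.com/component.yaml')

# Each factory returns a function; calling it inside a pipeline definition
# constructs a ContainerOp task, e.g. task = add_op(1, 7).
```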
kfp.dsl contains the domain-specific language (DSL) that you can use to define and interact with pipelines and components. Methods, classes, and modules in this package include, but are not limited to, the following (a short usage sketch follows the list):
kfp.dsl.PipelineParam represents a pipeline parameter that you can pass from one pipeline component to another. See the guide to pipeline parameters.
kfp.dsl.component is a decorator for DSL functions that returns a pipeline component (ContainerOp).
kfp.dsl.pipeline is a decorator for Python functions that returns a pipeline.
kfp.dsl.python_component is a decorator for Python functions that adds pipeline component metadata to the function object.
kfp.dsl.types contains a list of types defined by the Kubeflow Pipelines SDK. Types include basic types like Bool, as well as domain-specific types like GCRPath. See the guide to DSL static type checking.
kfp.dsl.ResourceOp represents a pipeline task (op) which lets you directly manipulate Kubernetes resources (create, get, apply, …).
kfp.dsl.VolumeOp represents a pipeline task (op) which creates a new PersistentVolumeClaim (PVC). It aims to make the common case of creating a PersistentVolumeClaim fast.
kfp.dsl.VolumeSnapshotOp represents a pipeline task (op) which creates a new VolumeSnapshot. It aims to make the common case of creating a VolumeSnapshot fast.
kfp.dsl.PipelineVolume represents a volume used to pass data between pipeline steps. ContainerOps can mount a PipelineVolume either via the constructor’s pvolumes argument or the add_pvolumes() method.
kfp.dsl.ParallelFor represents a parallel for loop over a static or dynamic set of items in a pipeline. Each iteration of the for loop is executed in parallel.
kfp.dsl.ExitHandler represents an exit handler that is invoked upon exiting a pipeline. A typical usage of ExitHandler is garbage collection.
kfp.dsl.Condition represents a group of ops that are executed only when a certain condition is met. The condition must be determined at runtime by incorporating at least one task output or PipelineParam in the boolean expression.
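The sketch below strings several of these constructs together in one pipeline. It is illustrative only: the container images, commands, and output path are placeholders, not examples from the documentation above.

```python
from kfp import dsl

def echo_op(text):
    # A helper (not part of the SDK) that builds a simple ContainerOp.
    return dsl.ContainerOp(
        name='echo',
        image='alpine:3.6',
        command=['sh', '-c'],
        arguments=['echo "%s"' % text],
    )

@dsl.pipeline(name='Control flow example')
def control_flow_pipeline():
    # ExitHandler: the op passed to the constructor runs when the pipeline
    # exits, e.g. for garbage collection.
    exit_task = echo_op('cleaning up')
    with dsl.ExitHandler(exit_task):
        flip = dsl.ContainerOp(
            name='flip-coin',
            image='python:alpine3.6',
            command=['python', '-c'],
            arguments=[
                'import random; '
                'open("/tmp/output", "w").write(random.choice(["heads", "tails"]))'
            ],
            file_outputs={'output': '/tmp/output'},  # placeholder output path
        )

        # Condition: the enclosed op runs only if the expression is true at
        # runtime; it must reference a task output or PipelineParam.
        with dsl.Condition(flip.output == 'heads'):
            echo_op('it was heads')

        # ParallelFor: each iteration over the list runs in parallel.
        with dsl.ParallelFor(['a', 'b', 'c']) as item:
            echo_op(item)

        # VolumeOp creates a PersistentVolumeClaim; the resulting
        # PipelineVolume is mounted into a step via the pvolumes argument.
        vop = dsl.VolumeOp(name='create-volume', resource_name='my-pvc', size='1Gi')
        dsl.ContainerOp(
            name='use-volume',
            image='alpine:3.6',
            command=['sh', '-c', 'echo data > /data/file.txt'],
            pvolumes={'/data': vop.volume},
        )
```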
kfp.Client contains the Python client libraries for the Kubeflow Pipelines API. Methods in this package include, but are not limited to, the following:
kfp.Client.create_experiment creates a pipeline experiment and returns an experiment object.
kfp.Client.run_pipeline runs a pipeline and returns a run object.
kfp.Client.create_run_from_pipeline_func compiles a pipeline function and submits it for execution on Kubeflow Pipelines.
kfp.Client.create_run_from_pipeline_package runs a local pipeline package on Kubeflow Pipelines.
kfp.Client.upload_pipeline uploads a local file to create a new pipeline in Kubeflow Pipelines.
kfp.Client.upload_pipeline_version uploads a local file to create a pipeline version. Follow an example to learn more about creating a pipeline version.
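A minimal sketch of submitting a run with the client follows. The host URL and experiment name are placeholders, and the trivial pipeline stands in for any @dsl.pipeline-decorated function.

```python
import kfp
from kfp import dsl

@dsl.pipeline(name='Hello pipeline')
def hello_pipeline():
    dsl.ContainerOp(name='echo', image='alpine:3.6',
                    command=['sh', '-c'], arguments=['echo "hello world"'])

# Connect to the Kubeflow Pipelines API (the host URL is a placeholder).
client = kfp.Client(host='http://localhost:8080')

# Compile and submit the pipeline function in one call; the experiment is
# created if it does not already exist.
run = client.create_run_from_pipeline_func(
    hello_pipeline,
    arguments={},                        # pipeline parameter values
    experiment_name='dev-experiments',   # placeholder experiment name
)

# Alternatively, compile first and submit the resulting package file:
# kfp.compiler.Compiler().compile(hello_pipeline, 'hello_pipeline.yaml')
# client.create_run_from_pipeline_package('hello_pipeline.yaml', arguments={})
```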
Kubeflow Pipelines extension modules include classes and functions for specific platforms on which you can use Kubeflow Pipelines. Examples include utility functions for on-premises, Google Cloud Platform (GCP), Amazon Web Services (AWS), and Microsoft Azure deployments.
Kubeflow Pipelines diagnose_me modules include classes and functions that help with environment diagnostic tasks.
kfp.cli.diagnose_me.dev_env reports on diagnostic metadata from your development environment, such as your Python library versions.
kfp.cli.diagnose_me.kubernetes_cluster reports on diagnostic data from your Kubernetes cluster, such as Kubernetes secrets.
kfp.cli.diagnose_me.gcp reports on diagnostic data related to your GCP environment.
Kubeflow Pipelines CLI tool
The Kubeflow Pipelines CLI tool enables you to use a subset of the Kubeflow Pipelines SDK directly from the command line. The Kubeflow Pipelines CLI tool provides the following commands:
kfp diagnose_me runs environment diagnostics with the specified parameters.
--json - Indicates that this command must return its results as JSON. Otherwise, results are returned in human-readable format.
--namespace TEXT - Specifies the Kubernetes namespace to use. all-namespaces is the default value.
--project-id TEXT - For GCP deployments, this value specifies the GCP project to use. If this value is not specified, the environment default is used.
kfp pipeline <COMMAND> provides the following commands to help you manage pipelines:
get - Gets detailed information about a Kubeflow pipeline from your Kubeflow Pipelines cluster.
list - Lists the pipelines that have been uploaded to your Kubeflow Pipelines cluster.
upload - Uploads a pipeline to your Kubeflow Pipelines cluster.
kfp run <COMMAND> provides the following commands to help you manage pipeline runs:
get - Displays the details of a pipeline run.
list - Lists recent pipeline runs.
submit - Submits a pipeline run.
kfp --endpoint <ENDPOINT> - Specifies the endpoint that the Kubeflow Pipelines CLI should connect to.
Installing the SDK
Follow the guide to installing the Kubeflow Pipelines SDK.
Building pipelines and components
This section summarizes the ways you can use the SDK to build pipelines and components.
A Kubeflow pipeline is a portable and scalable definition of an ML workflow. Each step in your ML workflow, such as preparing data or training a model, is an instance of a pipeline component.
Learn more about building pipelines.
A pipeline component is a self-contained set of code that performs one step in your ML workflow. Components are defined in a component specification, which defines the following:
- The component’s interface, its inputs and outputs.
- The component’s implementation, the container image and the command to execute.
- The component’s metadata, such as the name and description of the component.
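A minimal sketch of a component specification with the three parts described above, written as inline YAML and loaded with kfp.components.load_component_from_text; the component name, image, and command are illustrative.

```python
from kfp import components

ECHO_COMPONENT_SPEC = '''
name: Echo message                       # metadata
description: Prints a message to stdout.
inputs:                                  # interface
- {name: message, type: String}
implementation:                          # implementation
  container:
    image: alpine:3.6
    command: [sh, -c, 'echo "$0"', {inputValue: message}]
'''

# Returns a factory function; calling echo_op('hi') inside a pipeline
# definition creates a ContainerOp task.
echo_op = components.load_component_from_text(ECHO_COMPONENT_SPEC)
```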
Use the following options to create or reuse pipeline components.
You can build components by defining a component specification for a containerized application.
Lightweight Python function-based components make it easier to build a component by using the Kubeflow Pipelines SDK to generate the component specification for a Python function (a minimal sketch follows this list).
You can reuse prebuilt components in your pipeline.
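A minimal sketch of a lightweight Python function-based component, using the kfp.components.create_component_from_func helper; the function and base image are illustrative.

```python
from kfp import components

def multiply(a: float, b: float) -> float:
    """Type hints on the signature become the component's interface."""
    return a * b

# The SDK generates the component specification from the function itself;
# no hand-written component.yaml or custom container image is required.
multiply_op = components.create_component_from_func(
    multiply,
    base_image='python:3.7',   # illustrative base image
)
```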
- Learn how to write recursive functions in the DSL.
- Build a pipeline component.
- Find out how to use the DSL to manipulate Kubernetes resources dynamically as steps of your pipeline.