# Orchestrators

Orchestrating the execution of ML pipelines.

The orchestrator is an essential component in any MLOps stack, as it is responsible for running your machine learning pipelines. To do so, the orchestrator provides an environment that is set up to execute the steps of your pipeline. It also makes sure that the steps of your pipeline only get executed once all their inputs (which are outputs of previous steps of your pipeline) are available.

Many of ZenML's remote orchestrators build Docker images in order to transport and execute your pipeline code. If you want to learn more about how Docker images are built by ZenML, check out this guide.

## When to use it

The orchestrator is a mandatory component in the ZenML stack. It is used to run all of your pipelines, and you are required to configure it in all of your stacks.

## Orchestrator Flavors

Out of the box, ZenML comes with a `local` orchestrator already part of the default stack that runs pipelines locally. Additional orchestrators are provided by integrations:

| Orchestrator | Flavor | Integration | Notes |
|---|---|---|---|
| LocalOrchestrator | `local` | _built-in_ | Runs your pipelines locally. |
| LocalDockerOrchestrator | `local_docker` | _built-in_ | Runs your pipelines locally using Docker. |
| KubernetesOrchestrator | `kubernetes` | `kubernetes` | Runs your pipelines in Kubernetes clusters. |
| KubeflowOrchestrator | `kubeflow` | `kubeflow` | Runs your pipelines using Kubeflow. |
| VertexOrchestrator | `vertex` | `gcp` | Runs your pipelines in Vertex AI. |
| SagemakerOrchestrator | `sagemaker` | `aws` | Runs your pipelines in Sagemaker. |
| AzureMLOrchestrator | `azureml` | `azure` | Runs your pipelines in AzureML. |
| TektonOrchestrator | `tekton` | `tekton` | Runs your pipelines using Tekton. |
| AirflowOrchestrator | `airflow` | `airflow` | Runs your pipelines using Airflow. |
| SkypilotAWSOrchestrator | `vm_aws` | `skypilot[aws]` | Runs your pipelines in AWS VMs using SkyPilot. |
| SkypilotGCPOrchestrator | `vm_gcp` | `skypilot[gcp]` | Runs your pipelines in GCP VMs using SkyPilot. |
| SkypilotAzureOrchestrator | `vm_azure` | `skypilot[azure]` | Runs your pipelines in Azure VMs using SkyPilot. |
| HyperAIOrchestrator | `hyperai` | `hyperai` | Runs your pipelines on HyperAI instances. |
| Custom Implementation | _custom_ | | Extend the orchestrator abstraction and provide your own implementation. |

If you would like to see the available flavors of orchestrators, you can use the command:

```shell
zenml orchestrator flavor list
```

## How to use it

You don't need to directly interact with any ZenML orchestrator in your code. As long as the orchestrator that you want to use is part of your active ZenML stack, using the orchestrator is as simple as executing a Python file that runs a ZenML pipeline:

```shell
python file_that_runs_a_zenml_pipeline.py
```
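For context, a minimal sketch of what such a file might contain is shown below. The step and pipeline names (`load_data`, `train_model`, `training_pipeline`) are purely illustrative, not part of the ZenML API:

```python
# A minimal, hypothetical pipeline file. The step and pipeline names
# here are illustrative only.
from zenml import pipeline, step


@step
def load_data() -> dict:
    """Produce some toy training data."""
    return {"features": [[1, 2], [3, 4]], "labels": [0, 1]}


@step
def train_model(data: dict) -> None:
    """Consume the output of load_data. The orchestrator ensures this
    step only runs once its input artifact is available."""
    print(f"Training on {len(data['labels'])} examples")


@pipeline
def training_pipeline():
    data = load_data()
    train_model(data)


if __name__ == "__main__":
    # Executed on whatever orchestrator the active ZenML stack defines.
    training_pipeline()
```

Running this file with the default stack executes it on the local orchestrator; switching the active stack to a remote orchestrator requires no changes to the pipeline code itself.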
If your orchestrator comes with a separate user interface (for example Kubeflow, Airflow, Vertex), you can get the URL to the orchestrator UI of a specific pipeline run using the following code snippet:

```python
from zenml.client import Client

pipeline_run = Client().get_pipeline_run("<PIPELINE_RUN_NAME>")
orchestrator_url = pipeline_run.run_metadata["orchestrator_url"].value
```

If your steps require the orchestrator to execute them on specific hardware, you can specify them on your steps as described here. If your orchestrator of choice or the underlying hardware doesn't support this, you can also take a look at step operators.
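As a sketch of the hardware case, assuming the `ResourceSettings` mechanism from ZenML's resource-configuration docs applies to your orchestrator, a step could request resources like this (the step name and resource values are illustrative):

```python
from zenml import step
from zenml.config import ResourceSettings


@step(settings={"resources": ResourceSettings(cpu_count=8, gpu_count=1, memory="16GB")})
def train_model() -> None:
    # The orchestrator attempts to reserve these resources for this step;
    # whether they are honored depends on the orchestrator flavor.
    ...
```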
