
Orchestration

Last updated Apr 16, 2025

After selecting a technology from the many available, you’ll inevitably confront the need to manage the intermediate layers. This is particularly true when handling unstructured data, which must be transformed into a structured format. Orchestrators and cloud-computing frameworks play a crucial role here, ensuring data moves efficiently across different systems and formats. In the following chapters, I’ll explain how they complete the full architectural picture.

At their core, orchestrators:

Orchestrators excel in:

While traditional orchestrators are task-centric, newer ones like Dagster emphasize Data Assets and Software-Defined Assets. This approach enhances scheduling and orchestration, as discussed in Dagster. These advancements align with the Modern Data Stack concepts.

# What is an Orchestrator

An orchestrator is a tool for scheduling and monitoring workflows. To make different technologies and file formats work together, you need an orchestrator and a processing engine that prepare, move, and wrangle the data correctly and to the right place.

Essentially, it takes siloed data from multiple storage locations, combines it, and unifies it within the orchestrator.
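To make this concrete, here is a minimal sketch of what an orchestrator does at its core: run tasks in dependency order and pass data from one to the next. The task names and DAG structure are hypothetical, and real orchestrators add scheduling, retries, and monitoring on top.

```python
# Hypothetical sketch: the core of an orchestrator is a dependency graph
# of tasks, executed in topological order, passing results downstream.
from graphlib import TopologicalSorter

def extract():
    # pretend this pulls rows from a siloed source
    return [{"id": 1, "amount": 42}]

def transform(rows):
    return [{**r, "amount_usd": r["amount"] / 100} for r in rows]

def load(rows):
    return f"loaded {len(rows)} rows"

# "what depends on what" -- the orchestrator's central data structure
dag = {"extract": set(), "transform": {"extract"}, "load": {"transform"}}
tasks = {"extract": extract, "transform": transform, "load": load}

results = {}
for name in TopologicalSorter(dag).static_order():
    upstream = [results[dep] for dep in dag[name]]
    results[name] = tasks[name](*upstream)

print(results["load"])  # loaded 1 rows
```

A production orchestrator does the same thing at scale, plus retries, alerting, backfills, and distribution across machines.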

# What Language does a Data Orchestrator Speak?

See What language does an Orchestrator speak.

# The Role of Orchestration in Mastering Complexity

Explore the key features in RW Building Better Analytics Pipelines.

# Abstraction: Data Pipeline as Microservice

Abstractions let you use data pipelines as microservices on steroids. Why? Because microservices scale well but are poor at staying aligned across different code services. A modern data orchestrator handles all of this through reusable abstractions. You can see each task or microservice as a single pipeline with one sole purpose, everything defined in a functional data engineering way. With an orchestrator, you do not need to start from zero when you begin a new microservice or pipeline.
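A minimal sketch of that reuse idea follows. The `task` decorator and registry are hypothetical names, not any specific orchestrator's API; the point is that shared plumbing (retries, registration) lives in one abstraction while each pipeline carries only its business logic.

```python
# Hypothetical sketch: reusable abstractions so each pipeline is a small,
# single-purpose "microservice" that shares the same plumbing instead of
# re-implementing retries, logging, and registration every time.
import functools

REGISTRY = {}

def task(retries=1):
    """Shared plumbing every pipeline gets for free."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(retries):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == retries - 1:
                        raise
        REGISTRY[fn.__name__] = wrapper
        return wrapper
    return decorator

@task(retries=3)
def clean_orders(rows):
    # pure business logic -- no scheduling or retry code in here
    return [r for r in rows if r["amount"] > 0]

print(clean_orders([{"amount": 5}, {"amount": -1}]))  # [{'amount': 5}]
```

Every new pipeline starts from this shared foundation instead of from zero.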

More on Data Orchestration Trends: The Shift From Data Pipelines to Data Products.

# Data Orchestrator Tools

Simple Version


Image from dlthub.com.

Selecting a technology should be followed by choosing an Orchestrator. This crucial step often goes overlooked.

For more insights, read Data Orchestration Trends: The Shift From Data Pipelines to Data Products.

# History: The Data Orchestration Evolution

To better understand the why, let’s go back in time. In 2022 I wrote about the shifts and trends in data orchestration, and how we moved from pure task orchestration to data orchestration. To understand the space, we need to understand its history, as it went through several stages.

```mermaid
gantt
    title OSS Data Orchestration Evolution
    dateFormat  YYYY
    axisFormat  %Y
    todayMarker off

    section Command Line Era
    Cron :milestone, m1, 1987, 0d

    section ETL Tools Era
    Informatica PowerCenter :milestone, m4, 1993, 0d
    Oracle OWB :milestone, m2, 2000, 0d
    SQL Server Integration Services (SSIS) :milestone, m3, 2005, 0d

    section Python Orchestrators
    Apache Oozie :milestone, m5, 2011, 0d
    Luigi (Spotify) :milestone, m6, 2012, 0d
    Apache Airflow (Airbnb) :milestone, m7, 2014, 0d

    section Modern Orchestrators
    Dagster :milestone, m10, 2018, 0d
    Prefect :milestone, m8, 2019, 0d
    Kedro :milestone, m9, 2019, 0d
    Temporal :milestone, m11, 2019, 0d

    section Universal Orchestrators
    Kestra :milestone, m12, 2022, 0d
    Mage :milestone, m13, 2022, 0d
```

This chart shows the different stages orchestration went through.

Today we talk more about data assets: specific data tables, BI dashboards, or an S3 bucket. People don’t care about the ETL in between and its transformations. Frankly, business people don’t care about our well-crafted data pipelines.

But as a matter of fact, that’s where the heavy lifting takes place and our data assets get created. Besides more focus on data-aware orchestration, it’s also important to apply software engineering best practices to our central orchestration tool: versioning pipelines in version control so we can roll back a faulty new version, and CI/CD to detect bugs early in the lifecycle. Most importantly, to do all of this, we need a declarative data orchestrator, one whose pipelines can be defined declaratively. How does that work?

Data-aware orchestration knows more about the data it runs: it can reuse existing technical implementations for different tasks, or pass data along to the next task instead of treating it as an opaque blob.

And declarative means we configure and specify what we want to orchestrate while isolating the how (the technical implementation). A declarative approach also lets you quickly update a pipeline through configuration, version it, roll back in case of error, and test, automate, and apply Software Engineering Best Practices to data pipelines.
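The declarative idea can be sketched in a few lines. The spec format and operation names below are invented for illustration; the point is that the pipeline itself is plain data (easy to diff, version, and roll back), while a small generic engine supplies the "how".

```python
# Hypothetical sketch of a declarative pipeline: the *what* is pure data,
# the *how* lives in a small engine that interprets it.
pipeline_spec = {
    "name": "daily_orders",
    "steps": [
        {"op": "filter", "field": "amount", "min": 0},
        {"op": "rename", "from": "amount", "to": "amount_usd"},
    ],
}

# the isolated technical implementation ("how")
OPS = {
    "filter": lambda rows, s: [r for r in rows if r[s["field"]] >= s["min"]],
    "rename": lambda rows, s: [
        {**{k: v for k, v in r.items() if k != s["from"]}, s["to"]: r[s["from"]]}
        for r in rows
    ],
}

def run(spec, rows):
    for step in spec["steps"]:
        rows = OPS[step["op"]](rows, step)
    return rows

print(run(pipeline_spec, [{"amount": 10}, {"amount": -2}]))
# [{'amount_usd': 10}]
```

Because `pipeline_spec` is just data, a faulty change is reverted by checking out the previous version of the spec; no engine code changes.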

Besides data awareness and a declarative approach, the shift has gone further toward simple tools that help consolidate the ecosystem. These integrate the strengths of different tools and orchestrate them to bring data governance and data quality under control. Having source connectors to major databases and REST APIs out of the box helps any team tremendously.

The full list of open-source data orchestrators
If you are curious and want to see the complete list of tools and frameworks, I suggest you check out the Awesome Pipeline List on GitHub.

# An Older Version

Orchestrators have evolved from simple task managers to complex systems integrating with the Modern Data Stack. Let’s trace their journey:

  1. 1987: The inception with (Vixie) cron
  2. 2000: The emergence of graphical ETL tools like Oracle Warehouse Builder (OWB), SSIS, Informatica
  3. 2011: The rise of Hadoop orchestrators like Luigi, Oozie, Azkaban
  4. 2014: The rise of simple orchestrators like Airflow
  5. 2019: The advent of modern Python orchestrators like Prefect, Kedro, Dagster, Temporal, or even the fully SQL framework dbt
  6. The move to declarative pipelines, fully managed in Ascend.io, Palantir Foundry, and other data lake solutions

For an exhaustive list, visit the Awesome Pipeline List on GitHub. More on the history in Bash-Script vs. Stored Procedure vs. Traditional ETL Tools vs. Python-Script - 📖 Data Engineering Design Patterns (DEDP).


Also check the GitHub Star History, even though it doesn’t tell you much.

# Another version

A nice illustration by dlt in On Orchestrators: You Are All Right, But You Are All Wrong Too | dlt Docs:

# When to Use Which Tools

# Control Plane

As of 2024-07-09:
Data orchestrators are the control plane that keeps the heterogeneous data stack together. I like Dagster; even though it’s harder to start with, it gently “forces” you to use good practices. For example, technical code can be separated into resources (typically maintained by data engineers) that everyone can reuse, while business logic can be written by domain experts, nowadays directly next to the data assets. These are declarative and can easily be automated and versioned. Also, everything can run locally against a mocked Spark cluster exactly as it runs in production on Databricks, without changing a single line of DAG config; the only thing to define is a run config for each environment.
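The resource pattern described above can be sketched without any orchestrator library. The class and function names are hypothetical and only mirror the idea: business logic stays identical across environments, and a run config decides which resource gets injected (a mock locally, Databricks in production).

```python
# Hypothetical sketch of environment-swappable resources: the asset code
# never changes; only the injected resource differs per environment.
class MockSparkResource:
    """Local stand-in for a Spark/Databricks engine."""
    def sql(self, query):
        return [("local-row",)]  # canned data for local runs

class DatabricksResource:
    """Production engine; would hold a real connection."""
    def __init__(self, host):
        self.host = host
    def sql(self, query):
        raise NotImplementedError("only runs in production")

def orders_asset(engine):
    # identical "DAG" code in every environment
    return engine.sql("SELECT * FROM orders")

# per-environment run config -- the only thing that changes on deploy
run_config = {"local": MockSparkResource()}
# run_config["prod"] = DatabricksResource("https://...") in production

print(orders_asset(run_config["local"]))  # [('local-row',)]
```

Dagster formalizes this separation with its resources and run-config system; the sketch only shows why it makes local testing cheap.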

# Different Types of Orchestration

As of 2022-09-21:

Or, as others said in this Tweet, I’d use:

Also, explore insights from the podcast Re-Bundling The Data Stack With Data Orchestration And Software Defined Assets Using Dagster | Data Engineering Podcast with Nick Schrock.

# Comparing Dagster with Vim

Dagster is vim for orchestration. It has a steeper learning curve; you need to learn its concepts. Initially it’s harder, but with a complex or heterogeneous data infrastructure, these concepts can save you time and money.

Take vim motions: they are hard to learn but worth every minute if you write or code all day. The same goes for orchestration: if data and managing complexity are core to your business, it’s worth having a robust, battle-tested architecture in place, and you get that out of the box with Dagster. Tweet.

Fun is another parallel to vim. Vim, to me, is more fun to use than VS Code (see PDE). Dagster is likewise more fun for data engineers, as it focuses heavily on data engineering and developer productivity.


# Further Reads

Here are two deep dives of mine about this very topic:

