Lakehouse as Code | 03. Data Pipeline Jobs

Welcome to the Lakehouse as Code mini-series! In this series, we'll walk you through deploying a complete Databricks lakehouse using infrastructure as code with Laktory. From setting up Unity Catalog to orchestrating data pipelines and configuring your workspace, we’ve got everything covered.

In this third part, we focus on building and deploying data pipeline jobs. You’ll learn how to:

  • Configure and deploy a simple Hello Job

  • Declare a multi-table data pipeline

  • Define transformations using both SQL and the Spark DataFrame API (see the sketch after this list)

  • Develop and debug the pipeline from your IDE

  • Deploy as a Databricks Job
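
As a rough illustration of the transformation step listed above (not the article’s actual code and not Laktory’s configuration schema), here is how the same silver-layer transformation might be expressed once as SQL and once with the Spark DataFrame API. The catalog, table, and column names are hypothetical placeholders:

```python
# Minimal sketch: one transformation written two equivalent ways.
# Table and column names below are hypothetical, for illustration only.
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("transformations-sketch").getOrCreate()

# Hypothetical bronze table of raw records
bronze_df = spark.read.table("dev.sandbox.brz_stock_prices")
bronze_df.createOrReplaceTempView("brz_stock_prices")

# 1. SQL flavour
silver_sql = spark.sql("""
    SELECT
        symbol,
        CAST(created_at AS TIMESTAMP) AS created_at,
        open,
        close
    FROM brz_stock_prices
    WHERE symbol IS NOT NULL
""")

# 2. Spark DataFrame flavour (same result)
silver_df = (
    bronze_df
    .where(F.col("symbol").isNotNull())
    .select(
        "symbol",
        F.col("created_at").cast("timestamp").alias("created_at"),
        "open",
        "close",
    )
)

# Either result can then be written to the silver layer, e.g.:
# silver_df.write.mode("overwrite").saveAsTable("dev.sandbox.slv_stock_prices")
```

In a Laktory-managed pipeline, transformations like these are declared per table in the pipeline configuration rather than run ad hoc; the snippet is only meant to show what the SQL and DataFrame styles look like side by side.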
