
A platform agnostic Python library for writing declarative data pipelines the bueno way

blueno

A collection of Python ETL utilities for building data pipelines, with built-in orchestration.

Mainly focused on Data Engineering tasks utilising Polars and delta-rs.

It features blueprints as the central orchestration unit, and is inspired by the likes of dbt models, SQLMesh models, and Dagster's software-defined assets.

While it also has features for running on Microsoft Fabric, its utilities are decoupled so you can choose your own storage and compute.

Demo of sequential run

Demo of parallel run


Installation

Regular installation:

pip install blueno

For Microsoft Fabric or Azure:

pip install "blueno[azure]"

Features

Local development first

blueno aims to provide a local-first development environment: you can develop, run code, and store data locally as part of your development cycle. You can also read from and write to Azure Data Lake or Microsoft Fabric lakehouses.

Remote code execution

While local development is favored, sometimes we need to run our data pipelines in a real setting. The library lets you run code directly in Microsoft Fabric.

ETL helper functions

  • Read from delta tables or parquet files with automatic authentication to Azure Data Lake or OneLake
  • Common transformations (add audit columns, reorder columns, deduplicate etc.)
  • Load delta tables with one of the provided load methods (upsert, overwrite, append etc.)
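For a sense of what these helpers wrap, here is the same read-transform-load cycle written directly against Polars and delta-rs. This is a minimal sketch: the paths and column names are placeholders, and the calls are standard Polars APIs (pl.read_delta, write_delta), not blueno's own helper functions.

from datetime import datetime, timezone

import polars as pl

# Read a bronze delta table (blueno's readers additionally handle
# authentication to Azure Data Lake / OneLake for you)
df = pl.read_delta("/data/bronze/customer")

# Common transformations: deduplicate and add an audit column
df = df.unique(subset=["customer_id"]).with_columns(
    pl.lit(datetime.now(timezone.utc)).alias("__loaded_at")
)

# Load into the silver layer with an overwrite
df.write_delta("/data/silver/customer", mode="overwrite")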

Blueprints

Blueprints let you declaratively specify the entities in your data pipelines.

Configuration such as storage location and write behaviour is set in the decorator, while the function itself is concerned only with business logic.

from blueno import blueprint, Blueprint
import polars as pl

workspace_name = ...
lakehouse_name = ...

lakehouse_base_uri = f"abfss://{workspace_name}@onelake.dfs.fabric.microsoft.com/{lakehouse_name}.Lakehouse/Tables"

@blueprint(
    table_uri=f"{lakehouse_base_uri}/silver/customer",
    primary_keys=["customer_id"],
    write_mode="overwrite",
)
def silver_customer(self: Blueprint, bronze_customer: pl.DataFrame) -> pl.DataFrame:
    
    # Deduplicate customers
    df = bronze_customer.unique(subset=self.primary_keys)

    return df

Given you have also specified a blueprint for bronze_customer as a dependency of silver_customer (here, via the matching function parameter), the blueprints will automatically be wired and executed in the correct order.
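For example, the upstream blueprint could be declared in the same way. This is a sketch: the source path and read logic are illustrative placeholders, and the append write mode is assumed from the load methods listed above.

@blueprint(
    table_uri=f"{lakehouse_base_uri}/bronze/customer",
    primary_keys=["customer_id"],
    write_mode="append",
)
def bronze_customer(self: Blueprint) -> pl.DataFrame:
    # Illustrative source read -- point this at your actual landing zone
    return pl.read_parquet("landing/customer/*.parquet")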

See the documentation for a more elaborate example.

Documentation

For quick start and detailed documentation, examples, and API reference, visit our GitHub Pages documentation.

Contributing

Contributions are welcome! Here are some ways you can contribute:

  • Report bugs and feature requests through GitHub issues
  • Submit pull requests for bug fixes or new features
  • Improve documentation
  • Share ideas for new utilities