Launching Hercules in a Codespace is the easiest way to get started. Running `make run` from inside your codespace will get things spinning.
Hercules generates Prometheus metrics by querying:
- Local files (Parquet, JSON, CSV, XLSX, etc.)
- Object storage (GCS, S3, Azure Blob)
- HTTP endpoints
- Databases (PostgreSQL, MySQL, SQLite)
- Data lakes (Iceberg, Delta)
- Data warehouses (BigQuery)
- Arrow IPC buffers
Sources can be cached and periodically refreshed, or act as views to the underlying data.
Metrics from multiple sources can be materialized using a single exporter.
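As a sketch, a cached, periodically refreshed source might be configured along these lines (field names here are illustrative, not necessarily Hercules' exact schema):

```yml
# Hypothetical source definition -- field names are illustrative.
sources:
  - name: nyc_taxi
    type: parquet
    source: data/yellow_tripdata.parquet
    materialize: true       # cache the source locally...
    refresh_interval: 60    # ...and refresh it every 60 seconds
```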
Metric definitions are yml and use SQL in a number of supported dialects to aggregate, enrich, and materialize metric values.
Hercules supports Prometheus gauges, counters, summaries, and histograms.
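A gauge definition might look roughly like the following; the field names are illustrative assumptions, but the shape — a name, help text, a SQL query, and label columns — is the general pattern:

```yml
# Hypothetical gauge definition -- treat field names as a sketch.
metrics:
  gauge:
    - name: trip_avg_fare
      help: Average taxi fare by payment type
      sql: |
        select payment_type, avg(fare_amount) as value
        from nyc_taxi
        group by payment_type
      labels:
        - payment_type
```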
Sources and metrics can be externally enriched, leading to more thorough, accurate (or is it precise?), properly-labeled metrics.
Integrate, calculate, enrich, and label on the edge.
Metric definitions can be kept DRY using SQL macros.
Macros are useful for:
- Parsing log lines
- Reading useragent strings
- Schematizing unstructured data
- Enriching results with third-party tooling
- Tokenizing attributes
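For example, a status-code parser for access log lines could be defined once and reused across metric queries. The snippet below assumes a DuckDB-style `create macro` statement; the macro name and regex are illustrative:

```yml
# Hypothetical macro definition -- names and schema are assumptions.
macros:
  - sql: |
      -- Reusable parser for Apache-style access log lines.
      create macro parse_status(line) as
        cast(regexp_extract(line, '" (\d{3}) ', 1) as int);
```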
Hercules propagates global labels to all configured metrics, so you don't have to guess where a metric came from.
Labels are propagated from configuration or sourced from environment variables.
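A global label block might look like this sketch — the key names and the `$VAR` environment-variable syntax are assumptions for illustration:

```yml
# Hypothetical global label configuration.
global_labels:
  - cluster: us-east-1
  - environment: $ENVIRONMENT   # sourced from an environment variable
```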
Hercules extensions, sources, metrics, and macros can be logically grouped and distributed using packages. Examples can be found in the `hercules-packages` directory.
- Calculate Prometheus-compatible metrics from geospatial data
- Coerce unwieldy files into useful statistics using full-text search
- Use modern pipe SQL syntax or PRQL for defining and transforming your metrics
- You don't need to start queries with `select`
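For instance, an average-fare gauge in pipe syntax might read as follows — a sketch assuming a GoogleSQL-style pipe dialect and a hypothetical `nyc_taxi` source, starting from the table instead of a `select`:

```yml
metrics:
  gauge:
    - name: trip_avg_fare
      sql: |
        from nyc_taxi
        |> where fare_amount > 0
        |> aggregate avg(fare_amount) as value
           group by payment_type
```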
More to come.