The talk recording can be found here.
The slide deck for this talk can be found here.
The code used in the talk can be found in the example folder, which is adapted from the Pipekit blog post *How To Get the Most out of Hera for Data Science*.
- Docker Desktop with its local Kubernetes cluster enabled, so that Argo Workflows can be installed and run locally
- Poetry, to install the Python dependencies and run the example
- Run `make install`
- Port-forward the argo service and the minio service (easiest with k9s; a plain-kubectl sketch follows this list)
- Run `make add-data`
- Run `make run` (see the Hera sketch below for the general shape of what it submits)
- See the workflow at the localhost web address printed to the console
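If you prefer plain kubectl over k9s, the port-forwarding step usually looks like the sketch below. The namespace, service names, and ports (`argo`, `argo-server`, `minio`, 2746, 9000) are assumptions based on the Argo Workflows quick-start defaults, not something this repo guarantees; adjust them to whatever `make install` actually creates in your cluster.

```sh
# Hypothetical port-forwards, assuming quick-start defaults.
# Argo Server UI/API on https://localhost:2746:
kubectl -n argo port-forward svc/argo-server 2746:2746 &
# minio API on localhost:9000 (use separate terminals if you
# prefer not to background the processes):
kubectl -n argo port-forward svc/minio 9000:9000 &
```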
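For context on what `make run` does, here is a minimal, self-contained Hera sketch in the same spirit. It assumes the Argo Server is port-forwarded to https://localhost:2746 with no auth token required; the workflow and function names are illustrative, not the actual code in the example folder.

```python
# A minimal Hera workflow sketch (illustrative; not the exact code in the
# example folder). Assumes Argo Server is reachable on https://localhost:2746.
from hera.shared import global_config
from hera.workflows import Steps, Workflow, script

global_config.host = "https://localhost:2746"
global_config.verify_ssl = False  # quick-start installs use a self-signed cert

@script()
def hello():
    # Runs inside a container in the cluster once the workflow is submitted
    print("Hello from Hera!")

with Workflow(generate_name="hera-demo-", entrypoint="main", namespace="argo") as w:
    with Steps(name="main"):
        hello()

w.create()  # submit; the workflow then appears in the Argo UI on localhost
```

After `create()` returns, the submitted workflow is what the final step above refers to: it shows up in the Argo UI at the port-forwarded localhost address.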
Pipekit is the control plane for Argo Workflows. Platform teams use Pipekit to manage data and CI pipelines at scale, while giving developers self-serve access to Argo. Pipekit's unified logging view, workflow metrics dashboards, enterprise-grade RBAC, and multi-cluster management lower maintenance costs for platform teams while delivering a superior developer experience for Argo users. Sign up for a 30-day free trial at pipekit.io/signup.
Learn more about Pipekit's professional support for companies already using Argo at pipekit.io/services.