YuniKorn is a light-weight, universal resource scheduler for container orchestrator systems. It was created to achieve fine-grained resource sharing for various workloads efficiently in large-scale, multi-tenant, cloud-native environments. YuniKorn brings a unified, cross-platform scheduling experience for mixed workloads consisting of stateless batch workloads and stateful services. It supports, but is not limited to, YARN and Kubernetes.
The following chart illustrates the high-level architecture of YuniKorn.
YuniKorn consists of the following components spread over multiple code repositories.
- Scheduler core: Defines the brain of the scheduler, which makes placement decisions (allocate container X on node Y) according to pre-configured policies (a toy sketch of this decision flow follows this list). See more in the current repo, yunikorn-core.
- Scheduler interface: Defines the common scheduler interface used by the shims and the core scheduler. It contains the API layer (with gRPC/programming language bindings), which is agnostic to container orchestrator systems such as YARN and Kubernetes. See more in yunikorn-scheduler-interface.
- Resource Manager shims: Built-in support that allows container orchestrator systems to talk to the scheduler interface; a shim can be configured on an existing cluster without code changes. Currently, yunikorn-k8shim is available for Kubernetes integration.
- Scheduler User Interface: Defines the YuniKorn web interface for app/queue management. See more in yunikorn-web.
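To make the core/shim split concrete, here is a minimal, purely illustrative Go sketch of the kind of decision the core makes: given a resource request forwarded by a shim, a policy picks a node and the resulting decision ("allocate container X on node Y") is handed back. All type and function names here are hypothetical and do not reflect the actual yunikorn-core or yunikorn-scheduler-interface APIs.

```go
package main

import "fmt"

// Toy models only; the real types live in yunikorn-scheduler-interface
// and yunikorn-core and are considerably richer.
type Node struct {
	Name         string
	AvailableMem int64 // MB
	AvailableCPU int64 // millicores
}

type Request struct {
	ContainerID string
	Mem         int64
	CPU         int64
}

// pickNode is a stand-in for a pre-configured placement policy:
// choose the first node that can fit the request.
func pickNode(req Request, nodes []Node) (string, bool) {
	for _, n := range nodes {
		if n.AvailableMem >= req.Mem && n.AvailableCPU >= req.CPU {
			return n.Name, true
		}
	}
	return "", false
}

func main() {
	// In reality, node and request state comes from the shim over the
	// scheduler interface rather than being hard-coded.
	nodes := []Node{
		{Name: "node-1", AvailableMem: 512, AvailableCPU: 250},
		{Name: "node-2", AvailableMem: 4096, AvailableCPU: 2000},
	}
	req := Request{ContainerID: "container-x", Mem: 1024, CPU: 500}

	if node, ok := pickNode(req, nodes); ok {
		// The decision is sent back to the shim, which binds the container.
		fmt.Printf("Allocate %s on %s\n", req.ContainerID, node)
	} else {
		fmt.Printf("No node can fit %s, request stays pending\n", req.ContainerID)
	}
}
```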
Here are some key features of YuniKorn.
- Support for both batch jobs and long-running/stateful services.
- Hierarchical queues with min/max resource quotas.
- Resource fairness between queues, users and apps (a toy model of queue quotas and fairness is sketched after this list).
- Cross-queue preemption based on fairness.
- Support for scheduling customized resource types (such as GPUs).
- Rich support for placement constraints.
- Automatic mapping of incoming container requests to queues by policies.
- Node partition: partition a cluster into sub-clusters with dedicated quota/ACL management.
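The sketch below is a simplified Go model of two of these ideas: hierarchical queues with guaranteed (min) and max quotas, and fair-share ordering between sibling queues by usage relative to their guaranteed share. It is illustrative only; it is not the data structure, configuration format, or fairness algorithm actually used by yunikorn-core.

```go
package main

import (
	"fmt"
	"sort"
)

// Toy queue model: each queue has a guaranteed (min) and max quota for a
// single resource dimension, plus its current usage and child queues.
type Queue struct {
	Name       string
	Guaranteed int64
	Max        int64
	Used       int64
	Children   []*Queue
}

// fairShareRatio returns how far a queue is above or below its guaranteed
// share; lower values mean "more starved" and get resources first.
func fairShareRatio(q *Queue) float64 {
	if q.Guaranteed == 0 {
		return float64(q.Used)
	}
	return float64(q.Used) / float64(q.Guaranteed)
}

// nextQueue orders sibling queues by fair-share ratio, skipping queues
// that have already hit their max quota.
func nextQueue(siblings []*Queue) *Queue {
	candidates := make([]*Queue, 0, len(siblings))
	for _, q := range siblings {
		if q.Used < q.Max {
			candidates = append(candidates, q)
		}
	}
	if len(candidates) == 0 {
		return nil
	}
	sort.Slice(candidates, func(i, j int) bool {
		return fairShareRatio(candidates[i]) < fairShareRatio(candidates[j])
	})
	return candidates[0]
}

func main() {
	// root
	// ├── root.batch   (guaranteed 40, max 80)
	// └── root.service (guaranteed 60, max 100)
	batch := &Queue{Name: "root.batch", Guaranteed: 40, Max: 80, Used: 50}
	service := &Queue{Name: "root.service", Guaranteed: 60, Max: 100, Used: 30}
	root := &Queue{Name: "root", Guaranteed: 100, Max: 100, Children: []*Queue{batch, service}}

	if q := nextQueue(root.Children); q != nil {
		fmt.Printf("next allocation goes to %s (ratio %.2f)\n", q.Name, fairShareRatio(q))
	}
}
```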
The current road map for the whole project is here, where you can find more information about what is already supported and about future plans.
The simplest way to run YuniKorn is to build a Docker image and then deploy it to Kubernetes with a YAML file, running it as a customized scheduler. You can then run workloads with this scheduler, as in the sketch below. See more instructions here.
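As a hedged illustration, the Go snippet below uses client-go to submit a pod that asks for YuniKorn by setting spec.schedulerName. The scheduler name "yunikorn" and the applicationId label follow common YuniKorn deployment examples, but they are assumptions here; verify them against the k8shim configuration of your cluster.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load kubeconfig from the default location; assumes the YuniKorn
	// k8shim is already deployed in the target cluster.
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "sleep-with-yunikorn",
			Namespace: "default",
			// Label used in YuniKorn examples to group pods into one app;
			// treat it as an assumption and check your shim configuration.
			Labels: map[string]string{"applicationId": "sleep-app-0001"},
		},
		Spec: corev1.PodSpec{
			// Ask Kubernetes to hand this pod to the YuniKorn scheduler
			// instead of the default scheduler.
			SchedulerName: "yunikorn",
			Containers: []corev1.Container{{
				Name:    "sleep",
				Image:   "alpine:latest",
				Command: []string{"sleep", "3600"},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}

	created, err := clientset.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("submitted pod %s, to be scheduled by %s\n", created.Name, created.Spec.SchedulerName)
}
```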
We welcome contributions of any form: code, documentation, or even discussions. To get involved, please read the following resources.
- Before contributing code or documentation to YuniKorn, please read our Developer Guide.
- Please read How to Contribute to understand the procedure and guidelines of making contributions.
YuniKorn demo videos are published to a YouTube channel; watching them may help you get started with YuniKorn more easily.
Blog posts