OpenEBS


https://openebs.io/

Read this in 🇩🇪 🇷🇺 🇹🇷 🇺🇦 🇨🇳 🇫🇷 🇧🇷 🇪🇸 🇵🇱 other languages.

OpenEBS is the most widely deployed and easy to use open-source storage solution for Kubernetes.

OpenEBS is the leading open-source example of a category of cloud native storage solutions sometimes called Container Attached Storage, and is listed in the CNCF Storage Landscape White Paper under hyperconverged storage solutions.

Some key aspects that make OpenEBS different compared to other traditional storage solutions:

  • Built using a microservices architecture, like the applications it serves. OpenEBS is itself deployed as a set of containers on Kubernetes worker nodes, and uses Kubernetes itself to orchestrate and manage the OpenEBS components.
  • Built completely in userspace making it highly portable to run across any OS/platform.
  • Completely intent-driven, inheriting the same principles that drive the ease of use with Kubernetes.
  • OpenEBS supports a range of storage engines so that developers can deploy the storage technology appropriate to their application design objectives. Distributed applications like Cassandra can use the LocalPV engine for the lowest-latency writes. Monolithic applications like MySQL and PostgreSQL can use the ZFS engine (cStor) for resilience. Streaming applications like Kafka can use the NVMe engine Mayastor for best performance in edge environments. Across engine types, OpenEBS provides a consistent framework for high availability, snapshots, clones, and manageability; a minimal example of selecting an engine through its StorageClass follows this list.
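In practice, an application picks an engine simply by naming the corresponding StorageClass in its PersistentVolumeClaim. The sketch below assumes the default openebs-hostpath Local PV StorageClass created by the openebs-operator; the class names available on your cluster may differ depending on how OpenEBS was installed (check with kubectl get sc).

# a minimal sketch: request a Local PV backed volume by naming an OpenEBS StorageClass
# (assumes the default "openebs-hostpath" class created by openebs-operator)
kubectl apply -f - <<EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: demo-local-pvc
spec:
  storageClassName: openebs-hostpath
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
EOF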

OpenEBS itself is deployed as just another container on your host, enabling storage services that can be designated on a per-pod, application, cluster, or container level, including:

  • Automated management of storage attached to the Kubernetes worker nodes, allowing that storage to be used for dynamically provisioning OpenEBS PVs or Local PVs.
  • Data persistence across nodes, dramatically reducing time spent rebuilding Cassandra rings for example.
  • Synchronization of data across availability zones and cloud providers improving availability and decreasing attach/detach times for example.
  • A common layer, so whether you are running on AKS, on bare metal, on GKE, or on AWS, your wiring and developer experience for storage services is as similar as possible.
  • Management of tiering to and from S3 and other targets.

An added advantage of being a completely Kubernetes native solution is that administrators and developers can interact and manage OpenEBS using all the wonderful tooling that is available for Kubernetes like kubectl, Helm, Prometheus, Grafana, Weave Scope, etc.
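For example, the standard Kubernetes tooling is all you need to inspect a running OpenEBS installation. The commands below are a sketch that assumes the default openebs namespace; blockdevices is a custom resource created by the OpenEBS node-disk-manager.

# list the OpenEBS control plane and data plane pods (assumes the default "openebs" namespace)
kubectl get pods -n openebs

# list the StorageClasses installed by OpenEBS alongside any others in the cluster
kubectl get storageclass

# list the block devices discovered by the OpenEBS node-disk-manager
kubectl get blockdevices -n openebs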

Our vision is simple: let storage and storage services for persistent workloads be fully integrated into the environment so that each team and workload benefits from the granularity of control and Kubernetes native behaviour.

Scalability

OpenEBS can scale to include an arbitrarily large number of containerized storage controllers. Kubernetes provides the fundamental pieces, such as etcd for inventory. OpenEBS scales to the extent that your Kubernetes cluster scales.

Installation and Getting Started

OpenEBS can be set up in a few easy steps. You can get going on your choice of Kubernetes cluster by installing open-iscsi on the Kubernetes nodes and applying the openebs-operator using kubectl.
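As an illustration, on Debian/Ubuntu worker nodes the iSCSI prerequisite can typically be satisfied as follows; package names and service management differ on other distributions, so treat this as a sketch rather than the canonical steps.

# install and enable the iSCSI initiator on a Debian/Ubuntu worker node (other distros differ)
sudo apt-get update
sudo apt-get install -y open-iscsi
sudo systemctl enable --now iscsid
sudo systemctl status iscsid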

Start the OpenEBS Services using operator

# apply this yaml
kubectl apply -f https://openebs.github.io/charts/openebs-operator.yaml

Start the OpenEBS Services using helm

helm repo update
helm install --namespace openebs --name openebs stable/openebs
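The command above uses Helm v2 syntax (--name). With Helm v3, a roughly equivalent install, assuming the chart published at https://openebs.github.io/charts, looks like this:

# Helm v3 sketch: add the OpenEBS chart repository and install into the openebs namespace
helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install openebs openebs/openebs --namespace openebs --create-namespace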

You could also follow our QuickStart Guide.

OpenEBS can be deployed on any Kubernetes cluster - in the cloud, on-premise, or on a developer laptop (minikube). Note that no changes to the underlying kernel are required, as OpenEBS operates in userspace. Please follow our OpenEBS Setup documentation. We also have a Vagrant environment available that includes a sample Kubernetes deployment and a synthetic load that you can use to simulate the performance of OpenEBS. You may also find the related Litmus project interesting; it helps with chaos engineering for stateful workloads on Kubernetes.

Status

OpenEBS is one of the most widely used and tested Kubernetes storage infrastructures in the industry. A CNCF Sandbox project since May 2019, OpenEBS is the first and only storage system to provide a consistent set of software-defined storage capabilities on multiple backends (local, nfs, zfs, nvme) across both on-premise and cloud systems. It was also the first to open source its own chaos engineering framework for stateful workloads, the Litmus Project, which the community relies on to automatically assess the readiness of the monthly OpenEBS releases. Enterprise customers have been using OpenEBS in production since 2018, and the project supports 2.5M+ Docker pulls a week.

The status of the various storage engines that power OpenEBS Persistent Volumes is provided below. The key differences between the statuses are as follows:

  • alpha: The API may change in incompatible ways in a later software release without notice, recommended for use only in short-lived testing clusters, due to increased risk of bugs and lack of long-term support.
  • beta: Support for the overall features will not be dropped, though details may change. Support for upgrading or migrating between versions will be provided, either through automation or manual steps.
  • stable: Features will appear in released software for many subsequent versions and support for upgrading between versions will be provided with software automation in the vast majority of scenarios.
Storage Engine | Status | Details
Jiva           | stable | Best suited for running replicated block storage on nodes that make use of ephemeral storage on the Kubernetes worker nodes.
cStor          | beta   | A preferred option for running on nodes that have block devices. The recommended option if snapshots and clones are required.
Local Volumes  | beta   | Best suited for distributed applications that need low-latency, direct-attached storage from the Kubernetes nodes.
Mayastor       | beta   | A new storage engine that operates at the efficiency of local storage but also offers storage services like replication. Development is underway to support snapshots and clones.

For more details, please refer to OpenEBS Documentation.

Contributing

OpenEBS welcomes your feedback and contributions in any form possible.

Show me the Code

This is a meta-repository for OpenEBS. Please start with the pinned repositories or with the OpenEBS Architecture document.

License

OpenEBS is developed under the Apache License 2.0 at the project level. Some components of the project are derived from other open source projects and are distributed under their respective licenses.

OpenEBS is part of the CNCF Projects.

CNCF Sandbox Project

Commercial Offerings

This is a list of third-party companies and individuals who provide products or services related to OpenEBS. OpenEBS is a CNCF project which does not endorse any company. The list is provided in alphabetical order.
