Consider the case of a virtual machine. A VM is a machine running inside another machine, and VMs are used widely for development. They are popular because several people can have their own "machines" that are completely separated from one another, allowing them to work independently while still drawing on the same pool of shared resources. One team could have a machine that runs non-stop builds, while other teams have individual machines for development work. Each machine can shut down, start up, receive updates, and have software installed, all independently of the others. Now, consider the same use case, but for clusters.
In a large organization, you would generally have several teams that use various clusters for different stages of development. The biggest problem here is that each team needs its own cluster, so each team ends up managing its own Kubernetes cluster. This is inefficient and costly: there is a lot of repetition in creating and maintaining clusters, and the infrastructure bill multiplies with every team. The common solution organizations employ is a single cluster maintained by a dedicated DevOps team. The cluster is then broken down into many isolated namespaces, each assigned to a different team.
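As a sketch of this namespace-per-team setup, the DevOps team might create one namespace per team and cap that team's share of the cluster with a ResourceQuota. The name `team-a` and the limits below are illustrative, not prescriptive:

```yaml
# Hypothetical per-team namespace on the shared cluster.
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
---
# Cap how much of the shared cluster team-a can consume.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "8"
    requests.memory: 16Gi
    pods: "50"
```

Applying a manifest like this for every team is exactly the kind of repetitive maintenance work that falls on the DevOps team.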
However, there are several problems with this approach. The main one is access restriction. Large organizations generally deal with sensitive data that only certain individuals should be able to see, and failing to comply with data protection rules can cost a company millions. But when everything lives in a shared cluster, restricting access for different teams, or for individuals within a team, becomes a real problem; even granting access to members of a single team can be a hassle. The next problem with a shared cluster is that it also shares resources. A handful of teams could consume the majority of cluster resources, leaving other teams without the computational power to run their applications. In a dev environment, people expect code to break things frequently, while in a prod environment, ideally nothing should go wrong. These are two extremes that a single cluster may not be able to handle when the resources of various teams are mixed together.
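To make the access problem concrete: with plain namespaces, restricting a developer to their team's namespace means writing RBAC rules like the following for every team and every user. All names here are made up for illustration:

```yaml
# Hypothetical Role granting broad access inside team-a's namespace only.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: team-a-dev
  namespace: team-a
rules:
  - apiGroups: ["", "apps", "batch"]
    resources: ["*"]
    verbs: ["*"]
---
# Bind the Role to one developer; this must be repeated per user.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-dev-binding
  namespace: team-a
subjects:
  - kind: User
    name: alice@example.com
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: team-a-dev
  apiGroup: rbac.authorization.k8s.io
```

Note that a namespaced Role like this still cannot safely grant cluster-scoped permissions (for example, installing CRDs), which is one reason namespace isolation alone falls short for teams that need cluster-level control.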
Now that we've listed the problems, and given the title of this section, you can probably guess the solution: virtual clusters.
To see how virtual clusters work, we will use Loft as our example. Similar to the manual approach, Loft creates a namespace within the cluster. However, Loft then deploys a lightweight Kubernetes cluster into that namespace based on k3s, a lightweight implementation of k8s. The cluster contained within the namespace has its own fully featured API server and controller manager, meaning it is essentially a cluster in its own right. Every time a new cluster is deployed into a namespace, a new virtual cluster is created, and each team or individual can be assigned a separate cluster. The most interesting part is that you don't need a dedicated DevOps team to create and destroy these clusters, since Loft allows developers to do this themselves. That said, in large organizations a team will likely still monitor and manage resource usage, and you will need DevOps engineers to initially set up Loft in your company cluster.
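Assuming you have access to the host cluster and have installed the open-source `vcluster` CLI from Loft Labs (the underlying technology Loft builds on), the developer self-service workflow looks roughly like this. The cluster and namespace names are invented for the example:

```shell
# Create a k3s-based virtual cluster inside the namespace "team-a".
vcluster create team-a-vcluster --namespace team-a

# Point kubectl at the virtual cluster's own API server.
vcluster connect team-a-vcluster --namespace team-a

# Inside the virtual cluster, the team sees only its own resources.
kubectl get namespaces

# Tear everything down when it is no longer needed.
vcluster delete team-a-vcluster --namespace team-a
```

The exact flags may differ between CLI versions, but the point stands: creating and deleting a virtual cluster is a self-service operation, not a ticket to the platform team.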
Setting up Loft in your existing cluster isn't particularly difficult. If you have multiple clusters hosted across various cloud service providers, you only have to install Loft on one of them; you can then connect the other clusters to it and create a larger self-service platform. The next step is to handle access by creating users with different permissions, something we couldn't do cleanly when using only namespaces to segregate teams. Once that is done, developers can create their own clusters to suit their needs.
Next, let's go into a hands-on lab.