DevPod provides the ability to build workspaces by taking a devcontainer.json and a git repository and compiling an OCI-compliant image with everything you need to develop.
It does this by parsing the devcontainer.json, extracting the "features" and appending them as build stages to the base Dockerfile. The container is then built; depending on the driver
this could be docker, buildkit or kaniko, and it is deployed with the configuration defined by your context. Optionally, once the container is built, it can be pushed to a registry to act as a cache for
other developers or in case you rebuild your workspace later. See #tutorials/reduce-build-times.
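
For illustration, a devcontainer.json along these lines is enough to trigger a build even though it starts from a prebuilt image, because each entry under "features" becomes an extra build stage appended to the base Dockerfile. The image and feature references below are only examples:

```json
{
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
  "features": {
    "ghcr.io/devcontainers/features/go:1": {},
    "ghcr.io/devcontainers/features/docker-in-docker:2": {}
  }
}
```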
DevPod works the same with Kubernetes as with Machines; the key difference is that the secure tunnel is set up through the Kubernetes control plane (e.g. `kubectl ...`), so an agent does not need
to run on the Kubernetes node. Instead, devpod-provider-kubernetes simply wraps the appropriate `kubectl` commands to start and connect a workspace using a devcontainer.
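
Conceptually, the commands the provider wraps map onto plain `kubectl` calls like the sketch below. This is a simplified illustration rather than the provider's actual invocations; the namespace, pod name and agent command are placeholders:

```sh
# create or start the workspace pod in the configured namespace
kubectl -n devpod apply -f workspace-pod.yaml

# "tunnel": exec into the pod and stream the agent's STDIO
# through the Kubernetes API server instead of connecting to the node directly
kubectl -n devpod exec -i devpod-my-workspace -- sh -c '<devpod agent command>'
```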
DevPod often has to build workspaces even when an "image" is specified in .devcontainer.json. This is because the devcontainer can contain "features" that cause the Dockerfile to be extended.
When this happens, or simply when "build" is used in .devcontainer.json, DevPod deploys an init container to the workspace that uses kaniko to first build your image (see #tutorials/reduce-build-times
for more details on kaniko) and then executes the container's entrypoint in the pod's main container. While building, if REGISTRY_CACHE has been specified in the context options, kaniko will download
existing build layers from the registry to reduce the overall build time.
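
As a hedged example, since REGISTRY_CACHE is a context option it can be set from the CLI roughly like this; the registry path is a placeholder, and the command and option name should be checked against your DevPod version:

```sh
devpod context set-options -o REGISTRY_CACHE=ghcr.io/my-org/devpod-cache
```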
Machines, to DevPod, are the infrastructure that ultimately runs your devcontainer. Some providers, such as gcloud, aws and digitalocean, are "machine" providers since
they first set up a VM to deploy your container to.

When DevPod starts a workspace, for example via `devpod up`, DevPod uses the selected provider and starts your devcontainer. If using a machine provider, DevPod will check if it should create a VM first.
If so, it uses your local environment's credentials and the associated CLI tool, such as `aws` or `az`, to create the infrastructure. Once started, DevPod connects to the VM using the provider's specified tunnel; below
are some examples of providers and their secure tunnels.

- AWS: Instance Connect
- GCloud:
- Azure:

The DevPod agent starts an SSH server using the STDIO of the secure tunnel so that your local DevPod CLI/UI can forward ports over the SSH connection. Once this is done, DevPod starts your local IDE and connects it to the devcontainer via SSH.
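
As a sketch of this flow with a machine provider, the following uses the AWS provider; the repository URL is a placeholder and provider-specific options (region, machine type, etc.) are omitted:

```sh
# add and select a machine provider
devpod provider add aws
devpod provider use aws

# create the workspace: DevPod provisions the VM with your local credentials,
# then deploys the devcontainer and tunnels into it
devpod up github.com/example-org/example-repo
```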
DevPod deploys workspaces using the "up" command. When executed, DevPod builds a devcontainer, if one is not already available, then uses the provider to deploy the devcontainer to a workspace. Below is a sequence diagram
of the main stages of the "up" command.

<figure>
  <img src="/docs/media/up_sequence.png" alt="DevPod Up Sequence" />
  <figcaption>DevPod up - Sequence Diagram</figcaption>
</figure>

First, DevPod checks if we need to create or start a machine to deploy the devcontainer to. Next, we pull the source code and .devcontainer.json source from git or a local file and use this with the local environment
to build the workspace. Building is done by the agent, since we need access to build tools such as buildkit or kaniko, i.e. `devpod workspace build`. The workspace now contains everything needed,
so DevPod sets up an SSH connection to the DevPod agent running alongside the container's control plane.

The agent receives "devpod agent workspace up" with the workspace spec serialised as workspace-info and uses the control plane (the kube API server for Kubernetes, the Docker daemon for anything else) to start the devcontainer.
Once started, DevPod deploys a daemon to monitor activity, optionally sets up any platform access for Pro users, then optionally retrieves credentials from the local environment before launching the IDE. Once the
IDE has started, the deployment process is complete; DevPod's agent daemon will continue to monitor the pod, if deployed, to put the machine or container to sleep when not in use.
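
As a usage sketch, all of these stages are driven by a single CLI invocation; streaming the agent's output (for example with a debug flag, if your DevPod version provides one) makes each stage visible as it runs. The repository URL and IDE choice are placeholders:

```sh
devpod up github.com/example-org/example-repo --ide vscode --debug
```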
DevPod provides the ability to provision workspaces on any infrastructure. It does so by wrapping your conventional CLI tools, such as kubectl, docker, gcloud etc., to deploy your development environment
and set up everything required to run the dev container. While creating the workspace, DevPod deploys an agent to the machine running the container, as well as to the container itself, to provide useful
functions such as port forwarding, credential forwarding and log streaming. In doing so, it provides a control plane across your development environment.

DevPod uses a client-agent architecture, where the client deploys its own agent to host various servers, such as a gRPC server or an SSH server.
In this regard the system is not unlike a browser-server architecture where the front end is deployed and executed on a remote host. This brings several improvements in our specific context:

- There can be no conflict of versions between client and server, since you install only one version of the client
- There is no infrastructure for users to manage

To simplify debugging, DevPod connects your local shell with the agent's STDIO so you can see what's happening locally and in the container at all times.

Below is a high-level overview of how DevPod uses your local environment, a source repo and a devcontainer to deploy your workspace to the cloud.
DevPod establishes a connection to the workspace using a vendor-specific API. This vendor-specific communication channel is referred to as the "tunnel". When you run a `devpod up` command, DevPod selects a
provider based on your context and starts your devcontainer. If using a machine provider, DevPod will check if it should create a VM first. Once the devcontainer
is running, DevPod deploys an agent to the container. The way in which DevPod communicates with the workspace depends on the provider: for AWS this could be Instance Connect, while Kubernetes uses
the Kubernetes control plane (kubectl), and the connection is secured on top of this tunnel. The DevPod agent starts an SSH server using the STDIO of the secure tunnel so that your local DevPod CLI/UI can forward
ports over the SSH connection. Once this is done, DevPod starts your local IDE and connects it to the devcontainer via SSH.

If your developer environment requires any port forwarding, then your IDE or an SSH connection must be running. That's because DevPod needs the SSH server running on the agent to perform the forwarding,
and that server is deployed when starting the IDE or SSH session.
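
For example, since DevPod typically adds an SSH host entry for each workspace (commonly `<workspace>.devpod`), keeping an explicit SSH session open with a standard `-L` forward keeps the agent's SSH server, and therefore the forwarding, alive. The workspace name and ports below are placeholders:

```sh
# forward local port 8080 to port 8080 inside the devcontainer
ssh -L 8080:localhost:8080 my-workspace.devpod
```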
The purpose of this page is to outline any known issues with using DevPod on Linux and provide known workarounds / fixes.

### Using FISH shell

Custom configurations in the config.fish file run every time a `fish -c` command is called, so these processes somewhat get in the way of `devpod agent workspace up`.

The solution is to move the customizations inside the `if status is-interactive` case.

From this:

```
if status is-interactive
    # Commands to run in interactive sessions can go here
end
```
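
To complete the before/after picture, a hedged sketch of the "to this" state follows, using `set -gx EDITOR vim` as a stand-in for your own custom configuration (i.e. any command that previously ran outside the interactive check on every `fish -c` call):

```
if status is-interactive
    # Commands to run in interactive sessions can go here
    set -gx EDITOR vim
end
```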
If you are running SELinux and try to start a workspace with a mounted volume, you may receive a "Permission Denied" error even if the ownership of the files is correct. To resolve
A new window appears showing DevPod starting the workspace. After the workspace has been created, VS Code should open automatically, connected to the DevContainer.