If you're a site admin or a site manager, you can enable container-based virtual machines (CVMs) as a workspace deployment option. CVMs allow users to run system-level programs, such as Docker and systemd, in their workspaces.
CVMs do not require privileged containers or `hostPath` mounts. Read more about why this is still secure here.
You can use any cloud provider that supports the above requirements, but we have instructions on how to set up supported clusters on AWS and Google. Azure-hosted clusters will meet these requirements as long as you use Kubernetes version 1.18+.
Coder doesn't support legacy versions of cluster-wide proxy services such as Istio, and CVMs do not currently support NFS as a file system.
NVIDIA GPUs can be added to CVMs on bare metal clusters only. This feature is not supported on Google Kubernetes Engine or other cloud providers at this time.
Support for NVIDIA GPUs is in beta. We do not support AMD GPUs at this time.
The following sections show how you can set up your Kubernetes clusters hosted by Google, Azure, and Amazon to support CVMs.
To use CVMs with GKE, create a cluster using the following parameters:
- `node-version = "latest"`
- `image-type = "UBUNTU"`

```console
gcloud beta container clusters create "YOUR_NEW_CLUSTER" \
  --node-version "latest" \
  --cluster-version "latest" \
  --image-type "UBUNTU" \
  ...
```
If you're using Kubernetes version 1.18, Azure defaults to the correct Ubuntu node base image. When creating your cluster, set the Kubernetes version to 1.18.x or newer to support CVMs.
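As a rough sketch, creating such a cluster with the Azure CLI might look like the following (the resource group name, cluster name, and node count are placeholders, and the patch version shown is only an example of a 1.18.x release):

```shell
# Hypothetical names -- substitute your own resource group and cluster name
az aks create \
  --resource-group YOUR_RESOURCE_GROUP \
  --name YOUR_NEW_CLUSTER \
  --kubernetes-version 1.18.14 \
  --node-count 2
```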
Define your config file in the location of your choice (we've named the file
coder-node.yaml, but you can call it whatever you'd like):
```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  version: "1.17"
  name: <YOUR_CLUSTER_NAME>
  region: <YOUR_AWS_REGION>
nodeGroups:
  - name: coder-node-group
    amiFamily: Ubuntu1804
```
Create your nodegroup (be sure to provide the correct file name):

```console
eksctl create nodegroup --config-file=coder-node.yaml
```
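Once the nodegroup has been created, you can confirm that the Ubuntu nodes joined the cluster (this assumes your `kubectl` context points at the new cluster):

```shell
# List nodes with OS details; the OS-IMAGE column should show Ubuntu 18.04
# for nodes belonging to coder-node-group
kubectl get nodes -o wide
```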
Coder first launches a supervising container with additional privileges. This container is standard and included with the Coder release package. During the workspace build process, the supervising container launches an inner container using the Sysbox container runtime. This inner container is the user’s workspace.
The user cannot gain access to the supervising container at any point. The isolation between the user's workspace container and its outer, supervising container is what provides strong isolation.
Please note that CVM-enabled workspaces cannot be created using images hosted in a private registry unless you permit unauthenticated access to the images.
The following sections show how you can configure your image to include systemd and Docker for use in CVMs.
If your image's OS distribution doesn't link the systemd init to `/sbin/init`, you'll need to do this manually in your Dockerfile. The following snippet shows how you can specify systemd as the init in your Dockerfile:
```Dockerfile
FROM ubuntu:20.04

RUN apt-get update && apt-get install -y \
    build-essential \
    systemd

# use systemd as the init
RUN ln -s /lib/systemd/systemd /sbin/init
```
When you start up a workspace, Coder checks for the presence of `/sbin/init` in your image. If it exists, then Coder uses it as the container entrypoint and runs it with a PID of 1.
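If you want to confirm this from inside a running CVM workspace, one quick check is to look at what is running as PID 1:

```shell
# Show the command running as PID 1 in this container
ps -p 1 -o comm=
# In a CVM built from an image like the one above, this should print "systemd"
```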
To add Docker, install the `docker` packages into your image. For a seamless experience, use systemd and register the `docker` service so `dockerd` runs automatically during initialization.
The following snippet shows how your image can register the `docker` service in your Dockerfile:
```Dockerfile
FROM ubuntu:20.04

RUN apt-get update && apt-get install -y \
    build-essential \
    git \
    bash \
    docker.io \
    curl \
    sudo \
    systemd

# Enables Docker starting with systemd
RUN systemctl enable docker

# use systemd as the init
RUN ln -s /lib/systemd/systemd /sbin/init
```
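After a workspace is built from an image like the one above, you can sanity-check from inside the workspace that systemd brought up the Docker daemon:

```shell
# Confirm the Docker service was started by systemd
sudo systemctl is-active docker
# Should report "active" once the service has finished starting

# Run a throwaway container to verify Docker works end to end
docker run --rm hello-world
```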