Coder is deployed onto Kubernetes clusters, and we recommend the following minimum resource allocations to ensure good performance.
For the Coder control plane (which consists of the coderd pod and any additional replicas), allocate at least 2 CPU cores, 4 GB of RAM, and 20 GB of disk space.
In addition to sizing the control plane node(s), you can configure the
pod's resource requests/limits and number of replicas in the Helm chart. The
current defaults for both CPU and memory are the following:
```yaml
resources:
  requests:
    cpu: "250m"
    memory: "512Mi"
  limits:
    cpu: "250m"
    memory: "512Mi"
```
By default, Coder is a single-replica deployment. For production systems, consider using at least three replicas to provide failover and load balancing capabilities.
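As a sketch, a Helm values override that raises the replica count and resource allocation might look like the following. The key names (in particular `coderd.replicas` and the placement of `resources` under `coderd`) are assumptions for illustration; verify them against your chart's values reference before applying.

```yaml
# values.yaml -- illustrative override; key names are assumptions,
# check your Helm chart's values reference before applying.
coderd:
  replicas: 3            # three replicas for failover and load balancing
  resources:
    requests:
      cpu: "1000m"
      memory: "1Gi"
    limits:
      cpu: "1000m"
      memory: "1Gi"
```

You would then apply the override with a standard `helm upgrade ... -f values.yaml` against your Coder release.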
If you expect roughly ten or more concurrent users, we recommend increasing these figures to improve platform performance (we also recommend regular performance testing in a staging environment).
For each active developer using Coder, allocate additional resources. The specific amount required per developer varies, though we recommend starting with 4 CPUs and 16 GB of RAM, then iterating as needed. Developers are free to request the resource allocation that fits their usage.
We also recommend monitoring your usage to determine whether you should change your resource allocation. Targeting around 50% RAM utilization and 70% CPU utilization is a good way to balance performance with cost.
We recommend the following throughput:
You must enable the following extensions on your Kubernetes cluster (check whether these extensions are enabled by running kubectl get apiservices):
Use an up-to-date browser to ensure that you can use all of Coder's features. We currently require the following versions or newer:
If you're using Remote IDEs, allow pop-ups; Coder launches the Remote IDE in a pop-up window.
Coder requires a persistent volume in your Kubernetes cluster to store workspace data. More specifically, the persistent volume claim (PVC) requires the block storage type (the PVC is created when you create the workspace, mounting the requested block storage).
Files stored in the
/home directory of a workspace are persisted in the PVC.
All files that live outside of the
/home directory are written to the node's
disk storage (the node's disk storage is shared across all workspaces on that
node). If there's insufficient node disk storage, Coder cannot create new
workspaces (and, in some cases, workspaces may be evicted from the node). To
avoid this, we recommend creating nodes with a disk size of at least 100 GiB.
Additionally, you must enable dynamic volume provisioning so that Coder can mount the PVC to the workspace (if you're using a custom StorageClass, be sure that it supports dynamic volume provisioning; otherwise, Coder cannot provision workspaces).
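For illustration, a StorageClass that supports dynamic volume provisioning might be sketched as follows. The provisioner shown (the AWS EBS CSI driver) and the class name are assumptions; substitute the CSI driver appropriate to your cluster.

```yaml
# Illustrative StorageClass with dynamic volume provisioning.
# The provisioner (AWS EBS CSI driver) is an example only; use the
# CSI driver that matches your cluster.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: coder-workspaces
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```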
Coder requires a PostgreSQL database to store metadata related to your deployment.
By default, Coder deploys a TimescaleDB internal to your Kubernetes cluster. This is included for evaluation purposes only, and it is not backed up. For production deployments, we recommend using a PostgreSQL database external to your cluster. You can connect Coder to your external database by modifying the Helm chart with information regarding your PostgreSQL instance.
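The exact Helm values for pointing Coder at an external database depend on your chart version; the fragment below is a sketch with assumed key names (`postgres.host`, `postgres.passwordSecret`, and so on), so consult the chart's values reference for the real ones.

```yaml
# Illustrative values.yaml fragment for an external PostgreSQL instance.
# All key names here are assumptions; check your Helm chart's values
# reference for the actual keys.
postgres:
  useDefault: false                # disable the bundled (evaluation-only) database
  host: postgres.example.com
  port: "5432"
  database: coder
  user: coder
  passwordSecret: coder-postgres-password  # name of a Kubernetes Secret holding the password
```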
Coder requires, at minimum, PostgreSQL 11 with the
contrib package installed.
Coder uses Kubernetes NetworkPolicies to enforce network segmentation and tenant isolation within your cluster.
Coder's network isolation policy blocks all ingress traffic to workspaces except traffic from the control plane (this ensures that you can audit all traffic). The policy does not specify egress rules, so outbound traffic is allowed by default; you can still enforce a more restrictive egress policy of your own.
Container network interface (CNI) plugins implement network segmentation and tenant isolation in the Kubernetes cluster. They enforce network boundaries between pods, preventing users from accessing other workspaces.
If your CNI plugin does not support NetworkPolicy enforcement, workspaces and other containerized workloads within the same cluster will be able to communicate without restriction. Consider testing your container networking after installing Coder to ensure that the behavior is as expected.
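A NetworkPolicy of the kind described above, allowing ingress to workspace pods only from the control plane, might be sketched as follows. The namespace and the pod label selectors are assumptions for illustration; adjust them to match the labels your deployment actually uses.

```yaml
# Illustrative NetworkPolicy: permit ingress to workspace pods only
# from the control plane. Namespace and labels are assumptions; adjust
# to match your deployment.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: workspace-ingress-from-control-plane
  namespace: coder
spec:
  podSelector:
    matchLabels:
      com.coder.resource: "true"   # assumed label on workspace pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: coderd          # assumed label on control-plane pods
```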
If you're not sure which CNI plugin to use, we suggest Calico.
Coder deployments require a license, which is emailed to you.
Deployments using the free trial of Coder:
The above requirements do not apply to potential customers engaged in our evaluation program.