Installation

Architecture

Coder has these components:

  • Manager - Serves as the central authority, providing authentication, the dashboard, and an API to create and interact with environments.
  • PostgreSQL - Stores general data, such as session tokens, environment info, etc.

In the basic deployment, every management component runs within its own pod.

The control pod uses in-cluster Kubernetes credentials to manage environments.

Kubernetes Node Requirements

Coder runs on any Kubernetes cluster that meets the following requirements.

We require that you run at least Kubernetes version 1.13.7 with the following extensions enabled:

  • apps/v1
  • rbac.authorization.k8s.io/v1
  • metrics.k8s.io
  • storage.k8s.io/v1
  • networking.k8s.io/v1
  • extensions/v1beta1

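As a quick sanity check, the required API groups can be compared against what the cluster advertises. A minimal sketch (the `check_api_groups` helper below is illustrative, not part of Coder):

```shell
# Compare the API groups Coder requires against the cluster's advertised
# API versions. Reads the output of `kubectl api-versions` on stdin.
check_api_groups() {
  required="apps/v1 rbac.authorization.k8s.io/v1 metrics.k8s.io
storage.k8s.io/v1 networking.k8s.io/v1 extensions/v1beta1"
  available="$(cat)"
  for group in $required; do
    if printf '%s\n' "$available" | grep -q "^${group}"; then
      echo "ok      ${group}"
    else
      echo "MISSING ${group}"
    fi
  done
}

# Usage against a live cluster:
#   kubectl api-versions | check_api_groups
```
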
We recommend at least 2 cores and 4 GB of RAM for the basic control services. A disk as small as 20 GB will suffice.

The following throughput is recommended:

  • read: 3000 IOPS at 50MB/s
  • write: 3000 IOPS at 50MB/s

Furthermore, for each active developer we require 1 core, 1 GB of RAM, and 10 GB of storage.
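
Putting those figures together, a rough capacity estimate for a given number of active developers looks like this (the numbers are simply the minimums stated above):

```shell
# Back-of-the-envelope cluster sizing: control services (2 cores, 4 GB RAM,
# 20 GB disk) plus 1 core, 1 GB RAM, and 10 GB storage per active developer.
DEVELOPERS=10
CORES=$((2 + DEVELOPERS))
RAM_GB=$((4 + DEVELOPERS))
DISK_GB=$((20 + DEVELOPERS * 10))
echo "${DEVELOPERS} developers: ${CORES} cores, ${RAM_GB} GB RAM, ${DISK_GB} GB disk"
# → 10 developers: 12 cores, 14 GB RAM, 120 GB disk
```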

We’re happy to look into support for different versions and configurations.

Kubernetes NGINX Ingress

Coder relies on the Kubernetes NGINX ingress controller to allocate and route requests to services.

Once your cluster is set up, you'll need to follow the Kubernetes NGINX installation instructions before installing Coder onto your cluster.


Deploying Kubernetes

If you already have a Kubernetes cluster running with the requirements mentioned above, you can skip down to Installing Coder onto Kubernetes.

Deployment Options

Google Kubernetes Engine

The following steps will guide you through setting up a GKE cluster that Coder can deploy onto.

You can create it through the Cloud Console or through the gcloud command.

Through the cloud console

  1. Navigate to the Kubernetes Engine page in the Google Cloud Console
  2. Click "CREATE CLUSTER"
  3. Adjust the name, number of nodes, zonal or regional fields to your desired values
  4. For the Master version, choose a version greater than "1.14.6"; Coder has been tested against these versions
  5. Select a "Machine configuration" appropriate for your expected workloads; it must meet at least the resource requirements mentioned above
  6. At the bottom of the page, select the "Availability, networking, security, and additional features" dropdown
  7. Scroll to the "Load balancing" section and deselect "Enable HTTP load balancing" to disable the GCE ingress controller
  8. Scroll to the "Network security" section and select "Enable network policy" to enable the networking.k8s.io/v1 Kubernetes extension
  9. Click "Create" to create your cluster

You can now proceed to Post Deployment to finish setting up your local environment to deploy Coder.

Through the command line

This deployment option requires that the gcloud CLI is installed on your machine.

The following will spin up a Kubernetes cluster using the gcloud command. Replace the parameters and environment variables with the values that you see fit.

PROJECT_ID="MY_PROJECT_ID"
CLUSTER_NAME="MY_CLUSTER_NAME"
gcloud beta container --project "$PROJECT_ID" \
    clusters create "$CLUSTER_NAME" \
    --zone "us-central1-a" \
    --no-enable-basic-auth \
    --cluster-version "1.14.7-gke.14" \
    --machine-type "n1-standard-4" \
    --image-type "COS" \
    --disk-type "pd-standard" \
    --disk-size "100" \
    --metadata disable-legacy-endpoints=true \
    --scopes "https://www.googleapis.com/auth/cloud-platform" \
    --num-nodes "2" \
    --enable-stackdriver-kubernetes \
    --enable-ip-alias \
    --network "projects/${PROJECT_ID}/global/networks/default" \
    --subnetwork "projects/${PROJECT_ID}/regions/us-central1/subnetworks/default" \
    --default-max-pods-per-node "110" \
    --addons HorizontalPodAutoscaling,HttpLoadBalancing \
    --enable-autoupgrade \
    --enable-autorepair \
    --enable-network-policy \
    --enable-autoscaling --min-nodes "2" --max-nodes "8"

You can now proceed to Post Deployment to finish setting up your local environment to deploy Coder.

Post Deployment

After deploying, you will need to configure kubectl to point to your cluster.

  1. Ensure the gcloud CLI is installed.
  2. Ensure kubectl is installed through the gcloud command:

     gcloud components install kubectl

  3. Initialize kubectl with the cluster credentials:

     gcloud container clusters get-credentials [CLUSTER_NAME]

You should now be set up and ready to deploy Coder onto your cluster.

Self Hosted

Kubernetes node dependency installation instructions:

  • docker
    • If docker is already installed, make sure the daemon is configured as specified in the Docker installation instructions.
  • kubeadm

The base of every Kubernetes cluster is the control plane. It can be deployed in multiple ways: as a single node or with high availability. Even if you don't currently plan to deploy with high availability, it may make sense to allocate the control plane a DNS name that points to the machine. This allows you to later point the name at a load balancer that sits in front of multiple control planes, thus offering high availability. Once a cluster has been initialized without high availability, it cannot be migrated to high availability later without tearing down the entire cluster.

To initialize the control plane, start with kubeadm init:

# --control-plane-endpoint is only required if you want to run the cluster
#   in high availability mode; the address must point to a load balancer
#   or the current VM.
# --pod-network-cidr is the IP range for the pod network; it can be any
#   internal IP range that you like.
kubeadm init \
  --control-plane-endpoint my-k8s-control-plane.dev.coder.com \
  --pod-network-cidr 10.255.0.0/16

If successful, you should see something like this:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join <control_plane_address> --token ztkj2o.ndr96txsddqs084v \
    --discovery-token-ca-cert-hash sha256:d85f75347cb20205eefd41aac0c3696508763265d249d47fd47181aa0c4b639d

Save the kubeadm join command printed above; you will need it later to add more nodes to your cluster.
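
If you lose the join command, a new one can be generated on the control plane with `kubeadm token create --print-join-command` (bootstrap tokens expire after 24 hours by default). The CA certificate hash can also be recomputed from the certificate itself; a sketch, assuming kubeadm's default CA location:

```shell
# Compute the sha256 hash of a CA certificate's public key, in the format
# expected by kubeadm join's --discovery-token-ca-cert-hash flag.
ca_cert_hash() {
  openssl x509 -pubkey -noout -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
}

# On the control-plane node:
#   ca_cert_hash /etc/kubernetes/pki/ca.crt
```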

To get kubectl working, run these commands as a non-root user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Confirm that kubectl works by running kubectl get nodes. You should see something like this:

$ kubectl get nodes
NAME                     STATUS     ROLES    AGE   VERSION
colin-test-k8s-install   NotReady   master   67s   v1.16.0

Next, we need to add a pod network. We suggest flannel.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

If you only plan to run Kubernetes on a single host, you must allow pods to be scheduled on the control plane node. This can be done like so:

kubectl taint nodes --all node-role.kubernetes.io/master-

Your cluster should now be ready. Run kubectl get nodes again to confirm.

$ kubectl get nodes
NAME                     STATUS   ROLES    AGE     VERSION
colin-test-k8s-install   Ready    master   3m34s   v1.16.0

To add more nodes, begin the installation process again. This time instead of running kubeadm init, paste in the kubeadm join command you saved earlier. If successful, you should now see your new node when running kubectl get nodes.

$ kubectl get nodes
NAME                     STATUS   ROLES    AGE    VERSION
colin-k8s-node-1         Ready    <none>   129m   v1.16.0
colin-test-k8s-install   Ready    master   142m   v1.16.0

You're now ready to install Coder onto the cluster.

Installing Coder onto Kubernetes

Dependency installation instructions:

Ensure that your user has access to the Docker daemon and that kubectl is set up to point to the cluster you would like to deploy Coder to.

To access Coder on your cluster, you'll need to ensure that the Kubernetes NGINX ingress controller is installed. The Kubernetes NGINX installation instructions will guide you through setting this up on your cluster if it isn't already.


Coder distributes an archive that contains everything needed to start using the product.

The following steps download, unpack, and deploy Coder to your Kubernetes cluster. The first two steps can take some time depending on your network speed and machine capabilities.

  1. Download and extract the archive using the link provided by our team

    curl -L [PROVIDED_LINK_URL] | tar xzvf -
    

    The extracted archive contains a README.md that gives more details about the archive and the scripts it contains

  2. Deploy Coder to your kubernetes cluster

    CLUSTER_USER=myname@myorganization.com ./up.sh --registry my.docker.registry.io
    

    The CLUSTER_USER environment variable should be set to your organization email, and the registry flag should point to the registry that is used for your kubernetes cluster. In the case of a GKE deployment, the registry is typically located at gcr.io/[PROJECT_ID].

  3. After the pod is created, start tailing the logs to find the randomly generated password for the admin user

    kubectl logs --namespace coder services/cemanager cemanager -f
    

    When the manager is finished setting up, you will see something like this:

    ----------------------
    User:     admin
    Password: kvhvm43nq8k3
    ----------------------
    

    These are the credentials you will use to set up the platform in the Web UI.

  4. List your NGINX ingress service to find the external IP address used to access the Web UI.

    kubectl --namespace ingress-nginx get services
    NAME            TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                       AGE
    ingress-nginx   LoadBalancer   10.181.8.228    35.239.169.212   80:32534/TCP,443:31916/TCP    3h55m
    
  5. Navigate to the external IP of ingress-nginx and use the credentials to log in to the platform

  6. After logging in, you will be required to create a new password for the system admin user
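
The external IP from step 4 can also be extracted directly in scripts using kubectl's jsonpath output. A small sketch (the service and namespace names match the listing above):

```shell
# Print just the external IP of the NGINX ingress load balancer.
ingress_external_ip() {
  kubectl --namespace ingress-nginx get service ingress-nginx \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
}

# Example:
#   ingress_external_ip
```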

Configuration

After setting a permanent password, you will be brought to the configuration dashboard where you can specify the authentication method, database URLs, and additional settings. The default values should be fine for an evaluation deployment. Once the configuration is completed, you will be brought to the main dashboard where you can create new users, images, and environments.

After going through the configuration process, it's recommended that you create an admin user for yourself and use it to create additional users and resources. The system admin should only be used for the initial configuration.
