Architecture

The Coder deployment model is flexible and offers various components that platform administrators can deploy and scale depending on their use case. This page describes possible deployments, challenges, and risks associated with them.

Learn more about our Reference Architectures and platform scaling capabilities.

Primary components

coderd

coderd is the service created by running coder server. It is a thin API that connects workspaces, provisioners and users. coderd stores its state in Postgres and is the only service that communicates with Postgres.

It offers:

  • Dashboard (UI)
  • HTTP API
  • Dev URLs (HTTP reverse proxy to workspaces)
  • Workspace Web Applications (e.g., for easy access to code-server; see the template sketch after this list)
  • Agent registration
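
For instance, workspace web applications are declared in the workspace template and then served by coderd in the dashboard. Below is a minimal, hedged sketch using the coder_app resource from terraform-provider-coder; the agent reference, port, and icon path are illustrative assumptions.

```hcl
# Illustrative sketch: expose code-server running inside the workspace as a
# dashboard application. Assumes an agent named "main" is declared elsewhere
# in the template and that code-server listens on port 13337.
resource "coder_app" "code_server" {
  agent_id     = coder_agent.main.id
  slug         = "code-server"
  display_name = "code-server"
  url          = "http://localhost:13337/?folder=/home/coder"
  icon         = "/icon/code.svg"
}
```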

provisionerd

provisionerd is the execution context for infrastructure-modifying providers. At the moment, the only provider is Terraform (running terraform).

By default, the Coder server runs multiple provisioner daemons. External provisioners can be added for security or scalability purposes.

Agents

An agent is the Coder service that runs within a user's remote workspace. It provides a consistent interface for coderd and clients to communicate with workspaces regardless of operating system, architecture, or cloud.

It offers the following services, among others:

  • SSH
  • Port forwarding
  • Liveness checks
  • startup_script automation

Templates are responsible for creating and running agents within workspaces.
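
For example, a template declares the agent with the coder_agent resource and then passes the generated token and init script into whatever compute resource it provisions. A minimal sketch, with an illustrative startup_script:

```hcl
# Minimal sketch of an agent declared in a Coder template. The template is
# responsible for injecting coder_agent.main.token and
# coder_agent.main.init_script into the VM, container, or pod it creates.
resource "coder_agent" "main" {
  os   = "linux"
  arch = "amd64"

  # Illustrative automation: install and start code-server on workspace boot.
  startup_script = <<-EOT
    curl -fsSL https://code-server.dev/install.sh | sh
    code-server --auth none --port 13337 &
  EOT
}
```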

Service Bundling

While coderd and Postgres can be orchestrated independently, our default installation paths bundle them together into one system service. It's perfectly fine to run a production deployment this way, but certain situations call for decomposition:

  • Reducing global client latency (distribute coderd and centralize database)
  • Achieving greater availability and efficiency (horizontally scale individual services)

Workspaces

At the highest level, a workspace is a set of cloud resources. These resources can be VMs, Kubernetes clusters, storage buckets, or whatever else Terraform lets you dream up.

The resources that run the agent are described as computational resources, while those that don't are called peripheral resources.

Each resource may also be persistent or ephemeral, depending on whether it is destroyed when the workspace stops.
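
As a sketch of that distinction, Docker-based templates commonly gate ephemeral resources on the workspace's start count while leaving persistent resources untouched; the image and naming below are illustrative assumptions.

```hcl
data "coder_workspace" "me" {}

# Persistent: the home volume is never gated on workspace state, so it
# survives workspace stops.
resource "docker_volume" "home" {
  name = "coder-${data.coder_workspace.me.id}-home"
}

# Ephemeral: start_count is 0 while the workspace is stopped, so the
# container is destroyed on stop and recreated on start.
resource "docker_container" "workspace" {
  count = data.coder_workspace.me.start_count
  name  = "coder-${data.coder_workspace.me.id}"
  image = "registry.example.com/base/ubuntu:22.04" # illustrative image

  volumes {
    container_path = "/home/coder"
    volume_name    = docker_volume.home.name
  }
}
```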

Deployment models

Single region architecture

Architecture Diagram

Components

This architecture consists of a single load balancer, several coderd replicas, and Coder workspaces deployed in the same region.

Workload resources
  • Deploy at least one coderd replica, with provisioners, per availability zone. High availability is recommended but not essential for small deployments.
  • A single-replica deployment is a special case suitable for a tiny or proof-of-concept installation on a single virtual machine. If you are serving more than 100 users/workspaces, add more replicas.

Coder workspace

HA Database

  • Monitor node status and resource utilization metrics.
  • Implement robust backup and disaster recovery strategies to protect against data loss.

Workload supporting resources

Load balancer

  • Distributes and load balances traffic from agents and clients to Coder Server replicas across availability zones.
  • Layer 7 load balancing. The load balancer can decrypt SSL traffic and re-encrypt it using an internal certificate.
  • Session persistence (sticky sessions) can be disabled as coderd instances are stateless.
  • WebSocket and long-lived connections must be supported.

Single sign-on

  • Integrate with existing Single Sign-On (SSO) solutions used within the organization via the supported OAuth 2.0 or OpenID Connect standards.
  • Learn more about Authentication in Coder.

Multi-region architecture

Architecture Diagram

Components

This architecture is for globally distributed developer teams using Coder workspaces on a daily basis. It features a single load balancer with regionally deployed Workspace Proxies, several coderd replicas, and Coder workspaces provisioned in different regions.

Note: The multi-region architecture assumes the same deployment principles as the single-region architecture, but extends them to a multi-region deployment with workspace proxies. Proxies are deployed in the regions closest to developers to offer the fastest developer experience.

Workload resources

Workspace proxy

  • A workspace proxy offers developers the option to establish a fast relay connection when accessing their workspace via SSH, a workspace application, or port forwarding.
  • Dashboard connections and API calls (e.g., listing workspaces) are not served over proxies.
  • Proxies do not establish connections to the database.
  • Proxy instances do not share authentication tokens between one another.

Workload supporting resources

Proxy load balancer

  • Distributes and load balances workspace relay traffic in a single region across availability zones.
  • Layer 7 load balancing. The load balancer can decrypt SSL traffic and re-encrypt it using an internal certificate.
  • Session persistence (sticky sessions) can be disabled as coderd instances are stateless.
  • WebSocket and long-lived connections must be supported.

Multi-cloud architecture

By distributing Coder workspaces across different cloud providers, organizations can mitigate the risk of downtime caused by provider-specific outages or disruptions. Additionally, multi-cloud deployment enables organizations to leverage the unique features and capabilities offered by each cloud provider, such as region availability and pricing models.

Architecture Diagram

Components

The deployment model comprises:

  • coderd instances deployed within a single region of the same cloud provider, with replicas strategically distributed across availability zones.
  • Workspace provisioners deployed in each cloud, communicating with coderd instances.
  • Workspace proxies running in the same locations as provisioners to optimize user connections to workspaces for maximum speed.

Due to the relatively large overhead of cross-regional communication, it is not advised to set up multi-cloud control planes. It is recommended to keep coderd replicas and the database within the same cloud provider and region.

Note: The multi-cloud architecture follows the deployment principles outlined in the multi-region architecture. However, it adapts component selection based on the specific cloud provider. Developers can initiate workspaces based on the nearest region and technical specifications provided by the cloud providers.

Workload resources

Workspace provisioner

  • Security recommendation: Create a long, random pre-shared key (PSK) and add it to the regional secret store so that local provisionerd instances can access it. Remember to distribute it over a secure, encrypted communication channel. The PSK must also be added to the coderd configuration (see the sketch below).
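
A hedged sketch of one way to generate and store such a PSK with Terraform; AWS Secrets Manager and the names used here are assumptions, and the equivalent secret store applies in other clouds.

```hcl
# Generate a long, random pre-shared key for external provisioners.
resource "random_password" "provisioner_psk" {
  length  = 64
  special = false
}

# Store it in the regional secret store (AWS Secrets Manager is assumed here
# purely for illustration).
resource "aws_secretsmanager_secret" "provisioner_psk" {
  name = "coder/provisioner-psk"
}

resource "aws_secretsmanager_secret_version" "provisioner_psk" {
  secret_id     = aws_secretsmanager_secret.provisioner_psk.id
  secret_string = random_password.provisioner_psk.result
}

# The same value must be supplied to coderd (for example via the
# CODER_PROVISIONER_DAEMON_PSK environment variable) and to each external
# provisioner daemon.
```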

Workspace proxy

Managed database

  • For AWS: Amazon RDS for PostgreSQL
  • For Azure: Azure Database for PostgreSQL - Flexible Server
  • For GCP: Cloud SQL for PostgreSQL

Workload supporting resources

Kubernetes platform (optional)

  • For AWS: Amazon Elastic Kubernetes Service
  • For Azure: Azure Kubernetes Service
  • For GCP: Google Kubernetes Engine

See how to deploy Coder on Azure Kubernetes Service.

Learn more about security requirements for deploying Coder on Kubernetes.

Load balancer

  • For AWS:
    • AWS Network Load Balancer
      • Layer 4 load balancing
      • For Kubernetes deployments: annotate the service with service.beta.kubernetes.io/aws-load-balancer-type: "nlb" and preserve the client source IP with externalTrafficPolicy: Local (see the sketch after this list)
    • AWS Classic Load Balancer
      • Layer 7 load balancing
      • For Kubernetes deployments: set sessionAffinity to None
  • For Azure:
    • Azure Load Balancer
      • Layer 7 load balancing
    • Azure Application Gateway
      • Deploy Azure Application Gateway when more advanced traffic routing policies are needed for Kubernetes applications.
      • Take advantage of features such as WebSocket support and TLS termination provided by Azure Application Gateway, enhancing the capabilities of Kubernetes deployments on Azure.
  • For GCP:
    • Cloud Load Balancing with SSL load balancer:
      • Layer 4 load balancing, SSL enabled
    • Cloud Load Balancing with HTTPS load balancer:
      • Layer 7 load balancing
      • For Kubernetes deployments: annotate the service (with ingress enabled) with kubernetes.io/ingress.class: "gce" and leverage the NodePort service type.
      • Note: the HTTP load balancer rejects the DERP upgrade; Coder will fall back to WebSockets
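
For the AWS Network Load Balancer case referenced above, this is a hedged sketch of those Service settings expressed with the Terraform Kubernetes provider; the namespace, selector, and ports are illustrative assumptions, and the other clouds use their own annotations.

```hcl
# Illustrative Service definition fronting coderd with an AWS NLB.
resource "kubernetes_service" "coder" {
  metadata {
    name      = "coder"
    namespace = "coder" # illustrative namespace
    annotations = {
      "service.beta.kubernetes.io/aws-load-balancer-type" = "nlb"
    }
  }

  spec {
    type                    = "LoadBalancer"
    external_traffic_policy = "Local" # preserve the client source IP
    session_affinity        = "None"  # coderd replicas are stateless
    selector = {
      "app.kubernetes.io/name" = "coder" # illustrative label
    }
    port {
      name        = "http"
      port        = 80
      target_port = 8080 # illustrative coderd port
    }
  }
}
```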

Single sign-on

Air-gapped architecture

The air-gapped deployment model refers to deploying Coder's development environments within a restricted network that lacks internet connectivity. This deployment model is often required for organizations with strict security policies or those operating in isolated environments, such as government agencies or certain enterprise setups.

The key features of the air-gapped architecture include:

  • Offline installation: Deploy workspaces without relying on an external internet connection.
  • Isolated package/plugin repositories: Depend on local repositories for software installation, updates, and security patches.
  • Secure data transfer: Enable encrypted communication channels and robust access controls to safeguard sensitive information.

Learn more about offline deployments of Coder.

Architecture Diagram

Components

The deployment model includes:

  • Workspace provisioners with direct access to self-hosted package and plugin repositories, but restricted internet access.
  • Mirror of Terraform Registry with multiple versions of Terraform plugins.
  • Certificate Authority with all TLS certificates to build secure communication channels.

The model is compatible with various infrastructure models, enabling deployment across multiple regions and diverse cloud platforms.

Workload resources

Workspace provisioner

  • Includes the Terraform binary in the container or system image.
  • Checks out Terraform plugins from the self-hosted registry mirror.
  • Deploys workspace images stored in the self-hosted Container Registry.
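
In practice this usually means baking a Terraform CLI configuration into the provisioner image that points provider installation at the internal mirror. A hedged sketch, with the mirror URL as an assumption:

```hcl
# Terraform CLI configuration (e.g. ~/.terraformrc or the file referenced by
# TF_CLI_CONFIG_FILE) included in the provisioner image. Because an explicit
# provider_installation block is present, Terraform uses only the listed
# installation method and never falls back to the public registry.
provider_installation {
  network_mirror {
    url = "https://terraform-mirror.example.internal/providers/" # illustrative
  }
}
```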

Coder server

  • Update checks are disabled (CODER_UPDATE_CHECK=false).
  • Telemetry data is not collected (CODER_TELEMETRY_ENABLE=false).
  • Direct connections are not possible; workspace traffic is relayed through the control plane's DERP proxy.

Workload supporting resources

Self-hosted Database

  • In the air-gapped deployment model, the coderd instance is unable to download Postgres binaries from the internet, so an external database must be provided.

Container Registry

  • Since the Registry is isolated from the internet, platform engineers are responsible for maintaining Workspace container images and conducting periodic updates of base Docker images.
  • It is recommended to keep Dev Containers up to date with the latest released Envbuilder runtime.

Mirror of Terraform Registry

  • Stores all necessary Terraform plugin dependencies, ensuring successful workspace provisioning and maintenance without internet access.
  • Platform engineers are responsible for periodically updating the mirrored Terraform plugins, including terraform-provider-coder.

Certificate Authority

  • Manages and issues TLS certificates to facilitate secure communication channels within the infrastructure.

Dev Containers

Note: Dev containers support is at an early stage and is considered experimental at the moment.

This architecture enhances a Coder workspace with a development container setup built using the envbuilder project. Workspace users have the flexibility to extend generic, base developer environments with custom, project-oriented features without requiring platform administrators to push altered Docker images.

Learn more about Dev containers support in Coder.

Architecture Diagram

Components

The deployment model includes:

  • A workspace built using a Coder template with envbuilder enabled, which sets up the developer environment according to the dev container spec.
  • A container registry for Docker images used by envbuilder, maintained by Coder platform engineers or developer productivity engineers.

Since this model is strictly focused on workspace nodes, it does not affect the setup of regional infrastructure. It can be deployed alongside other deployment models, in multiple regions, or across various cloud platforms.

Workload resources

Coder workspace

  • Docker- and Kubernetes-based templates are supported.
  • The docker_container resource uses ghcr.io/coder/envbuilder as the base image.

Envbuilder pulls the base Docker image from the container registry and installs the features specified in devcontainer.json on top of it. It then starts the container with the developer environment.
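
A hedged sketch of such a docker_container resource; the repository URL is an assumption, the agent and workspace data sources are assumed to be defined elsewhere in the template, and the exact environment variable names can differ between envbuilder releases.

```hcl
resource "docker_container" "workspace" {
  count = data.coder_workspace.me.start_count
  name  = "coder-${data.coder_workspace.me.id}"

  # envbuilder is the container entrypoint: it clones the repository, reads
  # devcontainer.json, builds the environment on top of the base image, and
  # then hands control to the workspace.
  image = "ghcr.io/coder/envbuilder"

  env = [
    # Repository containing devcontainer.json (illustrative URL).
    "GIT_URL=https://git.example.com/acme/project.git",
    # Starts the Coder agent inside the built environment.
    "INIT_SCRIPT=${coder_agent.main.init_script}",
  ]
}
```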

Workload supporting resources

Container Registry (optional)

  • Workspace nodes need access to the container registry to pull images. To shorten provisioning time, it is recommended to deploy registry mirrors in the same region as the workspace nodes.