This deployment guide shows you how to set up an Amazon Elastic Kubernetes Service (EKS) cluster onto which you can deploy Coder.
Make sure that you have the following utilities installed on your machine:

- eksctl
- kubectl
Before you can create a cluster, you'll need to perform the following steps to set up and configure your AWS account.
Go to the AWS EC2 Console; this should take you to the EC2 page for the AWS region in which you're working (if not, switch to the correct region using the dropdown at the top-right of the page).
In the Resources section in the middle of the page, click Elastic IPs.
Choose either an Elastic IP address you want to use or click Allocate Elastic IP address. Choose Amazon's pool of IPv4 addresses and click Allocate.
Return to the EC2 Dashboard.
In the Resources section in the middle of the page, click Key Pairs.
Click Create key pair (alternatively, if you already have a local SSH key you'd like to use, you can click the Actions dropdown and import your key).
Provide a name for your key pair and select pem as your file format. Click Create key pair.
The key pair will download automatically; save it to a known directory on your local machine (we recommend keeping the default name, which matches the name you provided to AWS).
Now that you have the .pem file locally, extract the public key portion of the key pair so that you can use it with the eksctl CLI in later steps:
ssh-keygen -y -f <PATH/TO/KEY>.pem > <PATH/TO/KEY>.pub
Note: If you run into a bad permissions error, rerun the command above with sudo.
When done, you should have a .pem and a .pub file for the same key pair you downloaded from AWS.
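To sanity-check this step without touching the key you downloaded from AWS, you can run the same extraction against a throwaway key pair; a minimal sketch, assuming OpenSSH's ssh-keygen is installed (the name demo-key is illustrative):

```shell
# Stand-in for the .pem downloaded from AWS: generate a throwaway RSA key
# with no passphrase, in PEM format, then rename it to match the AWS file name.
ssh-keygen -t rsa -b 2048 -m PEM -N "" -f demo-key -q
mv demo-key demo-key.pem

# Derive the public half, exactly as in the step above:
ssh-keygen -y -f demo-key.pem > demo-key.pub

head -c 7 demo-key.pub   # → ssh-rsa
```

The same two-file result (.pem plus .pub) is what the eksctl command in the next section expects.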
The following will spin up a Kubernetes cluster using eksctl; replace the parameters and environment variables as needed to reflect your environment.
CLUSTER_NAME="YOUR_CLUSTER_NAME"
SSH_KEY_PATH="<PATH/TO/KEY>.pub"
REGION="YOUR_REGION"

eksctl create cluster \
  --name "$CLUSTER_NAME" \
  --version 1.17 \
  --region "$REGION" \
  --nodegroup-name standard-workers \
  --node-type t3.medium \
  --nodes 2 \
  --nodes-min 2 \
  --nodes-max 8 \
  --ssh-access \
  --ssh-public-key "$SSH_KEY_PATH" \
  --managed
Please note that the sample script creates t3.medium instances; depending on your needs, you can choose a larger size instead.
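If you prefer a declarative workflow, eksctl can also read these settings from a config file. The following sketch mirrors the flags above as a ClusterConfig; treat it as an illustrative starting point (field names follow the eksctl config-file schema) rather than a verified one-to-one equivalent:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: YOUR_CLUSTER_NAME
  region: YOUR_REGION
  version: "1.17"
managedNodeGroups:
  - name: standard-workers
    instanceType: t3.medium
    desiredCapacity: 2
    minSize: 2
    maxSize: 8
    ssh:
      allow: true
      publicKeyPath: <PATH/TO/KEY>.pub
```

You would then create the cluster with eksctl create cluster --config-file=<YOUR_FILE>.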
When your cluster is ready, you should see the following message:
EKS cluster "YOUR_CLUSTER_NAME" in "YOUR_REGION" region is ready
Once you've created the cluster, adjust the default Kubernetes storage class to support immediate volume binding.
Make sure that you're pointed to the correct context:
kubectl config current-context
If you're pointed to the correct context, delete the gp2 storage class:
kubectl delete sc gp2
Recreate the gp2 storage class with volumeBindingMode set to Immediate:
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: gp2
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  fsType: ext4
volumeBindingMode: Immediate
EOF
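If you'd rather inspect the manifest before piping it to kubectl, you can write it to a file first; a minimal sketch (the file name gp2-storageclass.yaml is illustrative, and the final kubectl step requires cluster access):

```shell
# Save the StorageClass manifest to a file instead of piping it directly.
cat <<'EOF' > gp2-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: gp2
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  fsType: ext4
volumeBindingMode: Immediate
EOF

# Confirm the field that matters for Coder before applying:
grep 'volumeBindingMode' gp2-storageclass.yaml   # → volumeBindingMode: Immediate

# kubectl apply -f gp2-storageclass.yaml
```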
To enable container-based virtual machines (CVMs) as an environment deployment option, you'll need to create a nodegroup.
Define your config file (we've named the file coder-node.yaml, but you can call it whatever you'd like):
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  version: "1.17"
  name: <YOUR_CLUSTER_NAME>
  region: <YOUR_AWS_REGION>
nodeGroups:
  - name: coder-node-group
    amiFamily: Ubuntu1804
Create your nodegroup (be sure to provide the correct file name):
eksctl create nodegroup --config-file=coder-node.yaml
At this point, you're ready to proceed to Installation.