Senior customer success engineer Mike Terhar shows how you can make what he calls “comfortable” Coder workspaces that contain all the tools you need and the configurations you expect.
Hi, this is Mike from Coder and I'm going to talk through some of the best methods for creating comfortable workspaces. By comfortable I'm referring to having all the tools you need, the configurations you expect, and the ability to refine and improve continuously.
In the Coder workspace lifecycle we have a few big blocks with leverage points between them. Each leverage point optimizes for different aspects of the workspace. Any layer can do anything, but based on caching information, availability, and other factors there's usually a best place for each sort of customization.
The general flow is: you define an image, build it, and add it to a registry. Then, when a workspace starts, it runs a setup script called configure, followed by a second script called personalize.
Configure and personalize both run later, in the runtime environment circled here in green. The specification for the Docker image and the configure script both live alongside the Dockerfile, so those would be part of a Git repository or some other version control system.
So you can see we have two paradigms. One is defined centrally: managed, security scanned, spanning everything from the Docker image through configure. The other is the runtime information available within the configure and personalize scripts, and the overlap between the two is what makes each layer useful and suited to different tasks.
We're going to start by talking about the Dockerfile in isolation, since it's the foundation. Dockerfiles are text, so you can version control them. They typically refer to other scripts, applications, and so on, and those get version controlled together in the same repository.
Build pipelines are a fantastic way to have these images rebuilt periodically, because a command like apt install for some dependency will pull the newest version with the latest patches every time it runs. As people rebuild their workspaces, they bring in the refreshed images with all those fixes, without anyone having to submit a change request or merge request against these repositories.
And you can control incoming changes by using a merge request workflow just like any other repository.
So for the anatomy of a Dockerfile, there's tons of information out on the Docker docs for how to define and build them. In the Coder context, we want certain things to exist in these Docker images for them to work: things like git, bash, curl, wget, vim, sudo, and ca-certificates. Those may not be necessary for a running application, but we want them available for developers in workspaces.
The other cool thing Docker lets you do: on the left side here you can see we're using `FROM codercom/enterprise-node`, which gives you 90% of what you need to create a workspace.
And then in this example Dockerfile we're just adding TypeScript as a globally installed tool, plus the Angular CLI. These additions create a more capable image, specific to a project using TypeScript and Angular, and they prevent workspace rebuilds from losing those tools whenever that happens.
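The example from the slide can be sketched as a two-line extension of the base image. The base image name comes from Coder's published images on Docker Hub; the exact tag is an assumption here:

```dockerfile
# Sketch: start from Coder's Node base image (tag is an assumption)
# and layer on the project-specific global tools.
FROM codercom/enterprise-node:ubuntu
RUN npm install --global typescript @angular/cli
```

Anything the base image already provides (git, bash, code-server support, and so on) carries over for free; you only maintain the project-specific layer.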
So we're going to take a look at an example repository and go through the whole process: customizing the Dockerfile and building the image. The build process runs as root, which is cool because you can install things; but when the image actually runs later it's not root, so you can lock things down.
There are some limitations, though. At build time you don't have access to any of the Coder runtime environment variables. You don't have access to the Coder CLI. You don't know the workspace name or the username, and you can't do anything with the /home/coder directory because it doesn't exist yet; it gets mounted when the workspace runs, which basically eliminates or hides anything that was in that directory. Once you build the image and push it to a registry, you can import the tags into Coder.
So here is my workspace-angular repository. This is stored on gitlab.com, but you can use GitHub, a self-hosted GitLab, or any sort of Git repository for this sort of thing. I like that pipelines and related features are enabled here by default, so that's pretty cool.
Our Dockerfile, just like the example on the previous slide, is very simple: all it does is install two tools. We can see that our pipeline passed when I committed this change, and looking at the pipeline, it does a Docker build, then a container scan, and then a release.
Predictably, the Docker build process logs into the registry, builds everything it needs to build, and pulls the layers it already knows about. It starts from codercom/enterprise-node, so it doesn't have to rebuild all of that; it just pulls that image in and adds layers on top for installing the Node applications globally: TypeScript and the Angular CLI. Then it gets to the end and succeeds.
I use a fairly ugly tagging mechanism here so that we don't actually overwrite the image every time a new one is built. We start with the image name, then the commit ref slug, which in this case is the branch it's on (main, as you can see here), and then it tags it with the commit SHA. That way you can specify an exact image if you want to test it, but it doesn't update everyone's latest tag right away. We can look at our release job in the same pipeline; all it does is re-tag the image, and the new tag is just the CI registry name, which defaults to the latest tag. And you can see in my pipeline definition that this one's a manual job, so it's got the play button; it won't run until you tell it to.
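A hypothetical `.gitlab-ci.yml` fragment matching that scheme might look like this. The stage names and exact tag format are assumptions; the `CI_*` variables are GitLab's predefined CI variables:

```yaml
# Every build pushes <image>:<branch-slug>-<short-sha>;
# :latest only moves when someone runs the manual release job.
build:
  stage: build
  script:
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG-$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG-$CI_COMMIT_SHORT_SHA"

release:
  stage: release
  when: manual   # the "play button" job
  script:
    - docker pull "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG-$CI_COMMIT_SHORT_SHA"
    - docker tag "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG-$CI_COMMIT_SHORT_SHA" "$CI_REGISTRY_IMAGE:latest"
    - docker push "$CI_REGISTRY_IMAGE:latest"
```

The design choice here is that immutable, per-commit tags are always available for testing, while the mutable latest tag moves only on an explicit human decision.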
So after you run this release job, you can look at the container registry. You'll see a few different images for the branches I've been working on, but if we look at the root image, we'll see the latest tag was updated five days ago, or whenever that pipeline last ran.
On the Coder side, you take the image you've created and break it into components. We have our registry, registry.gitlab.com, at the top, and then the image path comes from the container registry path here, where it's published. So we take that string and put it in our repository field. The tag can be whatever you want: if you want one that's not the latest, you can point at an image that's already built, or use your own tagging mechanism; this is just the pipeline I'm comfortable with. Give it a description, and include the source URL, which helps other people find where your image comes from. That way they can go in and say "I need this tool changed," "I need this tool upgraded," or "I need an older version, so I'm going to cut a temporary tag from version two of this application while I do some fixing, since it's now on version seven."
We'll go to New Workspace and choose Custom Workspace. We'll name it ng, look in here for angular, and pull the latest tag. We'll disable this just for speed, leave all the other defaults alone, and create the workspace. Now you can see it goes through the regular build process, with our angular image reference up here in the status bar. If we wanted more information on it, we could go into our images list.
So now that our image has been built into a workspace, we can see that all of the building steps worked fine. We'll jump in here and run `ng version`; our Angular tooling is installed. Run `tsc --version` and you can see that 4.3 is installed.
Now, we'll move on to configuring with a startup script.
So you saw that at the end of that process we had the tools we wanted, but we didn't have anything to work on; everything was just installed and ready to go. The next leverage point is what to do on startup of the image.
Some examples of configure tasks would be setting up proxies, retrieving credentials, configuring microservices, and so on. These scripts have access to all of the running environment variables. If we look at `env`, you can see things like the Kubernetes cluster information, some Coder image tags, dotfile repo info, extension stores, and so on. So if you're configuring your image after it starts up and you want to know or access any of these, you can. The other thing already in here is the Coder CLI, so you can run `coder envs ls` and look at all the different running environments the user has available.
Using that tool, you can manage the environment from within the environment while it's starting up. In the same angular workspace repository, I have a separate branch that adds the configure script to our image so it runs at startup. Line 7 is all you need in the Dockerfile to get that script included. The only thing you have to do to the script itself is chmod it executable, so that when it's cloned down it can actually be run. This is a security measure so that people don't try to put weird things in and then have a process automatically chmodding them to be executable.
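That one Dockerfile line can be sketched like this; the /coder/configure path is where Coder looks for a startup script in this setup (a Coder v1 convention, treat the exact path as an assumption):

```dockerfile
# Include the startup script in the image. The file must already be
# committed with its executable bit set (chmod +x configure); the
# build will not chmod it for you.
COPY configure /coder/configure
```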
You can see that our angular script creates a new application; normally this would be a git clone of something you already have. It could go into the project and run a pre-compile step; in this example I'm using ng build and then exiting cleanly. And notice this wrapper at the top: if it sees that this has already been done and the coder-example directory exists, it skips all of that and reports that it's already initialized, so your configure script doesn't stomp on your work in progress.
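The idempotency guard at the top of the script boils down to a directory check before any setup runs. A minimal sketch, with the Angular commands stubbed out so it runs anywhere (the directory name matches the video; the stub is mine):

```shell
#!/usr/bin/env bash
# Sketch of a configure script's idempotency guard: if the project
# directory already exists, skip setup so we never stomp on work in progress.
set -euo pipefail

PROJECT_DIR="$HOME/coder-example"

if [ -d "$PROJECT_DIR" ]; then
  echo "already initialized, skipping"
  exit 0
fi

# In the video this is `ng new coder-example` plus a pre-compile `ng build`;
# stubbed with mkdir here so the sketch runs without the Angular CLI.
mkdir -p "$PROJECT_DIR"
echo "initialized $PROJECT_DIR"
```

On the first boot the setup runs; on every rebuild after that, the script notices the directory and exits early.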
So let's create a new workspace called ngconfigure, based on our configure image with that Git-hash tag, as I mentioned before. We don't want this; we're not going to do that; and off we go. The image it's building now comes from the same repository where we install those two tools, but the branch we're looking at now has the configure step added to it.
And so if we go and look at our container registry, you'll see that our add configure branch has its own set of tags.
And we'll see the last time I built this tag was a couple of days ago, so it's going to use either this 8d one or this ef one, whichever was more recent. When we go look, it's the ef one.
So you can see it did all the startup stuff as normal: it injected everything it needs to run, and then it configured our Git provider. This is based on the Git OAuth integration that Coder offers. Once the Git stuff is configured, it runs the configure script, so the configure script can do things like a git clone or fetches. If you have a big repository cached in the image, it will only download the more recent commits or fresh branches, which reduces network traffic and disk usage and speeds up the whole process. And I did not configure the personalize step, so we don't have to worry about that here.
So if we go in here and take a look around, we'll see our coder-example directory already exists, even though we've never been in this environment and I've never cloned anything. Inside coder-example, we can see all these things have already been built, including the node_modules folder; that all happened during the configure stage.
Next on our list, we'll take a look at the personalize script, which is the last thing that runs before the environment is handed over to the user. It's typically reserved for user-specific configuration only.
The idea of dotfiles is quite old. When people switched from one system to another or rebuilt their laptops, they'd carry files like .bashrc or .vimrc, those old Linux configuration files, and because hidden files start with a dot, dotfiles repos were born. There are lots of docs on how to set them up if you google "dotfiles," and there are tools for managing them and automation to roll them out; some people use different tools for that. We at Coder have not specified a tool, so you can use any dotfiles tooling you have available. If Coder clones a repository that doesn't have an install.sh, it will just link every file that starts with a dot from the cloned dotfiles repository into your home directory in the workspace, automating a bit of that for you. But most people who've gone to the trouble of creating a dotfiles repo have an install.sh that actually handles most of the configuration.
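That no-install.sh fallback can be sketched as a small loop. This is a simplified stand-in for what Coder does, not its actual implementation, and the clone location and demo files are hypothetical:

```shell
#!/usr/bin/env bash
# Simplified sketch of the dotfiles fallback: if the cloned repo has no
# install.sh, symlink each top-level dotfile into $HOME.
set -euo pipefail

DOTFILES_REPO="$HOME/dotfiles-demo"   # hypothetical clone location
mkdir -p "$DOTFILES_REPO"
touch "$DOTFILES_REPO/.vimrc" "$DOTFILES_REPO/.zshrc"   # demo files

if [ ! -f "$DOTFILES_REPO/install.sh" ]; then
  for f in "$DOTFILES_REPO"/.*; do
    name="$(basename "$f")"
    # skip ., .., and the repo's own .git directory
    [ "$name" = "." ] || [ "$name" = ".." ] || [ "$name" = ".git" ] || \
      ln -sf "$f" "$HOME/$name"
  done
fi
```

Symlinking (rather than copying) means edits you make in the workspace land back in the repo checkout, ready to commit.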
One thing of note is that I've made this repository private. Because I can clone it using my already embedded credentials, I don't have to make it public. The benefit there is if I accidentally commit something here, like an ssh key or something, it doesn't immediately blow it up for the whole world to see. Though I do try to be careful and avoid that sort of thing if possible since I'm using a public cloud provider on gitlab.com for this.
The install.sh file is just a bash script that runs everything I want. I like fzf, so I can hit Ctrl-R and find previous commands I've run. I like to use vim for editing configuration files, so I have a bunch of configuration around the .vimrc and creating a vim backup folder. I like zsh, so that's in here too. This big chunk down at the bottom is for GPG forwarding: if I have an environment set up that uses CVMs, I can use my YubiKey to sign commits using this forwarded-socket approach. So generally that script runs and does all those things. The other stuff in here follows the same premise as a Dockerfile: I have certain fonts I like, the GPG configs, and some code-server configurations that I want to always follow me around, and then I use a pipeline to validate that the zsh and vimrc setups are functional and don't slow me down too much.
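A stripped-down install.sh in that spirit might look like the following. Everything here is illustrative rather than Coder-mandated; the install steps that need network or root access are guarded or stubbed:

```shell
#!/usr/bin/env bash
# Sketch of a personal dotfiles install.sh: each section is one
# quality-of-life preference; the script is safe to re-run.
set -euo pipefail

# vim: keep backup files out of the working tree
mkdir -p "$HOME/.vim/backup"
printf 'set backupdir=~/.vim/backup\n' > "$HOME/.vimrc"

# shell: prefer zsh when the image ships it (hypothetical guard)
if command -v zsh >/dev/null 2>&1; then
  sudo chsh -s "$(command -v zsh)" "${USER:-$(id -un)}" || true
fi

# fzf: wanted for Ctrl-R history search; network install omitted in sketch
if ! command -v fzf >/dev/null 2>&1; then
  echo "fzf not found; an install step would go here"
fi
```

Because every step is idempotent, rebuilding the workspace and re-running the script converges on the same environment instead of erroring out.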
I also have a way to curl it as a tarball so you don't have to actually clone the repository, but those parts are unnecessary; really, the install.sh and a few dotfiles are all you need for a comfy environment.
Okay, so we had our ngconfigure environment with no dotfiles. So we go in here, set it to use the git@gitlab.com URL for my dotfiles repository, and then rebuild it.
Okay, now that our ngconfigure workspace is done building, we can look under this personalize step. Going back, we can check that "already initialized" is what our configure script printed, because it saw the directory existed and decided not to take any further action. That saves time and doesn't stomp on our work. Then it runs the personalize script: it gets to zsh, which isn't installed, so it's not configured; there's no CVM this time; and because I put in a typo, it doesn't configure GPG.
But it does pick up some things: if I run my vim command, I can see all my specified file-type settings are there.
We'll look at a shell file so we can see some syntax highlighting. Another quick thing I'd like to show you is what it looks like in an environment where the script can actually configure the shell.
So this image has the zsh shell installed. I can run `ls -al` and see everything in color. I've got my Kubernetes context over here on the side, and I can do a `kubectl get all` and it'll show me what's in the coder-pen-test cluster. The first time you run it, it lazy-loads the Kubernetes completion so it doesn't have to do that every time the shell starts. And now I can type `kubectl logs`, start looking for a pod with "t", hit tab, and it autocompletes. There's lots of cool stuff you can do with your shell if you have zsh configured.
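That lazy-loading trick is just a shell function that replaces itself on first use. A bash-flavored sketch (a zsh version would use `unfunction` instead of `unset -f`; this is a common pattern, not lifted from my actual dotfiles):

```shell
# Lazy-load pattern: define a lightweight shim; the first call removes
# the shim, sources completions once, then runs the real command.
kubectl() {
  unset -f kubectl                               # remove this shim
  if command -v kubectl >/dev/null 2>&1; then
    source <(command kubectl completion bash)    # one-time completion setup
  fi
  command kubectl "$@"
}
```

The payoff is startup time: the shell boots instantly, and the one-off cost of generating completions is paid only if and when you actually use kubectl in that session.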
The other nice thing: with my dotfiles, the prompt loads the Git information here, so I can see what commit I'm at and what branch I'm on. Going into .local/share, it shows the last couple of levels of navigation, so I know where I'm going to end up when I'm doing `cd ../..`.
Oh, and it shows exit codes. If I do `exit 1`, well, that kills the whole thing. Let's just type in a random command instead: it'll get mad, say it can't do that, and show the exit code on the next line. If you do a no-op, it'll be happy and show you a little checkmark.
So these are, again, the shell quality-of-life things people like to do. If you like them, you can bring those configurations with you into Coder using this personalize script.
So some final thoughts, just to recap. If you have security-conscious teams, you can maintain approved base images to use in FROM instead of our Coder images. Developers can submit changes to enhance the images, create forks, or use the Lego building blocks available to them to install the tools they need. All of those things can be container scanned (as you saw, my repository had container scanning enabled) and rebuilt periodically to update caches and versions; anything that's not version-locked gets dragged forward.
And then enabling the Git integration allows these repositories to be used even with secrets in them. It's not a Git best practice, I'll admit, but if you have an internal Git repository set to private, where your user is the only one with access, having a script in there that isn't perfectly acceptable for public consumption is probably a net benefit to the organization and the user experience.
So developers are happy and empowered: they can create and customize their workspaces, rebuild them, and bring those customizations anywhere they go.