25 Jun 2021

Coffee and Coder (June 2021)


In this episode, host Ben Potter is joined by Kyle Carberry, CTO of Coder, to discuss why Coder migrated its networking from a traditional reverse proxy architecture over to WebRTC (Web Real-Time Communication). Typically used for delivering audio and video conferencing applications with native web technologies, WebRTC can also tunnel arbitrary data. The results are a faster editing experience, lower latency, and end-to-end encryption. More detailed information about Coder's use of WebRTC is available in this blog post.

Also in this episode, Ben demonstrates Coder's workspace provider feature that allows a single Coder deployment to provision and manage workspaces across multiple Kubernetes clusters and namespaces, including those located in other geographies, regions, or clouds.

Finally, Ben and Kyle discuss plans to enable Coder to be deployed on any type of compute resource (e.g. EC2) rather than just being tied to Kubernetes.


Coffee and Coder June 2021 Transcript

BEN: Welcome to Coffee and Coder, Kyle.

KYLE: Thank you for having me, Ben.

BEN: Thank you for being here. Do you drink coffee? Do you have coffee with you?

KYLE: I generally drink too much coffee, actually. I purchased an extreme amount of mason jars from Amazon a couple weeks ago to be that iconic, TikTok-esque person that makes iced coffee in a mason jar. So yes, I drink like half of one of those big Starbucks bottles probably every two days. So at least a jug a week, which is gross.

BEN: That's awesome. The mason jars are definitely key. Do you like to mix things, get kind of fancy with brown sugar and caramel and things like that, or is it just black? How do you take it?

KYLE: I make my coffee extremely unhealthy. I add a bunch of caramel syrup. I do the thing where you rim the mason jar with caramel syrup, and then I add vanilla and then sometimes whole milk but generally [inaudible]

BEN: Awesome, awesome. Well, thanks all for joining. We are on Twitch and YouTube and Zoom, so if you have questions you've got a few different places you can ask. I'll be looking at all three, but it may take a while for me to finish a demo and take a look, stuff like that. So definitely feel free to reach out, or unmute if you have questions on Zoom. Really anything.

We have a lot that we're pretty excited to share today. The first thing and the main topic for today is WebRTC in Coder, and I'm super pumped that you're joining us, Kyle, because you're gonna be able to talk a lot about this because you developed it.

Can you all see my screen? It should just say Coffee and Coder. Awesome. This is our first time streaming to a few different platforms, so it's gonna be a bit janky to start. We were kind of hoping for some music and animation and stuff, but I do want to give a big shout out to our designer for making this really cool graphic. And with that, we have a new blog post that I want to share. If you just go to our blog, you'll see a lovely post, co-written by Kyle and Jonathan, discussing a lot of the things that we're going to be talking about today: especially what Coder's architecture used to be like and what prompted the transition to WebRTC.

For those who aren't familiar, Kyle, would you be able to give us a brief explanation of what WebRTC is?

KYLE: Yes, so WebRTC is essentially a protocol that enables peer-to-peer connectivity, and really kind of transparently forms a tunnel between peers. One thing that's magical about WebRTC is that proxying is built into the protocol, so that if peer-to-peer isn't allowed you can always proxy through a middle server, which is frequent in, say, enterprise deployments.

BEN: And how does that proxy work? How would that proxying typically work?

KYLE: Yeah, so what happens during a WebRTC connection is one peer essentially requests information from another, and you'll have some sort of middleman, I call it a negotiator, that will exchange peering information with both of the clients, and then, if possible, the peers will try to connect to each other directly. But if they can't, you'll use what's called a TURN server in the middle, and that will essentially mediate the connection and connect the peers.
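The negotiation flow Kyle describes can be sketched with a toy example. This is an illustrative Python sketch, not Coder's actual implementation; the Negotiator and Peer names and the candidate address format are made up for demonstration.

```python
# Toy sketch of WebRTC-style negotiation: each peer hands its
# "candidate" (how it can be reached) to a negotiator, which
# relays it to the other side. All names here are illustrative.

class Negotiator:
    """Middleman that relays peering info between two peers."""
    def __init__(self):
        self.offers = {}

    def post(self, peer_id, candidate):
        self.offers[peer_id] = candidate

    def fetch(self, peer_id):
        return self.offers.get(peer_id)

class Peer:
    def __init__(self, name, candidate):
        self.name = name
        self.candidate = candidate      # e.g. "203.0.113.7:53211"
        self.remote = None

    def connect(self, negotiator, other_name):
        # Publish our own candidate, then look up the other side's.
        negotiator.post(self.name, self.candidate)
        self.remote = negotiator.fetch(other_name)
        # If the remote candidate were unreachable, a real stack
        # would fall back to relaying through a TURN server.
        return self.remote is not None

negotiator = Negotiator()
client = Peer("client", "198.51.100.4:40000")
workspace = Peer("workspace", "203.0.113.7:53211")

workspace.connect(negotiator, "client")   # posts first; client not yet known
ok = client.connect(negotiator, "workspace")
print(ok, client.remote)   # True 203.0.113.7:53211
```

In a real WebRTC stack the negotiator role is played by a signaling server, and the fallback when no direct route exists is relaying traffic through the TURN server rather than just exchanging addresses.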

BEN: Got it. Very cool. We have some demos that we'd like to show, but before we get started, in the blog post we have this beautiful diagram of how Coder used to work or how Coder in previous versions was able to connect to the workspace. I kind of want to decouple this a little bit, and go over each of these different parts.

So here we're looking at a Kubernetes cluster, and we see a control plane and a data plane. What is envproxy here, and what is it doing?

KYLE: So envproxy is essentially in charge of environment access and proxying any data to the workspace itself. Some of that data could be, let's say you're loading code-server via the web browser, that would be hitting envproxy and serving that data directly from the environment. Or if you're accessing via SSH, SSH hits port 22 on envproxy, which then proxies it to the environment.

BEN: And then essentially envproxy would have to have all of these ports open to be listening for incoming connections?

KYLE: Correct, exactly. And when we proxy in that way, we hit a weird world where now we have to care about all the protocols that people are running on each of these ports. So, like I mentioned proxying SSH, let's say someone wanted to directly proxy a Postgres instance. We're kind of in charge of saying whether you can or can't do that, and of course you always could over SSH, but …

BEN: And moving forward, to the new world with WebRTC, we can go down to this diagram here. Using your same example, say someone in the data plane would then want to open up a Postgres server. How would that work exactly?

KYLE: So the idea of a connection now is kind of like you're plugging an ethernet cable into a different computer: you can really forward anything, and you're kind of in the same network namespace as the workspace when you connect to it.

So you have a control plane, which is Coderd in this instance, and then the data plane, which can really sit anywhere. Especially with p2p enabled, it really could sit anywhere, and your clients will experience the lowest latency possible, if that makes sense.

So instead of the data always being proxied through some stateful server, with no hope of ever avoiding that, it's more like a connection is just formed. An example of that is SSH. Before, we used to expose port 22 on envproxy, and that's how people would access their environments. Now when you SSH, we actually use an option in SSH called ProxyCommand to override the connection command with a Coder CLI command that lets you essentially just tunnel into that namespace directly and then access the port.
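In practice this means the SSH client never dials the workspace's port 22 directly; a ProxyCommand hands the byte stream to the Coder CLI, which tunnels it over the brokered connection. A hypothetical ~/.ssh/config entry of the kind coder config-ssh might generate could look like this (the host alias and subcommand below are illustrative, not the CLI's real output):

```
# Hypothetical entry; the actual output of `coder config-ssh` may differ.
Host coder.nodejs
    ProxyCommand coder tunnel nodejs
```

With an entry of this shape, a plain `ssh coder.nodejs` (and anything built on SSH, like VS Code Remote) works without port 22 being exposed anywhere.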

BEN: Cool, and going into a demo, I think that's something we can just demo real quick. So I have this preview cluster, and you'll notice some of these features we're going to be looking at are not actually released in 1.20 of Coder. These are some preview features, just a kind of warning. Opening up a terminal here, I have this Node.js workspace. If I just do coder sh nodejs, and I have the Coder CLI, so … oops … I had the Coder CLI.

KYLE: The danger of live demos.

BEN: I know, yeah.

Well, what I can do is use an older version of the CLI real quick because I'm using one that's not built, but it should still work fine.

KYLE: Yeah, one thing I can add while you're doing that: one of the biggest changes foundationally to the way this works is the requirement of inbound networking on a workspace. You used to require inbound networking with the old setup, where there's an envproxy and it's kind of a classical reverse proxy setup, which we dive into in the blog. And with this setup, what's really, really important is that there is no inbound exposed to the internet. Before, we'd have a service on a pod inside of Kubernetes that would essentially enable it, so if you had inter-pod communication allowed, pods could connect to other pods, and someone could technically access someone else's inbound for their workspace. I don't want to say it's an unnecessary security hole, but it's one that's definitely not desired.

BEN: Okay, yeah, so I'm good to go. But I do actually want to get into what you were talking about with inbound networking a little bit. I actually want to put you on the spot. We have a kind of sketch pad that we're thinking about using for some of these concepts. So what I'm going to do is just create a Coder cluster and create these two workspaces on the cluster. Maybe I can make these green or something. I want to make two new green workspaces. There's one. There's two.

Would you be able to describe the difference between an inbound connection and an outbound connection, with maybe one cluster, with our old networking? I'm kind of struggling to use this tool here. Should have practiced a bit more. Let's see, I can delete this guy, and we'll move this back. There we go.

So there's one cluster, and then let's just make another cluster using the old model. So I want to unlock it soon, and then I kind of want to move out … oh, I see, I'm using the max thing here, so we can make this guy … well, anyway, let's just have this one cluster and talk about the differences between how an inbound connection or an outbound connection would work for a client, for example. So this would be like a client.

KYLE: It's very out of the [inaudible] but no I got you ...

BEN: Yeah, yeah.

KYLE: So we'll call this just the ws. I'm going to label this as inbound. My apologies for my poor writing … ws outbound. What's really important is that on an inbound workspace, and this is true for essentially any server running SSH, you're gonna have something like port 22 exposed to the internet. This happens all the time: people scan the internet and try to find servers that have really poor passwords, and will enter them. Let's say I had my password set to 'dog' or something.

BEN: Okay.

KYLE: What could really happen there is someone could be scanning the internet, find port 22, and just try the password 'dog' because it's one of the top 100 most popular passwords, and suddenly get access to my workspace. So there we have a problem: this workspace could be brute forced, for example, and just broken into. Obviously, SSH keys completely eliminate the idea of something as simple as this happening, but simple stuff like this happens all the time in security breaches.

Another thing that's important to mention is that with inbound networking it becomes a lot harder to fully encrypt traffic end-to-end in certain protocols, unless maybe we proxied everything over SSH, which would give a bit of a speed reduction, and that's obviously not amazing. With outbound, what we have really is an HTTP server, and this you can think of as Coderd. I'll just call it Coder to be simple. The workspace actually dials Coderd and always has a persistent connection open. What happens is this client might request and say, "hey, I want a new connection open." Then this communicates over a websocket here, and then this ws outbound will actually end up connecting to this peer. So they negotiate over this pipe right here, and then essentially they start communicating.
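The outbound-only pattern Kyle is drawing can be sketched in a few lines: the agent dials out to the broker and keeps that channel open, and client connection requests are relayed down the already-open channel. This is a toy Python sketch with invented names (Broker, register_agent), not Coder's actual code.

```python
import queue

# Toy sketch of the outbound model: the workspace agent dials the
# broker (think Coderd) and keeps a channel open; the workspace
# never listens for inbound connections itself.

class Broker:
    def __init__(self):
        self.agent_channels = {}           # workspace name -> channel to agent

    def register_agent(self, name):
        # The agent calls this when it dials out; the queue stands in
        # for the persistent websocket held open by the broker.
        ch = queue.Queue()
        self.agent_channels[name] = ch
        return ch

    def request_connection(self, name, client_info):
        # Relay the client's peering info down the already-open channel.
        self.agent_channels[name].put(client_info)

broker = Broker()
agent_ch = broker.register_agent("nodejs")     # agent dialed out first
broker.request_connection("nodejs", "client@198.51.100.4:40000")
print(agent_ch.get())   # client@198.51.100.4:40000
```

The key property the sketch shows is directional: all connections originate from inside the cluster, so nothing ever needs a port exposed to the internet.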

BEN: Cool. Something else I want to get into a little bit, and this is something I don't understand as well: previously, the HTTP server lived inside the Kubernetes cluster along with the workspaces. One advantage of this new approach is that workspaces no longer necessarily have to live in the same place that Coderd does, and that makes distribution a lot simpler. So, getting rid of this diagram … I'm sure there's an easier way to do this. This is a tool I just found.

KYLE: I gotcha. I gotcha.

BEN: Oh, there you go, cool. Perfect. If we were to have Coderd out here, and then two workspaces, maybe we'll say on another virtual machine, for example, or maybe this is another cluster. Say we just have a few different clusters, whether they're virtual machine clusters or different Kubernetes clusters, basically different areas where workspaces can live.

How does WebRTC allow a client to connect to these different clusters without a lot of configuration? Yeah, yeah, perfect.

KYLE: One thing we get a lot of requests for, and have had to solve for in the past, is geo-distribution. People have workspaces all over the world because they'll have developers all over the world, and needing to replicate Coder every single time you want a new workspace to go live in a new region is a decently big lift. We never want that to be an impediment to, say, hiring a developer in Ireland or something. We never want there to be a crazy thing where it's like, "well, if we want them to use Coder and have a good experience, then we need to set up something in Ireland." You know, Coder can do all the ops work related to that.

So with this system now, you can actually have people in Europe developing against the Europe cluster. You could have a single instance of Coder set up in the U.S., but since everything can be peered together, the EU people are actually getting the lowest latency they possibly can to the EU workspaces, without any operational effort needed to geo-distribute Coder beyond your single instance, for example. And you maintain end-to-end encryption with that, because everything is still brokered at the initial point through this Coder instance, so all the connections are trusted and verified through there to make sure that there are no malicious actors. Since there's not even any inbound … you know, there are no ports exposed on any of these pods or workspaces, it's a pretty nice closed-loop system.

BEN: Yeah, and then going into peer-to-peer: say here's a developer's laptop, for example, belonging to someone looking to develop on an EU cluster. How would they be able to get a peer-to-peer connection to this workspace?

KYLE: Essentially … I'll remove my kind of poorly drawn arrow here … let me remove that object, one second …

What they would do is first hit Coder; they would first open a connection to Coder. Kind of like we talked about before, there is a bi-directional socket open, just like a websocket, but it's opened from these workspaces. So the workspaces initiate the connection, via what we call an agent that just lives on each of these servers, and that tells us when we can communicate with them. When this person connects, they essentially let this socket know that they want a new connection, and then, say with this pod, for example, they'll both negotiate IP addresses, and then this pod will directly connect to that person's laptop.

It works via a method called NAT punching. It works over UDP, and what really happens is when you hit any server with UDP, you open a port from your router, and it all traverses back to your computer. And with UDP there aren't really any ACKs, so there's not really confirmation of a source. When something is sent to you, that's kind of just what you get. So what really happens is it essentially opens ports for each client, exchanges the ports that were exposed on the NAT, and then tunnels between them.
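The UDP property Kyle is describing, no handshake and replies flowing back through whatever mapping the first outgoing packet created, can be demonstrated locally. This Python sketch uses two sockets on localhost to stand in for the two NAT'd peers; real hole punching additionally needs a signaling server to exchange the publicly observed addresses.

```python
import socket

# Two UDP sockets on localhost stand in for the two peers.
a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
a.bind(("127.0.0.1", 0))   # port 0 = pick any free port
b.bind(("127.0.0.1", 0))

# In real NAT punching, a signaling server would exchange these
# observed (address, port) pairs between the peers.
addr_a, addr_b = a.getsockname(), b.getsockname()

# Each side sends first; on a real NAT, this outgoing packet is
# what opens the mapping that the other side's packets come back through.
a.sendto(b"punch", addr_b)
b.sendto(b"punch", addr_a)

# Now datagrams flow both ways, with neither side ever "exposing" a listener.
data, _ = a.recvfrom(1024)
print(data)   # b'punch'

a.close()
b.close()
```

On localhost there is no NAT in the way, so this only illustrates the connectionless send-first property; the blog post covers the actual traversal behavior.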

Kind of a poor explanation, but you can read the blog post to get more info on that.

BEN: No, that makes a ton of sense. I think going back to the inbound versus outbound analogy: right now we're on a Zoom call, and I'd assume WebRTC has similar technology happening, where my laptop, for example, isn't publicly listening on a port to make this call possible. Instead it's kind of meeting in the middle with Zoom servers to make this call possible. Am I explaining this right, or is there something I'm missing?

KYLE: So yes and no. Right now your laptop, for example, does have a port open, but you didn't have to expose it, if that makes sense. It's not like a normal service where you're like, here's my hostname, this is Ben's laptop, and this is how Zoom can hit me. It's much more like you go out to Zoom, and then your router handles all the magic for you in the background.

BEN: Okay, right, but it wasn't something I had to configure. It was just kind of available out of the box, is that … ?

KYLE: Exactly, yeah. And that's one of the largest reasons we actually took this on as a big initiative. We had a lot of customers that misconfigured connections between workspaces, and that would leave them exposed. And we had customers that were extremely concerned about security and cared a lot about end-to-end encryption, and we did not want there to even be a chance that that's something they could misconfigure. The idea that someone would use Coder and not have their workspace traffic encrypted is crazy to us. So that's a big inspiration for making this change.

BEN: Yeah, yeah, that makes a ton of sense. Cool. So going into demo time. I'm using an older version of the CLI, so I'm hoping it'll still work, but if I just do coder ssh nodejs and I'll just add this flag, yeah, oh …

KYLE: That's for config-ssh.

BEN: Sorry, yeah oh, coder sh … let me do config then ….

So what this is going to do is basically configure my local SSH host to be able to point to Coder, but instead of just using port 22, for example … oh, networkingV2 is not enabled …

This might just be a side effect of running two different … yeah, it's just a side effect of me not having the latest CLI. But essentially, instead of SSHing into a workspace on port 22, I would be doing it through port 5432 or something like that, which would be like Coder's default TURN server, correct? I think I'm close there.

KYLE: Yep, yep. That's correct.

BEN: Cool, awesome. The next thing I wanted to demo, and I had to pop in there for a second, is this new providers page. This is coming in a future version of Coder; it might not be exactly like this, but we're seeing that I have one built-in Kubernetes provider. We have had the concept of workspace providers for a while in Coder, but going back to Kyle's diagram, Coderd essentially would have to be inside each provider, and that involves network configuration, setting up a domain, just a lot of things that are no longer necessary when we set up a new provider.

I can go ahead and just hit create new, and it's going to ask me for a cluster address, which will just be the address of the Kubernetes cluster, and a name. I'm going to actually pop into GKE, and I already have a cluster here, coder-gcp-asia. This is the one I'm going to be using. You'll notice that it's in the asia-east1-b zone, but just to show you in GCP how easy it would be to hook up a new zone for Coder, I'll create a new cluster. I'll just name this gcp-coder, and I'll use us-east1-b. I want to do a little bit of node configuration to make sure there's enough room to run Coder effectively. Let's say there's only a few developers, so this one should look fine, and that's basically it. I can just create this cluster and … oh, my GCP limits are exceeded, but essentially those would be the only steps necessary to create the cluster. We already have this one. It's empty, and I'm just going to use Google's Cloud Shell to interface with it.

Let's just make sure I'm inside. Let me just run kubectl get nodes.

Awesome. So I'm in coder-gcp-asia. I'm just going to create a new namespace.

And I'll just name it the same thing. Created the namespace. Now I just want to get the address of the Kubernetes cluster. To do that, I have a cluster address script that I wrote just because I didn't want to remember the command; it's just taking a look at my kubeconfig and getting the address from it. So let me just run that now.
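The script itself isn't shown, but pulling the API server address out of the active kubeconfig is a standard kubectl one-liner; something like the following is a sketch of what such a script might contain (it needs a live kubeconfig to run against):

```
# Print the API server address of the current kubeconfig context.
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
```

The --minify flag reduces the output to just the current context, so the first clusters entry is the one in use.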

I can actually make this a bit larger. There we go. So I just want this here. I can name this asia. I can … yeah, this would be fine … I can give it the cluster address, and I want to use coder-gcp-asia as the namespace, which I just created. It actually gives me the command to create it here if the namespace didn't exist. Now I'm basically just running a command to create a service account on behalf of Coder.

Kyle, how does the service account model differ from the way we used to do things?

KYLE: Yeah, so one is, I would say, more passive and easier to update, which is this version. The way it was before, you would have to deploy what we called envproxy, which, like we talked about before, handled inbound traffic for all the workspaces and did all the routing magic. Now you actually don't need to deploy any infrastructure to add new clusters around the world. So you'll see this cluster that Ben has going: he's just going to add an Asia region without much effort at all, without needing to deploy any infrastructure, with purely just getting the authorization, and we'll handle all the networking, the routing, and making sure it's fast for you.

BEN: Right, so previously every cluster would essentially just be a copy of this, right? Cool. And now it's more just like this data plane could essentially be in …

KYLE: Exactly, yeah, it goes anywhere. And we actually don't deploy anything in any additional clusters.

BEN: So I’ve created a service account. Now I just need to grab a secret to give back to Coder.

KYLE: Oh that was slightly too much. Yeah, you got it.

BEN: Oh oops. Yeah, I got the command in... good catch.

There we go. A little bit of fumbling. Let's just make sure we got that off … sweet. And now I can just create the provider that way. What's super nice about this is now I can just go into workspaces, for example, create a new workspace, I'll just name this workspace-far-away, I'll just use a Java image, and I can change the workspace provider, where this is created, to Asia.

Something else worth noting is that with something we're introducing called workspace templates, and with maybe some more policy features coming in the future, we can basically assign specific workspaces to specific workspace providers. Part of that might be a geo-location thing, like this team is based out of the east coast, for example, so they could be assigned there. Or there might be a more practical benefit, like security, or this is our GPU workspace provider where we do data science, something like that.

Kyle, what are some use cases you're seeing with people wanting to use different workspace providers? I named a few, but I'm curious what's most common.

KYLE: Yeah, so we like Kubernetes. We understand the leverage points it gives us with scale and making things simple, but we also understand that a raw VM is nice, or being able to use Mac instances is awesome, or being able to use, you know, a Windows VM too. So we want Coder to be applicable to every type of development, not just isolated to Linux or Linux containers. We want it to be a more globalized thing …

BEN: I saw a build error again.

KYLE: That is because of envbox, because you're running it as a CVM.

BEN: Oh yeah, I didn't even notice that. Yeah, this is, again, an unreleased feature, so still some things to iron out … there we go, not CVM. Yeah, I guess it must have been checked by default.

KYLE: Yeah, yeah.

BEN: I'll just make this a bit smaller, too.

KYLE: Yeah, that's really the world we imagine: we want to be able to tackle all types of development. What we really care about is capturing the user space and providing great access for developers to get there, you know. We want to liberate that, so that someone inside any kind of large enterprise could easily request compute, get access, and apply policy to ensure their developers get a fantastic experience.

BEN: Yeah, yeah, absolutely. One thing to note that's pretty special about this is that I'm still connecting from the same domain. This is just the one entry point in, and it's essentially able to create this workspace for me as a developer in Asia without really anything more than a toggle between workspace providers. In this case, it's done manually, but it can be automated. We're definitely looking for more ways to automate in the future. Cool.

I guess one last thing I'd love to get into, and then we'll see if we have any questions, is just going back to this providers page. I mentioned this a little bit, but we're noticing that there's a type, Kubernetes, here. How can we expect Coder to support different providers in the future, and what's that world going to look like?

KYLE: Yeah, we're working on becoming more abstracted from the idea of tightly coupling to, or reducing any features from, a provider. We really want to empower people to deploy on whatever compute they want; it's just the underlying backing, and we'll provide helpers. So what that might look like is having an EC2 provider that really just runs an agent on an EC2 machine, and that could be Mac, Windows, or Linux, or a Docker provider to make it even simpler than having to get a Kube cluster, things like that.

BEN: Yeah, and I think that makes it a lot easier to access specific resources, right? So say you're doing Mac development, for example. You could still use Coder to hook into a Mac mini server if you needed to, something like that.

KYLE: Exactly. We imagine a world where, if you're a developer, let's say you're running a project that requires running tests on Mac, Windows, and Linux, then you just have three workspaces that feel uniform and essentially are. You can run tests between them and hop between them, because that's your workspace now. You know, it can even consist of multiple components.

BEN: Yeah, and the SSH itself isn't going to work, but something really interesting to show is that with the Coder CLI, if I just do coder envs ls, for example, I'll see these two workspaces. Then I can essentially just SSH into them with another command. I think this is something that's typically pretty difficult, right? If you're managing different machines on different clouds as a developer, you can have your SSH key in every one, but then you still have the problem of managing different hosts: how do you get through them, is this being proxied properly, things like that. The ability to do coder sh and then the name of the workspace is, to me, very valuable in itself. That's definitely something I'm excited about. Again, if I was to use VS Code Remote, for example, and I wanted to go into this workspace, because we're using something like WebRTC that connection can be peer-to-peer, which is just really exciting.

KYLE: Awesome.

BEN: Cool. Well, with that, thanks everyone for tuning in. There are links in the Twitch chat to join our Slack community, but if you're tuning in from Zoom or something … actually, I can just go ahead … if you just go to the community page, there's just a nice …

KYLE: I think you were showing the wrong screen.

BEN: Yes, that's awful.

Let's go here. We'll try it again. If you go to our website and pop over to community, there's a nice little link to join us on Slack, and you can tune in there if you have any more questions.