Feb 20 2026

Giving OpenClaw a Secure Workspace Using the Rabbit R1

Chris Boone

If you haven’t heard, OpenClaw, a powerful open-source AI tool bringing agentic action and LLM reasoning to software development, has taken the AI world by storm. Builders are using the agentic gateway to create completely autonomous workflows for software development, personal productivity, and business automation across functions.

While it might sound like a dream come true, there are serious security concerns that come from using a stock version of OpenClaw, leaving users unknowingly vulnerable. As a tinkerer already contributing to multiple open-source projects and self-hosting an entire smart home ecosystem, I immediately knew I wanted to find a safe, practical way to experiment with OpenClaw.

Another thing that piqued my interest was the recent release of the Rabbit R1. Rabbit added the ability to pair the R1 with OpenClaw, connecting the agent gateway’s LLM reasoning to a local device that can trigger real-world actions through extensible skills, multi-channel messaging, and browser control. I wired OpenClaw up to it, and what started as a simple experiment to see what was possible turned into an official Coder Tasks and Workspaces Skill for OpenClaw, giving the AI agent a secure, governed place to operate.

Here’s how I uncovered a practical pattern for running agents inside reproducible development environments instead of directly on my machine.

Sizing up OpenClaw for local self-hosting

I’m fairly opinionated about self-hosting. I like running my own software, not just for privacy reasons, but because it lets me shape systems around how I actually work. I already have a self-hosted home ecosystem, including things like my own 2FA setup, Coder, and Home Assistant, so my first instinct with OpenClaw was: how does this fit into my environment?

Naturally, I started by writing a script to optimize token usage so it would play nicely with my OS-based, self-hosted home assistant. That was the easy part. The harder and more interesting question came next: what should I actually use this thing for?

I started going through both my personal workflows and my day job, trying to find places where an agent like this could genuinely help. I briefly entertained some obviously bad ideas, like wiring it up to Notion, which I don’t even have API access to, but it was a useful exercise.

And that’s when the friction became obvious.

OpenClaw meets reality (and Coder)

At the time, OpenClaw was using Claude Code under the hood. That worked well for reasoning and code generation, but there were two things that didn’t sit right with me:

  1. Claude Code wasn't being spun up in a fully isolated, secure sandbox. Then again, I was about to run it on a home server whose uptime is measured in "days since I last broke something tinkering."

  2. There was no clean way to predefine the tools, repositories, and environment that the agent would automatically have access to on startup.

These weren't dealbreakers, but they got me thinking. If an agent is going to touch code, or even suggest changes, wouldn't it be better if it was already running in a clean environment with my tooling and repos ready to go?

Then the obvious thought hit me: I work at Coder.

Coder Tasks and Workspaces already solve exactly this problem by letting you spin up secure, isolated dev environments with predefined tooling, repositories, and guardrails on demand. So the question became less “what’s missing from OpenClaw?” and more “what happens if I glue these two things together?”

OpenClaw’s architecture made that possible. It exposes functionality through Skills, which are essentially discrete capabilities the agent can invoke. ClawHub already hosted skills like Home Assistant, and I’d already been writing a lot of skills for Coder internally. The path forward was pretty clear.

So I wrote a new Skill: a Coder Workspace Skill, hosted on ClawHub, that lets OpenClaw create Coder Tasks and Workspaces programmatically.

Now the agent wasn’t just reasoning about code—it could ask for a real, isolated environment to run on.

What the integration actually does

Once wired up, the OpenClaw ↔ Coder integration looks something like this:

  • OpenClaw receives an instruction (spoken or typed).

  • The Coder Skill allows it to:

    • Translate the user’s intent into a prompt for the task
    • Create a Coder Task
    • Spin up a Workspace with predefined repos, tooling, and configuration

  • That Workspace is fully isolated, secure, and reproducible—just like any other Coder environment.

  • A human (me) can then jump into that Workspace and continue the work.
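The flow above can be sketched in a few lines of Python. To be clear, this is my illustrative approximation, not the skill’s actual code: the endpoint path, payload field names, and helper functions are assumptions, though `CODER_URL`, `CODER_SESSION_TOKEN`, and the `Coder-Session-Token` header are the real pieces the setup relies on.

```python
import os

def build_task_prompt(intent: str) -> str:
    """Turn a loose spoken/typed instruction into a focused task prompt.
    (Illustrative: the real skill does more normalization than this.)"""
    return f"Work on the following request inside the workspace: {intent.strip()}"

def build_workspace_request(template: str, intent: str) -> dict:
    """Assemble everything a workspace-creation call needs.
    The path and body fields here are hypothetical, for illustration only."""
    base = os.environ.get("CODER_URL", "https://coder.example.com").rstrip("/")
    return {
        "url": f"{base}/api/v2/workspaces",  # hypothetical creation endpoint
        "headers": {"Coder-Session-Token": os.environ.get("CODER_SESSION_TOKEN", "")},
        "body": {
            "template": template,                      # predefined repos + tooling
            "task_prompt": build_task_prompt(intent),  # what the agent should do
        },
    }

# Example: the loose instruction I'd give the Rabbit, turned into a request
req = build_workspace_request("security-fix", "look into that auth issue we discussed")
```

The point of the split is that the intent-to-prompt step is where the agent earns its keep, while the request itself only ever targets a template with predefined guardrails.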

My first test was intentionally boring: “write a test document.”

I always do this first. It’s a quick sanity check that verifies all the plumbing—auth, repo access, filesystem writes, task creation—without getting distracted by complexity. Once that worked, I moved on to something more realistic.

The next test went like this:

  1. I talked to the Rabbit and gave a loose verbal description of a security issue.

  2. OpenClaw interpreted that and found a relevant pull request.

  3. Using the Coder Skill, it spun up a new Workspace and Task to track the work.

  4. I then dropped into the Workspace myself and started working on the fix.

At no point did the agent need blanket access to my system. It did not combine autonomous reasoning, broad tool access, and unrestricted execution in my home environment, the kind of “lethal trifecta” that makes agent systems risky. Instead, it operated inside an isolated, predefined Coder Workspace with scoped tooling and repositories. And at no point did I need to manually bootstrap an environment. The agent handled setup; I handled judgment.

That division of labor felt right.

What made this click for me

What I like about this setup is that the agent does the boring stuff I don't want to do, but it doesn't try to be me. Say I have a big task where most of the changes are straightforward. The agent can grind through all of that, and I can skip to the hard part, the stuff that actually needs me to troubleshoot and clean up. Sometimes I let it run longer, sometimes I jump in early. The flexibility of that back-and-forth is what I actually wanted.

The agent can:

  • Interpret intent

  • Do reconnaissance

  • Set up environments

  • Handle repetitive scaffolding

And then it hands things back to me, in a Workspace already set up exactly the way I want to work.

There’s also something deeply satisfying about the self-hosting angle here. I’m not just consuming an AI feature—I’m composing a system. I get to decide where it runs, what it can touch, and how it fits into my broader ecosystem, whether that’s my home assistant or my day job.

It also neatly ties together a bunch of threads in my own career: experimenting with agents, caring about developer environments, writing skills and integrations, and generally enjoying the moment where an idea goes from “huh, that’s annoying” to “wait, I can fix this.”

While this workflow isn’t mainstream yet, its underlying pattern is pragmatic: agents provisioning governed workspaces, humans applying judgment, infrastructure enforcing the boundaries.

And for me, that’s reason enough to keep building on it.

Try it yourself

Install the skill from ClawHub:

clawhub install DevelopmentCats/coder-workspaces

Set your CODER_URL and CODER_SESSION_TOKEN, then ask your assistant to list workspaces or create a task.
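As a quick smoke test of the wiring, you can build the same authenticated call the skill makes and send it yourself. This is a minimal sketch assuming the two environment variables are already exported (the placeholder values in the comments are mine):

```python
import os
import urllib.request

def list_workspaces_request() -> urllib.request.Request:
    """Build (but don't send) an authenticated 'list workspaces' call.

    Reads the same variables the skill uses, e.g.:
      export CODER_URL="https://coder.example.com"   # placeholder
      export CODER_SESSION_TOKEN="xxxx"              # placeholder
    """
    base = os.environ["CODER_URL"].rstrip("/")
    return urllib.request.Request(
        f"{base}/api/v2/workspaces",  # Coder's workspace-listing endpoint
        headers={"Coder-Session-Token": os.environ["CODER_SESSION_TOKEN"]},
    )

# Send it with: urllib.request.urlopen(list_workspaces_request())
```

If that returns your workspaces, the assistant has everything it needs to do the same.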

This is a personal workflow, but it is built entirely on production Coder primitives that teams already trust.

What started as a side project turned into a glimpse of how development workflows change when infrastructure gets out of the way.
