Apr 27 2026

Coder on Red Hat OpenShift: AI Agents and Governance for Regulated Environments

Suman Bisht

Organizations running Red Hat OpenShift have already solved the hardest parts of enterprise infrastructure: RBAC, network isolation, security context constraints, runtime scanning, and audit logging. The next challenge is what happens when developers start running AI coding agents inside that infrastructure: which tools and models agents can reach, what network access they have, and how to produce an audit trail that supports troubleshooting and compliance review.

Coder runs natively on OpenShift and adds a governance layer to the AI development environment. The latest updates introduce AI governance and agent-based automation (Coder Agents, in early access), making it easier to support enterprise AI workflows without losing oversight.

This post covers how Coder and Red Hat together deliver a self-hosted, governed platform for running AI agents.

Coder AI Governance 101

Coder AI Governance gives platform and security teams visibility into how AI tools and agents are used across development environments. It provides centralized control over model access, network behavior, and audit logging. It has two key capabilities:

AI Gateway is a centralized LLM gateway that sits between AI tools (Claude Code, Codex, Aider, custom agents) and upstream model providers (Anthropic, OpenAI, AWS Bedrock, or self-hosted models on OpenShift AI).

  • Centralized authentication: Manages access to AI models, reducing security risk and operational overhead. Eliminates LLM credential sprawl and brittle key rotation while still supporting BYOK with authentication and observability.
  • Full prompt audit trails: Groups intercepted requests into sessions and threads to show the causal relationships between human prompts and agent actions, giving auditors clear provenance over who initiated what, when, and why.
  • Cost visibility: Routes all traffic through the governance layer, enabling per-user token attribution and spend visibility. Applies quotas and workspace limits to control and optimize token spend.
  • Provider flexibility: Routes to Anthropic, OpenAI, Bedrock, or self-hosted models on OpenShift AI.
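Because the gateway exposes a single endpoint, tools in the workspace carry only a gateway token rather than per-provider API keys. A minimal sketch of what that looks like from the client side, assuming an OpenAI-compatible gateway endpoint (the URL, header, and model name below are hypothetical placeholders, not values from a real deployment):

```python
# Sketch: routing an AI tool through a central gateway instead of a
# provider-direct endpoint. The workspace holds only a gateway token;
# provider credentials stay in the gateway. All names are illustrative.

def gateway_request(prompt: str, base_url: str, token: str) -> dict:
    """Build an OpenAI-compatible chat request addressed to the gateway."""
    return {
        "url": f"{base_url}/v1/chat/completions",
        "headers": {"Authorization": f"Bearer {token}"},
        "json": {
            "model": "claude-sonnet",  # resolved to a real provider by the gateway
            "messages": [{"role": "user", "content": prompt}],
        },
    }

req = gateway_request(
    "Refactor this function", "https://ai-gateway.internal", "workspace-token"
)
print(req["url"])  # → https://ai-gateway.internal/v1/chat/completions
```

Swapping providers then becomes a gateway-side routing decision; nothing in the workspace changes.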

Agent Firewall is a process-level firewall that restricts and audits what AI agents can access over the network. Unlike traditional network policies that apply to the entire pod, Agent Firewall operates at the process level, giving platform teams granular control over agent behavior without affecting developer workflows.

  • Domain allowlisting: Define exactly which domains and HTTP methods an agent can reach; everything else is blocked.
  • Centralized audit logs: Every agent HTTP request, allowed or denied, is streamed to the Coder control plane, queryable via Grafana Loki, and ingestible by enterprise SIEMs such as Splunk.
  • Landjail mode: Agent Firewall runs entirely without elevated permissions or added capabilities, so it works out of the box on hardened OpenShift clusters. A privileged mode exists for non-OpenShift environments where capability-based enforcement is preferred.

Agent Firewall enforces allowlist-only network policy: admins define which domains agents can reach, every request is logged and auditable, and security teams can reconstruct what happened and why. This reduces the risk of unauthorized access, package misuse, and data exposure.
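Conceptually, the allowlist-only model means a request is permitted only when some rule matches both its domain and its HTTP method, and everything unmatched is denied. A small illustrative sketch of that decision logic (this is a conceptual model, not Coder's actual rule engine, and the internal domains are made up):

```python
from fnmatch import fnmatch

# Illustrative allowlist-only check. A request passes only if a rule
# matches both domain (glob patterns allowed) and HTTP method;
# anything unmatched is denied by default.
RULES = [
    {"domain": "github.internal.mil", "methods": {"GET", "POST"}},
    {"domain": "*.npmjs.internal", "methods": {"GET"}},
]

def allowed(domain: str, method: str) -> bool:
    return any(
        fnmatch(domain, rule["domain"]) and method in rule["methods"]
        for rule in RULES
    )

print(allowed("registry.npmjs.internal", "GET"))   # True: domain and method match
print(allowed("registry.npmjs.internal", "POST"))  # False: method not allowlisted
print(allowed("pypi.org", "GET"))                  # False: domain not allowlisted
```

The deny-by-default shape is the important part: new destinations require an explicit rule, which is what makes the audit log meaningful.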

In a DoD or public-sector deployment, for example, admins might allowlist only internal source control, mirrored package registries, and sanctioned model endpoints, with everything else denied by default.
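An illustrative rules file for that kind of deployment might look like the following. The field names and domains here are assumptions for illustration only; the real schema is defined in Coder's Agent Firewall documentation.

```yaml
# Hypothetical Agent Firewall allowlist for a public-sector deployment.
# Field names and domains are illustrative, not the real schema.
# Anything not listed below is denied.
rules:
  - domain: "github.internal.mil"   # internal source control
    methods: ["GET", "POST"]
  - domain: "*.npmjs.internal"      # mirrored package registry, read-only
    methods: ["GET"]
  - domain: "ai-gateway.internal"   # model access only via AI Gateway
    methods: ["POST"]
```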

Two ways to run AI agents with Coder

  • Third-party agents: Claude Code, Codex, etc., installed in the Coder workspace itself, governed by AI Gateway and Agent Firewall.
  • Coder Agents: Coder’s native agent capability (Early Access). The agent loop runs in the control plane; developers interact via a built-in chat UI or API for autonomous agentic workflows. Workspaces are only spun up when the agent needs to make a code change.

Note: Teams can run AI agents in both ways at once. Admins can standardize on Coder Agents as the default experience for most developers while still enabling Claude Code or Codex inside workspaces for specific workflows: same Coder deployment, same templates, same identity and access controls.

Coder Agents is a self-hosted AI coding agent that runs the agent loop in the Coder control plane rather than inside the workspace. This architecture has specific security and operational properties:

  • Self-hosted model support with consistent governance: Coder Agents works with Anthropic, OpenAI, Google, Azure, AWS Bedrock, or any OpenAI-compatible endpoint including self-hosted models on RHOAI. Sub-agents spawned for parallel work each run in their own context window and are all governed by the same firewall as the root agent.
  • No LLM API keys in workspaces: The control plane makes all outbound requests to model providers. Provider credentials are managed centrally, separate from the workspace environment where source code and developer tooling live.
  • No privilege escalation: The agent operates with the exact same permissions as the user who submitted the prompt. No shared service accounts.
  • User identity always attached: Every action the agent takes (PRs opened, code pushed, commands run) is tied to the user who submitted the prompt.
  • Workspace isolation preserved: The agent can only access workspaces owned by the submitting user.

This differs from third-party agents, which execute inside the workspace and require additional controls for network access and credentials.
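The attribution model amounts to every agent action carrying the identity of the person who submitted the originating prompt. A conceptual sketch of what an auditor works with (this is an illustrative data shape, not Coder's actual audit schema):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Conceptual sketch of user-attributed agent actions: each action the
# agent performs records who submitted the originating prompt, so the
# audit trail can be filtered back to a person. Not Coder's real schema.

@dataclass
class AgentAction:
    user: str      # identity of the prompt submitter
    action: str    # e.g. "git_push", "run_command"
    detail: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

actions = [
    AgentAction("alice@example.mil", "git_push", "feature/login-fix"),
    AgentAction("alice@example.mil", "run_command", "pytest -q"),
]

# An auditor filters the trail by user identity:
alice_trail = [a.action for a in actions if a.user == "alice@example.mil"]
print(alice_trail)  # → ['git_push', 'run_command']
```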

The Red Hat + Coder stack

The integration between Red Hat and Coder spans infrastructure, model serving, and governance:

| Red Hat Provides | Coder Provides | Together |
| --- | --- | --- |
| OpenShift Container Platform (RHOCP): enterprise-standard Kubernetes platform | Isolated workspaces with the AI Governance add-on (AI Gateway, Agent Firewall) | Governed AI agent workspaces on hardened infrastructure, auditable end-to-end |
| OpenShift AI (RHOAI): LLM model serving via vLLM and llm-d distributed inference | AI Gateway: routes developer/agent traffic to RHOAI inference endpoints | Self-hosted AI coding pipeline: models served on RHOAI, governed by Coder, no data leaves the perimeter |
| RHEL UBI base images: hardened container foundations for regulated environments | Workspace templates: Terraform-defined environments in OCP namespaces | Compliant-by-default developer and agent environments that pass ATO review |
| Advanced Cluster Security (RHACS): runtime container security and vulnerability scanning | Agent Firewall: process-level domain allowlists | Defense in depth: container-level security from ACS plus a process-level agent boundary from Coder |

Put together, this stack plus Coder Agents keeps the control plane, agent orchestration, and source code inside the customer's infrastructure; when models are served on OpenShift AI, inference stays there too.

What you get running Coder on OpenShift

Organizations in regulated industries require that source code, model usage, and agent activity remain within controlled environments.

AI agents introduce new risks: network access, unmanaged credentials, and lack of auditability. OpenShift secures the infrastructure. Coder governs how AI operates within it. Running Coder on OpenShift addresses these risks:

  • AI Gateway centralizes model access and logging
  • Agent Firewall enforces network boundaries for agents
  • Coder Agents keeps agent orchestration and credentials in the control plane, with workspaces used only for code execution

The result is a system where AI agents operate on enterprise infrastructure with user-level attribution, controlled network access, and complete audit trails.

Getting started

Coder runs on OpenShift under the default restricted SCC: no custom SCCs, no privileged containers. Agent Firewall's Landjail mode operates without elevated permissions, so it works out of the box on hardened clusters. Deployment is a standard Helm install with a few OpenShift-specific values (Project-assigned UID/GID ranges, Route configuration for workspace access); the full walkthrough lives in the Coder on OpenShift install guide and stays current as OpenShift versions evolve.
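At its simplest, the install looks something like the following. Treat this as a sketch: the values file contents and namespace are placeholders, and the install guide is the authoritative source for the OpenShift-specific settings.

```shell
# Add the Coder Helm repo and install into an existing namespace.
# values.yaml carries the OpenShift-specific settings mentioned above
# (Project-assigned UID/GID ranges, Route configuration).
helm repo add coder-v2 https://helm.coder.com/v2
helm install coder coder-v2/coder \
  --namespace coder \
  --values values.yaml
```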

To try Coder Agents during Early Access, enable the agents experiment flag, assign the Coder Agents User role, and then configure any supported LLM provider in the Admin settings. See the Coder Agents documentation for specifics.

Together, Coder and OpenShift provide the infrastructure and governance layer needed for AI to move from test to production. See the Coder AI Governance install guide to get a deployment running; the AI Governance documentation covers the full architecture for teams ready to go deeper. For more details, request a demo.
