
Organizations running Red Hat OpenShift have already solved the hardest parts of enterprise infrastructure: RBAC, network isolation, security context constraints, runtime scanning, and audit logging. The next challenge is what happens when developers start running AI coding agents inside that infrastructure: which tools and models agents can reach, what network access they have, and how to produce an audit trail that supports both troubleshooting and compliance review.
Coder runs natively on OpenShift and brings a governance layer to the AI development environment. The latest updates add AI governance and agent-based automation (Coder Agents, in early access), making it much easier to support enterprise AI workflows without losing oversight.
This post covers how Coder and Red Hat together deliver a self-hosted, governed platform for running AI agents.

Coder AI Governance gives platform and security teams visibility into how AI tools and agents are used across development environments. It provides centralized control over model access, network behavior, and audit logging. It has two key capabilities:
AI Gateway is a centralized LLM gateway that sits between AI tools (Claude Code, Codex, Aider, custom agents) and upstream model providers (Anthropic, OpenAI, AWS Bedrock, or self-hosted models on OpenShift AI).
Agent Firewall is a process-level firewall that restricts and audits what AI agents can access over the network. Unlike traditional network policies that apply to the entire pod, Agent Firewall operates at the process level, giving platform teams granular control over agent behavior without affecting developer workflows.
Agent Firewall enforces allowlist-only network policies: admins define which domains agents can reach, and everything else is denied. All network requests are logged and auditable, helping reduce the risk of unauthorized access, package misuse, and data exposure, and giving security teams a record of what happened and why.
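As a concrete sketch of how the gateway slots in front of existing tools: most AI CLIs honor a base-URL environment variable, so pointing them at the gateway requires no code changes. The gateway address and paths below are placeholders, not documented Coder endpoints.

```shell
# Route existing AI tools through the AI Gateway instead of the public
# provider APIs. Claude Code reads ANTHROPIC_BASE_URL and OpenAI-SDK-based
# tools read OPENAI_BASE_URL; the gateway address below is a placeholder.
export ANTHROPIC_BASE_URL="https://coder.internal.example/gateway/anthropic"
export OPENAI_BASE_URL="https://coder.internal.example/gateway/openai"

# From here, every model request is authenticated, attributed to the
# developer, and logged by the gateway before it reaches the provider.
echo "Anthropic traffic -> $ANTHROPIC_BASE_URL"
echo "OpenAI traffic   -> $OPENAI_BASE_URL"
```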
Here is an example Agent Firewall rules file for a DoD / public-sector deployment:
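The schema below is illustrative rather than Coder's documented format (treat the keys as assumptions and check the AI Governance documentation for the real syntax); the essential property is default-deny with an explicit allowlist.

```yaml
# Hypothetical Agent Firewall allowlist for a DoD / public-sector deployment.
# Default-deny: any domain not listed is blocked, and every request,
# allowed or denied, is written to the audit log.
allow:
  - domain: "registry1.dso.mil"          # Iron Bank hardened container images
  - domain: "repo1.dso.mil"              # Platform One source mirrors
  - domain: "*.mirror.internal.example"  # internal package mirrors (placeholder)
  - domain: "coder.internal.example"     # Coder control plane and AI Gateway
log:
  decisions: all                         # record allow and deny events
```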
Note: Teams can run AI agents both ways. Admins can standardize on Coder Agents as the default experience for most developers, while still enabling third-party agents such as Claude Code or Codex inside workspaces for specific workflows. Same Coder deployment, same templates, same identity and access controls.
Coder Agents is a self-hosted AI coding agent that runs the agent loop in the Coder control plane rather than inside the workspace. This architecture has specific security and operational properties: model credentials never enter the workspace, and agent activity is orchestrated, attributed, and audited centrally.
This differs from third-party agents, which execute inside the workspace and require additional controls for network access and credentials.
The integration between Red Hat and Coder spans infrastructure, model serving, and governance:
| Red Hat Provides | Coder Provides | Together |
|---|---|---|
| OpenShift Container Platform (RHOCP): enterprise-standard, Kubernetes-based platform | Isolated workspaces with the AI Governance add-on (AI Gateway, Agent Firewall) | Governed AI agent workspaces on hardened infrastructure, auditable end-to-end |
| OpenShift AI (RHOAI): LLM model serving via vLLM, llm-d distributed inference | AI Gateway: routes developer/agent traffic to RHOAI inference endpoints | Self-hosted AI coding pipeline: models served on RHOAI, governed by Coder, no data leaves the perimeter |
| RHEL UBI base images: hardened container foundations suitable for regulated environments | Workspace templates: Terraform-defined environments on OCP namespaces | Compliant-by-default developer and agent environments that pass ATO review |
| Advanced Cluster Security (RHACS): runtime container security, vulnerability scanning | Agent Firewall: process-level domain allowlists | Defense in depth: container-level security from ACS + process-level agent boundary from Coder |
Put together, the stack above combined with Coder Agents keeps the control plane, agent orchestration, and source code inside the customer's infrastructure; when models are served on OpenShift AI, inference stays there too.
Organizations in regulated industries require that source code, model usage, and agent activity remain within controlled environments.
AI agents introduce new risks: network access, unmanaged credentials, and lack of auditability. OpenShift secures the infrastructure; Coder governs how AI operates within it, addressing each of these risks.
The result is a system where AI agents operate on enterprise infrastructure with user-level attribution, controlled network access, and complete audit trails.
Coder runs on OpenShift under the default restricted SCC: no custom SCCs, no privileged containers, and Agent Firewall's Landjail mode operates without elevated permissions, so it works out of the box on hardened clusters. Deployment is a standard Helm install with a few OpenShift-specific values (Project-assigned UID/GID ranges, Route configuration for workspace access); the full walkthrough lives in the Coder on OpenShift install guide and stays current as OpenShift versions evolve.
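A minimal sketch of that install, assuming Coder's public Helm repo and default chart values (the namespace name and Route step are illustrative; follow the install guide for the exact values your cluster needs):

```shell
# Add Coder's Helm repo and install under a dedicated Project.
helm repo add coder-v2 https://helm.coder.com/v2
helm install coder coder-v2/coder \
  --namespace coder --create-namespace \
  --values values.yaml   # OpenShift-specific values, per the install guide

# Expose the control plane through an OpenShift Route rather than a
# LoadBalancer, so workspace access flows through the cluster router.
oc create route edge coder --service=coder --namespace coder
```

Because the chart runs under the restricted SCC, values.yaml should leave the UID/GID unset so the Project's annotated range applies.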
To try Coder Agents during Early Access, enable the agents experiment flag, assign the Coder Agents User role, and then configure any supported LLM provider in the Admin settings. See the Coder Agents documentation for specifics.
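As a sketch, the toggle uses Coder's standard CODER_EXPERIMENTS server variable; the experiment name "agents" is an assumption based on the flag's description, so verify it against the Coder Agents documentation.

```shell
# Enable the Coder Agents early-access experiment on the Coder server.
# CODER_EXPERIMENTS is Coder's general experiment toggle; "agents" is an
# assumed experiment name, check the docs for the exact value.
export CODER_EXPERIMENTS="agents"
echo "Enabled experiments: $CODER_EXPERIMENTS"
```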
Together, Coder and OpenShift provide the infrastructure and governance layer needed for AI to move from test to production. The Coder AI Governance install guide shows how to get a deployment running, and the AI Governance documentation covers the full architecture for teams ready to go deeper. For more details, request a demo.
Want to stay up to date on all things Coder? Subscribe to our monthly newsletter for the latest articles, workshops, events, and announcements.