Nov 20 2025

Every Cursor Needs a Coder: Unblocking AI Coding Tools in the Enterprise

Josh Epstein

Picture this: Your developers started using Cursor last month. Productivity shot up 40%. Pull requests doubled. The platform team started planning a company-wide rollout.

Then your security team stepped in.

Not because they're innovation killers. Not because they don't understand the value. Because when AI agents run on local laptops with unrestricted access to your private repositories, your APIs, and the open internet, you've created a governance nightmare that no CISO can defend.

So the tools get blocked. Innovation stalls and developers go back to fighting their local environments while your competitors figure out how to ship faster with AI.

There's a better way.

Why enterprise security teams are blocking AI coding tools

AI coding assistants like Cursor, Claude Code, and GitHub Copilot Workspace represent a fundamental shift in how software gets built. We’re way past autocomplete. Modern AI assistants are autonomous agents that read codebases, call APIs, execute commands, and make architectural decisions.

Your developers need them. Your security team can't allow them. An unstoppable force meets an immovable object, at least on local machines.

Why do security teams block AI coding tools? It’s because they present what Simon Willison refers to as the lethal trifecta for AI agents.

First, they need access to private data. Real work requires context from your proprietary codebases, internal documentation, and production systems. No sandbox. No abstractions. Direct access.

Second, they must communicate externally. These agents call LLM APIs, pull dependencies, query documentation sites, and interact with development tools. They need internet access to function.

Third, they're exposed to untrusted content. Prompt injection attacks, poisoned dependencies, and hallucinated suggestions mean agents need boundaries. Without isolation, a compromised agent can exfiltrate your entire codebase.

On a local laptop, you can't solve all three simultaneously. You either give agents the access they need and lose control, or you lock them down so tightly they become useless.

Enterprise security teams are choosing control. In our conversations with platform engineering leaders across financial services, healthcare, and technology companies, we're seeing AI coding tools blocked at an accelerating rate: not because they don't work, but because local execution is fundamentally ungovernable.

The hidden costs of uncontrolled AI agents in development

When AI agents run on local developer machines, enterprises lose three things they can't afford to lose.

Zero visibility into AI agent activity

No audit trail means no answers. When something goes wrong, you need to know exactly what the agent did, which prompts triggered which actions, what data was accessed, and where the output went.

On a local machine, you get none of that. For regulated industries, including financial services, healthcare, and government agencies, this isn't just inconvenient. It's disqualifying. Compliance teams can't sign off on tools they can't audit.
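To make "audit trail" concrete, here's a hedged sketch of the kind of per-action record such a trail needs to capture. The schema and field names are illustrative, not Coder's actual log format:

```python
import json
from datetime import datetime, timezone

# Illustrative schema: one record per agent action, linking the action
# back to the prompt that triggered it and the resource it touched.
def audit_record(agent, action, resource, prompt_hash):
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent": agent,              # which tool acted, e.g. "cursor"
        "action": action,            # what it did, e.g. "file.read"
        "resource": resource,        # what data it touched
        "prompt_hash": prompt_hash,  # ties the action to its triggering prompt
    }

record = audit_record("cursor", "file.read", "repo/src/billing.py", "sha256:…")
print(json.dumps(record, indent=2))
```

The point is the linkage: without all four fields per action, you can't answer "which prompt caused this access" after the fact.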

No control over AI agent permissions and access

Developers are trusted. AI agents aren't. Not yet.

When an agent has the same level of access as the developer running it—full repository access, production credentials, unrestricted network access—the blast radius of a mistake or a compromised prompt is enormous.

You need boundaries. Network policies that define exactly which external services an agent can reach. Resource limits that prevent runaway compute costs. Access controls that separate human permissions from agent permissions.

None of this exists when agents run locally.
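On governed infrastructure, by contrast, that kind of boundary is a few lines of declarative policy. Here's a sketch of an egress allow-list as a Kubernetes NetworkPolicy, assuming workspaces run as pods labeled `app: ai-workspace`; the label and CIDR are illustrative placeholders, not a drop-in policy:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ai-workspace-egress
spec:
  podSelector:
    matchLabels:
      app: ai-workspace        # illustrative workspace label
  policyTypes:
    - Egress
  egress:
    # DNS, so dependency resolution still works
    - ports:
        - protocol: UDP
          port: 53
    # HTTPS to the approved LLM provider only (illustrative CIDR)
    - to:
        - ipBlock:
            cidr: 203.0.113.0/24
      ports:
        - protocol: TCP
          port: 443
```

Everything not listed is denied, so a prompt-injected agent has nowhere to exfiltrate to.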

Unmanaged compute costs and resource constraints

AI agents are expensive to run. Not just API costs—compute costs. Modern agents execute tasks in parallel, spin up multiple environments, run extensive test suites, and consume significant CPU and memory.

On developer laptops, this manifests as maxed-out machines, poor performance, and frustrated engineers. At scale, it becomes an infrastructure problem with no good solution. You can't allocate GPU resources to laptops. You can't bin-pack multiple agents onto optimized instances. You can't measure or control spend.

One enterprise customer we spoke with estimated they were spending $25 million annually on shadow VMs that developers created just to circumvent laptop performance limitations. And that was before AI agents entered the picture.

Remote development infrastructure is becoming mandatory for AI

The more agent-driven your workflow becomes, the less viable local development is. Not just for security. For performance, cost, and developer experience.

This isn't a controversial take anymore. GitHub is building Copilot Workspace as a cloud-native experience. Anthropic assumes Claude Code runs in remote environments. Even Cursor's most advanced features work best with cloud compute. The shift is happening whether enterprises are ready or not.

Enterprises need something different. Something self-hosted. Something that provides the governance and observability that security teams demand, while delivering the performance and experience that developers expect.

How Coder enables secure, governed AI development at scale

Coder provides self-hosted development environments that let AI coding tools run safely at enterprise scale. Not by replacing developer tools. By enabling them.

When Cursor runs in a Coder workspace instead of on a laptop, this is what you can expect:

Self-hosted infrastructure you control and audit

Agents run on infrastructure you control. Your cloud. Your data center. Your GPU fabric. Most importantly, your access controls and policies.

Unlike SaaS development platforms, Coder runs entirely in your infrastructure. This matters for regulated industries that can't send proprietary code to third-party platforms, for government contractors with air-gapped requirements, and for any enterprise that takes data sovereignty seriously.

When auditors ask where your development happens and how you govern it, you can point to infrastructure you control, with policies you define, running in environments you monitor.

Standardized, reproducible environments for every developer

With Coder Workspaces, you define development environments as code using Terraform. Every developer gets the same tools, the same access controls, the same security policies.

When AI agents run in these standardized environments, you eliminate configuration drift. You know exactly what each agent has access to because you defined it. You can update security policies centrally and have them apply to every workspace instantly.
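The idea in miniature: a hedged sketch of what a workspace template can look like, loosely following the shape of Coder's Kubernetes templates. The namespace, image name, and resource limits are illustrative placeholders; check the Coder Terraform provider docs for the real attribute set:

```hcl
terraform {
  required_providers {
    coder      = { source = "coder/coder" }
    kubernetes = { source = "hashicorp/kubernetes" }
  }
}

data "coder_workspace" "me" {}

# One agent per workspace; the IDE and any AI tools attach through it.
resource "coder_agent" "main" {
  os   = "linux"
  arch = "amd64"
}

# The workspace itself: a pod with centrally defined limits,
# so every developer (and every AI agent) gets the same box.
resource "kubernetes_pod" "workspace" {
  metadata {
    name      = "ws-${data.coder_workspace.me.name}"
    namespace = "dev-workspaces"             # illustrative namespace
    labels    = { app = "ai-workspace" }
  }
  spec {
    container {
      name  = "dev"
      image = "ghcr.io/acme/dev-base:latest" # illustrative image
      resources {
        limits = {
          cpu    = "4"
          memory = "8Gi"
        }
      }
    }
  }
}
```

Because the template is ordinary Terraform, tightening a limit or swapping the base image is a single change that rolls out to every workspace built from it.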

Enterprise-grade compute without laptop limitations

AI agents need serious compute. Coder Workspaces run on cloud or on-premises infrastructure that can scale to meet demand.

Need GPU access for AI workloads? Provision it. Need to run multiple agents in parallel? Bin-pack them efficiently. Need to control costs? Set resource limits and track usage.

Your developers get powerful development environments. Your platform team gets visibility into resource consumption and the ability to optimize spend.

Integration with your existing security stack

Coder Workspaces integrate with the tools you already use. SSO for authentication. Your existing VPN and network policies. Your monitoring and observability stack. Your secret management system.

You don't need to rebuild your security posture around a new platform. You extend what you already have to cover development environments, including the AI tools running inside them.

What's next: Deeper AI governance and observability

Today, Coder Workspaces provides the infrastructure foundation for secure AI development. Tomorrow, we're building capabilities specifically designed for the AI agent era.

Centralized observability for every AI tool interaction. Network-level boundaries that let you define exactly which external services agents can reach. Complete audit trails that answer the questions compliance teams actually ask. These capabilities are coming to give enterprises the granular control and visibility that AI governance demands, built directly into the infrastructure layer where it belongs.

Dec. 9-11 Coder introduces the context and guardrails that enterprises need to get AI out of the lab: secure, scalable, and self-hosted.

Tool-agnostic infrastructure for any AI coding assistant

Coder doesn't care which AI tools you use. Cursor, Claude Code, Copilot, your own custom agents—they all work. Your developers keep their tools. They just run in environments that are standardized, secured, and centrally managed.

This matters because the developer experience tooling landscape is changing rapidly. Betting on a single vendor means betting they'll win the long-term interface battle. Coder doesn't require that bet.

You get infrastructure that works with any developer tool, any AI agent, and any workflow—while maintaining the governance and security posture your enterprise requires.

Moving from blocked AI tools to governed AI development

Here's what we're seeing across our customer base: enterprises that try to govern AI coding tools at the laptop level fail. Policies don't scale. Enforcement doesn't work. Shadow IT proliferates.

Enterprises that shift AI development to remote, governed infrastructure succeed. Developers get better tools. Security teams get visibility and control. Platform teams get infrastructure they can actually manage.

The shift is coming. Security teams are already blocking local AI agent usage. Developers are already demanding better solutions. The question isn't whether your development infrastructure needs to evolve—it's whether you'll lead the change or be forced into it reactively.

Spin up at speed. Control at scale.

That's what Coder enables. Self-hosted development infrastructure where your developers can use the AI tools they need, with the governance and security your enterprise demands.

This is why…

Every Cursor needs a Coder.

FYI, this was stolen with permission from one of our customers who told us they were denied access to Cursor until Coder was in production.

