Aug 19 2025

Inside AI Adoption: Lessons from Enterprise Software Development Teams

Ben Potter

Most organizations we work with at Coder, from Fortune 500 companies to government agencies to high-tech SaaS providers, heavily invest in bringing AI into their software development lifecycles. While every team has different approaches and experiences, we have identified common patterns (and anti-patterns) around AI adoption.

This post is meant to start a conversation, not to offer a comprehensive market analysis or a step-by-step playbook. We plan to publish additional materials and continue sharing best-in-class references. In the meantime, we want to hear from you! Your feedback, experiences, and comments help us build better products and content around this topic.

Assistants like GitHub Copilot didn’t meet expectations

Many developers’ first introduction to AI in the enterprise was GitHub Copilot (or a similar code assistant), with autocomplete-like features in the IDE. Enterprises moved unusually quickly to form teams to procure and roll out AI assistants, then built custom dashboards, surveys, and tooling to properly measure ROI and adoption.

The results? Underwhelming, given the investment.

The customers and analysts I speak with typically cite roughly a 10% productivity gain. Autocomplete suggestions were often reported as distracting or inapplicable to the codebase or language at hand.

Despite the disappointment from this initial top-down push, developers are not rejecting AI. In fact, many organizations have reported a bottom-up push to pilot IDEs like Cursor or CLI agents like Claude Code. Over the past two years, tools like GitHub Copilot have become the status quo. Now, developers are asking: what’s next?

Lesson learned: AI needs to be native to the developer experience, not bolted on.

Dedicated teams are forming and rebranding

Enterprises continue to make a significant investment in AI adoption, particularly around software development. Because of this, many Developer Productivity (DevProd) and Developer Experience (DevEx) teams are rebranding or splitting to include “AI” in their name. These teams are responsible for the safe, efficient, and impactful introduction of AI tools. One of our partners, DX, spends a lot of time researching, writing, and podcasting about this exact topic.

At most organizations, this cannot just be attributed to a top-down mandate around AI adoption. Most developers enjoy using AI in their workflows, as shown by the widespread adoption of IDEs like Cursor and Windsurf.

At Coder, we have felt this too. Over the past two years, our conversations have shifted from provisioning standard tooling to AI assistants like Copilot and IDEs like Cursor. Now, many teams are looking for additional ways to introduce AI into the software development lifecycle beyond the IDE. Enterprises look to leverage remote coding agents for pull request reviews, code discovery and onboarding, prototyping, small bug fixes, documentation, on-call issue triage, and debugging.

Lesson learned: The developer experience (DevEx) profession now encompasses AI tooling.

Leading enterprises prioritize AI proficiency

Despite all the hype, AI is still in its infancy and warrants a “crawl, walk, run” approach. We’ve seen many strategies that jump straight to “run” fail, despite being tied to clear business outcomes. Dedicated task forces around running “teams of AI agents for tech debt remediation” tend to miss the point when developers across the org lack day-to-day AI proficiency in their workflows.

On the other hand, organizations with a strong learning and experimentation culture are seeing great results. By introducing new tools regularly and sharing findings widely, they build organizational AI literacy before attempting larger transformations.

Lesson learned: AI success comes from tightly scoped use cases with broadly socialized wins.

Enterprise developers rarely “vibe code”, but their peers do

Business analysts, QA engineers, technical managers, security engineers, UX designers, support engineers, and other coding-adjacent roles are generating massive amounts of code for prototyping, writing tests, and accomplishing small tasks between meetings.

On the other hand, many enterprise software engineers we speak with remain skeptical that today’s autonomous coding agents (or “vibe coding”) can augment real work in day-to-day software engineering workflows. Software engineers often report that agents require heavy supervision, and therefore cannot run productively in the background for long stretches. Additionally, many report that agents fail without extremely clear requirements, or simply write code that is not well-suited for production systems.

That said, most agree the potential is clear: as models improve and gain access to private, enterprise-specific context, AI will become more effective for everyday tasks. The real shift will come when it is deliberately integrated into cross-functional workflows, such as helping QA generate targeted tests for developers, enabling security teams to prepare actionable fixes, and allowing product managers to deliver code-ready requirements. In these contexts, AI’s contributions become clearer, more relevant, and more impactful.

Lesson learned: Vibe coding isn’t for software engineers; it’s for citizen developers, and it’s useful.

IT leaders are optimistic for compliance “change agents”

Interestingly, leaders in IT, security, and architecture are often the most optimistic about the potential of coding agents to improve code quality, tests, documentation, and security practices in the enterprise. We often see agents discussed in the context of promoting best practices or modernization.

This is likely due to many past failed attempts to introduce DevOps and platform engineering efforts in the enterprise, combined with the challenging dependency on software engineers to patch vulnerabilities, document their services, and handle other “chores.” Instead of manually bugging developers and PMs to prioritize chores in a sprint, IT leaders are hopeful that fleets of AI agents can help “nudge” projects into better compliance.

Lesson learned: Coding agents can happily do the grunt work that detracts from a good DevEx.

DevEx and security concerns hold back AI coding agents

Most teams we speak with are still looking for advice on how to adopt and secure autonomous coding agents, primarily due to unclear developer experience and security concerns.

On the developer experience side, organizations are hesitant to introduce coding agents without clear and practical use cases within their software development lifecycle. For example, many coding agents struggle with monorepos or cannot fully understand requirements for most issues. Developers need a clear understanding of the capabilities of AI agents and when and how to use them. Without clear expectations, environments, and third-party integrations, many AI agents end up feeling impractical for day-to-day use beyond a simple demo.

Security and governance are another concern, especially with agents running on remote servers. Topics such as permissions/identity, environment sandboxing, configuring network firewalls, token cost controls, and MCP security are top-of-mind for security teams. The two most common risks enterprises identify are:

  1. Data leaks: Source code or intellectual property is accidentally leaked to the public by an autonomous AI agent
  2. Destruction of property: Databases, repos, or additional proprietary data are accidentally deleted by an autonomous AI agent
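Both risks can be mitigated by sandboxing the agent's execution environment. As a hypothetical sketch (the image name and agent command below are placeholders, not any specific vendor's product), a container with no network egress and a read-only source mount addresses each risk directly:

```shell
# Hypothetical sketch: run a coding agent inside a locked-down container.
# "example/coding-agent" is a placeholder image, not a real product.
docker run --rm \
  --network none \
  --read-only \
  --tmpfs /tmp \
  -v "$PWD/repo:/work:ro" \
  example/coding-agent review /work

# --network none       : no egress, mitigating accidental data leaks (risk 1)
# --read-only + :ro    : the agent cannot delete or modify the checkout (risk 2)
# --tmpfs /tmp         : writable scratch space only, discarded on exit
```

Real deployments layer on more than this, such as egress allow-lists instead of a full network block, identity-scoped credentials, and token spend limits, but the principle is the same: constrain what the agent can reach and what it can destroy.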

Organizations that are successfully operationalizing agents, such as Anthropic, are evaluating these topics from first principles. Other enterprises are waiting for vendors such as Devin and Coder (or the agents themselves) to improve their security postures.

Lesson learned: Getting agents working in the enterprise is mostly about security and governance, not AI maturity.

Enterprise reference architectures are emerging

Many enterprises we speak to have already selected an enterprise-grade LLM provider (e.g., Amazon Bedrock) and procured several AI tools such as GitHub Copilot, Cursor, and CodeRabbit with established policies for risk management and compliance.

In the coding agent landscape, thought leaders such as Simon Willison write about security models (e.g., the lethal trifecta for AI agents). The teams behind agents like OpenAI Codex, Google Jules, and Anthropic’s Claude Code are documenting best practices for securing their respective agents, both locally and in remote environments.

While AI is nowhere near ubiquitous in the enterprise software development lifecycle, DevEx teams and vendors alike are rushing to fill functional gaps around developer experience, security, identity management, sandboxing, and access to private data/tools.

For our part, Coder provides teams a single place to set up and manage development environments with clear access controls, isolated workspaces, and secure connections to internal code and any popular agent. It works with existing logins, applies company security rules, and helps developers work in a consistent, reliable setup.

Lesson learned: Successful enterprises design purposeful AI development infrastructure.

Next steps

Check out our recent webinar with the DevEx team at Anthropic, where we discuss how aggressive AI adoption and a security-by-design approach are shaping the next phase of enterprise coding agents. We cover real-world patterns like multi-agent workflows, “docs to demos,” and embedding AI across the development lifecycle, from prototyping to on-call playbooks for production.

Subscribe to our newsletter

Want to stay up to date on all things Coder? Subscribe to our monthly newsletter and be the first to know when we release new things!