Mar 6 2026

When AI Feels Closer Than It Really Is

David Fraley

The conference.

I just got back from DeveloperWeek, where fellow Coder PM Stephen Kirby and I presented on secure infrastructure for AI agents - material we discuss with customers every day and constantly refine with other PMs and our engineering team.

Walking up to the stage and seeing the crowd, with all the energy everyone carried, was a little nerve-wracking. Then we were on stage, giving our presentation, and all was going well until we hit the first demo.

Our recorded video couldn’t play because of connection issues, so I was left standing in front of ~100 people trying to verbally describe the Coder product experience. I’m a good storyteller, but not that good…

The audience was a cross-section of the people that companies like ours work with every day. Engineers and PMs trying to learn how to effectively and securely roll out AI to their teams. Founders thinking through governance at scale. Leaders who could immediately effect organizational change sitting next to individual contributors trying to figure out how to positively influence the governance stance of their multi-thousand-person organization. It was a good reminder of the full spectrum of people actually doing this work.

Come the next day, Kirby and I were back at it on a panel with some real industry heavy-hitters. We quickly found common ground and started to dig into our own specialties within AI and what its future looked like. The conversation was good. Genuinely good. And somewhere in the middle of it, I started to notice something.

Working at Coder, I see AI usage day in, day out. Our research team pushes the limits of what AI-enabled development looks like, and our feature development teams constantly refine how to add AI into more process-heavy workflows. But this experience isn’t unique to me.

The bubble.

Everyone at DeveloperWeek was fluent in AI. Terms like MCP, agentic workflows, and context window were familiar to everyone. One attendee was using agents to bucket, review, and surface PRs for maintaining a repository. They were also using OpenClaw to surface inconsistencies in their tax filings. It felt, for a moment, like we were already in some end-game version of AI integrating into every part of our lives.

That feeling didn't last.

A common thread kept surfacing across panels and hallway conversations: this isn’t the world’s status quo, so how can I up-level my team to be there with me? These were people who understood the technology deeply. But they worked inside organizations still wrestling with policy reviews, security approvals, and the basic question of whether AI belonged in their workflows at all. For these organizations, AI was still an experiment, not a known workflow with clear use cases and implementation methods. AI is a company conviction without a playbook.

The conference had created an illusion. It compressed the vanguard of AI adoption into a single room and made it feel like the norm. But the data tells a different story. According to McKinsey's 2025 State of AI survey, only 23% of organizations are actively scaling agentic AI systems anywhere in their enterprise. BCG found that 74% of companies have yet to demonstrate any tangible value from AI investments. The room at DeveloperWeek was not representative. It was exceptional.

The dissonance.

At Coder, we evaluate AI fluency on a scale from low to high. At the lower end are organizations with very limited AI usage; the high end we refer to as multipliers, or AI power users. I realized that at this conference, we were all “multipliers.” The real world, however, is filled with people still finding their equilibrium with AI.

That gap isn't ignorance. The work of adopting AI inside a large organization is genuinely hard. You have to identify the workflows where AI actually helps. You have to teach people to use new tools without disrupting what's already working. You have to move policy, earn trust, and manage the reality that culture moves slower than technology. Any one of those things takes real effort. All of them together take time.

It struck me while I was on stage that I was talking about securing agents and governing infrastructure at a conference where many of the attendees’ own organizations have yet to figure out how to get Claude Code into their engineers’ hands.

AI adoption will take time, effort, and care. Coder works through this every day with customers, and it’s a chief reason we segment customers by their AI fluency: the pressures, levers, and people involved change as AI adoption grows and matures. It’s a specialty focus that demands real attention to get right.

The future.

At DeveloperWeek, it felt like the future was already here. Agents were helping with taxes, reviewing pull requests, and surfacing inconsistencies. Everyone spoke fluent AI, and I felt momentarily behind the curve.

But conferences compress the future into 72 hours. They gather the early adopters and builders, and concentrate them into a single room. That concentration creates the illusion that transformation is universal. But it isn’t.

In reality, AI adoption will be uneven. That became obvious to me somewhere between my failed demo and the panel discussion.

But here's what I keep coming back to: that unevenness is an opportunity. Most companies are still early. They're still figuring out what good looks like, and they need guides who have actually navigated the path. The people sitting in that conference room who are ahead of their organizations, pushing for better tooling and smarter governance, those are the people who build reputations. Their success becomes the company's success. And the infrastructure decisions they champion now will define how their teams work for the next decade.

Outside those walls, most companies are still figuring out policy — that's my day-to-day. They're still navigating security reviews. Still deciding whether AI belongs in their workflow at all. I don't believe that to be ignorance, but rather something simpler. It's human. It's the reality of groups of people trying to do something larger than themselves. Technology moves fast. People move slower. Culture moves slower still.

Fitting AI into our processes and lives will be harder than the actual technology itself. This is not to say that AI is easy — it's a profoundly difficult, complex, and fantastical technology. But the real changes occur when people and organizations have to rewrite workflows, adjust incentives, retrain instincts, and renegotiate trust. That gap between capability and coordination is where most companies actually live. It's the gap worth taking seriously. And it’s why I find the work at Coder meaningful. Helping organizations navigate the human and organizational side of AI adoption, not just the technical one, is what we actually do. Not just for the multipliers who are already running, but for the teams still figuring out the path.

So if you aren’t building agentic workflows today, that doesn’t mean you’re behind. It means you’re somewhere along the AI fluency curve, learning all the same.

The technology will keep moving. The harder question is whether we’re building the right conditions to bring everyone along with us.
