A case study in rapid AI agent development using Coder's open-source platform

Knowledge workers waste 3.2 hours per week searching for information across disconnected systems—that's over 166 hours per year, more than a full month of productivity lost to redundant searching.
The bottleneck isn't search technology. It's that traditional enterprise search requires users to know which tool to open, which query syntax to use, and how to interpret results. What if you could simply ask a question and get a sourced answer in seconds?
This is where agentic AI creates new value. But Gartner warns that over 40% of agentic AI projects will be scrapped by 2027 due to high costs and execution challenges. The missing piece is infrastructure that makes agent development fast, flexible, and production-ready.
Enter Blink: Coder's open-source platform for building and deploying AI agents. To solve real challenges in our own fast-growing business, we built Animus, a customer intelligence agent that unifies all of Coder's customer data into a single conversational interface. It went from concept to production in weeks, not months, and achieved immediate adoption across sales, engineering, product, and marketing teams.
Blink removes infrastructure friction from AI agent development. Developers build and test agents from the terminal with hot-reload, switching between chat mode and edit mode with a single keystroke. When it's time to ship, blink deploy handles everything: no infrastructure code, no container orchestration, no deployment manifests.
The platform provides first-class Slack support, with GitHub and other integrations coming soon. Threading, file uploads, typing indicators, slash commands: Blink's @blink-sdk/slack package handles all of it without boilerplate. The platform works with any LLM and lets you swap models without rewriting application code.
Unlike coding assistants or workflow automation tools, Blink is a complete development platform for building intelligent, event-driven agents that integrate with real-world systems.
Animus consolidates Coder's customer intelligence into a single conversational interface. The challenge was representative of what enterprises face: data scattered across Zoom recordings, Salesforce records, Zendesk tickets, internal documentation, and telemetry systems.
Animus combines three capabilities that work together to answer questions no single system could handle alone.
RAG retrieval from a unified knowledge base. All customer data flows into AWS Bedrock Knowledge Base backed by OpenSearch Serverless. Call transcripts from Zoom, Granola, and internal tools get automatically processed and indexed alongside CRM records covering accounts, opportunities, and 87K+ leads. Support tickets maintain customer context, internal documentation stays clearly separated from customer-stated information, and open-source telemetry exports hourly from Google BigQuery.
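One detail above is worth making concrete: keeping internal documentation separated from customer-stated information at retrieval time. The sketch below is illustrative only — the types and the stubbed retriever are assumptions standing in for the actual Bedrock Knowledge Base call and Animus's real schema.

```typescript
// Hypothetical sketch: retrieved chunks carry a source type so the prompt
// can label customer statements separately from internal claims.
type SourceType = "transcript" | "crm" | "support" | "internal-doc" | "telemetry";

interface Chunk {
  text: string;
  sourceType: SourceType;
  sourceId: string; // e.g. a Zoom recording ID or a Salesforce record ID
}

// Stub standing in for a knowledge-base retrieval call.
function retrieve(query: string): Chunk[] {
  return [
    { text: "Customer asked about SSO on the last call.", sourceType: "transcript", sourceId: "zoom-123" },
    { text: "Internal playbook: lead with the Terraform story.", sourceType: "internal-doc", sourceId: "notion-9" },
  ];
}

// Partition chunks so internal docs never blend into customer-stated context.
function buildContext(query: string): { customerStated: Chunk[]; internal: Chunk[] } {
  const chunks = retrieve(query);
  return {
    customerStated: chunks.filter((c) => c.sourceType !== "internal-doc"),
    internal: chunks.filter((c) => c.sourceType === "internal-doc"),
  };
}
```

Keeping the partition explicit in code, rather than hoping the model infers provenance, is what makes the source attribution discussed later verifiable.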
Structured queries against pre-aggregated summaries. Semantic search alone isn't sufficient for analytical questions. Animus maintains JSON summaries rebuilt by AWS Lambda functions: account-level data like ARR, renewal dates, and feature adoption; opportunity pipeline by stage, region, and owner with win rates; and lead records with conversion tracking.
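To see why semantic search falls short here, consider a question like "what's our win rate by region?" — it needs arithmetic over every closed deal, not a handful of similar documents. A minimal sketch of querying a pre-aggregated summary (field names are assumptions, not Animus's actual schema):

```typescript
// Illustrative shape of one record in the pre-aggregated opportunity summary.
interface OpportunitySummary {
  stage: string;
  region: string;
  owner: string;
  amount: number;
  won: boolean;
  closed: boolean;
}

// Win rate per region over closed opportunities: a structured scan answers
// this exactly, where semantic retrieval could only approximate it.
function winRateByRegion(opps: OpportunitySummary[]): Record<string, number> {
  const totals: Record<string, { won: number; closed: number }> = {};
  for (const o of opps) {
    if (!o.closed) continue; // open deals don't count toward win rate
    const t = (totals[o.region] ??= { won: 0, closed: 0 });
    t.closed += 1;
    if (o.won) t.won += 1;
  }
  return Object.fromEntries(
    Object.entries(totals).map(([region, t]) => [region, t.won / t.closed]),
  );
}
```

Because the summaries are rebuilt by Lambda rather than computed per question, the agent answers analytical queries without touching raw CRM data at request time.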
Multi-persona reasoning. Different questions require different expertise. The agent applies domain knowledge through specialized lenses: Account Executive perspective for deal status and competitive positioning, Sales Engineer thinking for technical requirements and POV frameworks, Product lens for feature adoption signals, and PMM analysis for messaging effectiveness.
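One way to implement this routing — sketched here with invented prompts and keyword rules, not Animus's actual configuration — is to classify the question and select a persona-specific system prompt before calling the model:

```typescript
type Persona = "account-executive" | "sales-engineer" | "product" | "pmm";

// Hypothetical persona prompts; the real lenses would be far richer.
const personaPrompts: Record<Persona, string> = {
  "account-executive": "Focus on deal status, stakeholders, and competitive positioning.",
  "sales-engineer": "Focus on technical requirements, architecture, and POV frameworks.",
  product: "Focus on feature adoption signals and recurring requests.",
  pmm: "Focus on messaging effectiveness and how competitors are discussed.",
};

// Naive keyword classification, purely for illustration.
function pickPersona(question: string): Persona {
  const q = question.toLowerCase();
  if (/(tech stack|architecture|integration|pov)/.test(q)) return "sales-engineer";
  if (/(feature|adoption|request)/.test(q)) return "product";
  if (/(messaging|campaign)/.test(q)) return "pmm";
  return "account-executive"; // deal and account questions are the default lens
}

function systemPrompt(question: string): string {
  return personaPrompts[pickPersona(question)];
}
```

In practice the classification step would itself be a model call, but the principle is the same: the lens is chosen before the answer is generated, not inferred afterward.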
What typically requires months of infrastructure work was operational within weeks. The entire agent runtime lives in a single agent.ts file.
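The shape of such a single-file agent is roughly the following. The types and handler signature here are hypothetical stand-ins, not Blink's actual SDK surface:

```typescript
// Illustrative event-driven agent shape: one file, one handler.
interface AgentEvent {
  channel: string; // e.g. a Slack channel
  thread: string;  // thread identifier for conversation state
  text: string;    // the user's question
}

type Handler = (event: AgentEvent) => Promise<string>;

// The whole runtime is a message handler. The real agent would retrieve
// context and call the LLM here; this sketch echoes to stay self-contained.
const handleMessage: Handler = async (event) => {
  return `Received in ${event.channel}: ${event.text}`;
};
```

Everything else — transport, scaling, state — is the platform's problem, which is the point.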
When the agent needed conversation compaction to handle long Slack threads exceeding Claude's 200K token context limit, the implementation shipped in hours: detect context limit errors, trigger summary generation, replace history with compressed context. Blink handled the infrastructure complexity.
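The compaction logic reduces to a few lines. This sketch stubs the tokenizer and summarizer (the real agent reacts to actual context-limit errors and calls the model to summarize; the threshold check and 4-chars-per-token estimate are simplifying assumptions):

```typescript
interface Turn {
  role: "user" | "assistant";
  content: string;
}

const TOKEN_LIMIT = 200_000; // Claude's context limit, per the text above

// Rough stand-in for a tokenizer: ~4 characters per token.
function estimateTokens(history: Turn[]): number {
  return history.reduce((n, t) => n + Math.ceil(t.content.length / 4), 0);
}

// Stub for an LLM-generated summary of the earlier conversation.
function summarize(history: Turn[]): string {
  return `Summary of ${history.length} earlier turns.`;
}

// Over the limit? Replace old history with a summary, keep recent turns verbatim.
function maybeCompact(history: Turn[]): Turn[] {
  if (estimateTokens(history) < TOKEN_LIMIT) return history;
  const recent = history.slice(-4);
  return [{ role: "assistant", content: summarize(history.slice(0, -4)) }, ...recent];
}
```

Keeping the last few turns verbatim matters: the user's most recent question must survive compression intact, or the "handoff" becomes visible.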
The data pipeline runs on AWS Lambda with Terraform-managed infrastructure. Transcript sync fetches Zoom recordings every 6 hours, processors convert raw formats to searchable text, and indexers build pre-aggregated summaries. This entire pipeline was operational within days of starting development.
Animus achieved organization-wide adoption because it solved real problems with zero training required.
Sales teams transformed deal preparation and account management. Reps ask "Summarize the last three calls with Goldman Sachs" and get chronological context that would take 30 minutes to compile manually. "Customers with renewals in Q1 and recent support escalations" surfaces at-risk accounts immediately. "What are customers saying about [competitor]?" aggregates mentions across hundreds of calls.
Sales engineering gets leverage on technical discovery and reference matching. "What does Dropbox’s tech stack look like?" searches transcripts and CRM simultaneously. "Active technical evaluations over $100K" returns deals with SE involvement. Finding reference customers for specific use cases takes seconds instead of email threads.
Product teams stay close to the customer voice without sitting in on every call. "Top 10 customers using Cursor" ranks by mention frequency and context. Synthesizing enterprise feature requests across hundreds of calls happens with a single question.
Leadership gets real-time visibility into pipeline and customer health. Commit deals by region, accounts with declining engagement, feature requests from enterprise customers over $500K ARR: the questions that inform strategic planning now get immediate, sourced answers.
1. Treat infrastructure as a first-class concern from day one.
Most AI projects don't fail because the AI doesn't work. They fail because teams spend months on authentication, scaling, state management, and deployment before the agent does anything useful. By the time the infrastructure is ready, budgets are exhausted and stakeholders have lost patience.
Blink exists because we learned this lesson building Animus. Choose tools that let you ship a working agent in days, then iterate. If your first version requires custom Kubernetes configs, you're solving the wrong problem first.
2. Design for trust through transparency.
Enterprise users won't adopt agents they can't verify. Every architectural decision in Animus prioritized auditability: source attribution on every answer, clear separation between customer statements and internal claims, full conversation history available for review.
When building your own agents, ask: can users trace any answer back to its source? Can they understand why the agent said what it said? If not, adoption will stall regardless of how accurate the underlying model is.
3. Ship the interface before perfecting the data.
The instinct is to build comprehensive data pipelines before exposing anything to users. Resist it. A conversational interface with incomplete data still provides value and generates feedback that prioritizes future work. A perfect knowledge base with no interface provides nothing.
Animus launched with transcript search and basic CRM integration. Users immediately asked for support ticket context and telemetry data. That feedback shaped the roadmap more effectively than any upfront planning could have.
Most companies have invested in search tools that require users to learn query syntax and manually synthesize results. They've deployed conversation intelligence platforms that capture calls but still require training and configuration. The tools are good, but the friction remains.
Animus represents the next step: conversational interfaces that unify scattered data and provide answers with source attribution. Instead of learning five different tools, teams ask questions in natural language.
The infrastructure to build this exists today. Blink is available in Early Access following its October 2025 launch at Cloudflare Connect. The difference between abandoned proofs-of-concept and production deployments comes down to infrastructure friction. Teams that can iterate from concept to production in weeks will capture the value first.
That's what Blink enables.
Why start with Slack instead of a standalone interface?
Slack is where our teams already work. Building a separate app would require users to context-switch and remember to use it. Deploying to Slack meant Animus was accessible in the flow of work from day one.
How do you handle hallucination risk?
Source attribution is the primary safeguard. When every claim links to a specific transcript or record, users can verify. The multi-persona reasoning also helps: the agent frames answers through domain-specific lenses rather than generating unconstrained responses.
What happens when Slack threads get too long?
Automatic conversation compaction detects when context approaches Claude's 200K token limit, generates a summary of the conversation so far, and continues with compressed context. Users don't notice the handoff.
How much did this cost to build and run?
Development cost was primarily engineering time: one engineer for several weeks. Infrastructure costs are modest: AWS Lambda functions run on-demand, Bedrock Knowledge Base charges by query volume, and Blink Cloud handles deployment. The ROI was immediate given time savings across teams.
Want to stay up to date on all things Coder? Subscribe to our monthly newsletter and be the first to know when we release new things!