5 posts tagged with "claude-code"

Claude Code Doesn't Index Your Codebase. Here's What It Does Instead.

21 min read
Vadim Nicolai
Senior Software Engineer

Last verified: March 2026

Boris Cherny's team built RAG into early Claude Code. They tested it against agentic search. Agentic search won — not narrowly. A Claude engineer confirmed it in a Hacker News thread: "In our testing we found that agentic search outperformed [it] by a lot, and this was surprising."

That thread is the clearest primary source on how Claude Code actually works — and why it works that way. Most articles on the topic paraphrase it from memory. This one starts from the source.

Q: Does Claude Code index your codebase?

A: No. Claude Code does not pre-index your codebase or use vector embeddings. Instead, it uses filesystem tools (Glob for file pattern matching, Grep for content search, and Read for loading specific files) to explore code on demand as it works through each task. Anthropic calls this "agentic search."
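
To make "agentic search" concrete, here is a minimal sketch of that explore-on-demand loop in Rust. It is illustrative only, not Claude Code's implementation: the `glob` and `grep` functions below are simplified stand-ins for the real tools, and `"TaskQueue"` is an arbitrary example query.

```rust
use std::fs;
use std::path::{Path, PathBuf};

/// Recursively collect files with a given extension (a toy stand-in for Glob).
fn glob(dir: &Path, ext: &str, out: &mut Vec<PathBuf>) {
    if let Ok(entries) = fs::read_dir(dir) {
        for entry in entries.flatten() {
            let path = entry.path();
            if path.is_dir() {
                glob(&path, ext, out);
            } else if path.extension().map_or(false, |e| e == ext) {
                out.push(path);
            }
        }
    }
}

/// Keep only files whose contents mention `needle` (a toy stand-in for Grep).
fn grep(paths: &[PathBuf], needle: &str) -> Vec<PathBuf> {
    paths
        .iter()
        .filter(|p| fs::read_to_string(p).map_or(false, |s| s.contains(needle)))
        .cloned()
        .collect()
}

fn main() {
    // The agent narrows scope step by step instead of querying a prebuilt index:
    // 1. Glob for candidate files, 2. Grep to filter them, 3. Read what survives.
    let mut candidates = Vec::new();
    glob(Path::new("src"), "rs", &mut candidates);
    for hit in grep(&candidates, "TaskQueue") {
        // Read loads the full file into context only for the surviving matches.
        let contents = fs::read_to_string(&hit).unwrap_or_default();
        println!("{} ({} bytes)", hit.display(), contents.len());
    }
}
```

The shape is what matters: each step narrows scope with cheap filesystem operations, so nothing is embedded or indexed ahead of time.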

Two Paradigms of Multi-Agent AI: Rust Parallel Agents vs Claude Code Agent Teams

28 min read
Vadim Nicolai
Senior Software Engineer
TL;DR

Three approaches to multi-agent coordination, one codebase. A static Rust/Tokio fan-out assigns 20 agents at compile time with zero coordination overhead. A team.rs library implements the full Claude Code agent-teams model in pure Rust (TaskQueue, Mailbox, PlanGate, ShutdownToken), and the study pipeline now uses it to run a two-step search→write flow with inter-worker messaging. Claude Code agent teams invert every assumption of static fan-out: dynamic task claiming, file-locked concurrency, full bidirectional messaging. The decision rule is one question: do your agents need to talk to each other? If no, use tokio::spawn + Arc<T>. If yes, build team.rs or use TeamCreate.
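
As a sketch of the "no coordination" arm of that decision rule (assuming the tokio crate with the rt-multi-thread and macros features; the workload and numbers are illustrative, not the article's actual pipeline):

```rust
use std::sync::Arc;

#[tokio::main]
async fn main() {
    // Shared read-only input: Arc gives each task a cheap handle, and no locks
    // are needed because no agent ever writes to it.
    let corpus: Arc<Vec<String>> =
        Arc::new((0..100).map(|i| format!("document {i}")).collect());

    // Static fan-out: the agent count is fixed at compile time (here, 20),
    // and the tasks never communicate with one another.
    let handles: Vec<_> = (0..20)
        .map(|agent_id| {
            let corpus = Arc::clone(&corpus);
            tokio::spawn(async move {
                // Each agent processes a disjoint slice of the shared input.
                let chunk = corpus.len() / 20;
                let work = &corpus[agent_id * chunk..(agent_id + 1) * chunk];
                work.len() // stand-in for real per-agent work
            })
        })
        .collect();

    // Join the results; no mailbox, no task queue, no coordination overhead.
    let mut total = 0;
    for h in handles {
        total += h.await.expect("agent task panicked");
    }
    println!("processed {total} documents across 20 agents");
}
```

Each agent gets its slice at spawn time and never messages another agent; the moment agents need to claim tasks dynamically or talk to each other, this pattern runs out, which is the gap team.rs and TeamCreate fill.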

Multi-agent AI engineering has become a core discipline in production software development. The interesting question is no longer whether to build multi-agent systems but how, and specifically which architectural pattern to reach for given the nature of the work. The clearest demonstration is a single codebase in which multiple fundamentally different paradigms live side by side.

Why Do AI Agents Keep Making the Same Mistakes?

8 min read
Vadim Nicolai
Senior Software Engineer

Every Claude Code session leaves a trace — tool calls made, files read, edits applied, errors encountered, and ultimately a score reflecting how well the task was completed. Most systems discard this history. We built an agent that mines it.

The Trajectory Miner is the first agent in our six-agent autonomous self-improvement pipeline for nomadically.work, a remote EU job board aggregator. Its job: analyze past sessions, extract recurring patterns and reusable skills, and feed structured intelligence to the rest of the team. It writes no code. It produces raw material that other agents — the Codebase Auditor, Skill Evolver, and Code Improver — consume.
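
A minimal sketch of the mining step, with a hypothetical trace shape, score cutoff, and error string (the pipeline's actual schema is not shown here):

```rust
use std::collections::HashMap;

/// One recorded step from a past session (hypothetical shape).
struct TraceEvent {
    tool: String,          // e.g. "Read", "Edit", "Bash"
    error: Option<String>, // present when the tool call failed
}

/// A completed session: its events plus the score it earned.
struct Session {
    events: Vec<TraceEvent>,
    score: f64,
}

/// Rank (tool, error) pairs by how often they recur in low-scoring
/// sessions, so downstream agents see the dominant failure modes.
fn mine_recurring_errors(
    sessions: &[Session],
    score_cutoff: f64,
) -> Vec<((String, String), usize)> {
    let mut counts: HashMap<(String, String), usize> = HashMap::new();
    for session in sessions.iter().filter(|s| s.score < score_cutoff) {
        for event in &session.events {
            if let Some(err) = &event.error {
                *counts
                    .entry((event.tool.clone(), err.clone()))
                    .or_insert(0) += 1;
            }
        }
    }
    let mut ranked: Vec<_> = counts.into_iter().collect();
    ranked.sort_by(|a, b| b.1.cmp(&a.1)); // most frequent first
    ranked
}

fn main() {
    // Made-up session data, purely to exercise the miner.
    let sessions = vec![Session {
        score: 0.2,
        events: vec![TraceEvent {
            tool: "Edit".into(),
            error: Some("edit target not found".into()),
        }],
    }];
    for ((tool, err), n) in mine_recurring_errors(&sessions, 0.5) {
        println!("{n}x {tool}: {err}");
    }
}
```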

The design draws from four research papers, curated from the VoltAgent/awesome-ai-agent-papers collection. Here is what each paper contributes and how we translated academic ideas into a working system.

How I Built a UX Team with Claude Code Agent Teams

16 min read
Vadim Nicolai
Senior Software Engineer
TL;DR

Set CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1 in .claude/settings.json. Write a command file in .claude/commands/ and spawn prompts in .claude/team-roles/. Type /ux-team and three agents (UX Lead, UX Researcher, UI Designer) run in parallel: the researcher defines personas and journeys, the designer builds the component system, and the lead synthesizes both into a spec. File ownership is enforced by persona, not by filesystem. BMAD Method v6 provides the Sally persona and a quality-gate checklist that runs before the spec is marked complete.
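
For reference, a minimal `.claude/settings.json` carrying that flag might look like the following (the `env` block sets environment variables for Claude Code sessions; the flag is experimental and may change between releases):

```json
{
  "env": {
    "CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS": "1"
  }
}
```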

BMAD Method + Langfuse + Claude Code Agent Teams in Production

16 min read
Vadim Nicolai
Senior Software Engineer

Running AI agents in a real codebase means solving three intertwined problems at once: planning and quality gates (so agents don't drift), observability (so you know what's working), and orchestration (so multiple agents divide work without clobbering each other). In nomadically.work — a remote EU job board with an AI classification and skill-extraction pipeline — these problems are solved by three complementary systems: BMAD v6, Langfuse, and Claude Code Agent Teams. This article explains how each works and how they compose.