What Claude Code's Task System Reveals About the AI-Native Engineer
Claude Code renamed Todo to Task. It looks like a small change, but it marks the beginning of a completely different system: one built for AI swarms.
Claude Code quietly renamed Todo to Task. The terminology shift looks minor. The architectural shift underneath it is not.
Todo was a list that Claude maintained by itself, a single agent’s personal memory. Task is a unit of work shared across multiple agents. That distinction changes what kind of tool Claude Code is trying to be.
Delegation, Not Automation
The old Claude Code was a single brain. Hand it a complex project and it would forget earlier steps midway through, forcing you to restart around the 60% mark over and over. This wasn’t a prompt engineering problem. It was a memory architecture problem.
The new Task system has a fundamentally different structure. You talk to a team leader that plans, delegates, and synthesizes rather than writing code directly. Once you approve the plan, specialist agents are spawned and work in parallel.
This is delegation. Automation means scripting a known sequence. Delegation means defining outcomes and trusting a structured team to figure out the execution path. The difference matters because the failure modes are different: bad automation produces wrong outputs reliably; bad delegation produces unpredictable outputs that are harder to catch.
Dependency Graphs Are the Real Mechanism
The critical feature of the Task system is inter-task dependencies (blockedBy). Task 3 cannot start until Tasks 1 and 2 are complete.
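Claude Code's internal schema isn't public, but the mechanism is easy to sketch. The following is a hypothetical Python model (the `Task` class and `ready_tasks` helper are invented for illustration): a task is eligible to run only when every task in its `blocked_by` list has completed.

```python
from dataclasses import dataclass, field

# Hypothetical sketch -- not Claude Code's actual schema.
# A task is blocked until everything in blocked_by has completed.
@dataclass
class Task:
    id: int
    description: str
    blocked_by: list[int] = field(default_factory=list)
    done: bool = False

def ready_tasks(tasks: dict[int, Task]) -> list[Task]:
    """Tasks whose dependencies are all complete and which aren't done yet."""
    return [
        t for t in tasks.values()
        if not t.done and all(tasks[d].done for d in t.blocked_by)
    ]

tasks = {
    1: Task(1, "research the API"),
    2: Task(2, "draft the schema"),
    3: Task(3, "implement the endpoint", blocked_by=[1, 2]),
}
print([t.id for t in ready_tasks(tasks)])  # → [1, 2]
tasks[1].done = tasks[2].done = True
print([t.id for t in ready_tasks(tasks)])  # → [3]
```

Task 3 stays invisible to the scheduler until both of its dependencies report done, which is all "blockedBy" has to mean.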
Previously, Claude had to hold the entire plan in its context window. As context grew longer, it naturally lost pieces of the plan. The longer the session, the more drift accumulated.
Now the plan itself is externalized and structured. Even when context gets compressed or an agent is swapped out, the plan survives. The dependency graph acts as a persistent coordination layer that outlives any individual agent’s memory.
Parallel Processing as a Byproduct
Assign seven to ten tasks and the system no longer processes them sequentially. Tasks without dependencies run concurrently. Fast searches go to Haiku, implementation goes to Sonnet, complex judgment calls go to Opus. Model allocation happens automatically based on task characteristics.
This parallelism is a direct consequence of structured task design. The more cleanly you decompose work and define dependencies, the more concurrency the system extracts. You don’t optimize for parallelism explicitly; you get it as a byproduct of clear task architecture.
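To make that concrete, here is a minimal sketch of the idea, assuming invented task tuples and a toy routing heuristic; the actual dispatch logic inside Claude Code is not public. Tasks with no dependencies between them can simply be submitted together, and the executor runs them concurrently.

```python
import concurrent.futures

# Illustrative only: the routing table and tasks below are invented
# for this sketch, not Claude Code's actual dispatch logic.
def pick_model(kind: str) -> str:
    # The article's claim: fast searches go to Haiku, implementation
    # to Sonnet, complex judgment calls to Opus.
    return {"search": "haiku", "implement": "sonnet", "judge": "opus"}[kind]

def run_task(name: str, kind: str) -> str:
    # Stand-in for dispatching a sub-agent on the chosen model.
    return f"{name} -> {pick_model(kind)}"

# Three tasks with no dependencies between them: all run at once.
independent = [
    ("grep for existing usages", "search"),
    ("write the migration", "implement"),
    ("review the API tradeoff", "judge"),
]

with concurrent.futures.ThreadPoolExecutor() as pool:
    results = list(pool.map(lambda t: run_task(*t), independent))
print(results)
```

Note that nothing here asks for parallelism directly; it falls out of the fact that the three tasks share no edges in the dependency graph.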
Three Patterns From the Swarm Documentation
The Swarm documentation describes distinct workflow shapes:
Parallel Specialists run multiple experts simultaneously: security review, performance analysis, and type-checking all happen at once rather than in sequence.

Pipeline structures work sequentially: research flows into planning, planning into implementation, implementation into testing, each stage depending on the previous one.

Self-Organizing Swarms have agents pull from a shared task pool, picking up whatever is unblocked and unassigned, with no central coordinator.
The work of an engineer in this model is no longer writing code. It’s designing which agents do what, in what order, with what dependencies between them.
What Actually Determines Swarm Performance
Three levers matter for optimizing swarm performance.
Task granularity: smaller tasks increase the parallelization rate, but inter-agent communication overhead grows with each split.

Role separation: specialization improves quality but creates potential bottlenecks at agents with disproportionate workloads.

Dependency design: structuring what must finish before what can start, without introducing unnecessary blocking.
From my own experiments, dependency design has the most significant impact, and it’s also the hardest to get right. Task granularity and role separation are relatively intuitive. Dependency design requires thinking about the shape of work itself: what can actually run in parallel, what genuinely requires sequencing, and where you’re creating phantom dependencies that serialize work unnecessarily.
I’ve gotten this wrong more than I expected. Overly conservative dependency graphs eliminate most of the parallelism benefit. The instinct to make everything sequential because it feels safer is exactly the instinct to resist.
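The cost of those phantom dependencies is easy to quantify in a toy model. Assuming unit-time tasks and unlimited parallel agents (both simplifying assumptions, and both graphs below are invented examples), wall-clock time equals the longest dependency chain, so every unnecessary edge that lengthens a chain is time you never get back.

```python
from functools import lru_cache

# Toy model: unit-time tasks, unlimited agents. Total wall-clock time
# is the depth of the longest dependency chain (the critical path).
def makespan(graph: dict[str, list[str]]) -> int:
    @lru_cache(maxsize=None)
    def depth(task: str) -> int:
        deps = graph[task]
        return 1 + (max(map(depth, deps)) if deps else 0)
    return max(map(depth, graph))

# Five tasks serialized "to be safe": each waits on the previous one.
conservative = {"a": [], "b": ["a"], "c": ["b"], "d": ["c"], "e": ["d"]}
# Same tasks keeping only the one genuine dependency: e needs a's output.
minimal = {"a": [], "b": [], "c": [], "d": [], "e": ["a"]}

print(makespan(conservative))  # → 5
print(makespan(minimal))       # → 2
```

Same five tasks, same work; the conservative graph takes 5 units while the honest one takes 2, which is the whole argument for resisting the everything-sequential instinct.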
What the Engineer’s Job Becomes
The progression is clear: first we wrote code, then we designed systems, now we design the structure of work itself.
The shift from Todo to Task is small on the surface. Underneath, it’s the scaffolding for a model where an engineer’s primary output is the architecture of coordination between agents, not the code those agents produce.