Claude Code Channels Changed How I Work Away From the Terminal
A month ago I couldn't leave my laptop during a build. Three features in four weeks fixed that.
Simple thoughts on building, designing, and shipping.
I thought a single SKILL.md file was enough. Then I saw how Anthropic's own team structures theirs, and rebuilt everything.
I spent a weekend stuffing 100MB of PDFs into an agent. Performance got worse. Mapping what I was feeding into four categories finally showed me why.
I tested dozens of design skills for AI coding agents. Most didn't last a week. These 12 are the ones I still use.
I built skills, configured subagents, and set up slash commands. Then a single loop running overnight outperformed all of it. Three loop architectures that actually deliver.
I spent a year getting wildly inconsistent results from Claude Code and Codex. Three spec files, each with a distinct role, fixed it.
Agents writing code is just the start. To review PRs and explain architecture to teammates, you need visualization tools.
Subscribing puts you in the top 0.3%. These five configurations — agents, teams, MCP, monitoring, automation — push you into the top 0.01%.
I classified every term I kept encountering while using Claude Code and Codex daily. Five groups emerged, and they map the entire system these tools run on.
I dug into SDK type definitions and system prompts for both tools. The 29 vs 7 gap isn't about feature count. It's about two fundamentally different answers to the same question: how should an AI coding agent interact with your system?
Have a project in mind or just want to chat?
I'd love to hear from you, and I'm always open to a conversation.