Claude Code Channels Changed How I Work Away From the Terminal
A month ago I couldn't leave my laptop during a build. Three features in four weeks fixed that.
Deep coverage of AI agent architecture, context engineering, and developer workflows.
57 posts
I thought a single SKILL.md file was enough. Then I saw how Anthropic's own team structures theirs, and rebuilt everything.
I spent a weekend stuffing 100MB of PDFs into an agent. Performance got worse. Mapping what I was feeding into four categories finally showed me why.
I tested dozens of design skills for AI coding agents. Most didn't last a week. These 12 are the ones I still use.
I spent a year getting wildly inconsistent results from Claude Code and Codex. Three spec files, each with a distinct role, fixed it.
Agents writing code is just the start. To review PRs and explain architecture to teammates, you need visualization tools.
Subscribing puts you in the top 0.3%. These five configurations — agents, teams, MCP, monitoring, automation — push you into the top 0.01%.
I classified every term I kept encountering while using Claude Code and Codex daily. Five groups emerged, and they map the entire system these tools run on.
I dug into SDK type definitions and system prompts for both tools. The 29 vs 7 gap isn't about feature count. It's about two fundamentally different answers to the same question: how should an AI coding agent interact with your system?
After a year of agent-assisted development, I found that structured spec files fixed the inconsistency problem better than any prompt technique.
I reverse-engineered how Codex handles context overflow compared to Claude Code. The answer involves AES encryption, session handover patterns, and KV cache tricks.
Shopify CEO Tobi Lütke built QMD, an open-source search engine. Connect it to Claude Code and every session gets persistent memory.
Anthropic's Claude Code team rebuilt their tools three times. Fewer tools made the AI perform better. Here are four hard-won design principles.
Your AI isn't getting dumber. Your main session is overloaded. Sub-agents keep it lean and accurate for over an hour.
A race condition between Auto Memory and context compaction in Claude Code v2.1.59–v2.1.61 broke prompt caching and corrupted sessions. Anthropic reset all weekly limits as compensation.
Agentation gives AI agents pixel-perfect visual feedback via CSS selectors. Readout replays Claude Code sessions like video. Together they eliminate the two biggest friction points in AI-assisted frontend development.
An open-source context engineering skillset just crossed 10k GitHub stars. After applying it to my own agent stack, I finally understand why agents fail.
When an agent repeats the same failing API call, code review won't help. Traces are the new source code for debugging AI agents.
New benchmark data shows AGENTS.md and CLAUDE.md context files actually hurt coding agent performance. Sometimes laziness is the best engineering decision.
Three companies updated their coding agents at the same time. The directions overlap. The real battleground isn't models; it's how fast they absorb developer workflows.
From the SaaSpocalypse to model-specific silicon, five bold predictions for where AI is heading in 2026, with roughly 50% confidence of getting them right.
My API costs jumped 10x when the cache broke in production. The same day, Anthropic engineers explained exactly why.
What LangChain's Terminal Bench results and the hashline format experiment revealed. The same model flipped leaderboard rankings, and the reasons came down to three things: prompts, tools, and middleware.
From Cloudflare and Vercel's Markdown for Agents to Google's WebMCP, reading and writing are being standardized simultaneously, ushering in the Agent-Native Web era.
Five SKILL.md body writing principles buried in Anthropic's official documentation. From separating description and body roles to embedding verification loops.
Peter Steinberger joining OpenAI isn't just a talent grab. It signals the dawn of AI-native messengers that could redefine how we communicate.
OpenAI's Codex team built a 1M-line codebase using only AI agents. Here are the five harness engineering principles they discovered along the way.
A practical guide to Claude Code's new multi-agent teams feature: activation, keyboard shortcuts, terminal compatibility, task management, and known limitations.
OpenAI and Google are racing to launch affordable AI plans while Chinese competitors shatter price floors. Here's why this moment is your best entry point.
Anthropic's Tariq Shihipar breaks down what it actually takes to build production-grade agents - from Bash-first tooling to file-system-driven context engineering.
Anthropic launches Cowork, an autonomous agent that reads, edits, and creates files on your local machine. Vibe coding meets vibe working.
Why $300B evaporated from SaaS stocks as ChatGPT and Claude race to become the AI app store - and what the 2008 mobile wars tell us about what comes next.
Boris Cherny's workflow hit 5K likes in 2 hours. His setup is simpler than you'd expect - parallel sessions, plan mode, CLAUDE.md, and verification loops.
An Anthropic hackathon winner's 10-month Claude Code configuration - context management, hooks, subagents, and the principles that actually matter.
Six Claude Code skill combinations that let a small team run a full-stack business - from marketing and video to UI design and code quality.
After installing hundreds of AI coding agent skills, only 4 made it into my daily workflow. Here's what survived the weekend audit.
Claude Code renamed Todo to Task. It looks like a small change, but it marks the beginning of a completely different system - one built for AI swarms.
A game-style status bar for Claude Code that shows context usage, active tools, sub-agents, and todo progress in real time.
Clawdbot proved that AI agents running locally on your own hardware can replace messenger apps. Here's why that threatens every chat platform.
Connecting Context7 via MCP floods your main context with docs. Skills and subagents isolate queries, keeping long coding sessions stable.
Why YC and OpenClaw leaders believe software is being rebuilt for agents - and what it means for developers building products right now.
With AI reading 50% of developer docs and bot traffic outpacing humans 3-to-1, services are racing to package their knowledge as agent skills. Here's what's driving the shift.
Andrej Karpathy admits he's never felt this behind as a developer. Here's the new AI agent abstraction layer he says you must master - or risk falling 10x behind.
The file-based memory system behind Manus's $2.5 billion valuation is now a free Claude Code skill. Here's why it matters for every AI agent builder.
Manus shared the hard-won lessons behind building production AI agents - from context rot to evaluation rethinking - in a joint presentation with LangChain.
Meta acquired Manus for $3.6 billion. The secret wasn't a bigger model - it was context engineering. Here's what most AI agents get wrong.
Not all multi-agent patterns are equal. Learn when subagents, skills, handoffs, and routers actually outperform a single agent - with real scenarios and numbers.
Orchestration patterns, communication methods, memory management, and production pitfalls - a practical breakdown of everything I struggled with when designing multi-agent systems.
A deep dive into Oh-My-OpenCode's multi-agent orchestration architecture - how programmatic context isolation, parallel execution, and evidence-based research are redefining what AI coding agents can do.
Opencode's open-source documentation doubles as an introductory guide to agent architecture. Here are the seven core concepts every developer should understand.
How a Claude Code plugin named after Ralph Wiggum is redefining autonomous coding through iterative loops, memory architecture, and stop hooks.
Six battle-tested AI agent patterns that emerged globally in one month - from persistent loops to multi-agent orchestration.
Context engineering took the world by storm in early 2026. Here are six battle-tested principles from Manus, Cursor, and Claude Code that define modern AI agent development.
Peter Steinberger, who built GitHub's fastest-starred project, shares 10 hard-won principles for working with AI coding agents.
Anthropic replaced TodoWrite with Tasks and Slash Commands with Skills in two days. Both changes point in the same direction - unhobbling the model.
Claude Code and AI avatar apps prove users want results, not complex interfaces. Here is what actually disappears when UI gets abstracted away, and what remains.
How I use Ghostty, Yazi, Fish, and LazyGit to run multiple AI agents in parallel - a lightweight terminal stack built for agentic workflows.