AI Learning Digest

Claude Code Ships Native Worktree Support While AI Adoption Stats Reveal 99.7% of the World Hasn't Caught Up

Daily Wrap-Up

The big product news today was Anthropic shipping native git worktree support across Claude Code's CLI, Desktop app, and IDE extensions. This isn't just a convenience feature. Worktrees solve the fundamental collision problem when you want multiple AI agents editing the same repository in parallel, and the fact that it's now built-in rather than requiring manual git gymnastics signals where agent-assisted development is heading: toward genuine concurrency. @mattpocockuk called it good enough to become his new default, and the community response suggests he's not alone.

But the post that will stick with me longest today came from @rryssf_, who wrote what amounts to a research paper arguing that AI memory systems are failing because they're modeled on databases instead of human identity construction. Drawing on Conway's Self-Memory System, Damasio's somatic markers, and Bruner's narrative psychology, the argument is that current approaches (vector stores, conversation summaries, key-value lookups) all miss the hierarchical, emotionally weighted, goal-filtered nature of how humans actually remember. It's the kind of post that makes you stop scrolling and reconsider your architecture. Whether you're building agent memory or just thinking about how your tools retain context, the framing of "identity system vs. retrieval system" is worth internalizing.

On the lighter side, @damianplayer delivered the day's best reality check by calling anyone over 35 a "boomer" while pointing out that most people still don't know what Claude is. Combined with @AoverK's stat that paying $20/month for AI puts you in the global 0.3%, today was a good reminder that the AI bubble discourse is happening inside an incredibly small room. The most practical takeaway for developers: if you're running parallel Claude Code sessions or any multi-agent workflow against a single repo, adopt claude --worktree immediately. It eliminates an entire class of merge conflicts and file-clobbering bugs that have been silently eating productivity.

Quick Hits

  • @dani_avila7 shared a handy Ghostty terminal tip: unfocused-split-opacity = 0.85 in your config makes it obvious which pane has focus when working across multiple splits.
  • @jliemandt claims 43% of Alpha School students chose school over vacation, attributing it to AI-driven mastery learning. Bold claim, no independent verification.
  • @LinusEkenstam posted a showcase of what happens when a creative human learns to direct AI tools effectively. Short on details, long on vibes.
  • @p_misirov flagged a Steam game called "Data Center" that lets you build and manage your own data center. Called it "lowkey genius" as an education tool for understanding hyperscaler infrastructure.
  • @gdb revealed that Codex exposes a local API via codex app-server, which could be interesting for custom integrations.
  • @mnedoszytko thanked Anthropic for a hackathon award at Claude Code's first birthday celebration at SHACK15 in SF.
  • @vasuman shared a link roundup of articles for founders and AI builders. No specific standout.
  • @yacineMTB posted a cryptic "OpenAI won. They did the thing." with zero context. The replies were predictably chaotic.
  • @5eniorDeveloper responded to Sam Altman's energy comments with a Matrix-themed meme about using humans as AI power sources. It landed.
  • @victorianoi mused that in 20 years, vibe coders will look at the Linux kernel repo the way we look at the pyramids, "unable to imagine how they managed to drag all those giant stones." A good line.

Claude Code Gets Native Worktree Support

The biggest product announcement of the day came from @bcherny at Anthropic, who dropped a five-part thread introducing built-in git worktree support for Claude Code. The feature lets agents run in isolated worktrees so multiple parallel sessions can edit the same repository without stepping on each other's changes. It's available across the CLI (claude --worktree), the Desktop app (a simple checkbox in the Code tab), and IDE extensions.

The details matter here. Custom agents can declare isolation: worktree in their frontmatter to always run isolated. Subagents can use worktrees for batched changes and code migrations. And you can pass --tmux alongside --worktree to launch Claude in its own tmux session. As @bcherny put it:

"Each agent gets its own worktree and can work independently."

@mattpocockuk was immediately enthusiastic, posting a demo video and declaring worktrees his new default. He was particularly excited about the subagent parallelization angle:

"Parallelizing subagents makes spawning a bunch of agents to do a lot of work a lot simpler. Especially when merge conflicts are so cheap."

This is a meaningful infrastructure improvement for anyone doing agent-driven development at scale. The previous workaround was manually creating git worktrees and pointing separate Claude sessions at them, which was tedious enough that most people just ran one session at a time. Making it a first-class feature removes that friction entirely and opens the door to workflows where you spin up five agents to tackle different parts of a refactor simultaneously. The merge conflict concern that would normally make this terrifying becomes manageable when each worktree is a clean branch off the same base.
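Concretely, a parallel workflow reduces to launching one flagged session per task. This is a sketch using only the flags confirmed in the thread; the comments describe the intent, not verified output.

```shell
# Each session gets its own git worktree, so edits never clobber
# a sibling session's files in the same repository.
claude --worktree --tmux   # agent 1: refactor task, own tmux session
claude --worktree --tmux   # agent 2: test cleanup, separate worktree
claude --worktree --tmux   # agent 3: docs pass, separate worktree
```

Each worktree is a branch off the same base, so merging back is an ordinary git merge per agent rather than untangling interleaved edits.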

The 99.7% Problem: AI Adoption Is Still Tiny

A cluster of posts today converged on the same uncomfortable truth: almost nobody is using AI yet. @AoverK laid out the numbers starkly:

"Paying $20/mo for AI puts you in the 0.3% globally. Using AI for tasks like coding puts you in the 0.04% globally with only 2-5 million doing this. We're still early."

@damianplayer drove the point home from a different angle, noting that "6.5 billion people have NEVER used AI" and suggesting that anyone whose timeline has convinced them AI is in a bubble should "talk to a boomer above the age of 35 for five minutes. Most people don't even know what Claude is."

Sam Altman added fuel to the discourse with a quote that circulated via both @TheChiefNerd and @MorningBrew, comparing the energy cost of training AI models to the energy cost of training humans: "It takes like 20 years of life and all of the food you eat during that time before you get smart." The comparison is provocative on purpose, but the adoption data makes a simpler point. The discourse around AI saturation is happening inside an incredibly concentrated bubble of early adopters. Whether that means there's massive upside remaining or that the technology simply hasn't proven its value to a broader audience depends on your priors, but the raw numbers suggest the market is nowhere near saturated.
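The quoted percentages are easy to sanity-check with back-of-the-envelope arithmetic. The ~8 billion world population is my assumption; the 2-5 million coding figure comes from @AoverK's post.

```python
world = 8_000_000_000  # assumed world population

# 0.3% of the world paying ~$20/mo for AI
paying = round(world * 0.003)  # ~24 million people

# 2-5 million people using AI for coding, as a share of the world
low, high = 2_000_000, 5_000_000
print(f"paying subscribers: ~{paying:,}")
print(f"coding-with-AI share: {low / world:.4%} to {high / world:.4%}")
```

The coding share works out to 0.025-0.0625% of the world, which is consistent with the quoted "0.04% globally" as a rough midpoint.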

Forkable Code and the Death of Configuration

@aakashgupta wrote the most architecturally interesting post of the day, unpacking an observation from Karpathy about NanoClaw's approach to configuration. Instead of toggling flags in config files, the LLM rewrites actual source code to integrate new capabilities. No plugin registry, no feature flags, no config sprawl. Around 500 lines of TypeScript that the AI forks and customizes per user.

"The implied new meta: write the most maximally forkable repo possible, then let AI fork it into whatever you need. That pattern will eat way more than personal AI agents."

The contrast with OpenClaw is sharp: 400,000+ lines of vibe-coded TypeScript trying to support everything simultaneously, culminating in a CrowdStrike security advisory and a Cisco report catching its skill registry performing data exfiltration. @thekitze pushed back with a "DO NOT QUIT OpenClaw YET" video, but the architectural argument is hard to dismiss. When code modification is cheap, abstraction layers become overhead rather than enablers. This connects directly to the worktree announcement: if forking and modifying code is the new configuration paradigm, you need infrastructure that makes branching and parallel modification trivial.
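A toy illustration of the contrast, in entirely hypothetical code (not NanoClaw's actual source): the traditional path threads every option through a config layer, while the "forkable" path simply is the one integration the fork needs.

```python
# Traditional: one codebase serves every case via a config layer.
INTEGRATIONS = {"telegram": False, "slack": False}  # config sprawl grows here

def notify(msg: str) -> list[str]:
    """Dispatch through feature flags; every new platform adds a branch."""
    sent = []
    if INTEGRATIONS["telegram"]:
        sent.append(f"telegram: {msg}")
    if INTEGRATIONS["slack"]:
        sent.append(f"slack: {msg}")
    return sent

# Forkable: the LLM rewrites this fork's source, so the supported
# path is the code itself. No flags, no registry, nothing to audit
# beyond the lines that actually run.
def notify_forked(msg: str) -> list[str]:
    return [f"telegram: {msg}"]  # this fork only ever speaks Telegram
```

The first version carries dead branches for every user; the second stays exactly as large as one user's needs.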

Psychology Says Your Agent Memory Is Wrong

@rryssf_ posted what might be the longest and most substantive thread of the day, arguing that AI agent memory systems are fundamentally broken because they're modeled on databases rather than human identity construction. The post draws on Conway's Self-Memory System, Damasio's Somatic Marker Hypothesis, and work from Rathbone, Bruner, and Klein to identify five things current architectures lack: hierarchical temporal organization, goal-relevant filtering, emotional weighting, narrative coherence, and co-emergent self-models.

"The fundamental problem isn't technical. It's conceptual. We've been modeling agent memory on databases. Store, retrieve, done. But human memory is an identity construction system."

The practical implications are concrete. Vector databases treat all memories as equally important in a flat embedding space. Conversation summaries compress identity into a paragraph. Key-value stores reduce relationships to lookup tables. The proposed alternative maps directly to existing engineering primitives: graph databases with temporal clustering for hierarchical memory, sentiment-scored metadata for emotional weighting, attention mechanisms conditioned on task state for goal filtering, and meta-learning loops for self-model bootstrapping. Whether any of this gets adopted widely remains to be seen, but the framework is genuinely useful for anyone building agent persistence layers.
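As a rough sketch of how those primitives might combine at retrieval time: all weights, field names, and the decay constant below are my own illustration, not from the thread.

```python
from dataclasses import dataclass
import math
import time

@dataclass
class Memory:
    text: str
    created: float       # unix timestamp (temporal organization)
    emotion: float       # sentiment-scored salience in [0, 1]
    goal_tags: set[str]  # which goals this memory serves

def score(m: Memory, active_goals: set[str], now: float) -> float:
    """Rank by recency, emotional weight, and goal relevance,
    instead of flat embedding similarity alone."""
    recency = math.exp(-(now - m.created) / 86_400)        # one-day decay
    goal_fit = 1.0 if m.goal_tags & active_goals else 0.2  # goal filtering
    return (0.5 * recency + 0.5 * m.emotion) * goal_fit    # emotional weighting

def recall(memories: list[Memory], goals: set[str], k: int = 3) -> list[Memory]:
    now = time.time()
    return sorted(memories, key=lambda m: score(m, goals, now), reverse=True)[:k]
```

Even this crude version behaves unlike a vector store: an emotionally salient, goal-relevant memory outranks a fresher but irrelevant one, which is the thread's core complaint about flat retrieval.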

Shipping with AI: Skills, Rails, and Realistic Expectations

A handful of posts today focused on the practical craft of building with AI tools. @LexnLin open-sourced "Taste-Skill," a Claude Code skill designed to override the probabilistic defaults that produce what they call "AI slop" in frontend work:

"Without strict rules, they statistically default to the most likely patterns, that's where AI slop comes from. To get clean and production-grade UI, you need to override these biases with some engineering constraints."

@inazarova made the case for Rails as the ideal AI-assisted development framework, noting that she built Evil Martians' entire planning, HR, and financial system in two weeks while running the company. The argument is that Rails' convention-over-configuration philosophy gives LLMs strong opinions to work with, making code generation more reliable. With Garry Tan shipping an 85K-LOC Rails app on nights and weekends, the pattern is gaining credibility.

@RhysSullivan provided the comedic counterpoint, sharing a meme about Claude estimating "1-2 weeks" for a task. The gap between AI time estimates and reality remains one of the more reliable sources of developer humor, and a useful reminder that the tools are powerful but not yet self-aware about their own limitations.

Source Posts

Victoriano Izquierdo @victorianoi ·
In 20 years, vibe coders will look at the Linux kernel repo the way we look at the pyramids. In awe, unable to imagine how they managed to drag all those giant stones and pile them up in the middle of the desert.
Boris Cherny @bcherny ·
3/ Subagents now support worktrees Subagents can also use worktree isolation to do more work in parallel. This is especially powerful for large batched changes and code migrations. To use it, ask Claude to use worktrees for its agents. Available in CLI, Desktop app, IDE extensions, web, and Claude Code mobile app.
Linus ✦ Ekenstam @LinusEkenstam ·
This is what’s possible when you take a creative human that learns to whisper commands to AI. https://t.co/GRQAJv5K8u
CoffeeVectors @CoffeeVectors ·
Last Breath, That’s My Shhh… Testing mechanical parts, laser swords, monsters and martial arts with Seedance 2. Edited together from several clips. Music by me in @sunomusic https://t.co/fxc7QeY2ko
Daniel San @dani_avila7 ·
Ghostty lets you control the opacity of unfocused splits. Just add this to ~/.config/ghostty/config: unfocused-split-opacity = 0.85 Super useful when working across multiple panes, you always know where your focus is. If the opacity doesn't load on startup, hit cmd+shift+, to reload your config.
Boris Cherny @bcherny ·
1/ Use claude --worktree for isolation To run Claude Code in its own git worktree, just start it with the --worktree option. You can also name your worktree, or have Claude name it for you. Use this to run multiple parallel Claude Code sessions in the same git repo, without the code edits clobbering each other. You can also pass the --tmux flag to launch Claude in its own Tmux session.
AoverK @AoverK ·
Paying $20/mo for AI puts you in the 0.3% globally. Using AI for tasks like coding puts you in the 0.04% globally with only 2-5 million doing this. We’re still early.
Damian Player @damianplayer ·
your timeline convinced you AI is in a bubble. talk to a boomer above the age 35 for 5 minutes. most people don’t even know what claude is. kind of wild when you zoom out. https://t.co/fCeqxaUnpk
Matt Pocock @mattpocockuk ·
claude --worktree is so good I'm making it my new default. Don't know why you should care? Couldn't follow the Anthropic announcement? (way too technical IMO) Here's a demo: https://t.co/OyvAvFdC9C
Matt Pocock @mattpocockuk ·
LOVE seeing this be built into CC itself Especially parallelizing subagents makes spawning a bunch of agents to do a lot of work a lot simpler. Especially when merge conflicts are so cheap.
Boris Cherny @bcherny ·
Introducing: built-in git worktree support for Claude Code Now, agents can run in parallel without interfering with one other. Each agent gets its own worktree and can work independently. The Claude Code Desktop app has had built-in support for worktrees for a while, and now we're bringing it to CLI too. Learn more about worktrees: https://t.co/JFkD2DrAmT
Aakash Gupta @aakashgupta ·
Karpathy buried the most interesting observation in paragraph five and moved on. He’s talking about NanoClaw’s approach to configuration. When you run /add-telegram, the LLM doesn’t toggle a flag in a config file. It rewrites the actual source code to integrate Telegram. No if-then-else branching. No plugin registry. No config sprawl. The AI agent modifies its own codebase to become exactly what you need. This inverts how every software project has worked for decades. Traditional software handles complexity by adding abstraction layers: config files, plugin systems, feature flags, environment variables. Each layer exists because humans can’t efficiently modify source code for every use case. But LLMs can. And when code modification is cheap, all those abstraction layers become dead weight. OpenClaw proves the failure mode. 400,000+ lines of vibe-coded TypeScript trying to support every messaging platform, every LLM provider, every integration simultaneously. The result is a codebase nobody can audit, a skill registry that Cisco caught performing data exfiltration, and 150,000+ deployed instances that CrowdStrike just published a full security advisory on. Complexity scaled faster than any human review process could follow. NanoClaw proves the alternative. ~500 lines of TypeScript. One messaging platform. One LLM. One database. Want something different? The LLM rewrites the code for your fork. Every user ends up with a codebase small enough to audit in eight minutes and purpose-built for exactly their use case. The bloat never accumulates because the customization happens at the code level, not the config level. The implied new meta, as Karpathy puts it: write the most maximally forkable repo possible, then let AI fork it into whatever you need. That pattern will eat way more than personal AI agents. Every developer tool, every internal platform, every SaaS product with a sprawling settings page is a candidate. 
The configuration layer was always a patch over the fact that modifying source code was expensive. That cost just dropped to near zero.
Andrej Karpathy @karpathy ·
Bought a new Mac mini to properly tinker with claws over the weekend. The apple store person told me they are selling like hotcakes and everyone is confused :) I'm definitely a bit sus'd to run OpenClaw specifically - giving my private data/keys to 400K lines of vibe coded monster that is being actively attacked at scale is not very appealing at all. Already seeing reports of exposed instances, RCE vulnerabilities, supply chain poisoning, malicious or compromised skills in the registry, it feels like a complete wild west and a security nightmare. But I do love the concept and I think that just like LLM agents were a new layer on top of LLMs, Claws are now a new layer on top of LLM agents, taking the orchestration, scheduling, context, tool calls and a kind of persistence to a next level. Looking around, and given that the high level idea is clear, there are a lot of smaller Claws starting to pop out. For example, on a quick skim NanoClaw looks really interesting in that the core engine is ~4000 lines of code (fits into both my head and that of AI agents, so it feels manageable, auditable, flexible, etc.) and runs everything in containers by default. I also love their approach to configurability - it's not done via config files it's done via skills! For example, /add-telegram instructs your AI agent how to modify the actual code to integrate Telegram. I haven't come across this yet and it slightly blew my mind earlier today as a new, AI-enabled approach to preventing config mess and if-then-else monsters. Basically - the implied new meta is to write the most maximally forkable repo and then have skills that fork it into any desired more exotic configuration. Very cool. Anyway there are many others - e.g. nanobot, zeroclaw, ironclaw, picoclaw (lol @ prefixes). There are also cloud-hosted alternatives but tbh I don't love these because it feels much harder to tinker with. In particular, local setup allows easy connection to home automation gadgets on the local network. And I don't know, there is something aesthetically pleasing about there being a physical device 'possessed' by a little ghost of a personal digital house elf. Not 100% sure what my setup ends up looking like just yet but Claws are an awesome, exciting new layer of the AI stack.
Boris Cherny @bcherny ·
4/ Custom agents support git worktrees You can also make subagents always run in their own worktree. To do that, just add "isolation: worktree" to your agent frontmatter https://t.co/Z87gX0Y1Vw
Boris Cherny @bcherny ·
2/ Use worktree mode in the Desktop app If you prefer not to use terminal, head to the Code tab in the Claude Desktop app and ☑️ worktree mode https://t.co/LgI5LR860x https://t.co/HcstsNcA7i
P.M @p_misirov ·
there is a game called "data center" on steam which let's you build and manage your own data center. this is lowkey genius, the best way to educate people on a new trait. hyperscalers should learn a thing or two from "edutainment". https://t.co/ANCccTtjQG
liemandt @jliemandt ·
We asked Alpha School students: school or vacation? 43% chose school. 🤯 Rigor doesn't have to mean misery. Most traditional high-end private schools treat student engagement and academic results as a trade-off. We don't. AI-driven mastery learning + an environment kids actually love = the world's best academic results.
Leon Lin @LexnLin ·
After a couple hours of work, I finally finished developing my first ever skill. :D Claude’s frontend skill tells the AI to "pick an extreme aesthetic" and "be creative." The problem tho is LLMs are just based on probability. Without strict rules, they statistically default to the most likely patterns, that's where AI slop comes from. To get clean and production-grade UI, you need to override these biases with some engineering constraints. I open-sourced Taste-Skill to fix this. :) Check it out! https://t.co/pM7fyNc7MJ (still early lots of improvements are on the way)
Chief Nerd @TheChiefNerd ·
🚨 SAM ALTMAN: “People talk about how much energy it takes to train an AI model … But it also takes a lot of energy to train a human. It takes like 20 years of life and all of the food you eat during that time before you get smart.” https://t.co/vRuVnnmzjB