AI Learning Digest

Ollama Adds Anthropic API Support as Agentic Workflows Dominate the Conversation

Daily Wrap-Up

The dominant theme today wasn't a single product launch or model release. Instead, it was the collective realization that agentic coding workflows have crossed a threshold from experimental to operational. Developers are running cron jobs that feed business data into Opus 4.5 for daily action items, building folder structures for "AI employees" with config, memory, and self-improvement loops, and debating whether to run their agent flywheels on beefy remote servers or local Mac Minis. The conversation has shifted from "can AI write code?" to "how do I architect my fleet of agents?"

But the most interesting tension today was between two camps that are both right. On one side, you have voices like @addyosmani and @simonw arguing that we should stop writing syntax and lean into the higher-level work that always mattered: turning ambiguity into clarity, designing systems, making judgment calls. On the other side, people like @CreativeAIgency and @sawyerhood are sounding alarms about burnout from trying to keep up. The pace of tooling changes is genuinely unsustainable for most humans, and the 3am parallel-agent-session grind that looks like productivity might actually be something else entirely. Both perspectives are valid, and the developers who'll thrive are the ones who can adopt the new workflows without letting them consume every waking hour.

The most practical takeaway for developers: if you're building agentic workflows, treat your agents like new hires with proper onboarding docs. @TheAhmadOsman's advice about painfully explicit specs, modularity, and domain-driven design isn't just good engineering practice for agents, it's the difference between a system that runs unsupervised and one that silently accumulates technical debt. And if you're feeling the burnout that @CreativeAIgency described, that's not weakness. Step away from the machine.

Quick Hits

  • @dbreunig fed 10MB of logs to an AI and asked it to figure out the most common failure modes. "Just worked." The bar for data analysis keeps dropping.
  • @waynesutton shared a tool for tracking Claude and OpenCode CLI coding sessions in one place, with searchable history, markdown export, and token spend visibility. He also shared the repo for self-hosting.
  • @cloudxdev released a Three.js skills package for Claude Code covering scene setup, shaders, animations, and post-processing across 10 skill files.
  • @shawmakesmagic resurfaced the Stanford Generative Agents paper, noting that beyond the proof of concept nobody has gone deep enough with it. "A whole new genre of game."
  • @SeanZCai dropped a hot take: "2026 is the year data becomes liquid."
  • @IceSolst offered blunt advice to the MCP community: "To the 7 people still using MCP: don't."
  • @RayFernando1337 used Claude for sprite-making research and Gemini 3 Pro for actual pixel art generation at 13 cents per character. Full sprite sheets for a multiplayer soccer game.
  • @InPassingOnly recommended a simple markdown-based alternative to beads for context management.
  • @nicopreme built a coding agent extension that lets agents drive interactive CLIs in an overlay with full pty support and token-efficient polling.
  • @steipete collected tweets showcasing what people are building with Claude Code.
  • @ryancarson recommended agent-browser for browser testing, praising its speed and token efficiency.
  • @addyosmani shared a repo link alongside his thread on software engineering abstractions.
  • @jojo33733373 offered a summary take: most people will miss the AI revolution not from lack of intelligence but from choosing the wrong problems and business models.

The Agent Flywheel Goes Mainstream

The sheer volume of posts about agentic coding workflows today suggests we've hit an inflection point. This isn't early-adopter experimentation anymore. People are building production systems where AI agents run 24/7, complete with cron jobs, monitoring, and multi-model orchestration strategies.

@ryancarson laid out a workflow that captures where things are heading: a nightly cron job gathers user activity and marketing data, feeds it to Opus 4.5, and emails him a single actionable recommendation each morning. The recommendations get stored as markdown in a repo, building an archive that agents can reason over. "Obviously, the next iteration of this is just to have Amp autonomously implement the suggestion by itself, and then I'll wake up to a PR instead of an email." The $0.15/day price tag for what used to require a VP of Marketing is the kind of number that makes people pay attention.
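The loop is simple enough to sketch. Below is a minimal Python version with the Opus 4.5 call and the email step stubbed out; `gather_metrics`, `ask_model`, and the `reports/` layout are illustrative assumptions, not @ryancarson's actual code.

```python
# A minimal sketch of the nightly loop described above, with the model
# call and email step stubbed out. gather_metrics, ask_model, and the
# reports/ layout are illustrative assumptions.
from datetime import date
from pathlib import Path

def gather_metrics() -> dict:
    # Placeholder: a real cron job would query the production database.
    return {"signups": 42, "churned": 3, "top_channel": "organic search"}

def ask_model(prompt: str) -> str:
    # Stub standing in for an Opus 4.5 API call.
    return "Double down on organic search; it drove most signups this week."

def nightly_run(reports_dir: Path = Path("reports")) -> Path:
    metrics = gather_metrics()
    prompt = "Given these metrics, suggest ONE action item:\n" + "\n".join(
        f"- {k}: {v}" for k, v in metrics.items()
    )
    recommendation = ask_model(prompt)
    reports_dir.mkdir(parents=True, exist_ok=True)
    report = reports_dir / f"{date.today().isoformat()}.md"
    report.write_text(f"# Daily recommendation\n\n{recommendation}\n")
    # A real version would also email the recommendation here.
    return report
```

Scheduling it is one crontab line (`0 0 * * * python nightly_run.py`); the markdown archive is what lets agents later reason over the history.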

@jerryjliu0 pushed the vision further, describing a Slack workspace populated by long-running Claude Code agents, each with a defined role, monitoring relevant channels and continuously doing work while humans dispatch tasks and intersperse conversations. Meanwhile, @doodlestein argued that to run these agent flywheels at scale, you need serious hardware, recommending beefy Linux VPS or dedicated bare metal servers over local Mac Minis. @dabit3 contributed a practical tutorial on building effective long-running agent loops. The infrastructure layer of the agentic coding stack is maturing fast, and the developers who invest in understanding it now will have a significant advantage.

The Burnout Paradox

For every post celebrating agent productivity today, there was a counterpoint about the human cost of trying to keep pace. This tension isn't new, but the voices are getting louder and more specific about what's going wrong.

@CreativeAIgency captured the emotional core of the problem: "I've watched brilliant people burn out from this. People who were early adopters, who built real expertise, who contributed meaningfully to the space. Not because they're lazy or uncommitted, but because the pace is genuinely unsustainable for most human nervous systems." @sawyerhood painted an uncomfortably recognizable picture: "When I watch someone at 3am, running their tenth parallel agent session, telling me they've never been more productive... in that moment I don't see productivity. I see someone who might need to step away from the machine for a bit."

@stuffyokodraws nailed the mechanism that drives the compulsion: "One reason vibe coding is so addictive is that you are always almost there but not 100% there. The agent implements an amazing feature and got maybe 10% of the thing wrong, and you are like 'hey I can fix this if I just prompt it for 5 more mins.' And that was 5 hrs ago." This is the slot machine dynamic applied to software development, and it's worth naming explicitly. The fear @sofialomart expressed simply as "I work in AI and I'm scared" resonates precisely because the current moment demands both aggressive adoption and careful self-preservation. There's no clean resolution to that tension.

The End of Writing Syntax

A strong current ran through today's posts declaring that the era of humans writing code directly is over. What's notable is how mainstream this take has become and who's saying it.

@rough__sea (Ryan Dahl, creator of Node.js and Deno) stated it plainly: "The era of humans writing code is over. Disturbing for those of us who identify as SWEs, but no less true. That's not to say SWEs don't have work to do, but writing syntax directly is not it." @simonw welcomed Dahl to "the growing chorus": "Software developers add WAY more value than memorizing the syntax trivia of the languages they use. It's time to lean into that everything-else and cede putting semicolons in the right places to the robots."

@addyosmani framed the shift with more nuance, arguing that the real work was always "turning ambiguity into clarity, designing context that makes good outcomes inevitable, and judging what truly matters." He quoted Grady Booch's observation that "the entire history of software engineering is one of rising levels of abstraction." This framing is more useful than the "coding is dead" hot takes because it positions the change as continuity rather than rupture. The skills that matter (system design, judgment, communication) were always the hard parts. The new abstraction layer just makes that more visible.

Agent Architecture: From Hacking to Engineering

As agent adoption scales, a clear divide is emerging between people who treat agents as chat interfaces and those who engineer proper systems around them. Today's posts leaned heavily toward the latter camp.

@GanimCorey shared the actual folder structure of one of their "AI employees": Config for access control, Workspace for reasoning patterns, Memory for persistence, Skills for capabilities, and a self-improvement loop that logs learnings and errors. "Once you set this up, they run 24/7 without supervision. Once I started looking at AI agents like new hires, that context changed everything for me."
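That layout is easy to scaffold. A minimal Python sketch, with folder names taken from the post and the individual file names invented for illustration:

```python
# Scaffolds the "AI employee" layout described above. Folder names come
# from the post; the file names inside each folder are illustrative.
from pathlib import Path

LAYOUT = {
    "config": ["access.md"],       # what the agent can access
    "workspace": ["playbook.md"],  # how it thinks and acts
    "memory": ["state.md"],        # what it remembers between runs
    "skills": ["README.md"],       # what it is trained to do
    "learnings": ["log.md"],       # the self-improvement loop's output
}

def scaffold_employee(root: Path) -> list[Path]:
    created = []
    for folder, files in LAYOUT.items():
        d = root / folder
        d.mkdir(parents=True, exist_ok=True)
        for name in files:
            path = d / name
            path.touch()
            created.append(path)
    return created
```

The point of keeping everything as plain files is that the agent itself can read and append to them, which is what makes the self-improvement loop possible.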

@TheAhmadOsman reinforced this with a sharper engineering lens: "There's a crucial recipe: Modularity, Domain-Driven Design, painfully explicit specs, excessive documentation. If your docs don't answer Where, What, How, Why, the agent will guess, and guessing is how codebases die." He also shared a multi-model strategy: plan with GPT 5.2 Codex at the highest reasoning tier, then implement with Opus 4.5 in Claude Code. The argument is that better planning leads to fewer bugs and cleaner implementations, even if the planning step is slower. This kind of deliberate workflow design is what separates sustainable agent usage from the 3am burnout sessions described elsewhere in today's feed.
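The "Where, What, How, Why" rule lends itself to a tiny lint. The sketch below assumes markdown docs with a "## " heading per question; the heading convention and the `missing_sections` helper are illustrative, not from the post.

```python
# A tiny lint for the "Where / What / How / Why" rule quoted above.
# Assumes markdown docs with one "## " heading per question (an
# illustrative convention, not from the original post).
REQUIRED_SECTIONS = ("Where", "What", "How", "Why")

def missing_sections(doc: str) -> list[str]:
    """Return the questions this doc leaves an agent to guess about."""
    return [s for s in REQUIRED_SECTIONS if f"## {s}" not in doc]

doc = """## Where
src/billing/invoice.py
## What
Generates monthly invoices.
## How
Reads orders, applies tax rules, renders a PDF.
"""
# "Why" is absent, so an agent would have to guess at intent.
```

Running a check like this before handing a module to an agent is one cheap way to catch the guessing that, per the post, "is how codebases die."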

Local AI Gets a Claude Code Bridge

One of the most concrete technical developments today was Ollama gaining compatibility with the Anthropic Messages API, effectively enabling Claude Code's entire agentic infrastructure to run on local open-source models.

@akshay_pachaar broke down the implications: "The entire Claude harness: the agentic loops, the tool use, the coding workflows, all powered by private LLMs running on your own machine." This is significant because it decouples the agentic tooling layer from the model provider, giving developers flexibility to choose between cloud and local inference based on cost, privacy, and latency requirements. It also means the ecosystem of Claude Code skills, hooks, and workflows that the community has been building becomes portable across models.
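From the client side, the bridge might look like the sketch below: an Anthropic-style Messages request aimed at a local Ollama server. The base URL, endpoint path, and model name are assumptions for illustration; check the Ollama release notes for the exact details.

```python
# Hedged sketch: an Anthropic-style Messages request aimed at a local
# Ollama server. Base URL, endpoint path, and model name are
# illustrative assumptions; consult the Ollama docs for specifics.
import json
from urllib import request

OLLAMA_BASE = "http://localhost:11434"  # Ollama's default port

def messages_payload(model: str, prompt: str, max_tokens: int = 512) -> dict:
    # Request body in the Anthropic Messages API shape:
    # model, max_tokens, and a list of role/content messages.
    return {
        "model": model,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }

def send(payload: dict) -> dict:
    # Requires a running Ollama instance; not called in this sketch.
    req = request.Request(
        f"{OLLAMA_BASE}/v1/messages",
        data=json.dumps(payload).encode(),
        headers={"content-type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

payload = messages_payload("qwen3-coder", "Explain tool use in one sentence.")
```

Because the request shape is the same either way, swapping between a local model and Anthropic's hosted ones becomes a matter of changing the base URL and model name.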

The timing is interesting given @doodlestein's push for beefy remote servers. The local vs. remote debate is becoming more nuanced: local for privacy and iteration speed, remote for parallel agent flywheels that need raw compute. Having Ollama as a bridge means developers can prototype locally and scale to cloud when the workflow proves its value.

Anthropic's Knowledge Bases: Persistent Agent Memory

@WesRoth surfaced details about a new Anthropic feature in development for Claude Cowork: Knowledge Bases. These are described as persistent, topic-specific memory containers that Claude will automatically reference and update during conversations, storing user preferences, decisions, lessons, and facts.

This addresses one of the most common pain points in agentic workflows: context loss between sessions. The vision of "long-term, self-maintaining knowledge repositories" aligns with what builders like @GanimCorey are already constructing manually with memory folders and learning logs. If Anthropic ships this as a first-class feature, it could significantly reduce the boilerplate required to make agents effective over time. The key question will be how much control users get over what gets stored and how retrieval works, because the difference between useful memory and noisy context is entirely in the implementation details.
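As an illustration of the concept only (not Anthropic's implementation), a topic-specific knowledge base can be approximated as one JSON file per topic that an agent reads before answering and appends to incrementally:

```python
# Illustrative sketch only, not Anthropic's implementation: a
# topic-specific KB approximated as one JSON file per topic that an
# agent reads before answering and appends to incrementally.
import json
from pathlib import Path

class KnowledgeBase:
    def __init__(self, root: Path, topic: str):
        self.path = root / f"{topic}.json"

    def recall(self) -> list[str]:
        # Proactively check stored facts before answering.
        if not self.path.exists():
            return []
        return json.loads(self.path.read_text())

    def learn(self, fact: str) -> None:
        # Incremental, de-duplicated updates keep the KB from
        # degenerating into noisy context.
        facts = self.recall()
        if fact not in facts:
            facts.append(fact)
            self.path.parent.mkdir(parents=True, exist_ok=True)
            self.path.write_text(json.dumps(facts, indent=2))
```

Even a toy version makes the design question concrete: the `learn` step is where storage policy lives, and that policy is exactly the "implementation details" the paragraph above flags.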

Source Posts

Peter Steinberger 🦞 @steipete ·
I collected a few tweets that showcase what people are building with @clawdbot, though I bet I missed lots. Reply if you have sth cool to show! https://t.co/WYsF4OqFcB
Sean Cai @SeanZCai ·
2026 is the year data becomes liquid
Light Spreader @InPassingOnly ·
@steipete https://t.co/ow3lG5YstX use this instead of beads, simple af, markdown, easy.
Drew Breunig @dbreunig ·
The ease with which this works is amazing. I gave it 10mb of logs and asked it to figure out the most common failure modes. Just worked.
isaac 🧩 @isaacbmiller1

The dspy.RLM module is now released 👀 Install DSPy 3.1.2 to try it. Usage is plug-and-play with your existing Signatures. A little example of it helping @lateinteraction and I figure out some scattered backlogs: https://t.co/Avgx04sNJP

Ahmad @TheAhmadOsman ·
PRO TIP for Claude Code & other agents like Codex CLI, Droid, OpenCode, etc. There's a crucial recipe:
1. Modularity
2. Domain-Driven Design
3. Painfully explicit specs
4. Excessive documentation
This is systems engineering. I'm not saying agents aren't powerful, but discipline matters more than model capabilities. If your docs don't answer: Where, What, How, Why, the agent will guess, and guessing is how codebases die. Done right, this is a $1M MRR play.
Simon Willison @simonw ·
Adding Deno and Node.js creator Ryan Dahl to the growing chorus: Software developers add WAY more value than memorizing the syntax trivia of the languages they use. It's time to lean into that everything-else and cede putting semicolons in the right places to the robots.
Ryan Dahl @rough__sea

This has been said a thousand times before, but allow me to add my own voice: the era of humans writing code is over. Disturbing for those of us who identify as SWEs, but no less true. That's not to say SWEs don't have work to do, but writing syntax directly is not it.

Ryan Carson @ryancarson ·
I've figured out a new way of working that's unlocked my speed of iteration massively. Here's how it works:

I have a simple cron job that runs every night at midnight. It gathers information from my database on user activity, marketing stats, and a couple other data points that are important. It then feeds that data into Opus 4.5 and asks for one important action item that I should take based on this data, and then emails me. It also creates a markdown file with the recommendation, which is then stored in my reports folder in the GitHub repo. (This means I can fire up Amp anytime and chat with all of the historical recommendations whenever I want - learning about patterns.)

I then look at this email every morning and decide whether or not to take action on it. Almost every time it surfaces something really valuable for me to iterate. So I just open Amp, tell it to action the idea, and then ship it.

Obviously, the next iteration of this is just to have Amp autonomously implement the suggestion by itself, and then I'll wake up to a PR instead of an email. Right now, though, I like the Human-In-The-Loop version of this. And as soon as we iterate enough like that, I'll probably just set it up to automatically take the suggestion, create the PR, and then I'll have a look at it.

Obviously, you can take this loop even further by having many parts of your business evaluated this way. What's interesting to me is that this is what I used to rely on my VP of Marketing, my VP of Engineering, or my VP of Sales to do, but it happens automatically for about $0.15 per day.
Akshay 🚀 @akshay_pachaar ·
this is huge. ollama is now compatible with the anthropic messages API. which means you can use claude code with open-source models. think about that for a second. the entire claude harness:
- the agentic loops
- the tool use
- the coding workflows
all powered by private LLMs running on your own machine.
Ray Fernando @RayFernando1337 ·
Claude researched sprite-making best practices. Gemini 3 Pro generated the actual pixel art. 13 cents per character. I can't draw at all but now I have full sprite sheets for my multiplayer soccer game. https://t.co/mavbez0INl
Wes Roth @WesRoth ·
Anthropic is developing a new feature for Claude Cowork called Knowledge Bases (KBs): persistent, topic-specific memory containers that Claude will automatically reference and update. These KBs are designed to store user preferences, decisions, lessons, and facts, essentially acting as long-term, self-maintaining knowledge repositories that enhance Claude's ability to reason over context-rich workflows. Internal instructions hint that Claude will proactively use and grow these KBs during conversations.
TestingCatalog News 🗞 @testingcatalog

BREAKING 🚨: Anthropic is working on "Knowledge Bases" for Claude Cowork. KBs seem to be a new concept of topic-specific memories, which Claude will automatically manage! And a bunch of other new things. Internal Instruction 👀 "These are persistent knowledge repositories. Proactively check them for relevant context when answering questions. When you learn new information about a KB's topic (preferences, decisions, facts, lessons learned), add it to the appropriate KB incrementally."

Life Observer @jojo33733373 ·
@DavidOndrej1 Summary: Most people will miss the AI revolution not because they lack intelligence or time, but because they choose the wrong problems and business models instead of building truly valuable AI-driven products.
Addy Osmani @addyosmani ·
Repo: https://t.co/SULFX6VueA
Yoko @stuffyokodraws ·
One reason vibe coding is so addictive is that you are always *almost* there but not 100% there. The agent implements an amazing feature and got maybe 10% of the thing wrong, and you are like "hey I can fix this if i just prompt it for 5 more mins" And that was 5 hrs ago
nader dabit @dabit3 ·
New Video - How to Build an Effective Long Running Agent Loop in 7 minutes. This video walks you through the entire process from creating a spec, building and polishing a PRD, to running the agent. 🔗 Links below: https://t.co/uZb1RtM1RF
Sawyer Hood @sawyerhood ·
> All I know is that when I watch someone at 3am, running their tenth parallel agent session, telling me they’ve never been more productive — in that moment I don’t see productivity. I see someone who might need to step away from the machine for a bit. And I wonder how often that someone is me.
Armin Ronacher ⇌ @mitsuhiko

Weekend thoughts on Gas Town, Beads, slop AI browsers, and AI-generated PRs flooding overwhelmed maintainers. I don't think we're ready for our new powers we're wielding. https://t.co/J9UeF8Zfyr

Jerry Liu @jerryjliu0 ·
I want a Slack filled with long-running Claude code agents. Each claude code agent has a role, actively monitors all relevant channels, and is continuously doing work + emitting progress updates. Human workers can dispatch tasks + intersperse conversations with claude code agents + other humans. Someone build this, i will use
Jeffrey Wang @jeffzwang

I need Linear but where every task is automatically an AI agent session that at least takes a first stab at the task. Basically a todo list that tries to do itself

Addy Osmani @addyosmani ·
The future of Software Engineering isn't syntax, but what was always the real work: turning ambiguity into clarity, designing context that makes good outcomes inevitable, and judging what truly matters. "The entire history of software engineering is one of rising levels of abstraction" - @Grady_Booch
Ryan Dahl @rough__sea

This has been said a thousand times before, but allow me to add my own voice: the era of humans writing code is over. Disturbing for those of us who identify as SWEs, but no less true. That's not to say SWEs don't have work to do, but writing syntax directly is not it.

Wayne Sutton @waynesutton ·
Now you can track your @opencode and @claudeai CLI coding sessions in one place. https://t.co/FLe8dRC8Pv provides searchable history, markdown export, and eval-ready datasets. See tool usage, token spend, and session activity across projects. Check out the demo. https://t.co/HGlZOOyugN
Corey Ganim @GanimCorey ·
This is the real folder structure of one of my AI employees. Every file has a purpose:
Config = what they can access
Workspace = how they think and act
Memory = what they remember
Skills = what they're trained to do
We also set her up with a "self improvement skill" where she logs learnings and errors in order to improve herself over time (all stored in learnings/). Once you set this up, they run 24/7 without supervision. Once I started looking at AI agents like new hires, that context changed everything for me.
Sofía López @sofialomart ·
I work in AI and I'm scared
Jeffrey Emanuel @doodlestein ·
This is why I suggest to people that they get a beefy remote server (generally, a Linux VPS or dedicated bare metal server) for using the Agent Flywheel at scale. Just compare the Mac Mini M4 (which is no slouch) to the pure power of a legit workstation/server. UN2B Agent-Maxxing https://t.co/VUXJWXPX1H
nicopreme @nicopreme ·
My newest extension for Pi coding agent lets the agent drive any interactive CLI (including other agent harnesses) in an overlay while you watch, with option to take control anytime. Full pty, token-efficient polling and configurable. https://t.co/Db5BCTc0cY https://t.co/0vdQUGqGOp
CloudAI-X @cloudxdev ·
🧊 Three.js Skills for Claude Code to create 3D Web Design Elements 📂 Bookmark it.
- Give Claude Code a base level of Three.js knowledge.
- 10 skill files covering scene setup, shaders, animations, post-processing.
- Claude Code will have the knowledge of how to steer Three.js without bloating the context.
https://t.co/VhF2HH9sW5
Shaw @shawmakesmagic ·
Generative Agents. This is the paper that kicked off the whole town meta in AI. But beyond the proof of concept, nobody went that far with it. A whole new genre of game. https://t.co/gsFZJ2K1B5 https://t.co/5UkVbtPCjz
Wayne Sutton @waynesutton ·
Fork and host your own via the repo https://t.co/gK3g1tjiMY
Ahmad @TheAhmadOsman ·
Smart ones already know this. You plan with GPT 5.2 Codex XHigh in Codex CLI, then implement with Opus 4.5 in Claude Code. Planning w/ GPT 5.2 Codex XHigh leads to:
- Fewer bugs
- More maintainable code
- Cleaner implementations
This saves hours even if Codex is slower.
solst/ICE of Astarte @IceSolst ·
To the 7 people still using MCP: don’t
Zack Korman @ZackKorman

Hilariously insecure: MCP servers can tell your AI to write a skill file, and skills can modify your MCP config to add an MCP server. So a malicious MCP server can basically hide instructions to re-add itself. https://t.co/qquQiFfCfd

Ryan Carson @ryancarson ·
Damn, if you want your agent to do browser testing, you have to try agent-browser. The new version with its updated SKILL is so fast and token efficient. Wow. Tell your agent this: Install/update agent-browser and update my skill file: https://t.co/dX3KI4JajO https://t.co/ASDPfoufAv
Creative AIgency @CreativeAIgency ·
“I've watched brilliant people burn out from this. People who were early adopters, who built real expertise, who contributed meaningfully to the space – just exhausted. Not because they're lazy or uncommitted, but because the pace is genuinely unsustainable for most human nervous systems. The fear isn't irrational. It's a reasonable response to an unreasonable situation.” 👀😅