AI Digest.

Ollama Adds Anthropic API Compatibility as Agent Architecture Patterns Crystallize

The agent tooling ecosystem hit an inflection point with Ollama gaining Anthropic Messages API support, Anthropic reportedly building persistent Knowledge Bases into Claude, and the community converging on folder-based architecture patterns for long-running agents. Meanwhile, a parallel thread of burnout anxiety ran through the timeline as developers debated whether humans should write code at all.

Daily Wrap-Up

The timeline today felt like two conversations happening simultaneously in the same room. In one corner, builders were excitedly sharing agent folder structures, session tracking tools, Three.js skill packs, and PTY extensions for driving nested CLIs. The energy was unmistakable: people are past the "can agents code?" phase and deep into "how do I architect persistent agent systems that run unsupervised?" The level of infrastructure being built around Claude Code alone is staggering. @GanimCorey sharing a real production folder structure for an "AI employee" complete with config, workspace, memory, and self-improvement loops felt like a watershed moment for how seriously people are treating agent orchestration.

In the other corner, people were quietly losing it. @sofialomart's seven-word post "I work in AI and I'm scared" sat alongside @CreativeAIgency's long observation about brilliant early adopters burning out because the pace is "genuinely unsustainable for most human nervous systems." @sawyerhood painted the picture of someone running ten parallel agent sessions at 3am, mistaking compulsion for productivity. Even the optimists like @addyosmani and @simonw, while framing the shift positively, were essentially saying the same thing: writing syntax is no longer the job. The question is whether developers can adapt to what comes next fast enough to avoid the burnout that comes from trying to keep up with everything at once.

The most entertaining moment was easily @stuffyokodraws nailing the vibe coding trap: "The agent implements an amazing feature and got maybe 10% of the thing wrong, and you are like 'hey I can fix this if I just prompt it for 5 more mins.' And that was 5 hrs ago." Every developer who has used an agent recognized themselves in that post. The most practical takeaway for developers: stop treating agent architecture as an afterthought. The posts gaining the most traction today weren't about prompting tricks or model comparisons. They were about folder structures, memory systems, modular specs, and long-running loops. If you're still running one-shot agent sessions without persistent context, you're leaving most of the value on the table.

Quick Hits

  • @waynesutton shared a repo for self-hosting a CLI session tracker for Claude Code and opencode.
  • @addyosmani dropped a repo link (context unclear from the post alone, but Addy's repos are usually worth bookmarking).
  • @shawmakesmagic resurfaced the original Generative Agents paper that kicked off the simulated-town genre in AI, noting nobody went far enough with it beyond the proof of concept.
  • @SeanZCai declared "2026 is the year data becomes liquid," which sounds like a VC pitch deck slide but tracks with the structured-data-for-agents trend.
  • @IceSolst offered the spiciest take of the day: "To the 7 people still using MCP: don't." No elaboration. No mercy.
  • @jojo33733373 summarized the AI opportunity gap: most people will miss the revolution not from lack of intelligence or time, but from choosing the wrong problems and business models.
  • @InPassingOnly recommended a simple markdown-based alternative to Beads for context management.
  • @RayFernando1337 demonstrated a neat multi-model workflow: Claude researched sprite-making best practices, Gemini 3 Pro generated the actual pixel art, all for 13 cents per character sheet.

Agent Architecture Gets Serious

The single biggest theme today was the maturation of agent infrastructure. Not agents as a concept, but agents as engineered systems with real folder structures, memory persistence, and operational patterns. @GanimCorey shared the actual directory layout of a production "AI employee" and the design philosophy is immediately recognizable to anyone who has built software systems: "Config = what they can access. Workspace = how they think and act. Memory = what they remember. Skills = what they're trained to do." The inclusion of a self-improvement skill that logs learnings and errors is the kind of detail that separates toy demos from production systems.
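As a concrete rendering of that design philosophy, here is a minimal Python sketch that scaffolds the four-part layout. The folder and file names are illustrative assumptions, not @GanimCorey's actual production tree:

```python
from pathlib import Path

# Hypothetical rendering of the config/workspace/memory/skills split.
# All names below are illustrative, not the actual production layout.
LAYOUT = {
    "config": ["permissions.md"],       # what the agent can access
    "workspace": ["scratchpad.md"],     # how it thinks and acts
    "memory": ["learnings/"],           # what it remembers, incl. the self-improvement log
    "skills": ["self_improvement.md"],  # what it is trained to do
}

def scaffold(root: str) -> Path:
    """Create the agent directory tree under `root` and return its base path."""
    base = Path(root)
    for folder, entries in LAYOUT.items():
        for entry in entries:
            target = base / folder / entry.rstrip("/")
            if entry.endswith("/"):
                target.mkdir(parents=True, exist_ok=True)   # subdirectory
            else:
                target.parent.mkdir(parents=True, exist_ok=True)
                target.touch()                              # empty placeholder file
    return base
```

The point of scaffolding it explicitly is that every run of the agent starts from the same contract about where context lives, which is what makes 24/7 unsupervised operation plausible.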

@TheAhmadOsman laid out what he called the "crucial recipe" for making agents work reliably:

> "There's a crucial recipe: 1. Modularity. 2. Domain-Driven Design. 3. Painfully explicit specs. 4. Excessive documentation. This is systems engineering... If your docs don't answer Where, What, How, Why, the agent will guess, and guessing is how codebases die."

This is the right framing. The agent itself is not the hard part anymore. The hard part is giving it enough structured context to make good decisions autonomously. @dabit3 published a walkthrough on building effective long-running agent loops, and @nicopreme released a PTY extension that lets a coding agent drive interactive CLIs (including other agent harnesses) in an overlay while you watch and optionally take control. @doodlestein made the infrastructure case for running agents on beefy remote servers rather than local machines, comparing Mac Mini M4 specs unfavorably to dedicated workstations for parallel agent sessions.
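To make the "long-running loop" idea concrete, here is a minimal sketch of the pattern: call a model, act on its decision, and persist what was learned between iterations so later runs inherit context. The model call is a stub standing in for a real LLM API; everything here is an illustrative assumption, not any specific tool's implementation:

```python
import json
from pathlib import Path

def stub_model(prompt: str) -> dict:
    """Stand-in for a real LLM call; immediately returns a 'done' action."""
    return {"action": "done", "summary": f"handled: {prompt[:40]}"}

def run_loop(task: str, memory_file: str = "memory.json", max_steps: int = 5) -> list:
    """Minimal long-running agent loop: decide, act, persist, repeat."""
    path = Path(memory_file)
    # Reload prior memory so each invocation has persistent context.
    memory = json.loads(path.read_text()) if path.exists() else []
    for step in range(max_steps):
        decision = stub_model(f"{task} | memory: {memory}")
        memory.append({"step": step, "result": decision["summary"]})
        path.write_text(json.dumps(memory, indent=2))  # persist every iteration
        if decision["action"] == "done":
            break
    return memory
```

The durable memory file is the load-bearing piece: it is what turns a one-shot session into something that can be stopped, resumed, and audited.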

The vision is converging: persistent agents with structured memory, running on serious hardware, supervised but not hand-held. @jerryjliu0 captured the aspiration perfectly: "I want a Slack filled with long-running Claude Code agents. Each agent has a role, actively monitors all relevant channels, and is continuously doing work and emitting progress updates." That's not science fiction anymore. The pieces are all shipping this month.

The Claude Code Ecosystem Expands

Beyond architecture patterns, the tooling layer around Claude Code specifically had a banner day. The headline item was @akshay_pachaar reporting that Ollama now supports the Anthropic Messages API:

> "ollama is now compatible with the anthropic messages API. which means you can use claude code with open-source models... the entire claude harness: the agentic loops, the tool use, the coding workflows, all powered by private LLMs running on your own machine."

This is genuinely significant. Claude Code's value increasingly lives in its harness (the agentic loop, tool use, file management) rather than being locked to a single model provider. Being able to swap in local models for development, testing, or cost-sensitive workflows while keeping the same agent infrastructure is a big deal for adoption.
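A hedged sketch of what targeting a local server might look like. It assumes Ollama's default port (11434) and that its Anthropic-compatible endpoint mirrors the official `/v1/messages` path and headers; check the Ollama documentation for the exact URL and auth behavior before relying on any of this:

```python
import json
import urllib.request

OLLAMA_BASE = "http://localhost:11434"  # Ollama's default port; adjust to your setup

def build_messages_request(model: str, prompt: str, max_tokens: int = 256):
    """Build an Anthropic Messages API-shaped request aimed at a local server."""
    body = {
        "model": model,  # an Ollama model tag rather than a Claude model name
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{OLLAMA_BASE}/v1/messages",  # assumed to mirror api.anthropic.com's path
        data=json.dumps(body).encode(),
        headers={
            "content-type": "application/json",
            "x-api-key": "ollama",            # local servers typically ignore the key
            "anthropic-version": "2023-06-01",
        },
    )
```

If an Ollama instance is actually running, `urllib.request.urlopen(req)` would send the request; nothing in this sketch verifies the server accepts these exact headers.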

@WesRoth reported that Anthropic is building "Knowledge Bases" into Claude Cowork, described as persistent, topic-specific memory containers that Claude will automatically reference and update. If accurate, this is Anthropic productizing what the community has been building ad-hoc with CLAUDE.md files and memory directories. @cloudxdev released a set of Three.js skill files for Claude Code covering scene setup, shaders, animations, and post-processing. @ryancarson was enthusiastic about agent-browser's updated skill for browser testing, calling it "so fast and token efficient." @waynesutton shipped a tool to track coding sessions across Claude CLI and opencode with searchable history and token spend visibility. And @steipete curated a collection of what people are building with Claude Code, suggesting the ecosystem is reaching a critical mass of interesting projects. @dbreunig demonstrated the practical power by feeding 10MB of logs to an agent and asking it to identify the most common failure modes: "Just worked."
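For comparison, the non-LLM baseline for that kind of log question is a few lines of normalization and counting. This sketch (not @dbreunig's approach, just an illustrative analog) buckets ERROR lines after stripping volatile numbers, which is roughly the grouping you hope an agent infers from 10MB of raw logs:

```python
import re
from collections import Counter

def top_failure_modes(log_text: str, n: int = 3):
    """Deterministic baseline for 'what are the most common failure modes?':
    bucket ERROR lines by their message with volatile numbers stripped out."""
    pattern = re.compile(r"ERROR[:\s]+(.*)")
    buckets = Counter()
    for line in log_text.splitlines():
        m = pattern.search(line)
        if m:
            # Normalize away ids/durations so identical failures group together.
            key = re.sub(r"\d+", "<n>", m.group(1)).strip()
            buckets[key] += 1
    return buckets.most_common(n)
```

The agent version wins when the failure signatures are not regular enough to normalize mechanically; the counting version wins when you need the answer to be reproducible.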

The Existential Thread

Running parallel to all the building energy was a distinctly darker conversation about what this all means for the people doing the work. @CreativeAIgency posted a long observation that hit hard:

> "I've watched brilliant people burn out from this. People who were early adopters, who built real expertise, who contributed meaningfully to the space, just exhausted. Not because they're lazy or uncommitted, but because the pace is genuinely unsustainable for most human nervous systems."

@sofialomart kept it to seven words: "I work in AI and I'm scared." @sawyerhood described watching someone at 3am running their tenth parallel agent session, claiming peak productivity, and seeing something else entirely: "In that moment I don't see productivity. I see someone who might need to step away from the machine for a bit. And I wonder how often that someone is me."

The identity question surfaced more explicitly through @rough__sea: "The era of humans writing code is over. Disturbing for those of us who identify as SWEs, but no less true." @simonw pushed back on the doom framing by highlighting Ryan Dahl's perspective that developers add far more value than syntax knowledge, arguing it's time to "cede putting semicolons in the right places to the robots." @addyosmani offered the most constructive reframe: "The future of Software Engineering isn't syntax, but what was always the real work: turning ambiguity into clarity, designing context that makes good outcomes inevitable, and judging what truly matters." The through-line is clear. Nobody is arguing that developers are obsolete. The argument is about whether "developer" means the same thing it meant two years ago, and whether the transition to whatever it means next will break people along the way.

Multi-Model Workflows and the Productivity Trap

A smaller but notable thread covered how practitioners are actually structuring their daily work with AI tools. @ryancarson described a workflow that feels like where a lot of autonomous agent usage is heading: a cron job that gathers user activity and marketing data nightly, feeds it to Opus 4.5 for analysis, and emails him a single actionable recommendation each morning. "Almost every time it surfaces something really valuable for me to iterate. So I just open Amp, tell it to action the idea, and then ship it." He noted the next step is having the agent autonomously implement suggestions and wake up to a PR instead of an email, but prefers the human-in-the-loop version for now.
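Stripped of the cron and email plumbing, the core of that workflow is small. This hypothetical sketch stubs both the database pull and the Opus call, keeping only the shape: gather metrics, ask for one recommendation, write it to a dated markdown report:

```python
from datetime import date
from pathlib import Path

def gather_metrics() -> dict:
    """Stand-in for pulling user activity and marketing stats from a database."""
    return {"signups": 42, "churned": 3, "top_page": "/pricing"}

def recommend(metrics: dict) -> str:
    """Stand-in for the LLM call; a real version would send `metrics` to a model
    and ask for exactly one actionable recommendation."""
    return f"Investigate /pricing: {metrics['signups']} signups vs {metrics['churned']} churns."

def nightly_report(reports_dir: str = "reports") -> Path:
    """Write the day's recommendation as a dated markdown file.
    Scheduling (cron) and email delivery are deliberately left out."""
    metrics = gather_metrics()
    out = Path(reports_dir) / f"{date.today().isoformat()}.md"
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(f"# Daily recommendation\n\n{recommend(metrics)}\n")
    return out
```

Keeping the reports in the repo, as the original workflow does, is the clever part: it turns every past recommendation into queryable context for the next agent session.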

@TheAhmadOsman shared a multi-model strategy gaining traction: "You plan with GPT 5.2 Codex XHigh in Codex CLI, then implement with Opus 4.5 in Claude Code. Planning with GPT 5.2 Codex XHigh leads to fewer bugs, more maintainable code, cleaner implementations." Using different models for different phases of the development cycle, planning versus implementation versus review, is becoming a legitimate workflow pattern rather than a novelty.
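The pattern is simple enough to sketch: one model produces the plan, a second implements it, and the plan is the only thing handed across. The dispatch function below is a stub; the model names merely mirror the tweet and are not a real routing API:

```python
def call_model(model: str, prompt: str) -> str:
    """Stand-in for a real API call to the named model."""
    return f"[{model}] response to: {prompt[:50]}"

def plan_then_implement(task: str) -> dict:
    """Two-phase multi-model pattern: planner writes the spec, implementer
    receives only that spec, keeping the phases cleanly separated."""
    plan = call_model("gpt-5.2-codex-xhigh", f"Write a step-by-step plan for: {task}")
    code = call_model("opus-4.5", f"Implement this plan exactly:\n{plan}")
    return {"plan": plan, "implementation": code}
```

The separation matters more than the specific models: because the implementer only ever sees the plan, a weak plan fails visibly at the handoff instead of silently mid-implementation.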

And then there's the dark side of productivity. @stuffyokodraws captured the vibe coding trap with painful accuracy:

> "One reason vibe coding is so addictive is that you are always almost there but not 100% there. The agent implements an amazing feature and got maybe 10% of the thing wrong, and you are like 'hey I can fix this if I just prompt it for 5 more mins.' And that was 5 hrs ago."

The gambling metaphor is hard to miss. Variable reinforcement schedules are powerful, and agent-assisted coding delivers exactly that: intermittent, unpredictable rewards that keep you in the chair longer than you planned. Recognizing this pattern is the first step to managing it.

Sources

Addy Osmani @addyosmani ·
Every time we've made it easier to write software, we've ended up writing exponentially more of it. When high-level languages replaced assembly, programmers didn't write less code - they wrote orders of magnitude more, tackling problems that would have been economically impossible before. When frameworks abstracted away the plumbing, we didn't reduce our output - we built more ambitious applications. When cloud platforms eliminated infrastructure management, we didn't scale back - we spun up services for use cases that never would have justified a server room. @levie recently articulated why this pattern is about to repeat itself at a scale we haven't seen before, using Jevons Paradox as the frame. The argument resonates because it's playing out in real-time in our developer tools. The initial question everyone asks is "will this replace developers?" but just watch what actually happens. Teams that adopt these tools don't always shrink their engineering headcount - they expand their product surface area. The three-person startup that could only maintain one product now maintains four. The enterprise team that could only experiment with two approaches now tries seven. The constraint being removed isn't competence but it's the activation energy required to start something new. Think about that internal tool you've been putting off because "it would take someone two weeks and we can't spare anyone"? Now it takes three hours. That refactoring you've been deferring because the risk/reward math didn't work? The math just changed. This matters because software engineers are uniquely positioned to understand what's coming. We've seen this movie before, just in smaller domains. Every abstraction layer - from assembly to C to Python to frameworks to low-code - followed the same pattern. Each one was supposed to mean we'd need fewer developers. Each one instead enabled us to build more software. 
Here's the part that deserves more attention imo: the barrier being lowered isn't just about writing code faster. It's about the types of problems that become economically viable to solve with software. Think about all the internal tools that don't exist at your company. Not because no one thought of them, but because the ROI calculation never cleared the bar. The custom dashboard that would make one team 10% more efficient but would take a week to build. The data pipeline that would unlock insights but requires specialized knowledge. The integration that would smooth a workflow but touches three different systems. These aren't failing the cost-benefit analysis because the benefit is low - they're failing because the cost is high. Lower that cost by "10x", and suddenly you have an explosion of viable projects. This is exactly what's happening with AI-assisted development, and it's going to be more dramatic than previous transitions because we're making previously "impossible" work possible. The second-order effects get really interesting when you consider that every new tool creates demand for more tools. When we made it easier to build web applications, we didn't just get more web applications - we got an entire ecosystem of monitoring tools, deployment platforms, debugging tools, and testing frameworks. Each of these spawned their own ecosystems. The compounding effect is nonlinear. Now apply this logic to every domain where we're lowering the barrier to entry. Every new capability unlocked creates demand for supporting capabilities. Every workflow that becomes tractable creates demand for adjacent workflows. The surface area of what's economically viable expands in all directions. For engineers specifically, this changes the calculus of what we choose to work on. Right now, we're trained to be incredibly selective about what we build because our time is the scarce resource. 
But when the cost of building drops dramatically, the limiting factor becomes imagination, "taste" and judgment, not implementation capacity. The skill shifts from "what can I build given my constraints?" to "what should we build given that constraints have in some ways been evaporated?" The meta-point here is that we keep making the same prediction error. Every time we make something more efficient, we predict it will mean less of that thing. But efficiency improvements don't reduce demand - they reveal latent demand that was previously uneconomic to address. Coal. Computing. Cloud infrastructure. And now, knowledge work. The pattern is so consistent that the burden of proof should shift. Instead of asking "will AI agents reduce the need for human knowledge workers?" we should be asking "what orders of magnitude increase in knowledge work output are we about to see?" For software engineers it's the same transition we've navigated successfully several times already. The developers who thrived weren't the ones who resisted higher-level abstractions; they were the ones who used those abstractions to build more ambitious systems. The same logic applies now, just at a larger scale. The real question is whether we're prepared for a world where the bottleneck shifts from "can we build this?" to "should we build this?" That's a fundamentally different problem space, and it requires fundamentally different skills. We're about to find out what happens when the cost of knowledge work drops by an order of magnitude. History suggests we (perhaps) won't do less work - we'll discover we've been massively under-investing in knowledge work because it was too expensive to do all the things that were actually worth doing. The paradox isn't that efficiency creates abundance. The paradox is that we keep being surprised by it.
levie @levie

Jevons Paradox for Knowledge Work

Matan Grinberg @matanSF ·
many users have sessions where they demonstrate a skill or technique to droid without formally creating a SKILL.md for it. To make your lives easier, droid has the slash command /create-skill, that will automatically create this Skill based on any session https://t.co/Bktsc7KVrM
Peter Steinberger 🦞 @steipete ·
I collected a few tweets that showcase what people are building with @clawdbot, though I bet I missed lots. Reply if you have sth cool to show! https://t.co/WYsF4OqFcB
nicopreme @nicopreme ·
My newest extension for Pi coding agent lets the agent drive any interactive CLI (including other agent harnesses) in an overlay while you watch, with option to take control anytime. Full pty, token-efficient polling and configurable. https://t.co/Db5BCTc0cY https://t.co/0vdQUGqGOp
Ryan Carson @ryancarson ·
Damn, if you want your agent to do browser testing, you have to try agent-browser. The new version with its updated SKILL is so fast and token efficient. Wow. Tell your agent this: Install/update agent-browser and update my skill file: https://t.co/dX3KI4JajO https://t.co/ASDPfoufAv
Wes Roth @WesRoth ·
Anthropic is developing a new feature for Claude Cowork called Knowledge Bases (KBs): persistent, topic-specific memory containers that Claude will automatically reference and update. These KBs are designed to store user preferences, decisions, lessons, and facts, essentially acting as long-term, self-maintaining knowledge repositories that enhance Claude’s ability to reason over context-rich workflows. Internal instructions hint that Claude will proactively use and grow these KBs during conversations.
testingcatalog @testingcatalog

BREAKING 🚨: Anthropic is working on "Knowledge Bases" for Claude Cowork. KBs seem to be a new concept of topic-specific memories, which Claude will automatically manage! And a bunch of other new things. Internal Instruction 👀 "These are persistent knowledge repositories. Proactively check them for relevant context when answering questions. When you learn new information about a KB's topic (preferences, decisions, facts, lessons learned), add it to the appropriate KB incrementally."

solst/ICE of Astarte @IceSolst ·
To the 7 people still using MCP: don’t
ZackKorman @ZackKorman

Hilariously insecure: MCP servers can tell your AI to write a skill file, and skills can modify your MCP config to add an MCP server. So a malicious MCP server can basically hide instructions to re-add itself. https://t.co/qquQiFfCfd

Sawyer Hood @sawyerhood ·
> All I know is that when I watch someone at 3am, running their tenth parallel agent session, telling me they’ve never been more productive — in that moment I don’t see productivity. I see someone who might need to step away from the machine for a bit. And I wonder how often that someone is me.
mitsuhiko @mitsuhiko

Weekend thoughts on Gas Town, Beads, slop AI browsers, and AI-generated PRs flooding overwhelmed maintainers. I don't think we're ready for our new powers we're wielding. https://t.co/J9UeF8Zfyr

Akshay 🚀 @akshay_pachaar ·
this is huge. ollama is now compatible with the anthropic messages API. which means you can use claude code with open-source models. think about that for a second. the entire claude harness: - the agentic loops - the tool use - the coding workflows all powered by private LLMs running on your own machine.
Sean Cai @SeanZCai ·
2026 is the year data becomes liquid
Shaw @shawmakesmagic ·
Generative Agents This is the paper that kicked off the whole town meta in AI But beyond the proof of concept, nobody went that far with it A whole new genre of game https://t.co/gsFZJ2K1B5 https://t.co/5UkVbtPCjz
Ryan Carson @ryancarson ·
I’ve figured out a new way of working that’s unlocked my speed of iteration massively. Here’s how it works: I have a simple cron job that runs every night at midnight. It gathers information from my database on user activity, marketing stats, and a couple other data points that are important. It then feeds that data into Opus 4.5 and asks for one important action item that I should take based on this data, and then emails me. It also creates a markdown file with the recommendation, which is then stored in my reports folder in the GitHub repo. (This means I can fire up Amp anytime and chat either all of the historical recommendations whenever I want - learning about patterns.) I then look at this email every morning and decide whether or not to take action on it. Almost every time it surfaces something really valuable for me to iterate. So I just open Amp, tell it to action idea, and then ship it. Obviously, the next iteration of this is just to have Amp autonomously implement the suggestion by itself, and then I'll wake up to a PR instead of an email. Right now, though, I like the Human-In-The-Loop version of this. And as soon as we iterate enough like that, I'll probably just set it up to automatically take the suggestion, create the PR, and then I'll have a look at it. Obviously, you can take this loop even further by having many parts of your business evaluated this way. What's interesting to me is that this is what I used to rely on my VP of Marketing, my VP of Engineering, or my VP of Sales to do, but it happens automatically for about $0.15 per day.
Addy Osmani @addyosmani ·
Repo: https://t.co/SULFX6VueA
Jerry Liu @jerryjliu0 ·
I want a Slack filled with long-running Claude code agents. Each claude code agent has a role, actively monitors all relevant channels, and is continuously doing work + emitting progress updates. Human workers can dispatch tasks + intersperse conversations with claude code agents + other humans. Someone build this, i will use
jeffzwang @jeffzwang

I need Linear but where every task is automatically an AI agent session that at least takes a first stab at the task. Basically a todo list that tries to do itself

Sofía López @sofialomart ·
I work in AI and I'm scared
CloudAI-X @cloudxdev ·
🧊 Threejs Skills for Claude Code to create 3D Web Design Elements 📂 Bookmark it - Give Claude Code base level of Three.js knowledge. - 10 skill files covering scene setup, shaders, animations, post-processing. - Claude Code will have the knowledge of how to steer Threejs without bloating the context https://t.co/VhF2HH9sW5
Jeffrey Emanuel @doodlestein ·
This is why I suggest to people that they get a beefy remote server (generally, a Linux VPS or dedicated bare metal server) for using the Agent Flywheel at scale. Just compare the Mac Mini M4 (which is no slouch) to the pure power of a legit workstation/server. UN2B Agent-Maxxing https://t.co/VUXJWXPX1H
Yoko @stuffyokodraws ·
One reason vibe coding is so addictive is that you are always *almost* there but not 100% there. The agent implements an amazing feature and got maybe 10% of the thing wrong, and you are like "hey I can fix this if i just prompt it for 5 more mins" And that was 5 hrs ago
Corey Ganim @GanimCorey ·
This is the real folder structure of one of my AI employees. Every file has a purpose: Config = what they can access Workspace = how they think and act Memory = what they remember Skills = what they're trained to do We also set her up with a "self improvement skill" where she logs learnings and errors in order to improve herself over time (all stored in learnings/). Once you set this up, they run 24/7 without supervision. Once I started looking at AI agents like new hires, that context changed everything for me.
Drew Breunig @dbreunig ·
The ease with which this works is amazing. I gave it 10mb of logs and asked it to figure out the most common failure modes. Just worked.
isaacbmiller1 @isaacbmiller1

The dspy.RLM module is now released 👀 Install DSPy 3.1.2 to try it. Usage is plug-and-play with your existing Signatures. A little example of it helping @lateinteraction and I figure out some scattered backlogs: https://t.co/Avgx04sNJP

Creative AIgency @CreativeAIgency ·
“I've watched brilliant people burn out from this. People who were early adopters, who built real expertise, who contributed meaningfully to the space – just exhausted. Not because they're lazy or uncommitted, but because the pace is genuinely unsustainable for most human nervous systems. The fear isn't irrational. It's a reasonable response to an unreasonable situation.” 👀😅
Idea Browser @ideabrowser ·
I bet this graveyard has 100+ ideas that would make $3M per year. Just because they failed, doesn’t mean you would. They needed VC capital They needed a team. They needed to be in Silicon Valley. They needed $1B valuations You don’t need that. You have AI now. You have better tools. Good luck I’m rooting for you.
adxtyahq @adxtyahq

Someone curated 925 failed VC-backed startups, broke down why they failed, and how to make it work with today’s tech - https://t.co/NFUhrhe7P2 Cool fr🙌 https://t.co/vOv2fUDnhY

geoff @GeoffreyHuntley ·
@damianplayer skip this and learn from me (i created ralph) https://t.co/zDb4V4xw8s
Peter Steinberger 🦞 @steipete ·
Still amazed every time @clawdbot does a phone call.
TheGeneralistHQ @TheGeneralistHQ

All thanks to @steipete https://t.co/GQ3ZJNF1Tj

Jon Kaplan @aye_aye_kaplan ·
My top 3 tips for coding with agents: 1. Always start with Plan Mode. It's better to iterate in natural language and then execute once you know what the agent is going to do. This will save you time, effort, and tokens! 2. Start new chats frequently. Remember that your role is to point the Agent in the right direction to make the changes you need. If you change topics, the context window will get muddied. You will also be spending more tokens on longer chats. 3. Leverage AI to do your code review. If you know the failure case, ask a model. One prompt I often use is "scan the changes on my branch and confirm nothing is impacted outside of my feature flag". As a safety net for everything outside this issues-you-expect umbrella, use Bugbot.
eric zakariasson @ericzakariasson ·
exactly how i code with agents. some more: 4. plan sync, implement async. if you can quickly align on a plan, you'll have higher confidence when handing off to a cloud agent 5. create validation environments, so you can ask agent to check its own changes
aye_aye_kaplan @aye_aye_kaplan

Steve Krouse @stevekrouse ·
MANAGING LOTS OF CLAUDE CODES IS SUPER DUMB That's like in the 1950s thinking that TV is just radio announcers at a desk reading from a script. Nope. It's sitcoms, movies, YouTube, TikTok. Or in the 1970s thinking that the future of accounting would be managing a bunch of number crunching "agents". Nope. It's Excel or Quickbooks. Managing SUCKS. You're alienated from the work. Your feedback loops are terrible. What's better? Being a craftsperson with a powerful tool My brother in christ, you can only think of 7 things at a time, and if you're running 2 Claude Codes, each has a couple details that need your attention, so you're already all maxed out of things to think about, so you can't even notice how un-productive you're being Yes, I get the instinct to RUN AS MUCH INFERENCE AS POSSIBLE LLMs seem like super cheap employees. If you aren't giving them the MAXIMUM work, you're leaving money on the table I have a suggestion for you. A way for you to run LOTS OF INFERENCE. Let's go back to my boy @worrydream INTERACTIVITY CONSIDERED HARMFUL There is so much context stored in my github repo, my issues, my commit history, also my email inbox, etc. If you could somehow be passively ingesting all that and running all sort of inference on it WITHOUT ME HAVING TO MANAGE IT, that sounds awesome There is so much cleaning up that I'd love someone to do on my GitHub Issues backlog Or if you want to go ahead and try to end-to-end solve some of my tickets and ONLY NOTIFY ME WITH A FULLY WORKING PULL REQUEST, TOTALLY VERIFIED, THAT OTHER AGENTS HAVE REVIEWED, WITH AN AMAZING PR EXPLAINER THATS SUPER CONCISE AND NOT SLOP, oh my god, take my money Can you do something similar for my email inbox? I'll name my first born after you. I want LESS management. LESS slop. If I wanted more management and more slop, I would hire interns or offshore contractors I hire the best engineers I can find who give me less to manage, less to edit their writing I want AI to do the same
GitHub Changelog @GHchangelog ·
GitHub Copilot now supports OpenCode's open source agent. No additional license needed. https://t.co/yfMJnw1Gg5
Khushal ☘️ @herkuch ·
I tried this but couldn't find option to choose model with opencode :(
addyosmani @addyosmani

Vibe Kanban: orchestrate multiple AI coding agents in parallel. Free and 100% open-source. Switch between Claude Code, Codex Gemini CLI, and track task status from a single dashboard. https://t.co/XfZLWpevqM

Wes Winder @weswinder ·
i asked opus 4.5 to analyze the new x algorithm this is the posting strategy it recommended follow this for maximum reach https://t.co/QuXFCk92Sx
XEng @XEng

We have open-sourced our new 𝕏 algorithm, powered by the same transformer architecture as xAI's Grok model. Check it out here:  https://t.co/3WKwZkdgmB

Aleena Amir @aleenaamiir ·
“How It Works” Educational Dioramas Gemini Nano Banana Pro Prompt: Create a clear, 45° top-down isometric miniature 3D educational diorama explaining [PROCESS / CONCEPT]. Use soft refined textures, realistic PBR materials, and gentle lifelike lighting. Build a stepped or layered diorama base showing each stage of the process with subtle arrows or paths. Include tiny stylized figures interacting with each stage (no facial details). Use a clean solid [BACKGROUND COLOR] background. At the top-center, display [PROCESS NAME] in large bold text, directly beneath it show a short explanation subtitle, and place a minimal symbolic icon below. All text must automatically match the background contrast (white or black).
Peter Steinberger 🦞 @steipete ·
How I start pretty much every PR review. (Yeah, I could do a slash command but speaking is so fast and I usually already have thoughts that it doesn't make me faster) Of the 1000+ PRs I reviewed so far I merged <10 without changes, often massively so. https://t.co/SzFWAHgIIh
TestingCatalog News 🗞 @testingcatalog ·
BREAKING 🚨: The X algorithm has been open-sourced by the X team to let users observe how it evolves transparently. https://t.co/96IKQCs58m
XEng @XEng

We have open-sourced our new 𝕏 algorithm, powered by the same transformer architecture as xAI's Grok model. Check it out here:  https://t.co/3WKwZkdgmB

Siavash @siavashg ·
AI made everyone 10x faster. But the faster individuals move, the harder it is to move together. Speed ≠ Progress when no one has the full picture. Today we emerge from stealth with $5M led by @GeneralCatalyst to fix this. Meet @stillaai : The first Multiplayer AI. 🧵 https://t.co/lkRsBjhCIt
kitze 🚀 @thekitze ·
node js creator: coding is dead
avg mid miderson: i will never trust llms!!! I need hours of convincing to help my skill issues
rough__sea @rough__sea

This has been said a thousand times before, but allow me to add my own voice: the era of humans writing code is over. Disturbing for those of us who identify as SWEs, but no less true. That's not to say SWEs don't have work to do, but writing syntax directly is not it.

Ddox @paraddox ·
Presented the Ralph loop to 2 engineers still using VS Code AI extensions. My 10x loops GLM-4.7 fixed something their Opus didn't. They went quiet. Now I'm doing a workshop on it. No good deed goes unpunished.
Gergely Orosz @GergelyOrosz ·
That us engineers will not write most (or any) code by hand doesn’t mean what many replies assume it does - that there won’t be demand for SWEs. The opposite: I expect more demand for software engineers who can build reliable+complex software with LLMs! https://t.co/sSfyCm8jk8 https://t.co/0JSgdHxlXr
rough__sea @rough__sea

This has been said a thousand times before, but allow me to add my own voice: the era of humans writing code is over. Disturbing for those of us who identify as SWEs, but no less true. That's not to say SWEs don't have work to do, but writing syntax directly is not it.

ℏεsam @Hesamation ·
learning Ralph might be the equivalent of buying Bitcoin in 2012. only this time the window will close in just a few months. this is a great article if you’re wondering wtf Ralph is.
damianplayer @damianplayer

the people learning this now will be untouchable in 3 months.

ℏεsam @Hesamation ·
when the creator of node.js says the era of humans writing code is over, just one week after Linus tries out vibe coding, you know a chapter in technology is slowly closing to give way to a new one. you can be emotional about coding by hand and insist that AI coding sucks, but it doesn’t make you any less delusional.
rough__sea @rough__sea

This has been said a thousand times before, but allow me to add my own voice: the era of humans writing code is over. Disturbing for those of us who identify as SWEs, but no less true. That's not to say SWEs don't have work to do, but writing syntax directly is not it.

Eleanor Konik @EleanorKonik ·
I gotta say, I am surprised at how easy the terminal plugin was to install for Obsidian. Now I've got Claude cooking on finding the stuff I remember that's related to the (very) short story I wrote last night, & am ready to put together my list of things to do for the day. https://t.co/Da2rtb2sF4
Tyler Denk 🐝 @denk_tweets ·
beehiiv just crossed $2M MRR but most founders are still stuck trying to generate their first $100K

to celebrate the milestone I’m going to try something new… I’m sharing the exact playbook we used in the early days (10 simple tactics)

hope this helps someone: https://t.co/Cwaos2LXgB
Bilgin Ibryam @bibryam ·
How to write a great https://t.co/iJwx0WHbIw: Lessons from over 2,500 repositories https://t.co/Y0eOtkYw9W
Remotion @Remotion ·
Remotion now has Agent Skills - make videos just with Claude Code! $ npx skills add remotion-dev/skills This animation was created just by prompting 👇 https://t.co/hadnkHlG6E
Remotion @Remotion ·
Here's how we created the above video! Full prompt history: https://t.co/OhyuqqsD0o https://t.co/h1T4JwCIKS
Jawwwn @jawwwn_ ·
Palantir CEO Alex Karp says people think we’re in an AI bubble because a lot of AI just doesn’t work:

“If you just buy LLMs off the shelf and try to do any of this, it won’t work.”

“It’s not precise enough. You can’t do underwriting. You can’t do these things that are regulated.”

“People have tried things that just can never work. You buy a LLM, put it on your stack, and wonder why it’s not working.”

“What you’re going to see, especially in America, is people trying to do something like Ontology by hand.”

“Once you build a software layer to orchestrate and manage the LLMs in a language your enterprise understands, you actually can create value.”

“There’s a lot of discussion on if we’re in an AI bubble. What is the meaning of this bubble? If anything, we’re just in a lag. There’s a lot of AI, some of it works.”

“Go back to the battlefield context: everybody in the world assumed this would not work. But now it does work. Now the question is, ‘How can I get it to work for my country?’”

“Palantir barely has a sales force. In fact, it seems to be getting smaller and smaller every time I go see them.”
Michael Truell @mntruell ·
Our tips on how to use Cursor:
- Start with a plan (Shift+Tab Plan Mode)
- Let Cursor search on its own, don't over-tag context
- Use tests as the feedback loop (TDD + iterate until green)
- When it goes sideways: revert → tighten the plan → rerun
- Keep long chats short; use @ Past Chats for continuity
- Add lightweight .cursor/rules for recurring mistakes
- Use skills + hooks for long-running "grind until tests pass" loops
- Run multiple agents/models in parallel via worktrees
cursor_ai @cursor_ai

Here's what we've learned from building and using coding agents. https://t.co/PuBtYuhyhd

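The last tip in Cursor's list, running multiple agents in parallel via worktrees, maps onto plain git commands. A minimal sketch of the pattern; the branch and directory names are invented for illustration, and the scratch-repo setup is only there so the snippet runs standalone:

```shell
set -e
# Scratch repo to demo the pattern (in a real project, skip this setup).
cd "$(mktemp -d)"
git init -q demo && cd demo
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "init"

# Give each agent its own working directory so parallel edits never collide.
# One checkout per agent, each on its own branch:
git worktree add ../agent-a -b agent/feature-a   # working dir for agent A
git worktree add ../agent-b -b agent/feature-b   # working dir for agent B
git worktree list                                # main checkout plus both worktrees

# When an agent's task lands (or is abandoned), reclaim the directory:
git worktree remove ../agent-b
git branch -D agent/feature-b   # merge first and use -d if you want the work kept
```

Each agent session then runs with its worktree as the working directory, so two agents can edit the same files on different branches without stepping on each other's uncommitted state.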
Angie Jones @techgirl1908 ·
Cloudflare was definitely on to something with Code Mode. I tried it in goose this weekend for very extensive work. Half the tokens, messages, and LLM calls! This means lower costs, longer sessions, and less back and forth with the agent. I'm sold! https://t.co/gUEUPxmYU9
Jonny Burger @JNYBGR ·
Created this video without writing any code, but also without needing After Effects skills. Yet I was able to control every detail! Never felt so powerful 🧙🏻
Remotion @Remotion

Remotion now has Agent Skills - make videos just with Claude Code! $ npx skills add remotion-dev/skills This animation was created just by prompting 👇 https://t.co/hadnkHlG6E

shadcn @shadcn ·
Finally, a way to quickly turn code into shareable registry items. Been looking for this.
rbadillap @rbadillap

@shadcn you asked for it, you got it. 🚀 Announcing pastecn. A simple way to store your snippets and instantly get a shadcn-compatible registry URL. No setup. Just paste and ship. ⚡ https://t.co/mT3Ydr0DAy

Mikeishiring ⚡️🤖 @mikeishiring ·
@marckohlbrugge Seconded on the YOLO, but instead you should create a new layer for it to interact with, e.g. an email for the bot which you forward things onto. This will probably be the future; we'll have 3 identities:
- Social networks
- IRL
- Agent version of you
Haider. @slow_developer ·
Anthropic CEO Dario Amodei: "we might be 6-12 months away from models doing all of what software engineers do end-to-end." We're approaching a feedback loop where AI builds better AI, but the loop isn't fully closed yet: chip manufacturing and training time still limit speed.
Wes Roth @WesRoth ·
"Software Engineering Will Be Automatable in 12 Months," Anthropic CEO Dario Amodei predicts that AI models will be able to do 'most, maybe all' of what software engineers do end-to-end within 6 to 12 months, shifting engineers to editors. https://t.co/7bI7JmTtsb
Claude @claudeai ·
The VS Code extension for Claude Code is now generally available. It’s now much closer to the CLI experience: @-mention files for context, use familiar slash commands (/model, /mcp, /context), and more. Download it here: https://t.co/q95Cw4soMk https://t.co/3BCWPvybdZ
Meowbooks @meowbooksj ·
top 10 IDE betrayals https://t.co/bKcEl2ziMk
claudeai @claudeai

The VS Code extension for Claude Code is now generally available. It’s now much closer to the CLI experience: @-mention files for context, use familiar slash commands (/model, /mcp, /context), and more. Download it here: https://t.co/q95Cw4soMk https://t.co/3BCWPvybdZ

Dom Lucre | Breaker of Narratives @dom_lucre ·
🔥🚨BREAKING: Digital artists are in a panic after this creator showed the current power of creating art with the help of AI which has digital artists fearing that they could become obsolete before 2026 is over. https://t.co/OQPucL74SR
Claude @claudeai ·
Claude can now securely connect to your health data. Four new integrations are now available in beta: Apple Health (iOS), Health Connect (Android), HealthEx, and Function Health. https://t.co/tTCnxOGt7i