AI Digest.

Claude Code Ships Task Management and Multi-Agent Swarms as Skills Ecosystem Hits Critical Mass

Claude Code's new Tasks system and swarm capabilities signal the end of community workarounds like Ralph Wiggum, while the skills ecosystem reaches critical mass with contributions from Vercel, Supabase, and Exa in a single day. MagicPath launches Figma Connect for pixel-perfect design-to-code, and Alibaba open-sources Qwen3-TTS across 10 languages.

Daily Wrap-Up

Today's feed painted a clear picture of where AI-assisted development is heading: the tools are becoming self-managing. Claude Code shipped native task management and multi-agent swarm capabilities, effectively replacing community-built workarounds with first-party features. At the same time, the skills ecosystem experienced a Cambrian explosion, with Vercel, Supabase, and Exa all shipping skills within the same news cycle. The convergence of these two trends points toward a future where developers spend more time directing agents and less time babysitting them.

The other major thread was the race to give AI agents persistent, full-fidelity computing environments. Martin Casado championed the Sprite model of containerized AI workspaces, Microsoft shipped the GitHub Copilot SDK for embedding agentic loops anywhere, and Palantir's AgentOS documentation started making the rounds. The infrastructure layer for autonomous agents is being built in real time by multiple well-funded players, and it's happening faster than most people realize. Meanwhile, WSJ ran a feature about people getting "Claude-pilled," which has to be the first time a major newspaper has used that particular construction.

The most entertaining moment was @thdxr's perfect distillation of the recursion: "first we had LLMs, put it in a loop and call it an agent, put that in a loop and call it ralph. guys i think i know what's next." The most practical takeaway for developers: invest time learning the Claude Code skills system and start writing project-specific skills. As @rauchg noted, the return on effort for a well-crafted skill far exceeds that of MCPs, and the ecosystem is moving fast enough that early adopters will have a meaningful advantage.

Quick Hits

  • @cursor_ai shipped a feature letting agents ask clarifying questions mid-conversation without pausing their work.
  • @NickADobos on Cursor's new activity tracking: "Cursor is trying to get me fired. Now they will know I'm not writing any code."
  • @sdrzn announced Cline integration with ChatGPT subscriptions for unlimited GPT 5.2 access.
  • @iruletheworldmo claims Google is "preparing for AGI" with dedicated roles.
  • @unusual_whales: OpenAI plans to take a cut of customers' AI-aided discoveries, per The Information.
  • @qianl_cs praised OpenAI's Postgres scaling blog, noting write-heavy workloads will be the next pain point.
  • @ExaAILabs launched semantic search over 60M+ companies with structured data on traffic, headcount, and financials, plus a companion Claude skill.
  • @codewithantonio discovered a tool that looks like a SaaS but is actually an open-source npm package: "this is genius, I will be using this in every project going forward."
  • @ShaneLegg (DeepMind co-founder) is hiring a Senior Economist to investigate post-AGI economics, reporting directly to him.
  • @cb_doge shared Elon Musk predicting more robots than people and "amazing abundance."
  • @IterIntellectus listed a dozen simultaneous breakthroughs from self-driving to fusion to CRISPR and concluded "I think we're going to be fine."
  • @crystalsssup generated a 25-slide Stardew Valley-themed business report using Kimi Slides in one shot.
  • @nummanali is switching to Browser Use's new CLI as a primary driver for browser-based agents.
  • @tetsuoai posted a meme of vibe coders watching senior engineers struggle to ship features.
  • @milichab reacted to Claude Code changes: "Insane, open a pull request!"
  • @aulneau and @benjitaylor exchanged links with minimal commentary.
  • @lukebelmar offered the always-insightful "AI is about to get crazy."

Claude Code Gets Tasks and Multi-Agent Swarms

The biggest news of the day landed with relatively little fanfare. @trq212 announced "We're turning Todos into Tasks in Claude Code," and the diffs from @ClaudeCodeLog filled in the details. Claude can now configure spawned Task agents with names, team contexts, and permission modes. More significantly, the ExitPlanMode schema now includes launchSwarm and teammateCount fields, meaning Claude can request spawning a multi-agent swarm to implement an approved plan.
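For illustration, here is a minimal sketch of what an ExitPlanMode payload requesting a swarm might look like. Only the launchSwarm and teammateCount field names come from the diffs; the surrounding structure and the wants_swarm helper are hypothetical:

```python
import json

# Hypothetical shape of an ExitPlanMode tool call requesting a swarm.
# Only "launchSwarm" and "teammateCount" are named in the diffs; the
# rest of the structure is an illustrative guess.
exit_plan_mode_call = {
    "tool": "ExitPlanMode",
    "input": {
        "plan": "1. Add Task schema  2. Wire up swarm spawner  3. Tests",
        "launchSwarm": True,   # ask permission to implement via a swarm
        "teammateCount": 3,    # number of teammate agents to spawn
    },
}

def wants_swarm(call: dict) -> bool:
    """Return True if the tool call requests a multi-agent swarm."""
    inp = call.get("input", {})
    return bool(inp.get("launchSwarm")) and inp.get("teammateCount", 0) > 0

print(wants_swarm(exit_plan_mode_call))          # True
print(json.dumps(exit_plan_mode_call["input"]))  # serialized tool input
```

The point of the schema change is exactly this kind of gate: the harness can inspect the approved plan and decide whether to fan out before any teammate agent is spawned.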

The implications weren't lost on the community. @AlexFinn declared "And just like that Ralph Wiggum is dead," referring to the community pattern of looping Claude Code as an autonomous agent:

> "This is the next step towards Claude being a 24/7 autonomous agent. Lesson from this: spend more time on the planning phase. Have Claude build as many detailed tasks as it can. The more time you spend on this, the more time you'll save later."

@thdxr summed up the recursion with characteristic brevity: "first we had LLMs, put it in a loop and call it an agent, put that in a loop and call it ralph. guys i think i know what's next." And @nayshins posted about "everyone showing off their crazy vibe coded claude orchestrators," underscoring how quickly the community has been building on top of the pattern. The @WSJ, meanwhile, is running features about executives getting "Claude-pilled" after witnessing what they called "a thinking machine of shocking capability." That a major newspaper is covering the cultural phenomenon around a specific AI tool is itself a signal of how much the landscape has shifted.

The Skills Ecosystem Finds Its Groove

If tasks are how Claude Code manages itself, skills are how the community teaches it new tricks. And today the ecosystem hit critical mass. @rauchg reflected on the reception, noting that "the return on effort invested is much greater" compared to MCPs, and that "a skill on how to use a CLI + Claude Code makes your service or library way more attractive." @vercel_dev dropped a one-liner showing how simple skill installation has become: npx skills add anthropics/skills --skill frontend-design.

The contributions came from all directions. @supabase launched Postgres best-practices agent skills. @dom_scholz proposed that the natural UI for skills is a skill tree (the gaming metaphor writes itself). @elithrar is already thinking about discoverability, with npx skills add parsing an index.json of available skills. And @mamagnus00 demonstrated the practical power, using a remotion skill to go from zero to polished product demo videos in five steps.
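A sketch of how that discovery flow could work. The index.json schema and the find_skill helper below are hypothetical, not the actual skills registry format:

```python
import json

# Hypothetical registry index -- the real index.json schema, if and when
# it ships, will likely differ.
INDEX_JSON = """
{
  "skills": [
    {"name": "frontend-design", "repo": "anthropics/skills",
     "description": "Guidance for polished frontend UI work"},
    {"name": "postgres-best-practices", "repo": "supabase/skills",
     "description": "Postgres schema and query guidance"}
  ]
}
"""

def find_skill(index_text: str, query: str) -> list[dict]:
    """Return skills whose name or description mentions the query."""
    index = json.loads(index_text)
    q = query.lower()
    return [s for s in index["skills"]
            if q in s["name"].lower() or q in s["description"].lower()]

for skill in find_skill(INDEX_JSON, "postgres"):
    # An installer could then shell out to:
    #   npx skills add <repo> --skill <name>
    print(f"{skill['repo']} --skill {skill['name']}")
```

Once an index like this exists, search, ranking, and install become one pipeline, which is presumably the appeal of the proposal.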

@RayFernando1337 highlighted the often-overlooked foundation: "The context you build here is powerful for getting high quality output from your agents." Skills work because they encode domain knowledge in a format agents can reliably consume. @doodlestein took this further, describing a workflow of "using skills to improve skills, skills to improve tool use, and then feeding the actual experience in the form of session logs back into the design skill." Whether or not you buy every claim in that thread, the core loop of using agent experience to refine agent instructions is sound engineering practice.
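For readers who haven't written one: a skill is typically a folder containing a SKILL.md whose frontmatter tells the agent what the skill is for and when to load it. The example below is invented for illustration; the name, steps, and commands are placeholders, not a real published skill:

```markdown
---
name: deploy-checklist
description: How to safely deploy this repo. Use when asked to ship or release.
---

# Deploy checklist

1. Run `npm test` and make sure everything passes.
2. Confirm the changelog has an entry for this version.
3. Deploy with `npm run deploy -- --stage production`.
```

Note how the description doubles as a routing hint: the agent reads it to decide whether the skill is relevant, which is why encoding the "when to use this" alongside the "how" matters.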

Agent Runtimes Get Serious Infrastructure

The question of where agents actually run got real attention today. @martin_casado championed the Sprite model: "Basically full linux environments running an AI agent. Full persistent with checkpoints. No need for git. Spin up as many as you want. Just little AI compute gremlins in the cloud." @AniC_dev offered a practical counterpoint, explaining that they tried building on similar infrastructure but found it "too expensive for how much compute you get" with HTTP-only access and Docker headaches, so they wrapped Hetzner VPSs instead.

On the enterprise side, @satyanadella positioned the GitHub Copilot SDK as embedding "the same production-tested runtime behind Copilot CLI" directly into apps, with @github describing it as the "agentic core" made embeddable in a few lines of code. @ashpreetbedi noticed Palantir's AgentOS documentation mirrors many of the same patterns. And @irl_danB tracked the proliferation: "since announcing OpenProse, I've seen four more attempts: first VVM, then Kimi Agent-Flow, then NPC, now lobster shell. I told you the year of the intelligent VM was upon us."

The connecting thread is that agents need more than a chat window. They need filesystems, processes, network access, and persistence. Whether that comes from Sprite, Copilot SDK, wrapped VPSs, or @penberg's AgentFS, the infrastructure layer is being built simultaneously by multiple teams racing toward the same destination.
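As a toy illustration of the checkpoint idea, a minimal sketch in Python. This is not Sprite's, AgentFS's, or any shipping product's API; the Workspace class and its methods are invented, and a real runtime would snapshot processes and network state, not just files:

```python
import shutil
import tempfile
from pathlib import Path

class Workspace:
    """Toy persistent workspace with named, restorable checkpoints."""

    def __init__(self, root: Path):
        self.root = root
        self.checkpoints: dict[str, Path] = {}

    def checkpoint(self, name: str) -> None:
        """Snapshot the whole workspace so the agent can roll back later."""
        snap = Path(tempfile.mkdtemp(prefix=f"ckpt-{name}-"))
        shutil.copytree(self.root, snap, dirs_exist_ok=True)
        self.checkpoints[name] = snap

    def restore(self, name: str) -> None:
        """Discard current state and return to a named checkpoint."""
        shutil.rmtree(self.root)
        shutil.copytree(self.checkpoints[name], self.root)

# An agent edits freely, checkpoints before risky work, rolls back on failure.
ws_root = Path(tempfile.mkdtemp(prefix="agent-ws-"))
(ws_root / "main.py").write_text("print('v1')\n")
ws = Workspace(ws_root)
ws.checkpoint("before-refactor")
(ws_root / "main.py").write_text("broken code")
ws.restore("before-refactor")
print((ws_root / "main.py").read_text())  # back to the v1 contents
```

The "no need for git" claim in the Sprite pitch amounts to making this snapshot/restore loop cheap and ambient rather than something the agent has to orchestrate with commits.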

Figma Connect Bridges Design and Code

@skirano launched Figma Connect for MagicPath across a four-post thread, positioning it as "the best way to turn your Figma designs into code." The workflow is straightforward: connect your Figma account, copy any design with Cmd+L, paste into MagicPath. "Images, typography, colors, and layout are all preserved," with the output becoming an interactive prototype you can edit with AI, share, or export as production code.

The pitch explicitly addresses the MCP fatigue that's been building: "No MCP hell. No plugins. Just copy and paste your designs into MagicPath and turn them into interactive prototypes without compromising your craft." @nityeshaga called the onboarding "straight out of a science fiction movie," adding that "it's bringing design to the vibe coding era." For frontend developers who've been hearing "design to code is solved" for years, the key differentiator here is fidelity: "You spent hours perfecting those pixels in Figma. We care about that. Your precision, plus the magic of MagicPath."

Models, Benchmarks, and a TTS Breakthrough

The model race continues on multiple fronts. @iruletheworldmo claims "openai will drop gpt 5.3 next week and it's a very strong model, much more capable than claude opus, much cheaper, much quicker." Meanwhile, Anthropic published a blog post about their notoriously difficult performance engineering take-home exam. @AnthropicAI explained that Opus 4.5 beat it, forcing a redesign, while conceding that humans keep the overall edge: "Given enough time, humans still outperform current models, but the fastest human solution we've received still remains well beyond what Claude has achieved even with extensive test-time compute."

On the open-source side, @Alibaba_Qwen shipped Qwen3-TTS with five models spanning 0.6B and 1.8B parameters, support for 10 languages, voice cloning, and a state-of-the-art 12Hz tokenizer. They called it "arguably the most disruptive release in open-source TTS yet." Separately, @TheAhmadOsman detailed an impressive knowledge distillation workflow where a 0.6B model went from 36% accuracy on Text2SQL to 74% after distilling from DeepSeek-V3 using a Claude skill as the orchestrator. The takeaway: "You don't need a giant model for every job. You need tiny specialists that understand your world."
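The distillation recipe reduces to a short loop: ask a strong teacher for answers to a small set of seed prompts, then write the pairs out as fine-tuning data for the tiny student. A hedged sketch, with teacher_answer stubbed in place of a real DeepSeek-V3 API call and a generic JSONL shape rather than the thread's exact pipeline:

```python
import json

def teacher_answer(question: str) -> str:
    # Stand-in for the strong teacher model (DeepSeek-V3 in the thread).
    # In practice this is an API call; stubbed here for illustration.
    canned = {
        "Which artists have >1M album sales?":
            "SELECT name FROM artists WHERE album_sales > 1000000;",
    }
    return canned.get(question, "SELECT 1;")

def build_distillation_set(seed_questions: list[str]) -> list[dict]:
    """Generate synthetic (prompt, completion) pairs from the teacher."""
    return [{"prompt": q, "completion": teacher_answer(q)}
            for q in seed_questions]

seeds = ["Which artists have >1M album sales?"]
dataset = build_distillation_set(seeds)

# Emit JSONL that a fine-tuning job for the small student could consume.
jsonl = "\n".join(json.dumps(pair) for pair in dataset)
print(jsonl)
```

The "teacher eval first" step the thread emphasizes would sit just before build_distillation_set: score the teacher on held-out examples, because distillation faithfully amplifies whatever the teacher gets wrong.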

Code Quality at Scale Remains Unsolved

@nayshins articulated what many are feeling: "infinite code currently leads to the choices: 1. infinite review burden, 2. slop. We need to keep experimenting with tools to ease the review burden or we will be buried in slop." This is the uncomfortable truth lurking behind every productivity claim about AI coding tools. @emollick offered a more measured take, noting that "there is definitely an accumulating AI skillset that comes with experience" and that knowledge about model capabilities "changes more gradually and, with enough experience, predictably, than you might expect."

@_coenen provided a compelling case study, sharing a massive isometric pixel art map of NYC built entirely with coding agents: "I didn't write a single line of code." But the follow-up was telling: "Of course no-code doesn't mean no-engineering. This project took a lot more manual labor than I'd hoped!" The gap between "the AI wrote all the code" and "the project was effortless" remains wide, and closing it is the central challenge for the next generation of developer tools.

Sources

Simplifying AI @simplifyinAI ·
Claude Code just got an "App Store" for agents 🤯 A massive new open-source library just dropped with 100+ pre-made agents, skills, and templates that you can install instantly. And it's 100% free to use. https://t.co/2R07ziHNKV
Paul Dix @pauldix ·
Getting agents into a verification loop is the superpower for 2026. Agents will build all the software if you give them the context and tools to verify and iterate. My thoughts on Building the Machine that Builds the Machine:
pauldix @pauldix

Build the machine that builds the machine

am.will @LLMJunky ·
Holy sh*t! This is cracked. I just ran this skill in my repo with the following prompt: 'Make me a flash promo video for CodexSkills that shows installing the skills and then highlights all the skills available.' And it came up with this without any further prompting. 🤯 Are you kidding me?
Remotion @Remotion

Remotion now has Agent Skills - make videos just with Claude Code! $ npx skills add remotion-dev/skills This animation was created just by prompting 👇 https://t.co/hadnkHlG6E

Scott Wu @ScottWu46 ·
Most AI review tools today center around asking an arms-length agent to catch & report potential bugs. This is really valuable! But until we reach the point where you can confidently hit "Merge" on a 5000-line agent PR, you're still bottlenecked on reviewing the code yourself. This will stay true for a while even as the tools get better. Would you rather have an arms-length AI that catches 80% of bugs or an AI-powered review UX that makes *you* 5x faster? Probably the latter since you'd still have to review the whole PR yourself to catch the last 20%. Of course, the best review experience should have both! We built Devin Review with these thoughts in mind. Let us know what you think!
cognition @cognition

Meet Devin Review: a reimagined interface for understanding complex PRs. Code review tools today don’t actually make it easier to read code. Devin Review builds your comprehension and helps you stop slop. Try without an account: https://t.co/Zzu1a3gfKF More below 👇 https://t.co/sYQLjwSk6s

Steve Ruiz @steveruizok ·
The best code review tool I've come up with is asking Claude to reimplement the PR on a new branch in a narratively optimized perfect git history
steveruizok @steveruizok

v1 of my "reimplement this PR using an ideal commit history" command, actually works quite well. "What commits would I have made if I had perfect information about the desired end state?" https://t.co/5S4kCIo8bR

Jake @nayshins ·
This is the core of the infinite software crisis. Infinite code currently leads to the choices:
1. infinite review burden
2. slop
We need to keep experimenting with tools to ease the review burden or we will be buried in slop.
theodormarcu @theodormarcu

I don't think people have fully internalized the implications of autonomous AI software engineering agents yet.

Early adopters have noticed (or at least intuited) something important: as the cost of generating code approaches zero, the bottleneck shifts from writing code to understanding it, verifying it, and catching bugs or security issues before you ship.

Put simply: our capacity to generate code is growing much faster than our capacity to review it.

The good news is that as we get better at building AI coding agents, we also get better at building tools that help us understand, organize, and verify the generated code.

This is why I think Devin Review is indicative of the next generation of SWE agents: we're now moving beyond going from prompt-to-PR or prompt-to-app, and toward automating the other parts of being a SWE (specifically planning and testing).

tetsuo @tetsuoai ·
vibe coders watching senior engineers trying to ship a feature https://t.co/np1klUy4HL
dax @thdxr ·
first we had LLMs
put it in a loop and call it an agent
put that in a loop and call it ralph
guys i think i know what's next
The Wall Street Journal @WSJ ·
They call it getting “Claude-pilled.” It’s the moment software engineers, executives and investors turn their work over to Anthropic’s Claude AI—and then witness a thinking machine of shocking capability, even in an age awash in powerful AI tools. https://t.co/sm2yyLTsev https://t.co/jr1aEIyJv1
Ahmad @TheAhmadOsman ·
INCREDIBLE. Someone on r/LocalLLaMA did an incredibly practical thing. They took a tiny 0.6B model that was trash at a task (Text2SQL), created a knowledge distillation agent with a Claude Code skill, and made the 0.6B model behave like a specialist using 100 examples.

The problem:
> Small Language Models are "generally helpful"
> but specialized tasks are "exact or you die"
> you ask: "Which artists have >1M album sales?"
> the model answers: "check if genre is NULL"

The old way to fix this:
> Finetune the model:
> collect + clean data
> build training pipeline
> tune hparams
> rerun when it's wrong
> accidentally become the unpaid intern of your own experiment

The new way:
> Knowledge distillation via a Claude skill
> use a strong teacher (DeepSeek-V3)
> generate synthetic pairs from a small seed set
> train a tiny student to imitate the teacher on your task
> ship it as GGUF / HF / LoRA
> run it locally

Distillation isn't "creating skill." It's compressing skill.

THE REAL HACK: agent-as-interface. They wrapped the whole distillation loop in an agent "skill":
> picks task type (QA / classification / tool calling / RAG)
> converts messy inputs into clean JSONL
> runs teacher eval first
> kicks off distillation + monitors progress
> packages weights for you to run locally
This is the quiet unlock.

Why "teacher eval first" is elite behavior:
> distillation amplifies competence and incompetence
> if the teacher is wrong, the student learns wrong faster
> garbage in -> efficient garbage out
Adult supervision, but for models.

The run breakdown:
> seed: ~100 raw conversation traces
> teacher (LLM-as-judge): ~80%
> base 0.6B: ~36%
> distilled 0.6B: ~74%
> output: ~2.2GB GGUF
> runs locally with llama.cpp

Before vs after (the entire reason you do this):
> before: wrong tables, wrong logic, nonsense SQL
> after: correct JOINs, GROUP BY, HAVING
> aka "this query actually executes and answers the question"

What this really means (bigger than Text2SQL): you don't need a giant model for every job. You need tiny specialists that understand your world:
> internal schemas
> service / OS logs
> tool outputs
> company-specific workflows

TL;DR
> "fine-tuning is hard" is mostly "the pipeline is annoying"
> distillation skill turns 10–100 examples into a real specialist
> the agent wrapper turns the whole thing into a conversation
> this is how you get practical local SLMs without becoming an MLOps monk

Small & Specialized models:
> High-leverage
> Boringly effective
> Exactly where this is going

The future is local inference: lower latency, fewer secrets leaving the building.
Jeffrey Emanuel @doodlestein ·
I’m living this every day, and let me tell you, things are accelerating very rapidly indeed. Using skills to improve skills, skills to improve tool use, and then feeding the actual experience in the form of session logs (surfaced and searched by my cass tool and /cass skill) back into the design skill for improving the tool interface to make it more natural and intuitive and powerful for the agents. Then taking that revised tool and improving the skill for using that tool, then rinse and repeat. And finding any way I can to squeeze out more token density and agent intuitiveness and ergonomics wherever I can, like porting toon to Rust and seeing how I can add it as an optional output format to every tool’s robot mode. Meanwhile, I’m going over each tool with my extreme optimization skill and applying insane algorithmic lore that Knuth himself probably forgot about already to make things as fast as the metal can go in memory-safe Rust. Now I’m applying this to much bigger and richer targets, not just making small tools for use by agents, but now complex, rich protocols like my Flywheel Connector Protocol, which is practically an alien artifact (same for my process_triage or pt tool, which could cover a dozen PhD theses worth of applied probability), in that it weaves together so many innovative and clever ideas. Skeptical? Check out the spec, it’s all public in my GH. All the “slop callers” have been conspicuously silent about this stuff, I wonder why? And now I’m even starting to build up my own core infrastructure for Rust. Just because certain libraries and ecosystems like Tokio have all the mindshare, doesn’t mean they’re the best, or even particularly good. Design by committee over 10+ years while the language evolves is not a recipe for excellence. But people are content to defer to the experts and then they end up with flawed structured concurrency primitives that forgo all the correctness by design that the academics already solved. 
For instance, check out my asupersync library, which I’m already using to replace all the networking in my other rust tools, for a glimpse at this new clean-room, alien-artifact library future based on all that CS academic research that only a dozen people in the world ever read about. The knowledge is just sitting there and the models have it. But you need to know how to coax it out of them. I will be skipping out on all the Rust politics though! Naysayers can stick to Tokio. At the same time, I’m raiding and pillaging the best libraries available for every language and making clean-room, conformance-assured, heavily-optimized Rust versions. I’m nearly done porting rich, fastapi, fastmcp, and sqlmodel from Python, as well as all of the Charm libraries from Golang (like bubbletea and lipgloss), and even OpenTUI (I’ll have to port OpenCode afterwards just to antagonize Dax for being so nasty to me). These aren’t idle boasts; all of these repos are public and available NOW for your perusal and verification. And I’ve already proven I can do this with my beads_rust project that I made in a few days and which turned 270k lines of Golang into 20k lines of Rust that is 8x faster. Just need a few more days to finish everything and establish correctness and conformance, and then the iterated extreme isomorphic optimization Olympics can begin in earnest, and I can turn all of these libraries into alien artifacts, too. And btw, when I’m done porting all the console formatting related libraries, I’m going to merge them all into an unholy Franken-Library (but don’t worry, it will be super elegant and agent-intuitive). Again, this isn’t some crazy dream. All of this will be completed by early February at the latest. Just watch. AI skeptics in shambles.
daniel_mac8 @daniel_mac8

Humanity's future rest on one key question: https://t.co/mSMlVmEYim

Qwen @Alibaba_Qwen ·
Qwen3-TTS is officially live. We've open-sourced the full family—VoiceDesign, CustomVoice, and Base—bringing high quality to the open community.
- 5 models (0.6B & 1.8B)
- Free-form voice design & cloning
- Support for 10 languages
- SOTA 12Hz tokenizer for high compression
- Full fine-tuning support
- SOTA performance
We believe this is arguably the most disruptive release in open-source TTS yet. Go ahead, break it and build something cool. 🚀 Everything is out now—weights, code, and paper. Enjoy. 🧵
Github: https://t.co/X4CNGRpBAG
Hugging Face: https://t.co/QzshIqzYDU
ModelScope: https://t.co/XaWVuDerZ6
Blog: https://t.co/xPER3lyeb5
Paper: https://t.co/9mi5dFyJza
Hugging Face Demo: https://t.co/cL7AyaMDwM
ModelScope Demo: https://t.co/MYpIeYdYN5
API: https://t.co/lIEikdB6uM
dan @irl_danB ·
the methods are spreading like wildfire now. @steipete, I like this a lot. if you're interested in more, check out https://t.co/9bMOpcGG9F which applies this pattern across all major harnesses.

since announcing OpenProse, I've seen four more attempts: first VVM, then Kimi Agent-Flow, then NPC, now lobster shell. I told you the year of the intelligent VM was upon us; I couldn't have anticipated this type of proliferation in the span of three weeks.
steipete @steipete

We been working on a typed workflow runtime for @clawdbot - composable pipelines with approval gates. Use fewer tokens, have more predictable outcomes. lobster🦞 is the "shell" for your agent. (kudos, @_vgnsh) https://t.co/MY9Tq9hfrU https://t.co/ooSe6VqNsw

Dominik Scholz @dom_scholz ·
The natural UI for skills? A skill tree 🌳 https://t.co/zoQGU37SNb
rauchg @rauchg

In love with this aesthetic https://t.co/pYz1Gn97jD https://t.co/5fvSPHco1k

Andy Coenen @_coenen ·
Of course no-code doesn't mean no-engineering. This project took a lot more manual labor than I'd hoped! I wrote a deep dive on the workflow and some thoughts about the future of AI coding and creativity: https://t.co/RUXK48iPuu
Andy Coenen @_coenen ·
I wanted to share something I built over the last few weeks: https://t.co/QRqMK9CpTR is a massive isometric pixel art map of NYC, built with nano banana and coding agents. I didn't write a single line of code. https://t.co/97nOJPzF0u
Pekka Enberg @penberg ·
Nice use of AgentFS!
vimota @vimota

Agent Sandboxes: A Primer

Satya Nadella @satyanadella ·
A new developer workflow and app paradigm is emerging, with an agentic execution loop at the core. With the GitHub Copilot SDK, you can embed the same production-tested runtime behind Copilot CLI—multi-model, multi-step planning, tools, MCP integration, auth, streaming—directly into your apps. https://t.co/RamJvw2U9D
Guillermo Rauch @rauchg ·
Industry response to https://t.co/pYz1Gn9F9b exceeded my expectations. While I don't think skills are 1:1 to MCPs, it's very obvious that the return on effort invested is much greater. A skill on how to use a CLI + Claude Code makes your service or library way more attractive.
vercel_dev @vercel_dev

Over 4,500 unique agent skills have been added via 𝚗𝚙𝚡 𝚜𝚔𝚒𝚕𝚕𝚜 from major products across the ecosystem:
• @neondatabase
• @remotion
• @stripe
• @expo
• @tinybird
• @supabase
• @better_auth
Find new skills and level up your agents at https://t.co/wcRHxRUm9u

Luke Belmar 👽 @lukebelmar ·
AI is about to get crazy 😳
theworldlabs @theworldlabs

The World API is live. Generate persistent, explorable 3D worlds from text, images, and video. Integrate them directly into your products. https://t.co/oJQwP50A6e

Ethan Mollick @emollick ·
There is definitely an accumulating AI skillset that comes with experience using it. You learn what models can do, how to work with them and when & how they will make mistakes. That knowledge changes more gradually and, with enough experience, predictably, than you might expect.
simonw @simonw

@DavidKPiano "Catching up takes a day, not month" I don't think that's true. I see so many people throwing their hands up saying "I don't get why you have good results from this stuff while I find it impossible to get decent code that works" The difference is I've spent 3+ years with it!

Ashpreet Bedi @ashpreetbedi ·
Did palantir just validate the agent runtime? From the AgentOS docs: https://t.co/ftaJGr6DGD
PalantirTech @PalantirTech

Securing Agents in Production (Agentic Runtime, #1)

Saoud Rizwan @sdrzn ·
Use your ChatGPT subscription to get unlimited GPT 5.2 in Cline! We've optimized for the best results over profit margins, and so don't take the cost cutting measures other tools do. Hoping this partnership with OpenAI makes this more accessible ❤️
cline @cline

Bring your ChatGPT subscription to Cline for inference. We partnered with @OpenAI to let you use your existing subscription. Sign in and access all the models in your subscription. No API keys, flat-rate pricing instead of per-token costs. Here is how to enable this: https://t.co/Plq2qrfxVH

unusual_whales @unusual_whales ·
OpenAI plans to take a cut of customers' AI-aided discoveries, per The Information
Nick Dobos @NickADobos ·
Cursor is trying to get me fired. Now they will know I'm not writing any code https://t.co/kMZy9ADSSg
cursor_ai @cursor_ai

Learn about everything new in 2.4: https://t.co/hNxdhhaPdi

Vercel Developers @vercel_dev ·
One command to level up frontend designs: ▲ ~/ npx skills add anthropics/skills --skill frontend-design
asidorenko_ @asidorenko_

frontend-design skill https://t.co/Tl20xQJZc1

Jake @nayshins ·
everyone showing off their crazy vibe coded claude orchestrators https://t.co/Wwcurx8GJ4
Thariq @trq212 ·
We’re turning Todos into Tasks in Claude Code
Qian Li @qianl_cs ·
Great read on how OpenAI scales Postgres. Impressive work! Look forward to future work/blog post on handling write-heavy workloads, as it'll likely become a huge pain point.
BohanZhangOT @BohanZhangOT

@PostgreSQL has long powered core @OpenAI products like ChatGPT and the API. Over the past year, our production load grew 10× and keeps rising. Today we run a single primary with nearly 50 read replicas in production, delivering low double-digit millisecond p99 client-side latency and five-nines availability. In our latest OpenAI Engineering blog, we unpack the optimizations we made to scale @Azure PostgreSQL to millions of queries per second for more than 800M ChatGPT users. Check out the full post here: https://t.co/VTnxhlwlat

Alex Finn @AlexFinn ·
@trq212 Nobody will be using 'Ralph Wiggum' in a month. Claude will just be able to loop itself. This is clearly step 1 of that
Alex Finn @AlexFinn ·
And just like that Ralph Wiggum is dead.

Claude Code can now create its own project tasks and manage itself. This is the next step towards Claude being a 24/7 autonomous agent.

Lesson from this: spend more time on the planning phase. Have Claude build as many detailed tasks as it can. The more time you spend on this, the more time you'll save later having to prompt Claude, because it will just be able to manage itself for hours.
trq212 @trq212

We’re turning Todos into Tasks in Claude Code

Andrew Milich @milichab ·
Insane, open a pull request! https://t.co/vfxAxNYNMB
BlendiByl @BlendiByl

Been loving IsoCity by @milichab, but one thing was missing - what if I wanted ANY building in my city? So I built this using fal 🏙️ https://t.co/V2kRrFuAnp

ian @shaoruu ·
i've created a command we use internally @cursor_ai called /council:
"teach me how auth works /council n=8"
"can you make sure this plan works /council"
"i'm tired. please debug <bug> n=25"
spins off n (=10 by default) subagents to dig around and explore. install below 🧵
cursor_ai @cursor_ai

Cursor now uses subagents to complete parts of a task in parallel. Subagents lead to faster overall execution and better context usage. They also let agents work on longer-running tasks. Also new: Cursor can generate images, ask clarifying questions, and more. https://t.co/LTsxuaYuoU

Lucas Crespo 📧 @lucas__crespo ·
This is the craziest nano banana + coding agents example I've seen. The entirety of NYC mapped into a massive isometric art https://t.co/k7Wm0oZAMs
_coenen @_coenen

I wanted to share something I built over the last few weeks: https://t.co/QRqMK9CpTR is a massive isometric pixel art map of NYC, built with nano banana and coding agents. I didn't write a single line of code. https://t.co/97nOJPzF0u

Ddox @paraddox ·
This is bigger than it sounds. Claude Code can now:
→ Track dependencies between tasks
→ Coordinate across multiple sessions
→ Let subagents collaborate on the same project
The "unhobbling" era is here. AI agents that can run longer and remember where they left off.
trq212 @trq212

We’re turning Todos into Tasks in Claude Code

Aaron Ng @localghost ·
Got a mac mini for clawdbot. Had a lot of fun setting this up today. Instead of access to my accounts, I gave it:
✅ its own apple account for messages
✅ its own gmail to sign up for stuff
✅ its own github to push code https://t.co/TaXkRVlEtq
Matt Pocock @mattpocockuk ·
Anthropic's Ralph plugin sucks, and you shouldn't use it It defeats the entire purpose of Ralph - to aggressively clear the context window on each task to keep the LLM in the smart zone. Full article here: https://t.co/ssOY9PiPdR https://t.co/O40SrB6d9s
vittorio @IterIntellectus ·
the loss people are feeling is real. identifying with one’s craft has been the hallmark of some of the greatest, and there’s something sacred in mastering a skill. but this is going to hit everyone who found meaning only in labor, and it’s revealing something already broken.

work was supposed to be a means to an end. meaning should come from what you’re working for: family, community, something beyond yourself. somewhere along the way we substituted the tool for the purpose. now the tool is being automated, and there’s nothing underneath for a lot of people.

people should ask themselves “what was the work for?” some will rediscover what matters. some will realize they never built those things. the ones who answer “who are you” with “i’m a father” instead of “i am my job title” won’t even understand what everyone else is panicking about. they built on something that can’t be automated.

if the purpose was real, it’s still there. if it wasn’t, now you know. painful, but not too late
Madisonkanna @Madisonkanna

as a software engineer, i feel a real loss of identity right now. for a long time i defined myself in part by the act of writing code. the pride in a hard-earned solution was part of who i was. now i watch AI accomplish in seconds what took me hours. i find myself caught between relief and mourning, awe and anxiety. the craft that shaped me is suddenly eclipsed by a machine. who am i now?

Numman Ali @nummanali ·
Claude Code's New Task System: The Practical Guide and Explainer
dax @thdxr ·
the author of git ai put together a spec for annotating commits with information about what code is ai generated. need to review deeper, but opencode will probably implement this. we can't have this kind of functionality only exist in proprietary products like cursor blame https://t.co/VUwurEBA6W
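Git already has a native mechanism a spec like this could build on: commit trailers. A hedged sketch of the general shape such annotations might take — the trailer keys below are invented for illustration, and the actual git-ai spec may define different fields:

```
add login flow

Implement the OAuth callback handler.

AI-Generated: src/auth/callback.ts
AI-Model: claude-sonnet-4
AI-Tool: opencode
```

Trailers in this position stay machine-readable via `git interpret-trailers --parse`, so any open client — not just a proprietary one — could reconstruct per-file provenance from history.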
Alex Cheema - e/acc @alexocheema ·
The frontier of local coding has a lot of rough edges, but it works, and the models are super capable. This will only get better. We are going to make local coding the default.
_mjmeyer @_mjmeyer

yaaaaas! got GLM-4.7-Flash 4-bit running on my M3 with @opencode 🚀 crashed my mac 3 times already... and not exactly fast enough to do anything with... still epic that it's possible though 🙌 https://t.co/8XcY7MR3m4

Liad Shababo @L1AD ·
@nummanali Built a task viewer for this. Kanban board with live updates across all sessions. https://t.co/gygvZKbhYT https://t.co/eHAI2yw36b
Sebastian Siemiatkowski @klarnaseb ·
Being "AI native" will mean a complete rebuild of the entire tech stack to run a business. Every tool. Every system. Every workflow. The companies that figure this out first will make everyone else look like they're still running on fax machines.
NetworkChuck @NetworkChuck ·
My server has a phone number now. I can call it from ANYWHERE (even a payphone in the middle of nowhere with zero internet) and I can talk to Claude Code. But that's not the crazy part. My server can call ME. When something breaks, it picks up the phone and tells me about it. Check it out: https://t.co/Jg2qVmOZFO @3CX
Michael Truell @mntruell ·
Excited for skills in Cursor!
cursor_ai @cursor_ai

Agent Skills are now available in Cursor. Skills let agents discover and run specialized prompts and code. https://t.co/aZcOkRhqw8

NetworkChuck @NetworkChuck ·
Introducing Claude-Phone
Jediah Katz @jediahkatz ·
Prompt: https://t.co/GMtTWkHCaT A recent example of how I used this: I was investigating tool call errors with the (amazing) Datadog MCP. I was telling the model which tags to use and correcting it when it made poor queries. When done, I captured it as /investigate-tool-errors.
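For context on what a captured skill ends up looking like: Agent Skills are conventionally a folder containing a `SKILL.md` whose YAML frontmatter (`name`, `description`) the agent scans for discovery. The body below is an invented example of the Datadog workflow described above, not @jediahkatz's actual output:

```markdown
---
name: investigate-tool-errors
description: Debug failing MCP tool calls by querying Datadog with the right tags
---

When investigating tool call errors:

1. Query Datadog filtered by the `service` and `env` tags first.
2. Use narrow time windows; widen only when a query returns nothing.
3. Summarize recurring error signatures before proposing a fix.
```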
Jediah Katz @jediahkatz ·
This is the most important Skill you need in Cursor. "capture-skill" takes what you taught the agent in the current session and saves it for you and your team to use over and over. You should be using this CONSTANTLY! Full prompt included below:
cursor_ai @cursor_ai

Agent Skills are now available in Cursor. Skills let agents discover and run specialized prompts and code. https://t.co/aZcOkRhqw8

Austin Hickam @AustinHickam ·
@NetworkChuck This is awesome! I did something similar for a birthday party https://t.co/Shh0u8NRdF
Claude @claudeai ·
Claude in Excel is now available on Pro plans. Claude now accepts multiple files via drag and drop, avoids overwriting your existing cells, and handles longer sessions with auto compaction. Get started: https://t.co/cAMDXM1h7r https://t.co/yt9Gy2HLY3
Steve Clarke @SevenviewSteve ·
@jediahkatz Love it! I'm constantly doing this but hadn't thought of turning it into a skill itself. Very meta! Added to my growing library of skills https://t.co/bkiivGH1Jp