AI Digest.

Ralph Wiggum Loop Dominates Dev Twitter as Dario Amodei Predicts Full SWE Automation in 12 Months

The Ralph Wiggum autonomous coding loop exploded in popularity, with developers running it 24/7 and comparing early adoption to buying Bitcoin in 2012. Dario Amodei predicted AI models will handle end-to-end software engineering within 6-12 months, sparking heated debate about the future of the profession. Meanwhile, the Claude Code VS Code extension became generally available, and a new skills ecosystem began replacing MCP servers.

Daily Wrap-Up

The Ralph Wiggum autonomous coding loop went from niche technique to the main character of dev Twitter today. Multiple developers shared stories of running loops overnight, presenting them to skeptical colleagues, and fundamentally reshaping their development workflows. The enthusiasm bordered on evangelical, with comparisons to early Bitcoin adoption. What made the discourse interesting was the split between those who see autonomous loops as the future and @stevekrouse's sharp counter-argument that managing multiple agent sessions is a fool's errand. He compared it to 1950s TV producers thinking the medium was just radio with cameras. The tension between "run more inference" and "be a craftsperson with a powerful tool" is going to define how the next generation of developer tooling gets built.

Dario Amodei dropped a bomb by predicting that AI models will automate "most, maybe all" of software engineering within 6-12 months. The Node.js creator apparently echoed similar sentiments the same week Linus tried vibe coding, creating a moment that felt like a generational inflection point. But @GergelyOrosz offered the most grounded take: this doesn't mean less demand for software engineers, it means more demand for engineers who can build reliable, complex software with LLMs. The signal here isn't that coding is dying. It's that the definition of "coding" is changing fast.

The most entertaining moment was @johnpalmer casually dropping "you kinda seem more like a Claude Cowork user (derogatory)" with zero context, perfectly capturing the emerging tribal dynamics in AI tooling. The most practical takeaway for developers: follow @mattpocockuk's hard-won lesson and specify your module boundaries and interfaces upfront before handing work to an agent. Vague instructions produce slop; precise architectural constraints produce working software.

Quick Hits

  • @claudeai announced Claude can now connect to health data via Apple Health, Health Connect, HealthEx, and Function Health integrations in beta.
  • @dom_lucre shared viral AI art demonstrations that have digital artists concerned about obsolescence before 2026 ends.
  • @vasuman dropped a post simply titled "AI Agents 102" for those ready to go beyond the basics.
  • @parcadei asked "WTF is a Context Graph?" and called it the trillion-dollar problem.
  • @meowbooksj shared "top 10 IDE betrayals" which resonated with anyone who's been burned by an autocomplete.
  • @johnpalmer coined the insult "Claude Cowork user (derogatory)" and honestly it stings.
  • @mikeishiring proposed we'll soon have three identities: social networks, IRL, and an agent version of yourself.
  • @bibryam shared lessons from analyzing 2,500+ repositories on how to write a great CLAUDE.md file.
  • @denk_tweets celebrated beehiiv crossing $2M MRR and shared 10 early-stage growth tactics.
  • @EleanorKonik got Claude running in Obsidian's terminal plugin and immediately put it to work finding related notes for fiction writing.
  • @framara launched TuCuento, an interactive storytelling app for parents and kids to create stories together.
  • @JNYBGR created a polished video "without writing any code, but also without needing After Effects skills."
  • @ryanflorence wondered why nothing on the mobile web is animated well, then announced he's buying a course.
  • @aleenaamiir shared an elaborate Gemini Nano prompt for generating educational 3D isometric dioramas.
  • @testingcatalog reported that X open-sourced its recommendation algorithm for public transparency.
  • @weswinder fed the new X algorithm to Opus 4.5 and got back a posting strategy for maximum reach.
  • @herkuch noted trouble selecting models in OpenCode, a pain point as the tool gains adoption.
  • @steipete shared his voice-driven PR review workflow and noted that of 1,000+ PRs reviewed, fewer than 10 merged without changes.
  • @steipete also expressed continued amazement at @clawdbot making actual phone calls.

The Ralph Wiggum Loop Goes Mainstream

The autonomous coding loop created by @GeoffreyHuntley has crossed from power-user trick into full-blown movement. Today's timeline was saturated with developers sharing their experiences running Claude Code in unattended loops, and the testimonials ranged from practical to almost spiritual. The pattern is simple: set up a task, let the agent iterate in a loop with test feedback, and come back to working code. But the cultural moment around it is anything but simple.
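The shape of the loop fits in a few lines of shell. Here is a minimal sketch of the run-verify-repeat cycle; `AGENT_CMD` and `CHECK_CMD` are placeholders (in practice something like a `claude` CLI invocation and your test command), defaulted to no-ops so the sketch runs standalone:

```shell
#!/bin/sh
# Run the agent, then a verification step; stop as soon as verification passes.
# AGENT_CMD and CHECK_CMD are stand-ins for e.g. a claude CLI call and
# `npm test` - defaulted to `true` so this sketch is runnable as-is.
AGENT_CMD="${AGENT_CMD:-true}"
CHECK_CMD="${CHECK_CMD:-true}"
for i in 1 2 3 4 5; do
  echo "=== run $i ==="
  $AGENT_CMD
  if $CHECK_CMD; then
    echo "verified after $i run(s)"
    break
  fi
done
```

The iteration cap and the verification step are the whole trick: without them, a loop like this will happily burn tokens forever.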

@d4m1n captured the addictive quality perfectly: "I now run 1-2 loops 24/7, tweaking, iterating. Before sleep I set off a loop, I wake up 3-5x a night thinking of it with excitement." He called the workflow "extremely unhealthy" but couldn't stop. In a follow-up, he flatly stated that "using Ralph Wiggum loop will put you ahead of 98% of devs."

@Hesamation compared the opportunity to "buying Bitcoin in 2012" and warned the window would close in months. @paraddox described presenting the loop to two engineers still using VS Code AI extensions: "My 10x loops GLM-4.7 fixed something their Opus didn't. They went quiet. Now I'm doing a workshop on it." @mattpocockuk, already a prominent voice in the TypeScript community, declared that after discovering Ralph, traditional agent workflow advice "feels a bit quaint" since "all this advice can be automated away with a few lines in a bash loop."

But @mattpocockuk also provided the day's most important cautionary note: he "got a lot of slop out of Ralph" because he didn't specify module boundaries upfront or request a simple, testable interface. @ctatedev put the productivity gains in concrete terms, claiming he agent-coded a complex networking and orchestration system over a 3-day weekend that "would've taken me 1-2 years solo." Whether you buy the hype or not, the loop is forcing a conversation about what developer workflows look like when execution becomes nearly free.
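The fix is mechanical: write the constraints down before the loop starts. A hypothetical example of the kind of upfront spec @mattpocockuk is describing (the module names and the `check()` interface are invented for illustration), saved where the loop's prompt can reference it:

```shell
#!/bin/sh
# Write module boundaries and a testable interface into the prompt file the
# loop reads. Everything below (module names, the check() signature) is a
# made-up example of the level of specificity that avoids slop.
cat > PROMPT.md <<'EOF'
Build the rate limiter as two modules:
- limiter/core: pure logic, no I/O. Exposes check(key, now) -> allow | deny.
- limiter/store: persistence behind a three-method interface: get, set, expire.
core must not import store directly; the caller injects it.
Every public function gets a unit test. Run the tests before finishing.
EOF
echo "wrote PROMPT.md"
```

Precision here is cheap; re-running a loop that built the wrong architecture is not.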

Agent Workflow Philosophy: Craft vs. Management

While the Ralph loop dominated in volume, the more nuanced conversation was about how developers should actually relate to their AI tools. @stevekrouse delivered the sharpest take of the day, arguing that managing multiple Claude Code instances is fundamentally misguided. "My brother in christ, you can only think of 7 things at a time," he wrote, "and if you're running 2 Claude Codes, each has a couple details that need your attention, so you're already all maxed out." His alternative vision: agents should passively ingest your repo, issues, and email, then "ONLY NOTIFY ME WITH A FULLY WORKING PULL REQUEST, TOTALLY VERIFIED."

On the practical side, @aye_aye_kaplan offered three concrete tips: always start with Plan Mode, start new chats frequently to avoid muddied context, and leverage AI for code review. @ericzakariasson built on this with a key insight: "plan sync, implement async." If you align on a plan quickly, you can hand off to a cloud agent with high confidence. @mntruell from Cursor shared similar philosophy, emphasizing TDD as the feedback loop and reverting when things go sideways rather than trying to steer a derailed session.

@techgirl1908 highlighted Cloudflare's Code Mode in Goose, which cut tokens, messages, and LLM calls in half. The efficiency angle matters because longer productive sessions mean fewer context resets. The emerging consensus: the best agent workflows aren't about running more agents. They're about giving fewer agents better instructions and tighter feedback loops.

Dario Amodei's 12-Month Prediction Splits the Community

Anthropic CEO Dario Amodei predicted that AI models will be able to do "most, maybe all" of what software engineers do end-to-end within 6-12 months. The quote ricocheted across tech Twitter, amplified by @WesRoth and @slow_developer, with reactions splitting predictably between alarm and measured optimism. @slow_developer added useful nuance: "We're approaching a feedback loop where AI builds better AI, but the loop isn't fully closed yet. Chip manufacturing and training time still limit speed."

The Node.js creator's similar comments the same week added fuel. @Hesamation framed it as a generational shift: "when the creator of node.js says the era of humans writing code is over, just one week after Linus tries out vibe coding, you know a chapter in technology is slowly closing." @thekitze was less diplomatic about the holdouts: "node js creator: coding is dead / avg mid miderson: i will never trust llms!!!"

@GergelyOrosz offered the counterweight that mattered most. Rather than seeing automation as displacement, he predicted "more demand for software engineers who can build reliable+complex software with LLMs." The framing shift from "writing code" to "building reliable software" is subtle but important. The skills that matter are moving up the stack: architecture, system design, verification, and the judgment to know when an AI output is wrong. The code itself is becoming the easy part.

Claude Code and the Skills Ecosystem

Anthropic had a productive day. The @claudeai account announced the VS Code extension for Claude Code is now generally available, bringing @-mentions for file context, slash commands like /model and /mcp, and an experience much closer to the CLI. This matters because VS Code is where most developers live, and reducing friction between the editor and the agent is table stakes.

More interesting was the emerging skills ecosystem. @Remotion launched Agent Skills that let developers create videos entirely through Claude Code prompts. @andrewqu called it "SICK" and claimed he "nearly 1 shotted" a launch video. @intellectronica went further, announcing he'd dropped all MCP servers from his setup entirely: "Context7, Tavily, Playwright, all replaced with SKILLs + curl or agent-browser. SKILLs are all you need!" This is a meaningful shift. MCP servers require running processes and managing connections. Skills are just instructions and scripts that the agent can execute directly. If this pattern holds, the integration story for AI coding tools gets dramatically simpler. @GHchangelog also announced GitHub Copilot now supports OpenCode's open source agent with no additional license required, further expanding the agent ecosystem.
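To make the contrast concrete, here is a hypothetical minimal skill: a folder containing a SKILL.md that tells the agent when to use it, plus a plain script, with no server process to run. The layout follows the common SKILL.md-with-frontmatter convention, but the names, paths, and frontmatter fields are illustrative, not official:

```shell
#!/bin/sh
# Install a toy "web-fetch" skill: one metadata file, one script.
# SKILLS_DIR defaults to the conventional ~/.claude/skills location
# (override to install elsewhere); all names here are illustrative.
SKILLS_DIR="${SKILLS_DIR:-$HOME/.claude/skills}"
mkdir -p "$SKILLS_DIR/web-fetch"
cat > "$SKILLS_DIR/web-fetch/SKILL.md" <<'EOF'
---
name: web-fetch
description: Fetch a URL and print its body. Use for live web content.
---
Run `sh fetch.sh <url>` from this skill's directory.
EOF
cat > "$SKILLS_DIR/web-fetch/fetch.sh" <<'EOF'
#!/bin/sh
curl -fsSL "$1"
EOF
echo "installed: $SKILLS_DIR/web-fetch"
```

Compare that with an MCP server, which needs a running process and a connection the client manages; a skill is inert files the agent reads and executes on demand.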

The AI Adoption Gap

While tech Twitter debated which autonomous loop configuration is optimal, the world outside the bubble painted a very different picture. @bwarrn had lunch with a founder who helps Fortune 500 companies adopt AI and came back with a reality check: "Some of the biggest companies on earth use zero AI tools. Not even ChatGPT. Execs only recognize: ChatGPT, Copilot, Gemini (maybe Perplexity). Everyone feels behind. Nobody knows what to buy."

Palantir CEO Alex Karp, as quoted by @jawwwn_, explained why the gap exists: "If you just buy LLMs off the shelf and try to do any of this, it won't work. It's not precise enough. You can't do underwriting." His argument is that you need a software orchestration layer that speaks your enterprise's language before LLMs create real value. The "AI bubble" narrative, in his view, is really just a lag between capability and implementation.

@ideabrowser flipped the perspective entirely, pointing to startup graveyards full of ideas that failed because they "needed VC capital, needed a team, needed to be in Silicon Valley." Today's solo developer with AI tools doesn't need any of that. The gap between what's possible for well-tooled individuals and what large enterprises are actually doing has never been wider, and that gap is where opportunity lives.

Products and Launches

@excalidraw upgraded its text-to-diagram feature with a new chat interface, streaming, and smarter generation. For developers who think in diagrams, this is a meaningful quality-of-life improvement. @shadcn shared enthusiasm for a new tool that turns code into shareable registry items, calling it something he'd been looking for. @siavashg emerged from stealth with Stilla AI, billed as "the first Multiplayer AI," backed by $5M from General Catalyst. The pitch addresses a real problem: AI makes individuals faster, but "the faster individuals move, the harder it is to move together." Whether a startup can solve coordination at the team level when everyone's running their own agent loops remains to be seen, but the problem statement resonates.

Sources

Matt Pocock @mattpocockuk ·
It's crazy how after discovering Ralph this stuff feels a bit quaint. All this advice can be automated away with a few lines in a bash loop.
aye_aye_kaplan @aye_aye_kaplan

My top 3 tips for coding with agents:
1. Always start with Plan Mode. It's better to iterate in natural language and then execute once you know what the agent is going to do. This will save you time, effort, and tokens!
2. Start new chats frequently. Remember that your role is to point the Agent in the right direction to make the changes you need. If you change topics, the context window will get muddied. You will also be spending more tokens on longer chats.
3. Leverage AI to do your code review. If you know the failure case, ask a model. One prompt I often use is "scan the changes on my branch and confirm nothing is impacted outside of my feature flag". As a safety net for everything outside this issues-you-expect umbrella, use Bugbot.

Paco @framara ·
Today I'm launching TuCuento. An interactive storytelling app designed for parents and children to create stories together. Choose characters, make decisions as a team, and shape how the adventure unfolds. Demo video below 👇 Link in replies. https://t.co/u3EeI9q1Dz
Eleanor Berger @intellectronica ·
I am no longer using _any_ MCP servers in my local setup [ @code / @GitHubCopilot, @opencode, @claudeai code ].
・Context7 → SKILL + curl
・Tavily → SKILL + curl
・Playwright → SKILL + agent-browser
SKILLs are all you need!
Ryan Florence @ryanflorence ·
why is nothing on the mobile web animated like this? anyway, i'm going to buy this course
emilkowalski @emilkowalski

You can enroll in my animation course for the next 10 days! It's the perfect way to learn the theory behind great animations, but also how to build them in code. Now with a skill file for agents. We'll cover all of these components and more, source code included. https://t.co/RvK4piO5QQ

Dan ⚡️ @d4m1n ·
Using Ralph Wiggum loop will put you ahead of 98% of devs
Dan ⚡️ @d4m1n ·
Ralph loop is extremely unhealthy. I now run 1-2 loops 24/7, tweaking, iterating. Before sleep I set off a loop, I wake up 3-5x a night thinking of it with excitement. My workflow is getting better, but it's a steep learning curve. I'll share it all when I'm a bit further, there are some things I still don't like. The best thing? I am doing projects I never had time to do and I can't stop. I am tired too 😅 But having something code while you sleep is pretty incredible.
Matt Pocock @mattpocockuk ·
Got a lot of slop out of Ralph today. The reason was, I didn't specify what modules I wanted up front, and that I wanted a simple, testable interface. From the Philosophy of Software Design: https://t.co/jJcGHyH9Nd
John Palmer @johnpalmer ·
you kinda seem more like a Claude Cowork user (derogatory)
Excalidraw @excalidraw ·
We've made text-to-diagram better. Chat interface. Streaming. Smarter. Faster. Stronger. https://t.co/q0xKW3dJTi
dei @parcadei ·
WTF is a Context Graph? A Guide to the Trillion-Dollar Problem
Ben @bwarrn ·
Lunch w/ an exited founder who helps fortune 500 companies adopt AI. Insane reality check: Some of the biggest companies on earth use *zero* AI tools. Not even ChatGPT. Execs only recognize: ChatGPT, Copilot, Gemini (maybe Perplexity). Everyone feels behind. Nobody knows what to buy or how to plug it in. The "AI saturation" narrative is another example of what a bubble Silicon Valley is. Rest of the world hasn't started yet. We have to build for the 99%.
vas @vasuman ·
AI Agents 102
Chris Tate @ctatedev ·
This is where we're at rn: I spent the 3-day weekend agent-coding a complex system: advanced networking, orchestration, caching, bare metal, reverse proxies, custom Linux kernel. This would've taken me 1-2 years solo. And the result might be one of the best in its category.
Andrew Qu @andrewqu ·
Wow this skill by remotion is SICK. I nearly 1 shotted this launch video for https://t.co/kdOWe32i5V @opencode chat transcript down below 👇 https://t.co/eNJkC6HoZt
Remotion @Remotion

Remotion now has Agent Skills - make videos just with Claude Code!
$ npx skills add remotion-dev/skills
This animation was created just by prompting 👇 https://t.co/hadnkHlG6E

Chris McCoy @TheRealMcCoy ·
Fascinating. tl;dr for my crowd: Photonic computing swaps electricity for light to handle the massive number-crunching that makes AI models work, particularly the matrix multiplications needed to train and run large systems like ChatGPT. Light travels extremely fast and can process huge amounts of data all at once through beams spreading out, overlapping, or using different colors (wavelengths), hitting speeds around 100 trillion cycles per second. Recent breakthroughs in top scientific journals show setups where these giant multiplications happen in a single quick pass of light - meaning the time it takes doesn't grow much bigger even when dealing with enormous models or datasets, unlike regular computer chips that slow down as things get larger. This could bring huge jumps in speed and much lower energy use for AI tasks, potentially shifting future computers to rely mainly on light instead of electrical signals.
📙 Alex Hillman @alexhillman ·
I meet a lot of people who don't realize how much valuable paper trail Claude Code creates for itself. Slurping up those session transcripts and parsing them in various ways unlocks:
- memory and recall
- pattern recognition
- self-generating/repairing skills and workflows
And SO MUCH MORE
trq212 @trq212

@souravbhar871 It's all stored locally in your .claude folder, you can ask Claude to read it and create scripts to help visualize it

hampton @hamptonism ·
pov: driving to your $450k swe job knowing it's just another 8 hours of having Claude do everything for you until you're eventually replaced entirely within 12 months, https://t.co/AclKNRZCKP
📙 Alex Hillman @alexhillman ·
@stolinski Add a user message hook that uses bash to check the date and time. Injects it into the session invisible to you but reminds the agent what time it is.
📙 Alex Hillman @alexhillman ·
When I started building my assistant I figured this one out FAST. Claude Code doesn't know what time it is. Or what time zone you are in. So when you do date time operations of ANY kind, as simple as saving something to your calendar, things get weird fast. My early solution has stuck thru every iteration of my JFDI system and it's dummy simple: I use Claude Code hooks to run a bash script that generates current date time, timezone of host device, friendly day of week etc. Injects it silently into context. I never see it but date time issues vanish. 3+ most battle tested. Kinda wild that this isn't baked in @bcherny (thank you for CC btw it changed my life no exaggerating)
stolinski @stolinski

My clawdbot sucks at days and time. It never seems to have any clue what the current day or time is.

abhi @Abhigyawangoo ·
Why your AI agents still don't work
David E. Weekly @dweekly ·
@bwarrn I worked for a Fortune 100 company that liked to declare itself on the "frontier of AI" when only one percent of the employee population had access to any form of it.
cogsec @affaanmustafa ·
~7500 stars and ~1000 forks in < 4 days 01/21/2026 @ 9AM PST: "The Longform Guide to Everything Claude Code" > Token optimization > Memory persistence > Continuous Learning > Verification loops > Parallelization > Subagent orchestration > + advanced e.g. (pass@k vs pass^k) https://t.co/0pvpQDc5CP
affaanmustafa @affaanmustafa

The Shorthand Guide to Everything Claude Code

Jakub Krcmar @jakubkrcmar ·
It's nuts to see what an open source project like @clawdbot is quickly becoming - wet dream of leading ai companies and many startups. Just shows how fundamental things are shifting. Respect to @steipete
nateliason @nateliason

Yeah this was 1,000% worth it. Separate Claude subscription + Clawd, managing Claude Code / Codex sessions I can kick off anywhere, autonomously running tests on my app and capturing errors through a sentry webhook then resolving them and opening PRs... The future is here.

Charly Wargnier @DataChaz ·
NVIDIA just removed one of the biggest friction points in Voice AI. PersonaPlex-7B is an open-source, full-duplex conversational model. Free, open source (MIT), with open model weights on @huggingface 🤗 Links to repo and weights in 🧵↓
The traditional ASR → LLM → TTS pipeline forces rigid turn-taking. It's efficient, but it never feels natural. PersonaPlex-7B changes that. This @nvidia model can listen and speak at the same time. It runs directly on continuous audio tokens with a dual-stream transformer, generating text and audio in parallel instead of passing control between components. That unlocks:
→ instant back-channel responses
→ interruptions that feel human
→ real conversational rhythm
Persona control is fully zero-shot! If you're building low-latency assistants or support agents, this is a big step forward 🔥
Ddox @paraddox ·
You folks asked for it. Simplest Ralph loop:
#!/bin/bash
PROMPT="${1:-prompt here}"
for i in {1..50}; do
  echo "=== Run $i/50 ==="
  claude --dangerously-skip-permissions -p "$PROMPT"
  echo ""
done
Gergely Orosz @GergelyOrosz ·
One interesting observation: inside a Big Tech, the internal token leaderboard is dominated by… very very experienced engineers. Distinguished-level folks who you rarely saw code day to day before LLMs. Also, some VPs (!!)
Jarred Sumner @jarredsumner ·
In the next version of Bun `bun --cpu-prof-md <script>` prints a CPU profile as Markdown so LLMs like Claude can easily read & grep it https://t.co/1B3Xv3pcLG
Tom Osman 🐦‍⬛ @tomosman ·
How I'm using Clawd.bot to change how I get things done.
Kernel @usekernel ·
Introducing Browser Pools - instant browsers with the logins, cookies, and extensions your agents depend on. Designed to make using Kernel even faster. https://t.co/Gt6cc9awcd
Jason Resnick 🌲💌 @rezzz ·
@theirongolddev @alexhillman What Alex did I thought was genius… I had it interview me for ergonomics. I had it ask me my fears, what I didn't like, what works for me, what I want, how I want to work/show up, and other things about me so the system works for me and not the other way around.
📙 Alex Hillman @alexhillman ·
hillman's razor of ai assistants: if you ask your AI assistant more questions than it asks you, you're gonna have a bad time. the real magic is combining confidence scoring with interviewing workflows. effectively "if you're not above X confidence threshold, stop and use this interview workflow until you're above that threshold" solves a wide swath of problems
Eric S. Raymond @esrtweet ·
We're in the Singularity now, and it's screwing up the business planning of everybody in tech. How do you do product design when the pace of change in AI is so rapid that you can be pretty sure your concept will be obsolete before it ships? Vernor Vinge first articulated the concept of the Singularity in 1983, describing it as the point at which technological change accelerates to a speed where what comes after the Singularity is incomprehensible in terms of what was before it. And that's right where we are in early 2026. Nobody knows what to build that will still have value in 3 months. Which, in retrospect... what did you think it was going to be like? Vibes? Papers? Essays? Strap in, kids. The ride is only going to get wilder.
Rafael Garcia @rfgarcia ·
Browser pools unlock so many cool use cases:
- Spin up a bunch of browsers all QAing your site
- Run large-scale evals on your browser agent
- Give a fleet of parallel subagents different research tasks
Keep them running as long as you like w/o getting charged for standby CPU time.
Lior Alexander @LiorOnAI ·
Repo: https://t.co/NAqzYcIyua
Lior Alexander @LiorOnAI ·
You can now run 70B LLMs on a 4GB GPU. AirLLM just made massive models usable on low-memory hardware.
What just happened: AirLLM released memory-optimized inference for large language models. It runs 70B models on 4GB VRAM. It can even run 405B Llama 3.1 on 8GB VRAM.
How it works: AirLLM loads models one layer at a time. Instead of loading everything:
→ Load a layer
→ Run computation
→ Free memory
→ Load the next layer
This keeps GPU memory usage extremely low.
Key details:
• No quantization required by default
• Optional 4-bit or 8-bit weight compression
• Same API as Hugging Face Transformers
• Supports CPU and GPU inference
• Works on Linux and macOS Apple Silicon
What you can do:
• Run Llama, Qwen, Mistral, Mixtral locally
• Test large models without cloud GPUs
• Prototype agents on cheap hardware
Anthropic @AnthropicAI ·
We're publishing a new constitution for Claude. The constitution is a detailed description of our vision for Claude's behavior and values. It's written primarily for Claude, and used directly in our training process. https://t.co/CJsMIO0uej
Lisan al Gaib @scaling01 ·
Anthropic is preparing for the singularity https://t.co/QtTehqoyu8
scaling01 @scaling01

I'm starting to get worried. Did Anthropic solve continual learning? Is that the preparation for evolving agents? https://t.co/pcCoSM4gAr

Talley @__Talley__ ·
Okay… video editors are cooked. I made this video for Polymarket in 30 minutes. Only took 4-5 prompts. https://t.co/YFOeHSTwgW
Factory @FactoryAI ·
Introducing Agent Readiness. AI coding agents are only as effective as the environment in which they operate. Agent Readiness is a framework to measure how well a repository supports autonomous development. Scores across eight axes place each repo at one of five maturity levels. https://t.co/9POPIY3hXr
Ben Tossell @bentossell ·
all repos should be agent-ready
Matan Grinberg @matanSF ·
• No pre-commit hooks = agent waits 10 min for CI instead of 5 sec
• Undocumented env vars = agent guesses, fails, guesses again
• Build requires tribal knowledge from Slack = agent can't verify its own work
Codebases with fast validation make every agent more effective
Kevin @kcosr ·
@FactoryAI Who is going to create an open skill for this concept? Any takers? 🤔