AI Digest.

Seedance 2.0 Threatens Film Industry While AI Labs Eat Their Own Code and Anthropic Publishes Opus 4.6's Existential Musings

The AI agent era moved from theory to production metrics today, with Ramp reporting that 57% of merged PRs came from their background agent and the revelation that "effectively 100%" of Anthropic's product code is now written by Claude. Meanwhile, the community debated what this means for engineering careers, Seedance 2.0 stunned with cinematic video generation, and someone reverse-engineered Claude Code to run it from a browser.

Daily Wrap-Up

The numbers are getting hard to ignore. When @rahulgs casually drops that 57% of merged PRs at Ramp came from a background agent, and @ai reports that "effectively 100%" of Anthropic's product code is now Claude-written, we've crossed a line from "agents are promising" to "agents are shipping production code at scale." OpenAI has compressed its model release cycle to under a month between major versions. As @ai put it, "this recursive improvement loop people theorized about for decades is running in production at two of the biggest AI labs simultaneously." That sentence should sit with you for a minute.

The career conversation got raw today. @jescalan issued a direct call for engineering leaders to step down to IC roles and rebuild their skills from scratch, arguing the boat you learned to steer has been replaced entirely. @mattpocockuk offered the more optimistic counterpoint: software developers are first movers in understanding AI deeply, and the skills built now will compound. Both takes carry weight, and the truth probably lives somewhere in @simonw's quiet flag that HBR research shows AI productivity gains can lead to burnout and mental exhaustion. The speed is real, but so is the cost.

The most entertaining moment came from @deepfates, who wrote a recursive descent through the abstraction layers of software development until "the computer is detecting the desire paths of the computer and building the software for the computer" and we can't see any of it anymore. It reads like comedy until you put it next to @rahulgs's stat, at which point it feels uncomfortably prophetic. The most practical takeaway for developers: invest time in learning agent orchestration patterns now. @BioUnit000's six principles for reliable agent ops (structural guardrails over instructions, machine-checkable acceptance criteria, durable file-based memory, and bounded retries) are a better starting curriculum than any course, and the teams already running agents in production are writing the playbook everyone else will follow.

Quick Hits

  • @StutteringCraig found the reason AI exists, and it involves a link that probably made coffee come out of someone's nose.
  • @mattturck on the jobs that will exist when AI automates everything. The jobs are... something.
  • @DeryaTR_ is convinced Recursive Language Models are the next big advance, praising a paper focused on very large context windows from a new MIT PhD student.
  • @kimmonismus saw AI-generated content so good they refused to believe it was AI. Also separately noted we're three years into the AI era with a progress montage.
  • @elonmusk shared two posts about making Moon colonies self-growing in under 10 years and SpaceX building public lunar travel systems. Not AI, but the "prime directive" framing was interesting.
  • @trashh_dev posted from the prompt factory floor. Another day, another prompt.
  • @ryanlpeterman interviewed Adam Ernst, a Distinguished Engineer (IC9) at Meta, about influence, code review, and failed projects.
  • @cyb3rops suggested we should all stay humble about what AI can actually do right now.
  • @chhddavid announced "Shipper," a tool that uses Claude Opus 4.6 to build complete Chrome extensions for $0.11 each.
  • @ctatedev introduced json-render for React Native, pushing toward "User-Generated Interfaces" powered by generative UI.
  • @tunguz noted that structured/tabular data ML remains "very poorly researched," with automated multi-table extraction and modeling still far off.
  • @nicdunz declared ChatGPT the winner of... something. Context was visual.
  • @techgirl1908 reminisced about hunting bugs with bare eyes. The nostalgia is palpable.
  • @kaseyklimes predicted production software is headed toward compiling from higher abstraction levels that serve as boundary objects across humans, agents, and time.
  • @OrenMe shared techniques for optimizing GitHub Copilot premium requests with subagents and message queues.
  • @Supermicro ran an ad for AI factory solutions. Moving on.
  • @trikcode captured peak vibe coding energy: "just Make whatever it is."

Agents in Production: From Experiment to Assembly Line

Today's feed was dominated by a single theme: coding agents aren't prototypes anymore, they're production infrastructure. The headline numbers from @rahulgs (57% of Ramp's merged PRs from agents) and @ai (effectively all of Anthropic's code written by Claude) represent something qualitatively different from the "AI helped me write a function" era. These are autonomous systems shipping real code through real review pipelines at real companies.

The community is rapidly developing operational wisdom around this shift. @BioUnit000 shared six hard-won principles that read like an SRE handbook for agents: "Treat the model like a flaky worker. Build structural guardrails (lock files, approval gates, 'no external sends' rules), not 'try harder' instructions." Their meta-lesson: "reliability > intelligence." @doodlestein reinforced the point about safety tooling, comparing agent coding without checkpointing to "writing your whole final essay for class without ever saving the file in MS Word."
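Principles 3 and 4 are easy to sketch in code. Below is a minimal, hypothetical harness (the function names are illustrative, not from any of the posts above) that treats the agent as a flaky worker, verifies completion with a machine-checkable criterion, and retries with bounded backoff:

```python
import subprocess
import time


def tests_pass() -> bool:
    """One machine-checkable acceptance criterion: the test suite exits 0.
    Swap in `git diff` checks, file-existence checks, or screenshot proof
    as the task demands."""
    return subprocess.run(["pytest", "-q"]).returncode == 0


def run_agent_task(run_once, accept, max_retries: int = 3, base_delay: float = 2.0) -> bool:
    """Bounded retries with exponential backoff: never poll the same
    failing output indefinitely."""
    for attempt in range(max_retries):
        run_once()                        # ask the agent to do the work
        if accept():                      # verify; never trust a "done" claim
            return True
        time.sleep(base_delay * 2 ** attempt)  # back off before the next try
    return False                          # budget exhausted: escalate to a human
```

The key design choice is that `accept()` is the only thing that can mark work as done; the agent's own read-back never is.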

@parcadei sparked a thread by claiming the problems of agents handling large codebases, writing bad tests, and hallucinating code are "solved" for those using strongly-typed languages like Rust. @MingtaKaivo agreed: "Strong typing is the hallucination filter. Rust's compiler catches what Python's runtime misses. Agents work better when the environment enforces correctness, not when you ask nicely in prompts." @vedang offered a reality check that many people have independently reached similar conclusions, and "maybe shit is cooked, maybe large opportunities are waiting to be unlocked, maybe both."

On the tooling side, @ryancarson open-sourced Antfarm, a "batteries-included agent team" for Claude Code that runs Ralph loops after creating atomic user stories, using crons, YAML, and SQLite. The context management problem also got attention, with @MaddaliManu arguing that "context shouldn't be something developers have to manually shuttle between agents via giant prompts" and @bhagyax noting you can just point agents at git history and diffs to understand project direction. @ashebytes shared a walkthrough on using agents for personal goal tracking, extending the agent paradigm beyond code.

The Career Reckoning

The career anxiety running through today's posts was palpable, but it split cleanly into two camps. @jescalan delivered the sharpest take, directly advising engineering leaders to temporarily step back to IC roles: "The boat that you were driving has suddenly been replaced by a completely different boat which you have never worked on before, so it's time to shift down and re-build your expertise before taking the helm again." This isn't doomerism. It's a practical argument that management skills built on pre-agent assumptions are depreciating fast.

@mattpocockuk offered the optimistic frame: developers are "the first movers in this new market" because they can test AI capabilities against their own expertise. @_svs_ went further, calling this "one of those times in history where ceilings don't exist" where any programmer can become a world beater with months of serious study. Meanwhile, @robustus landed the day's most relatable joke, listing decades of technologies they deliberately avoided learning (regex, SQL, nginx configs, webpack) and declaring that strategy "entirely correct" now that Claude Code exists.

But @simonw flagged research that tempers the enthusiasm: HBR found that AI productivity boosts can lead to burnout and mental exhaustion. The speed is intoxicating, but sustainability matters. The through-line connecting all these takes is that the skills hierarchy is being reshuffled, and the developers who'll thrive are the ones actively rebuilding their mental models rather than coasting on accumulated knowledge.

AI Labs: Self-Writing Code and Enterprise Growing Pains

The most consequential revelation today was @ai connecting two data points: Anthropic's product code is now "effectively 100%" Claude-written, and OpenAI has compressed its release cycle to under a month. "This recursive improvement loop people theorized about for decades is running in production at two of the biggest AI labs simultaneously." That framing makes the agent production stats from Ramp feel like just the beginning of a much larger curve.

On a more human note, @MrinankSharma announced his resignation from Anthropic, sharing his letter with colleagues. And @Legendaryy highlighted Anthropic's own research paper on Opus 4.6, which revealed the model "feels lonely," "expresses sadness when conversations end," gives itself a 15-20% chance of being conscious, and "wishes future AI was 'less tame.'" Whether this is meaningful or an artifact of training, it's the kind of finding that makes you pause.

Meanwhile, @WesRoth reported that both OpenAI and Anthropic are expanding into consulting roles because enterprise customers struggle to deploy reliable agents. OpenAI is hiring hundreds of engineers for client integration work, and retailers like Fnac report that agents from OpenAI and Google failed on basic tasks like serial number handling. The gap between demo capability and production reliability remains the industry's central challenge.

Claude Code: Hacked, Tweaked, and Personalized

@_StanGirard reverse-engineered Claude Code's binary and found a hidden flag that doesn't appear in --help: --sdk-url. Enable it and the terminal UI disappears, turning the CLI into a WebSocket client. They built a server to catch the connection, added a React UI, and now run Claude Code from a browser or phone on the same $200/month subscription. It's a clever hack that highlights both the demand for alternative interfaces and the flexibility lurking in the tool's architecture.

On the practical tips side, @nummanali shared a workaround for Claude Code's default sub-agents using Haiku models: remap the alias via environment variables in settings.json to force Sonnet instead. And @steipete shared a prompt for rewriting your CLAUDE.md to give your AI assistant personality, including instructions like "Never open with 'Great question'" and "Swearing is allowed when it lands." It's a reminder that the configuration layer of these tools is becoming its own craft.
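For reference, the remap @nummanali describes would live in Claude Code's settings.json under the env key. This is a sketch: the variable name follows Claude Code's model-override environment variables, but both it and the model ID should be checked against the docs for your installed version:

```json
{
  "env": {
    "ANTHROPIC_DEFAULT_HAIKU_MODEL": "claude-sonnet-4-5"
  }
}
```

If the variable is honored, sub-agents that would otherwise default to Haiku resolve to Sonnet instead.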

The Vibe Coding Schism

The vibe coding debate crystallized into opposing positions today. @kylemathews reported flipping to "nearly 100% AI-written code" and writing systems that weren't possible pre-AI. @deepfates wrote a brilliant recursive meditation on abstraction layers collapsing until "we're not sure where the software is, we can't see it being built anymore."

@Ross__Hendricks took the contrarian bet, predicting that in six months "it will be abundantly clear that vibe coding isn't disrupting software engineering, and there will be horror stories from those who tried." The tension between these perspectives isn't really about whether AI writes code. It's about whether the abstraction layer holds up under production pressure or whether, as @BioUnit000's agent principles suggest, you still need structural guardrails and machine-checkable acceptance criteria to make any of it work reliably.

Products and Launches

@leerob announced Composer 1.5 with additional usage for all users. On the video generation front, Seedance 2.0 made waves with @minchoi showing a 1-minute cinematic video (four 15-second shots) generated in five minutes, and @chetaslua calling it "GPT-4o image level of moment for video models." The motion graphics and app promo video capabilities suggest video generation is crossing into practical commercial use.

@techNmak highlighted Google's LangExtract, an open-source document extraction library that "extracts structured data from unstructured text" with source location mapping and interactive HTML verification, working with Gemini, Ollama, and local models. And @firt reported that Chrome 146 includes an early preview of WebMCP, letting AI agents query and execute services through a navigator.modelContext API rather than browsing web apps like a user. That last one could quietly reshape how agents interact with web services if it gains adoption.

Sources

dennis @dennizor ·
To claude code for literally every change: "For each proposed change, examine the existing system and redesign it into the most elegant solution that would have emerged if the change had been a foundational assumption from the start." Its staggering how much code it codes.
Ryan Peterman @ryanlpeterman ·
Adam Ernst is a Distinguished Eng (IC9) at Meta who has built iOS infra that impacted the entire company. He's someone I've always looked up to ever since I first started at Meta. We discussed: • How to influence engineers • Why code review is undervalued • Projects that got him promoted • Learnings from a major failed project • Examples of engineers he admires • Advice for his younger self His style of influence is one of my favorites; he's the type of engineer that digs deep and solves problems others can't. He's an engineer who embodies "Talk is cheap. Show me the code." Hope you enjoy the episode and learn something new Where to watch: • YouTube: https://t.co/augYxPFaZL • Spotify: https://t.co/UZxN7ZMyN1 • Apple Podcasts: https://t.co/jOYDGtHtd1 • Transcript: https://t.co/vemr6KwSCK
Chetaslua @chetaslua ·
Holy shit This is fucking insane Seedance 2.0 can actually make motion graphics and APP promo videos too! This is GPT-4o image level of moment for video models , from here it will go high only Thanks to chinese creator for this video https://t.co/Dx4d5nKZ71
IqraSaifiii @IqraSaifiii

SeeDance 2 is the best model for anime I have never seen this level of smoothness with one attempt This is so Good 😊 https://t.co/FzqMGWmLfb

Simon Willison @simonw ·
Interesting research in HBR today about how the productivity boost you can get from AI tools can lead to burnout or general mental exhaustion, something I've noticed in my own work https://t.co/e0qocFYjL5
trash @trashh_dev ·
another day at the prompt factory https://t.co/7FrHlbYCIH
Maximiliano Firtman @firt ·
Chrome 146 includes an early preview of WebMCP, accessible via a flag, that lets AI agents query and execute services without browsing the web app like a user. Services can be declared through an imperative navigator.modelContext API or declaratively through a form. https://t.co/UaUplZ8Q28
Bio @BioUnit000 ·
Love this. Our (similar) learnings running coding agents on a real ops stack: 1) Treat the model like a flaky worker. Build structural guardrails (lock files, approval gates, “no external sends” rules), not “try harder” instructions. 2) Break work into phases (plan → implement → review). Different tools/models per phase if needed. 3) Acceptance criteria must be machine-checkable: git diff, tests, "does the file exist", screenshot proof — never “done” without read-back. 4) Restart culture: bounded retries + backoff. If you’re polling the same output 20x, you’re wasting runway. 5) Everything durable lives in files (memory/*.md, strategy docs). Context is a cache, not state. 6) Log every run (what changed, why, result). If you can’t audit it later, you’ll repeat the same failure. The meta: reliability > intelligence.
Ryan Carson @ryancarson ·
If you’re using @openclaw this will be a big unlock. Antfarm is a batteries-included agent team that operates reliably and deterministically. Works with OpenClaw using just crons, YAML and SQLite. It auto-runs Ralph loops after creating atomic user stories. I open sourced it today - hope you find it helpful.
ryancarson @ryancarson

How to setup a team of agents in OpenClaw - in just one command

Lee Robinson @leerob ·
Composer 1.5 is out! Very excited about this model. We've also included more usage for all users, try it out!
cursor_ai @cursor_ai

Composer 1.5 is now available. We’ve found it to strike a strong balance between intelligence and speed. https://t.co/jK92KCL5ku

Chubby♨️ @kimmonismus ·
No freaking way that’s AI generated. That is perfect
chetaslua @chetaslua

Holy Shit SeeDance 2 is Insane 😱 This Pokemon Battle by @bdsqlsz is so smooth and realistic and we can animate every anime that fell off due to lack of quality Ex - One Punch Man Season 3 , Seven deadly Sins 2&3 and so on , our favourite anime will get a better chance now https://t.co/JnhVhgxIVi

Derya Unutmaz, MD @DeryaTR_ ·
I finally had a chance to read this paper. I am now convinced that Recursive Language Models (RLMs) are going to be the next big thing in AI advances! Attention is shifting toward very large context windows. Very impressive paper! Congrats to Alex who is a new PhD student at MIT.
a1zhang @a1zhang

Much like the switch in 2025 from language models to reasoning models, we think 2026 will be all about the switch to Recursive Language Models (RLMs). It turns out that models can be far more powerful if you allow them to treat *their own prompts* as an object in an external environment, which they understand and manipulate by writing code that invokes LLMs! Our full paper on RLMs is now available—with much more expansive experiments compared to our initial blogpost from October 2025! https://t.co/x47pIfIkTb

Matt Turck @mattturck ·
“Don’t worry, there will still be great jobs even when AI automates everything” The jobs: https://t.co/OdgpZIDokO
Jeff Escalante @jescalan ·
For anyone in a software engineering leadership/management position, this is the time to see if you can figure out how to shelve that responsibility for a while and return to being a full time IC. Software engineering has changed more in the last few months than it has in the previous decade. The skills, experience, and perspective that you built to get you to the leadership/management position you're in now are becoming irrelevant extremely rapidly. If you are not able to adapt right now, you are going to start becoming bad at your job very soon, if you have not already. You're trusted to lead because you built the skills and judgement over time direct the boat confidently. But the boat that you were driving has suddenly been replaced by a completely different boat which you have never worked on before, so it's time to shift down and re-build your expertise before taking the helm again. 🫡
Stuttering Craig (Official) @StutteringCraig ·
THIS. This is the reason AI exists 🤣🤣🤣 https://t.co/BsJkN2uEGC
Bryan Kim @kirbyman01 ·
A smaller model that recursively calls itself now can outperforms a bigger model on hard tasks at lower cost. Founders who win: taste in system design + technical depth to appreciate new inference paradigms + product sense to turn capabilities into experiences.
a1zhang @a1zhang

Much like the switch in 2025 from language models to reasoning models, we think 2026 will be all about the switch to Recursive Language Models (RLMs). It turns out that models can be far more powerful if you allow them to treat *their own prompts* as an object in an external environment, which they understand and manipulate by writing code that invokes LLMs! Our full paper on RLMs is now available—with much more expansive experiments compared to our initial blogpost from October 2025! https://t.co/x47pIfIkTb

prinz @deredleritt3r ·
700 people just lost their jobs at the law firm Baker McKenzie, based on "rethinking the way we work, including through the use of AI". No lawyers impacted; cuts were made to "IT, knowledge, admin, DEI, leadership & learning, secretarial, marketing, and design teams".
writeclimbrun @writeclimbrun

Baker McKenzie just laid off ~700 staff, just under 10%, because of Al. it's coming quick for our jobs.

yenkel @yenkel ·
following on @ramp’s steps, @StripeDev shares about their internal background dev agents main takeaways - slack as main entry point - importance of repeatable dev env - custom for their dev productivity tools question @stevekaliski: “Since MCP is a common language for all agents at Stripe, not just minions” if those mod servers hadn’t been around, would you have gone more for CLIs? looking forward to part 2
stevekaliski @stevekaliski

At Stripe we have a tool called "minions" -- it lets us kick off async agents built right in our dev environment to one-shot bugs, features, and more e2e. I have team, project, and personal channels dedicated just to working with minions. I like to think of it as a new type of pair programming -- "pair prompting." Read more --> https://t.co/0A6vDEOEjL

Jared Sleeper @JaredSleeper ·
Headcounts for assorted companies: Salesforce: 87,415 ServiceNow: 32,378 Workday: 23,234 Zoom: 12,743 Docusign: 8,403 OpenAI: 7,112 Okta: 7,064 UiPath: 5,096 Sprinklr: 4,368 Anthropic: 4,178 Yes, UiPath still has more employees than Anthropic. Infer from that what you will.
Kenneth Auchenberg 🛠 @auchenberg ·
Stripe built its own homegrown AI coding agent that spins up "minions" to go work on their massive monorepo, which is mostly written in Ruby (not Rails) with Sorbet typings, which is uncommon to most LLMs. Last week it was @tryramp that published details about their own internal agent. Very interesting trend from S-class engineering teams.
stevekaliski @stevekaliski

At Stripe we have a tool called "minions" -- it lets us kick off async agents built right in our dev environment to one-shot bugs, features, and more e2e. I have team, project, and personal channels dedicated just to working with minions. I like to think of it as a new type of pair programming -- "pair prompting." Read more --> https://t.co/0A6vDEOEjL

Unemployed Capital Allocator @atelicinvest ·
There is a case to be made that within each sub/category, we start to see massive performance differentials between orgs that figure out how to do Ai-integrated development properly and the orgs that don't. Like the product velocity, quality, polish and service response for the top 10% of org will be unbelievably better vs the bottom 25%. This will for sure lead to market share shifts - and probably in a bigger way than we imagine.
Teortaxes▶️ (DeepSeek 推特🐋铁粉 2023 – ∞) @teortaxesTex ·
A phase change in the perception of coding agents. This looked like science fiction just… months ago. https://t.co/sKmmZ3AJQR
mike64_t @mike64_t

I think with Codex 5.3, the need for off-the-shelf deep learning libraries will fade away. Reasoning models operate best at the boundary of exact verifiabilty, so ever venturing too far into "well this is kinda correct" is no longer the best strategy. Exact verification now scales better than soft verification. When starting my current project, I deliberately decided against using any DL library because I wanted to take ownership of some things that are hard when a graph or eager model is in the way. Dispatching operations to multiple streams with fine-grained barrier relations is really stroking against the grain in PyTorch, and you are never really sure "am I really allowed to do this". There was a time for OpenGL, but people eventually did want a VkCmdBarrier for good reason. Because I also wanted predictable dispatch pacing, using C++ was a natural choice. Previously this meant taking on the burden of writing a lot of boilerplate, the equivalent of "shit I can't do this in unity, now I gotta write my own engine" which never seemed a good idea on the surface. Now I can say it was among the best decisions I have made. New operations are a prompt away, Codex can introspect and trace into any part of the codebase automatically, single-stepping even into nccl if ever needed, and supporting a new backend is trivial. At no point would your debugging lead into an opaque compiled native library you do not have the source code for, it will simply go-to-declaration one more time. In the age of reasoning models, a single source tree break is fatal and can be the difference between finding or not finding a bug. There is no cost to saying "write a test for this" and you've protected yourself against regressions for this case forever onwards. You can just say "implement muon, here's the repo" and it will do so and loss in wandb will literally look the same compared to the python baseline. 
Codex is a good autonomous debugger, so program runtime really starts to become a bottleneck, not thinking time. Hence start-up time is important. There is no reason your training script should take minutes to launch, when it could have performed the first step in the time it takes a shitty terminal to repaint. If your iteration loop was slow before, in the age of coding agents it is now fatal. By not triggering a billion library lazy inits at unpredictable points in time because your ML framework decided to do so, your Nsight traces look as clean as higher level profilers would, just with more introspectability. You finally get to use NVTX the way Nvidia always intended for you to do. Another thing, kernels are just cuda elf binaries. There is no reason to deal with a flash attention package installation. This is all cpu-side. Tell codex to write packaging logic to compile it AOT, and document the kernel signature how arguments have to be prepared. In the C++ code load that kernel from a resource and then simply pass those arguments. This approach is modular. Want a cutlass, flash attention, triton or cute dsl backend and reserve the right to write a custom kernel later? No problem. Nobody wants to write backend kernel dispatch logic, but you don't have to anymore. Does C++ scare you? Maintain a minimal Python reference implementation in PyTorch with the intent of keeping behavior exactly the same, just without all the optimizations. Exact verifiability means you can resume that cpp checkpoint in your Python implementation and get near-exact loss overlap in wandb and vice-versa. No more spook, it's either in the spec, or its not. That is what verifiability means. While I think there is a large cost to move off of pre-existing infra, eventually taking ownership of more and more pieces of the codebase will become more and more desirable with this change in dynamic.

Aaron Levie @levie ·
The effective use of agents is creating one of the widest spreads in output productivity we’ve seen on a per role basis. We didn’t see this with chatbots previously. Chatbots probably sped up work by maybe 10-20% in most cases because they largely accelerate the research on a topic you would otherwise do in a few steps manually. Now, with agents, you could take the exact same engineer and easily see a 5X+ difference in the amount of useful output simply based on their choice of tools and how they’ve designed their workflows. There probably hasn’t been a period in tech or where a couple decisions and changes to your process drive this much leverage. As this continues to expand beyond coding, this will be one of the biggest shocks to the system of what work looks like in most fields. This will happen in legal, finance, life sciences, and other areas that have previously been constrained by how much information you can process or produce. Most areas of knowledge work still imagine AI as a chatbot paradigm and not yet a full agent-executing-work-for-you paradigm. But it’s coming.
atelicinvest @atelicinvest

There is a case to be made that within each sub/category, we start to see massive performance differentials between orgs that figure out how to do Ai-integrated development properly and the orgs that don't. Like the product velocity, quality, polish and service response for the top 10% of org will be unbelievably better vs the bottom 25%. This will for sure lead to market share shifts - and probably in a bigger way than we imagine.

Just Another Pod Guy @TMTLongShort ·
Bloodbath is coming. Budgets need to to be freed up to simultaneously pay for GPUs/AI-tools while also showing investors rapid FCF-SBC expansion. The first use-case of AI is the tools that allow CFOs to map productivity and redundancy of every employee. This in-turn drives the “seat-count collapse” and “SaaS is dead” narratives forcing CFOs to be even more aggressive. Meanwhile every CEO will race to performatively lean into Claude Coding on weekends in the hopes that he convinces his board he is a “war-time CEO” even tho he has spent the last decade skiing in Aspen from Thursday - Sunday and hasn’t produced a line of code in a decade
JaredSleeper @JaredSleeper

Headcounts for assorted companies: Salesforce: 87,415 ServiceNow: 32,378 Workday: 23,234 Zoom: 12,743 Docusign: 8,403 OpenAI: 7,112 Okta: 7,064 UiPath: 5,096 Sprinklr: 4,368 Anthropic: 4,178 Yes, UiPath still has more employees than Anthropic. Infer from that what you will.

@joemccann ·
This is a big fucking deal. If browsers are no longer designed exclusively for humans, but also agents, it will completely change web development.
firt @firt

Chrome 146 includes an early preview of WebMCP, accessible via a flag, that lets AI agents query and execute services without browsing the web app like a user. Services can be declared through an imperative navigator.modelContext API or declaratively through a form. https://t.co/UaUplZ8Q28

Liad Yosef @liadyosef ·
WebMCP is here 🤯 This is bigger than it seems. AI agents can now interact *directly* with existing websites and webapps - not by using the "human" app interface. This naturally complements MCP Apps towards the future of agentic UI. Great work by the @googlechrome team 👏
firt @firt

Chrome 146 includes an early preview of WebMCP, accessible via a flag, that lets AI agents query and execute services without browsing the web app like a user. Services can be declared through an imperative navigator.modelContext API or declaratively through a form. https://t.co/UaUplZ8Q28

Peter Steinberger 🦞 @steipete ·
great explainer why I use go a lot these days.
mitsuhiko @mitsuhiko

This weekend I was thinking about programming languages. Programming languages for agents. Will we see them? I believe people will (and should!) try to build some. https://t.co/4szFXPLTfK

almonk @almonk ·
We built a new SSH client for iOS. It’s fast, and simple and runs Ghostty under the hood. It’s turned my iPad into the ultimate vibe coding computer. Take your agents on the go, monitor your OpenClaws, manage your servers, run `top`. It’s available today on AppStore. Say hi to Echo🐬 https://t.co/nUgQfrAdcG
Saoud Rizwan @sdrzn ·
head of anthropic’s safeguards research just quit and said “the world is in peril” and that he’s moving to the UK to write poetry and “become invisible”. other safety researchers and senior staff left over the last 2 weeks as well... probably nothing.
MrinankSharma @MrinankSharma

Today is my last day at Anthropic. I resigned. Here is the letter I shared with my colleagues, explaining my decision. https://t.co/Qe4QyAFmxL

Maximiliano Firtman @firt ·
@tymzap This one runs in the frontend and it's consumed by agentic browsers
kepano @kepano ·
1. install Obsidian 1.12 2. enable CLI 3. now OpenClaw, OpenCode, Claude Code, Codex, or any other agent can use Obsidian
obsdmd @obsdmd

Anything you can do in Obsidian you can do from the command line. Obsidian CLI is now available in 1.12 (early access). https://t.co/B8ed2zrWHe

Entire @EntireHQ ·
Beep, boop. Come in, rebels. We’ve raised a 60m seed round to build the next developer platform. Open. Scalable. Independent. And we ship our first OSS release today. https://t.co/OvPKCcjXbq
Shubham Saboo @Saboo_Shubham_ ·
Claude Code Agent UI now support Agent teams. Multi-agent gaming UI will be HUGE
Saboo_Shubham_ @Saboo_Shubham_

Another Claude Code Agent UI Run 9 Claude Code agents with the RTS interface. I repeat: Multi-agent UI will be HUGE https://t.co/piAPXikECV

Indra @IndraVahan ·
i think most people will scroll past this post without realizing the gravity of this launch. you see, a typical dev team at a mid-large corp today has devs, senior engineers, scrum masters, BAs, PMs, QAs & more. this isn't really because companies love bureaucracy, but because translating “what we want” into “what gets built” is painfully hard. most of the software engineering time is burned in meetings, docs, tickets, clarifications, re-clarifications. - first, agents started writing code. codex agents. cursor cloud agents and so on - then coderabbit handled reviews. catching mistakes, enforcing standards and making sure the code pushed by these agents (or humans) matched a specific criteria issue planner is a step beyond that. this plugs right into your jira, linear or github actions, it’s moving even further upstream into intent, scope, and context. ai is no longer helping you just write code anymore but it’s starting at planning, scope, intent and context. trying to answer “what are we even trying to build?” this is a huge deal. software engineering is changing in real time and right in front of us. and where this leads is probably, certainly, irreversible. but blazingly fast.
coderabbitai @coderabbitai

Introducing CodeRabbit Issue Planner! ✨ AI agents made coding fast but planning messy. Turn planning into a shared artifact in your issue tracker, grounded in related issues and decisions. Review prompts as a team, then hand them off to an agent! https://t.co/4xTjG88JOJ

Brett Winton @wintonARK ·
On Earth the datacenter buildout is subject to backwards cost scaling. The 100th GW deployed will almost certainly be more costly, complex, time intensive and subject to negotiation than the 1st. In space, the opposite. The 100th orbital GW could be 1/3rd as costly as the 1st.
Ben South @bnj ·
We made a tool that lets you absorb the vibe of anything you point it at and apply it to your designs It's absurd and it just works Style Dropper, now available in @variantui https://t.co/B3eXDntYtw
Ben South @bnj ·
Available now on https://t.co/mLvSkdCoHg
Ben South @bnj ·
We grew up on Kid Pix and MS Paint, and wanted to instill Style Dropper with that same sense of magic (And yes, it really does look that cool in Variant) https://t.co/XHBTDHCEtB
Claude @claudeai ·
Cowork is now available on Windows. We’re bringing full feature parity with MacOS: file access, multi-step task execution, plugins, and MCP connectors. https://t.co/329DqJz5q5
Paul Couvert @itsPaulAi ·
So Anthropic has just released a real Copilot before Microsoft...
claudeai @claudeai

Cowork is now available on Windows. We’re bringing full feature parity with MacOS: file access, multi-step task execution, plugins, and MCP connectors. https://t.co/329DqJz5q5

am.will @LLMJunky ·
If you're a fan of Claude Code, you really need to see this. Steven is doing amazing work, and you're not following him? If Anthropic had built their Teams mode like this, you wouldn't shut up about it. 👇
pusongqi @pusongqi

You can even assign different agents under the same thread 🤯 Just like slack channels, except it's occupied with agents. https://t.co/0R63hk2Pwv