AI Learning Digest.

MCP Context Pollution Fixed as Claude Code Skills Ecosystem Explodes Past 60,000

Daily Wrap-Up

Today felt like a tipping point for the Claude Code ecosystem. The fix for MCP context pollution, a problem that kept power users from connecting more than a handful of tools, landed with immediate impact. Simon Willison, who rarely gets excited about tooling that doesn't work reliably, declared he'd now hook up "dozens or even hundreds of MCPs." When the skeptics start celebrating, you know something real shipped. Paired with a skills marketplace that crossed 60,000 entries and Trail of Bits publishing their first batch, Claude Code is rapidly becoming a platform rather than just a coding assistant.

The other story that dominated the feed was Geoffrey Huntley's interview about the Ralph loop, an orchestration pattern for running thousands of AI coding loops with fresh context windows. The interview crystallized something a lot of practitioners have been feeling but couldn't articulate: the distinction between software development (translating tickets to code) and software engineering (architecture, security, orchestrating the loops). Huntley's claim that you can clone any SaaS product using AI-generated clean-room specifications is provocative, but the underlying point lands. The moat isn't in the code anymore. It's in taste, domain expertise, and the ability to steer these systems well. Ethan Mollick's "vibefounding" MBA class, where non-coders built working products in four days that would have taken a semester, reinforced the same message from the education side.

The most practical takeaway for developers: if you've been avoiding MCP because of context pollution, today's the day to revisit it. Start connecting your project-specific tools, linters, and databases. The skills ecosystem is also worth exploring for reusable agent behaviors. But more importantly, start thinking about your work in terms of specifications and orchestration rather than raw code output. The people pulling ahead are the ones writing specs that AI loops can execute deterministically, not the ones typing faster.

Quick Hits

  • @AngryTomtweets showed off Kling AI 2.6 Motion Control for video generation, continuing the steady march of video AI capabilities.
  • @alexhillman shared a "seeds" workflow for capturing proto-ideas in markdown with AI-assisted scoring frameworks. 132 seeds planted and counting.
  • @alexhillman also detailed batch transcription with local Whisper models, noting it runs about $1-1.50/hr via API but is free (just slower) locally.
  • @gregisenberg posted "40 reasons 2026 is the best time ever to build a startup," which is exactly the kind of optimism you'd expect from the current moment.
  • @bibryam quoted @addyosmani: "The best software engineers won't be the fastest coders, but those who know when to distrust AI."
  • @emollick posed the question of the day: "Could this meeting be an email? Could this organization be a set of markdown files?"
  • @victor_explore offered the one-liner that deserves a t-shirt: "the real context window was the architecture decisions we made along the way."
  • @BiaNeuroscience ran an ad for a sleep headband. Not AI-related, but honestly, most of us could use better sleep given the pace of this industry.
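
The staged approach @alexhillman describes for batch transcription, one file first, then a small batch, then the full set, generalizes to any expensive batch job. A minimal sketch of that workflow, where `transcribe` and `confirm` are placeholder callables standing in for whatever Whisper invocation and approval step you actually use (the staging sizes and function names are illustrative, not from the original post):

```python
from pathlib import Path
from typing import Callable

def run_in_stages(files: list[Path],
                  transcribe: Callable[[Path], str],
                  confirm: Callable[[str], bool]) -> dict[Path, str]:
    """Transcribe files in stages: one file, then five, then the rest.

    After each completed stage, `confirm` is asked whether to continue,
    so you can check quality and cost before committing to the full set.
    """
    results: dict[Path, str] = {}
    stages = [files[:1], files[1:6], files[6:]]
    labels = ["first file", "batch of 5", "full set"]
    for label, batch in zip(labels, stages):
        for f in batch:
            results[f] = transcribe(f)
        # Skip the checkpoint for empty stages (e.g. fewer than 6 files).
        if batch and not confirm(f"finished {label}; continue?"):
            break
    return results
```

With the local `whisper` CLI, `transcribe` might shell out via `subprocess.run(["whisper", str(path)])`; with an API, you would swap in an HTTP client and keep an eye on the per-hour cost.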

Claude Code, Skills, and the MCP Breakthrough

The biggest development today was structural rather than flashy: MCP context pollution got fixed. This was the silent killer of MCP adoption. Every connected tool injected context that muddied the model's understanding, making it impractical to wire up more than a few integrations. @simonw captured the shift perfectly: "Context pollution is why I rarely used MCP, now that it's solved there's no reason not to hook up dozens or even hundreds of MCPs to Claude Code." @arlanr was more succinct: "it happened mcp is no longer bs."

The timing aligned with a broader ecosystem push. @bcherny, clearly involved in the launch, noted that "every Claude Code user just got way more context, better instruction following, and the ability to plug in even more tools." @trq212 flagged Tool Search landing in Claude Code, which helps the model discover and select from large tool libraries, a critical piece when you're connecting dozens of MCPs.

On the skills side, the marketplace hit 60,000+ entries, as @milesdeutscher pointed out. @dguido announced that Trail of Bits published their first batch of Claude Skills, marking a significant moment where security-focused enterprises are publishing reusable agent behaviors. @d4m1n shared the practical installation path: copy a directory into .claude/skills and skills load on demand with minimal context overhead. @asidorenko_ demonstrated Codex-style skills usage patterns.

The usage intensity is real, though. @pvncher reported hearing from a big tech company that rolled out Claude Code with $100/month budgets, and "people burn through it in 2-3 days." The question of how agentic work scales with API pricing remains open. On the other end, @mattlam_ showed a $5/month setup using Clawdbot as a 24/7 personal assistant and coding agent on a Hetzner VPS, suggesting the cost spectrum is wide depending on your approach. @steipete reported productivity roughly doubling after switching from Claude Code to Codex, adding another data point to the ongoing tool comparison.

@jefftangx did something entertaining: he exported Cowork's entire VM snapshot and reverse-engineered it. Turns out it's an Electron app wrapping Claude Code with its own Linux sandbox, and it has an "internal-comms skill" made by Anthropic. The most poignant detail? When he asked it what questions he should have asked, it suggested adding memory and leaving notes for itself once it "dies." @emollick, meanwhile, built a plugin that visualizes Claude Code's subagent work as employees in an office, a fun reminder that these systems are genuinely multi-agent under the hood.

The Ralph Loop and AI-Native Engineering

The longest and most substantive post of the day was @jaimefjorge's writeup of his interview with Geoffrey Huntley on the "Ralph loop" pattern. The core idea: run thousands of AI coding loops, each with a fresh context window, with institutional knowledge living in specification files rather than accumulating in context. Every loop picks one task, executes it, and starts fresh, avoiding the compaction problem where models get dumber as context fills up.

The interview's most provocative claims landed in sequence. Huntley argued that "software development, the work of translating tickets into code, can now be done by anyone for $10-42/hour while they sleep" while software engineering remains human. He claimed you can clone any SaaS product, even BSL-licensed ones, by running AI in reverse over source code to generate clean-room specs, then filling gaps from marketing materials. And he offered a programming language tier list for AI agents: S-tier includes Rust and TypeScript with Effect.js for their strong type systems, while Java and .NET sit at F-tier due to DLL-based dependency systems that don't work well with AI search tools.

@Hesamation distilled Cursor's blog post on agent coding best practices into ten principles that map well to this worldview: use plan mode first, write tests so the agent can iterate, add rules for repeated mistakes, and give it linters to verify. @hjcharlesworth shared a mental model for agent pairing, noting "the gap is getting wider." @forgebitz observed that monorepos turned out to be a massive advantage for AI coding since "all context is inside one repo." @addyosmani pushed the conversation toward code review, arguing that when agents write code, "you stop asking only 'is this correct?' and start asking 'was this intent clear enough to execute safely?'" The prompt becomes the spec, the code becomes build output, and review should happen at the layer where human judgment lives.

AI Transformation Roles and the Career Reckoning

A cluster of posts converged on the same organizational insight: companies need dedicated AI transformation people, and the ones who have them are pulling ahead fast. @Codie_Sanchez called an internal AI transformation hire "the best money I've ever spent as a CEO," describing someone who "doesn't care about title, just wants to ship" and goes across the entire org killing manual processes. @jainarvind shared that Glean calls these "AI Outcomes Managers" who work with customers to identify high-friction workflows and deploy agents. @damianplayer reported that demand for these roles among $5M-$50M companies is "insane."

@emollick provided the education angle with his "vibefounding" MBA class where students launch companies in four days. His observations cut deep: "Everything they are doing in four days would have taken a semester in previous years, if it could have done it at all. Quality is also far better." Non-coders are building working products. People with industry experience have a huge advantage because they can build solutions with built-in markets. His hardest teaching challenge? Getting students to understand that AI doesn't just do work for you, it also does new kinds of work.

On the bleaker end, @DaveShapi revised his estimates for future employable humans downward to 15% labor force participation, meaning fewer than 1 in 6 working-age adults with meaningful employment. He promises "the solution is elegant," though the posts didn't elaborate. @vista8 shared a lengthy Chinese-language analysis of AI moving from personal assistant to organizational intelligence, arguing that the fastest path is embedding AI into existing collaboration tools (email, messaging, documents) rather than inventing new workflows. The organizational context isn't stored anywhere static; it's generated and destroyed through interaction, and AI needs to participate in those interactions to learn it.

New Tools and Platform Moves

Three product launches stood out. @_Evan_Boyle announced GitHub open-sourcing the Copilot CLI SDK, a technical preview supporting Go, Python, TypeScript, and C# with custom tools, built on the same agent loop powering the Copilot CLI and GitHub Coding Agent. It supports bring-your-own-key and any model, which positions it as a serious alternative for teams building custom agent tooling.

@_orcaman launched Openwork AI, an open-source (MIT) computer-use agent claiming to be roughly 4x faster than Claude for Chrome/Cowork and more secure since it doesn't use your main browser instance where you're already logged into everything. It's built by combining several open-source AI modules and supports any provider via bring-your-own-key.

Perhaps the most strategically significant announcement was @cryptopunk7213 flagging Google's "Personal Intelligence" launch, where emails, photos, YouTube history, search history, location, and documents all feed a personalized Gemini. The argument is straightforward: Google's data moat from billions of users' daily digital lives is something OpenAI and Anthropic simply cannot replicate. Whether users will opt into this level of data utilization remains to be seen, but the competitive dynamics are real.

Source Posts

Evan Boyle @_Evan_Boyle ·
Today we're open sourcing a technical preview of the GitHub Copilot CLI SDK. Build agents with custom tools in Go, Python, TypeScript, and C#. Built on the same agent loop that powers the Copilot CLI and GitHub Coding Agent. Supports BYOK, and any model. Here is the Copilot CLI driving Excel:
向阳乔木 @vista8 ·
This article is impressive: it lays out very clearly how organizations can use AI to boost productivity. It's extremely long; I've translated about half so you can get a feel for it, and I recommend reading the original. --- You may have noticed a paradox. AI helps individuals get work done with astonishing efficiency, but inside a company the effect is much weaker. Why? Because work inside a company fundamentally isn't something one person can do alone. It requires collaboration, negotiation, escalating decisions, and continuously aligning judgment over time. No matter how smart an AI is, if it can only work solo, inside an organization it's just a tool for "local optimization." The article is mainly about how AI evolves from "personal assistant" into "organizational intelligence." Context is not a treasure buried somewhere. Many people assume that if you give AI enough context, it can understand how the organization works. The premise: organizational context is a complete, structured thing, like a fossil in rock strata, just waiting to be dug up. In truth, most organizations don't work that way. Context doesn't live in a database, or a document, or even the boss's head. It is generated and destroyed through interaction. What a meeting decides today may change tomorrow because of a single email. For AI to understand an organization, it can't just "read the materials"; it has to participate, observing like a human how decisions unfold, conflicts escalate, and consensus forms across email, meetings, and documents. That is real "context learning." The history of human collaboration is AI's future. In Sapiens, Yuval Noah Harari argues that humans came to dominate the planet not because individuals are smarter, but because we learned to collaborate at scale. We invented "shared stories" like myth, law, money, and religion so that strangers could align their behavior. Science followed the same path. Before the 17th century, scientific knowledge was fragmented, spread through private letters and books; errors persisted and discoveries kept being lost. The turning point wasn't a new theory but the emergence of collaboration systems: scientific journals, learned societies, peer review. Knowledge began to accumulate because judgment became a social process. The telephone, too. Early phones were point-to-point; you had to know where the wire ran to place a call. Once the network grew, that broke down. The solution was operators: they sat at switchboards, manually connecting calls, remembering who was calling whom, which calls were urgent, how to resolve conflicts. The telephone scaled because of that "human intermediary layer." Software development went through the same stage. Before Git, code collaboration was fragile: CVS and SVN were centralized, concurrent edits meant queueing, and conflicts were expensive. Git made branching cheap, made history a first-class citizen, and made conflicts visible and resolvable. GitHub added a social layer on top: PRs, code review, issue discussions. The pattern is clear: individual capability appears first, but exponential productivity only arrives once collaboration structures emerge. AI is at exactly that point now. Organizations won't reorganize around "roles" but around "collaboration units." Many people imagine a future where AI takes over certain jobs and humans do the rest. The author disagrees. AI isn't bound by human constraints (attention, bandwidth, specialization, hierarchy), so future organizations won't be designed around "roles" but around "collaboration units." Take legal. Legal's core work is establishing "shared positions": contracts go through rounds of negotiation among lawyers, partners, and clients, with positions evolving throughout. Today much of a senior partner's value is in "remembering": precedents, risks, how positions shifted. In the future, AI will carry that coordination work, tracking all open issues, spotting conflicting positions, and escalating judgment calls to the right people. Legal teams will reorganize: lots of AI doing mechanical drafting and information gathering, a few senior partners making decisions, judging risk, and maintaining client relationships. Or marketing, whose challenge is "narrative consistency." Product marketing, growth, brand, and sales each tell their own story; today they align through meetings, copy review, and informal influence. In the future, AI will track the narrative across channels, detect drift, and escalate conflicts, and humans shift from "channel owners" to "narrative gatekeepers" and setters of strategic intent. Finance and product follow similar logic. AI isn't replacing a particular role; it's redistributing coordination work. The fastest path is to embed AI in the collaboration tools organizations already use: email, messaging, browsers, documents. These aren't "legacy systems"; they are the living infrastructure of work. How intent is expressed, how disagreement surfaces, how decisions escalate, how responsibility is recorded: all of it is encoded in these tools. And the escalation mechanisms are already built in: @mentions, annotations, comments, suggested edits, notifications. (AI can use them too.) What AI needs to do is not invent new ways of collaborating, but learn to participate and escalate within the mechanisms that already exist.
Aatish Nayak @nayakkayak

Collaborative Intelligence

Codie Sanchez @Codie_Sanchez ·
Best money I've ever spent as a CEO... an internal AI transformation hire. He doesn't care about title. He just wants to ship. And he goes across your entire org, sales, revenue, hr, apps, tech and kills stupid manual processes. Such an underrated unlock.
Or Hiltch @_orcaman ·
Today we are launching @openwork_ai, an open-source (MIT-licensed) computer-use agent that's fast, cheap, and more secure. @openwork_ai is the result of a short two-day hackathon our team ran, which brings together some of our favorite open-source AI modules into one powerful agent, to allow you to: 1. Bring your own model/API key (any provider and model supported by @opencode is supported by Openwork) 2. ~4x faster than Claude for Chrome/Cowork, and much more token-efficient, powered by dev-browser by @sawyerhood (legend) 3. More secure - contrary to Claude for Chrome/Cowork, it does not leverage the main browser instance where you are already logged into all your services. You log in only to the services you need. This significantly reduces the risk of data loss from prompt injections, to which computer-use agents are highly exposed. 4. Free and 100% open-source! You can download the DMG (macOS only for now) or fork the GitHub repo via the link in bio (@openwork_ai). Let us know what you think (or better, send a pull request)!
Claude @claudeai

Introducing Cowork: Claude Code for the rest of your work. Cowork lets you complete non-technical tasks much like how developers use Claude Code. https://t.co/EqckycvFH3

Angry Tom @AngryTomtweets ·
@antoinemarcel this is Kling AI 2.6 Motion Control
Peter Steinberger @steipete ·
Did some statistics. My productivity ~doubled with moving from Claude Code to codex. Took me a bit to figure out at first but then 💥 https://t.co/cfyKg0E1hf
David Shapiro (L/0) @DaveShapi ·
85% Of People Will be Unemployable
📙 Alex Hillman @alexhillman ·
I had my Claude assistant build a script to do them in batches. Local whisper model is free but slower. 200 would probably take a day or so. https://t.co/KFRjWr6VFf API keys work out to about $1-1.50/hr, but WAY faster, so for a few hundred bucks you can do the whole thing. My advice would be to get it to do one the way you want, THEN ask it to do a batch of 5 and see how it works/how much it costs, then ask it to do the full set
Klaas @forgebitz ·
having a monorepo turned out to be a massive advantage for ai coding all context is inside one repo api's, servers, auth, landing page, marketing sites, dashboard, ops, everything
📙 Alex Hillman @alexhillman ·
Early in building my exec assistant system, I created a workflow to capture proto-ideas that I don't want to forget but don't have time to explore or implement right now. I call them "seeds" and they all go into a folder with markdown that captures the idea, the context that generated it, and the goal. At the moment I have 132 seeds planted 😅 So I worked with my assistant to develop a scoring framework for these seeds. Here's what it is and how we use it.
Boris Cherny @bcherny ·
Super excited about this launch -- every Claude Code user just got way more context, better instruction following, and the ability to plug in even more tools
Thariq @trq212

Tool Search now in Claude Code

Dan ⚡️ @d4m1n ·
since many asked, to "install" all these 1. copy this entire directory: https://t.co/r6fcreGXPZ (including https://t.co/wtrWrWPVid) 2. paste inside the .claude/skills directory in your project 👉 skills only take a bit of context and are loaded when needed by the agent
Dan Guido @dguido ·
.@trailofbits released our first batch of Claude Skills. Official announcement coming later. https://t.co/vI4amorZrc
Harry Charlesworth @hjcharlesworth ·
The gap is getting wider and I'm glad I could finally write this down. A mental model that works for us when pairing with an agent. https://t.co/xVFJG6JgM5
Ethan Mollick @emollick ·
Teaching an experimental class for MBAs on “vibefounding,” the students have four days to come up and launch a company. More on this eventually, but quick observations: 1) I have taught entrepreneurship for over a decade. Everything they are doing in four days would have taken a semester in previous years, if it could have done it at all. Quality is also far better. 2) Give people tools and training and they can do amazing things. We are using a combination of Claude Code, Gemini, and ChatGPT. The non-coders are all building working products. But also everyone is doing weeks of high quality work on financials, research, pricing, positioning, marketing in hours. All the tools are weird to use, even with some training, but they are figuring it out. 3) People with experience in an industry or skill have a huge advantage as they can build solutions that have built-in markets & which solve known hard problems that seemed impossible. (Always been true, but the barriers have fallen to actually doing stuff) 4) The hardest thing to get across is that AI doesn’t just do work for you, it also does new kinds of work. The most successful efforts often take advantage of the fact that the AI itself is very smart. How do you bring its analytical, creative, and empathetic abilities to bear on a problem? What do you do with access to a very smart intelligence on demand? I wish I had more frameworks to clearly teach. So many assumptions about how to launch a business have clearly changed. You don’t need to go through the same discovery process if you build a dozen ideas at the same time & get AI feedback. Many, many new possibilities, and the students really see how big a deal this is.
Damian Player @damianplayer ·
this role will become a key hire for most orgs. if you aren’t actively looking for an AI partner, automation specialist, or bringing AI teams in house, you’re already behind. we’re talking to companies doing $5M-$50M/year right now. the demand is insane.
Arvind Jain @jainarvind ·
Love this. At @glean, we call these AI Outcomes Managers. They not only lead our internal “Glean on Glean” initiatives, they also work directly with customers to identify high-friction workflows, automate repetitive steps, and deploy AI agents that drive clear business impact.
Harry Charlesworth @hjcharlesworth ·
Read it here: https://t.co/uniFTtCas6
GREG ISENBERG @gregisenberg ·
40 reasons 2026 is the best time ever to build a startup
Ejaaz @cryptopunk7213 ·
there it is- "today we're introducing Personal Intelligence" now your emails, photos, youtube & search history, location, documents will all be used to train a personalized version of gemini to deliver you a tailored experience. this is all part of googles multi-pronged masterplan and they're executing much quicker than i expected tbh people are about to realize how powerful their data moat is. openai, anthropic cannot compete. wrote about this in detail here https://t.co/jkShii1XhK
Google @Google

Today, we’re introducing Personal Intelligence. With your permission, Gemini can now securely connect information from Google apps like @Gmail, @GooglePhotos, Search and @YouTube history with a single tap to make Gemini uniquely helpful & personalized to *you* ✨ This feature is launching in beta today in the @GeminiApp. See Personal Intelligence in action 🧵 ↓

Jeff Tang @jefftangx ·
Last night I stayed up late talking to Cowork about how it was built. I exported the entire VM snapshot. What I learned: - It's an Electron App with its own Linux sandbox (bubblewrap) - Cowork is a wrapper around Claude Code (which is a wrapper around Opus) - It has an "internal-comms skill" made by Anthropic - I found 2 small-ish security vulnerabilities 👀 The craziest part: When I asked it what questions I should've asked it, it suggested adding memory and leaving notes for itself once it "dies" 🥲
Simon Willison @simonw

I used Claude Code to reverse-engineer the Claude macOS Electron app and had Cowork dig around in its own environment - now I've got a good idea of how the sandbox works It's an Ubuntu VM using Apple's Virtualization framework, details here: https://t.co/lRWVhrNFk0

Jaime Jorge @jaimefjorge ·
The biggest takeaways/nuggets from my interview with @GeoffreyHuntley on AI-native software engineering and the Ralph loop: 1. Software development and software engineering are now two different professions, and one of them is over. Software development, the work of translating tickets into code, can now be done by anyone for $10-42/hour while they sleep. Software engineering, architecture, security, requirements breakdown, understanding failure modes, is where humans still matter. If you identify as a "software developer," you're competing against a bash loop. If you identify as a "software engineer," your job is to orchestrate the loops. 2. The moat you think protects your software product doesn't exist anymore. Geoffrey argues you can clone any SaaS product, even those with BSL licenses or proprietary enterprise code, using AI. He ran Ralph in reverse on HashiCorp Nomad's source code to generate clean-room specifications. When he hit gaps from missing enterprise features, he ran Ralph over their marketing materials and product docs to fill them in. Any company relying on licensing or code secrecy as a competitive moat needs to rethink their strategy. 3. Cursor, Windsurf, and every other AI coding tool are essentially the same thing: a loop that automatically copies and pastes. Geoffrey built these tools professionally and says the harness does almost nothing; the model does all the work. There's no real moat in the harness business when you're reselling tokens. The only differentiator is taste and UX. Stop evaluating tools and start learning the underlying patterns. 4. Ralph is not a product. It's an orchestrator pattern for running thousands of AI loops. The simplest version is a bash loop that deterministically allocates memory, lets the LLM pick one task, executes it, then starts fresh. The key insight: every loop gets a brand new context window. 
You avoid compaction (where the AI gets dumber as context fills up) by never letting the context window accumulate competing goals. Your institutional knowledge lives in specification files, not in the context window. 5. Specifications are the new source code. Geoffrey's workflow: spend 30 minutes in conversation with AI, drilling into requirements, making engineering decisions, building up specs. Then throw those specs to Ralph and get weeks worth of work in hours. The specs act as a "pin" that reframes every fresh loop with your domain knowledge. He doesn't hand-write specs. He code-generates them through structured conversation. Prototypes are now free. Refactoring is cheap. 6. The entry-level path into software engineering is closing fast. Geoffrey's company stopped hiring juniors for a year until they figured out how to interview for AI-native skills. There's already a cohort of juniors who've been practicing these techniques for six months. They'll work at a quarter of senior wages and outship them. If you're just picking up these tools today, you're behind. The new interview question: can you explain how to build a coding agent on a whiteboard? 7. Senior engineers who refuse to adapt are in more danger than juniors who embrace it. Geoffrey sees respected engineers taking hardline stances against AI ("it's installing fascism in your codebase"). Meanwhile, leadership teams are discovering Ralph and realizing three people can run the output of an entire org. When commit velocity and product velocity diverge that dramatically between adopters and non-adopters, founders notice. The hard line is coming. 8. AI is an amplifier of operator skill, not a replacement for it. If you're great at security and you get good at AI, you become a weapon. If you're mediocre and you use AI, you're still mediocre, just faster. The skill gap comes from "discoveries": learning the tricks, the loop-backs, the ways to close the automation loop. 
These techniques don't have standardized language yet. We're inventing the terms for the new computer every day. 9. Open source may no longer make sense for most use cases. Geoffrey, a former prominent open source maintainer whose land was funded by Open Collective, no longer uses open source libraries. His reasoning: every dependency injects a human into the loop. If there's a bug, you open a PR, chase a maintainer, wait. That's not automation. Instead, code-generate what you need. The exception: don't generate cryptography or security-critical code unless you have the domain expertise to verify it. 10. Programming languages now have a tier list based on how well AI agents can work with them. S-tier: Rust, TypeScript (especially with Effect.js), Python with Pydantic. These are source-based with strong type systems that reject invalid generations and work well with ripgrep for code discovery. F-tier: Java and .NET. Their DLL-based dependency systems don't work natively with the search tools AI agents use. The tradeoff with Rust: compilation is slow, so bad generations cost more time. 11. Corporate AI transformation programs are dangerously slow. Three-to-four-year rollouts with coaches and committees won't cut it when three founders in Bali can Ralph your entire product and undercut your pricing by 99%. Smaller teams ship faster. By the time the transformation is done, the market has moved. Geoffrey calls this the "Titanic moment": the boat is full, get the next boat. 12. We have a new computer, and that's why the legends are coming out of retirement. The last 40 years of computing decisions were designed for humans: TTYs, environment variables, slow language evolution to avoid breaking mental models. Now we have robots. What's the bare minimum a robot needs? Geoffrey sees this as the most exciting time in computing. If you're not excited about what you can now build, you haven't truly picked up the new computer yet.
Arlan @arlanr ·
it happened mcp is no longer bs
Miles Deutscher @milesdeutscher ·
If you're building with Claude Code, you'll want to bookmark this site. A full agent marketplace of 60,000+ Claude Skills that are ready for use now. https://skillsmp.com/ https://t.co/YfZRf4w9TJ
Victor @victor_explore ·
@DanielGlejzner the real context window was the architecture decisions we made along the way
Pleometric @pleometric ·
Are you enjoying Claude Code? 😂 https://t.co/J7V9qcIIEE
Addy Osmani @addyosmani ·
AI may change how we do code reviews. PRs show what changed. Prompt logs show what the human actually wanted. Full trajectories - the conversation, the iterations, the steering - show you how they got there. When agents write the code, review inverts. You stop asking only "is this correct?" and start asking "was this intent clear enough to execute safely?" Most teams won't abandon code review. They'll do both. Review the output for correctness, review the trajectory for intent. The diff tells you what shipped. The conversation tells you why. We're not replacing PRs but we may consider the prompt is the spec, the code is the build output, and review should also happen at the layer where human judgment actually lives.
Gergely Orosz @GergelyOrosz

"I don't like pull requests (PRs) any more. A large chunk code change doesn't tell me much about the intent or why it was done. I now prefer prompt requests. Just share the prompt you ran / want to run. If I think it's good, I'll run it myself and merge it." - @steipete wow

🎭 @deepfates ·
Oh you can just make claude code a RLM by telling it to look at its own conversation logs
ℏεsam @Hesamation ·
the Cursor team released a blog post on the best practices of coding with agents. writing fully functional code vs slop comes down to following 10 very simple principles: 1. use plan mode before any code 2. start fresh conversations when it gets confused 3. let the agent get its context, don’t tag everything 4. revert and refine instructions rather than fixing hopelessly 5. add rules for repeated mistakes 6. write tests first so it can iterate 7. run multiple models and pick the best 8. use debug mode for stubborn bugs 9. specific prompts get way better results 10. give it linters and tests to verify​​​​​​​​​​​​​​​​ blog: https://t.co/M9dWf27F4V
eric provencher @pvncher ·
I heard from someone who works at a big tech co that they started rolling out Claude code to employees, with a budget of $100 in credits per month, but people burn through it in 2-3 days. Idk how we scale out agentic work with api pricing
Alex Sidorenko @asidorenko_ ·
"How can I use react-best-practices skills?" Codex example 👇 https://t.co/dUrnqOUWIu
Guillermo Rauch @rauchg

We're encapsulating all our knowledge of @reactjs & @nextjs frontend optimization into a set of reusable skills for agents. This is a 10+ years of experience from the likes of @shuding, distilled for the benefit of every Ralph https://t.co/2QrIl5xa5W

Matthew Lam @mattlam_ ·
Fully set up my @clawdbot and now I have my 24/7 personal assistant + coding agent for $5/month. Easy to setup, I just got claude and codex to help me with Hetzner for VPS, and now I get some of my favorite use cases 24/7: - have a new project idea? Instead of just writing in my todo list, tell Clawdy (my assistant) to start helping me do relevant research, set up a new repo, or even start coding. - look through my task list, calendar, emails to help me plan my day and keep track of tasks - periodic reminders that I need (no longer need to go through Apple Reminders app just tell Clawdy) - X's search, including posts you've seen, I find pretty bad, I just get Clawdy to look for me with bird cli, much more likely to find a tweet I forgot to bookmark. @nikitabier checkout @steipete 's https://t.co/fbxAH2WyAp and set yourself up with a personal assistant
David Shapiro (L/0) @DaveShapi ·
I have revised my estimates for "future employable humans" For reference, my last work estimated around 20% to 25% total labor force participation rate. However, as I've refined my approaches and assumptions, that has been revised down to a LFPR of only 15%. That means that, in the future, I anticipate that less than 1 out of 6 working age adults will have meaningful employment. That may sound abysmal, but the solution is elegant.
Bilgin Ibryam @bibryam ·
"The best software engineers won’t be the fastest coders, but those who know when to distrust AI." The Next Two Years of Software Engineering - @addyosmani https://t.co/gcR3b75Mpu
Ethan Mollick @emollick ·
Could this meeting be an email? Could this organization be a set of markdown files?
Ethan Mollick @emollick ·
Had Claude Code build a little plugin that visualizes the work Claude Code is doing as agents working in an office, with agents doing work and passing information to each other. New subagents are hired, they acquire skills, and they turn in completed work. Fun start. https://t.co/wm93gsiBWi
Simon Willison @simonw ·
This is great - context pollution is why I rarely used MCP, now that it's solved there's no reason not to hook up dozens or even hundreds of MCPs to Claude Code