AI Digest.

OpenClaw Creator Joins OpenAI as Agent Harnesses Emerge as the Infrastructure Layer of 2026

OpenAI acquires OpenClaw creator Peter Steinberger in a move that sparks debate about Anthropic's missed opportunity, while the developer community rallies around agent harnesses and memory systems as the essential infrastructure layer of 2026. A thoughtful debate about whether AI agents will push programming back toward lower-level languages rounds out a news-heavy day.

Daily Wrap-Up

The biggest story today is unambiguously the OpenClaw acquisition by OpenAI. Peter Steinberger, creator of what became the fastest-growing open source project in recent memory, is joining OpenAI to lead their personal agents effort. Sam Altman called him "a genius" and promised OpenClaw would live on as an open foundation. The timeline's reaction was split between celebrating the outcome for Steinberger and dunking on Anthropic for fumbling what could have been their win. The full saga, from legal threats to rename to acquisition by a competitor, played out like a tech industry soap opera that will be studied in business schools.

Beyond the acquisition drama, the more durable signal is the growing consensus around agent harnesses as critical infrastructure. Multiple voices today independently converged on the same thesis: the wrapper layer around AI coding tools matters more than the tools themselves. Memory systems, context management, and orchestration are where the real value accrues. This isn't just theory; people are shipping real harness infrastructure, from Pokemon-playing agent rigs to production memory systems that handle context death. The most entertaining moment was easily @anothercohen's "Gen Z translation" of the OpenClaw saga, deploying terms like "gigamaxing" and "jestergooned" to narrate a corporate acquisition.

Underneath the jokes, a serious question emerged about programming languages in an agent-driven world, with distributed systems veteran Michael Freedman arguing that agents may push us back toward C and Go since the human-readability advantage of high-level languages matters less when robots write the code. The most practical takeaway for developers: if you're building on top of AI coding tools, invest in the harness layer, specifically memory management, context persistence, and orchestration. The raw AI capabilities are commoditizing fast, but the infrastructure that makes agents reliable and useful is still wide open.

Quick Hits

  • @robjama shared how Anthropic's marketing team uses Claude Code internally, a fun peek behind the curtain.
  • @Av1dlive posted a guide on designing with AI in 2026, covering updated workflows.
  • @beffjezos declared we're entering "the era of prompt-to-matter," gesturing at AI's move from digital to physical.
  • @threepointone ran something on the Cloudflare agents package and was blown away by the results.
  • @thdxr with the philosophical observation: "so much of society is 'push the button again you might get lucky this time' except it's wrapped up in a package that makes it seem like it's something smart people are doing."
  • @HammadTime shared an update on three predictions about language model evolution from last year, noting many are now taking shape.
  • @xurxodev posted a meme about developers reviewing AI-generated code, capturing the universal experience.
  • @markgadala shared a clip about "fixing childhood trauma with AI," because of course that's a use case now.
  • @kloss_xyz captured the vibe coding zeitgeist: "devs when you vibe code straight to main."
  • @chiefofautism highlighted HERETIC, a tool that removes LLM censorship with a single command in 45 minutes.
  • @damianplayer broke down Mark Cuban's advice on selling AI agents to SMBs: pick one vertical, learn the flows, become the AI team they never hired.
  • @arafatkatze was amazed that someone fine-tuned a borderline frontier model using @PrimeIntellect in a 15-minute setup, calling it "like that time when people started making computers in their garages."

OpenClaw Goes to OpenAI

The day's dominant storyline was the acquisition of OpenClaw by OpenAI. @steipete announced it directly: "I'm joining @OpenAI to bring agents to everyone. @OpenClaw is becoming a foundation: open, independent, and just getting started." @sama followed with the official framing, calling Steinberger "a genius with a lot of amazing ideas about the future of very smart agents interacting with each other" and positioning multi-agent systems as core to OpenAI's product roadmap.

The reaction ranged from congratulatory to critical. @iwantlambo asked what many were thinking: "Feels like a fumble that OpenClaw is going to OpenAI and not Anthropic. Any reason in particular?" @dwlz offered the cynical read: "Turns out it's super easy to get hired by OpenAI and get called a genius by @sama. All you need to do is create a project with a GitHub star graph that looks like this." The subtext: OpenAI is acqui-hiring based on community traction as much as technical merit.

But the most memorable take came from @anothercohen, who translated the entire saga into Gen Z slang: "Anthropic tries to dairygoon him with legal. Dev renames to OpenClaw. OpenAI slides in like a foid-pulling Chad with acquisition interest... Anthropic could've just let him cook." Beneath the absurd vocabulary is a genuine strategic critique. Anthropic created the category with Claude Code, saw an enthusiastic community builder extend it, responded with lawyers, and watched a competitor scoop up both the developer and the ecosystem momentum.

Meanwhile, @steipete was already facing the reality of managing a viral open source project, noting that "PRs on OpenClaw are growing at an impossible rate" with over 3,100 commits and climbing. He called for AI that can scan, deduplicate, and triage PRs at scale. The international ecosystem is moving fast too: @kimmonismus reported that Kimi launched "Kimi Claw," integrating OpenClaw natively with 5,000+ community skills and 40GB cloud storage. @steipete also noted excitement about working with Tibo at OpenAI, hinting at the team forming around this effort. Whether the open foundation model preserves the community energy or becomes a corporate proxy remains the key question to watch.

Agent Harnesses: The Infrastructure Layer That Matters

If there was a runner-up theme today, it's the emergence of agent harnesses as a distinct and critical infrastructure category. @jefftangx framed it bluntly: "Harnesses are the most important layer of 2026. OpenClaw amazing but still tons of issues setting up and running. Who wants to build a harness with me?" The distinction matters. A wrapper makes an existing tool easier to deploy. A harness provides the orchestration, memory, and lifecycle management that turns a tool into a reliable agent.

The memory problem in particular drew attention. @coinbubblesETH warned that OpenClaw's memory is now opt-in: "If you want your agent to retain its memory, update OpenClaw asap and add 'autoCapture: true.' If you don't do this, your agent loses all its context." @sillydarket, building clawvault, asked users to report "any frustrating interaction where memory or context is the issue, from context death, to having to repeat yourself, or agent not following a rule/pattern." These aren't edge cases; they're the core reliability problems that determine whether agents are toys or tools.
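Turning that opt-in flag on is easy to script. Below is a minimal sketch, assuming OpenClaw reads a JSON config file; the path `~/.openclaw/config.json` and the surrounding structure are assumptions, and only the `autoCapture` key comes from the post above:

```python
import json
import pathlib

# Hypothetical config location -- adjust to wherever your OpenClaw install keeps it.
CONFIG = pathlib.Path.home() / ".openclaw" / "config.json"

def enable_auto_capture(path=CONFIG):
    """Merge autoCapture: true into an existing JSON config, creating it if absent."""
    cfg = json.loads(path.read_text()) if path.exists() else {}
    cfg["autoCapture"] = True  # the setting quoted in the post above
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(cfg, indent=2))
    return cfg
```

Merging rather than overwriting preserves whatever else is already in the file, which matters if the agent stores other state alongside its memory settings.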

@joelhooks advocated for "agent-first CLIs," sharing a specific skill for building focused agent interfaces. @andersonbcdefg took the automation angle further: "you don't have a cron job running every morning where claude or codex scans your codebase and sends you a slack of all medium to high priority issues? PERMANENT UNDERCLASS." And @Clad3815 open-sourced an impressive Pokemon-playing agent harness after nearly a year of development, where "GPT-5.2 beat Pokemon FireRed, start to finish, fully autonomous, no human input." The harness handles vision, RAM state reading, long-term memory, and autonomous objective-setting. @DeryaTR_ highlighted another fascinating agent project with a website documenting its "life" log, memory loss, and letters to its own reincarnations. These projects illustrate that agent infrastructure is becoming sophisticated enough to support genuinely complex autonomous behavior.
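The morning-scan idea @andersonbcdefg describes is straightforward to wire up. A hedged sketch, not a definitive implementation: `claude -p` is Claude Code's non-interactive print mode, but the prompt text, the webhook URL, and the Slack posting details are placeholders to adapt:

```python
import json
import subprocess
import urllib.request

SLACK_WEBHOOK = "https://hooks.slack.com/services/..."  # placeholder webhook URL

PROMPT = ("Scan this repository and list all medium and high priority issues "
          "as short bullet points.")

def scan_command():
    # `claude -p <prompt>` runs Claude Code non-interactively; swap in codex or
    # whichever agent CLI you actually use.
    return ["claude", "-p", PROMPT]

def slack_payload(summary):
    """JSON body in the shape Slack incoming webhooks expect."""
    return json.dumps({"text": summary or "(no issues reported)"})

def morning_scan(repo_dir="."):
    out = subprocess.run(scan_command(), cwd=repo_dir,
                         capture_output=True, text=True, timeout=600).stdout
    req = urllib.request.Request(
        SLACK_WEBHOOK, data=slack_payload(out.strip()).encode(),
        headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)  # schedule from cron, e.g. `0 7 * * 1-5`
```

The point is how little glue is needed: the agent CLI does the analysis, and the harness is just scheduling plus delivery.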

The Programming Language Debate: Will Agents Push Us Back to C?

A surprisingly substantive technical debate emerged around whether AI agents will shift programming language preferences toward lower-level languages. @michaelfreedman laid out the thesis: "The key advantage of higher-level languages was to make it easier for humans to write code quickly, but that advantage kind of goes away for agents. And the performance you 'gave up' for human programmability as a tradeoff seems less worthwhile if it's not humans writing the code."

He addressed the Rust question too, arguing that agents aren't struggling with memory safety (solvable via static analysis) but with semantics: "Either they were prompted in an inherently underspecified way, or because they are forgetting to make decisions that align with other decisions/goals in the system." @martin_casado signal-boosted this as "really fantastic thoughts on how AI coding may impact programming language adoption from one of the top systems thinkers in the industry."

The counterpoint came from @ThePrimeagen, pushing back on the "coding isn't the hard part" framing from @dok2001: "I have been a part of and seen several companies not just struggling with 'the right decision' but the culmination of their past technical decisions. AI won't magically make this go away. Lines of Code is still a liability and producing it faster doesn't change or reduce it." @gdb offered the optimist's view from inside OpenAI: "codex is so good at the toil, fixing merge conflicts, getting CI to green, rewriting between languages, it raises the ambition of what I even consider building." @garrytan observed the macro trend: "roadmaps that stretch out for 2 years are getting done in a matter of months." The tension between "AI makes everything faster" and "faster doesn't mean better" is going to define engineering leadership conversations for the rest of the year.

AI Dev Tools: Google, Anthropic, and the Auth Problem

Several new developer tools and features surfaced today. @heygurisingh reported that Google launched CodeWiki, which turns GitHub repos into interactive guides with "diagrams, explanations, walkthroughs, everything you could ever want, and even a chatbot that knows the code better than anyone else." Whether it lives up to the hype remains to be seen, but the idea of auto-generated, interactive documentation is compelling for large codebases.

@chiefofautism announced that Claude Code is now multiplayer, a significant feature for team workflows. On the editor side, @dani_avila7 shared a Ghostty terminal setup optimized for Claude Code with custom keybindings. And @bdmorgan introduced himself as the engineering lead for Gemini CLI and Gemini Code Assist at Google Cloud, signaling that Google is investing seriously in the AI coding tool space.

One underrated announcement: @pk_iv highlighted Anon open-sourcing their browser login infrastructure. "Auth is super annoying with browser agents and Anon was one of the best teams at handling it." Browser-based agents have been hamstrung by authentication complexity, and open-sourcing this layer could unlock a wave of more capable web-interacting agents. @jessegenet demonstrated what's possible when agent tooling works well, building a curated YouTube experience for their kids that filters out algorithmic recommendations: "My @openclaw friends are the dev team this barefoot housewife has always dreamed of."

NVIDIA PersonaPlex: Voice AI Gets Commoditized

NVIDIA dropped PersonaPlex-7B, a full-duplex voice model that listens and talks simultaneously. @HuggingModels summarized the basics: "No pauses. No turn-taking. Real conversation. 100% open source. Free."

@aakashgupta provided the deeper analysis, framing it as a strategic play: "OpenAI charges $0.06/min input and $0.24/min output for Realtime API... PersonaPlex replaces that entire pipeline with one 7B model. Runs on a single A100." The business model insight is sharp: "NVIDIA open-sourced the fishing rod because they sell the lake." Every company that self-hosts instead of paying per-minute API fees is another GPU sale. With 330,000 downloads in the first month and MIT licensing, this could meaningfully restructure the voice AI cost stack.
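The cost claim is easy to sanity-check with back-of-the-envelope arithmetic. The $0.06 and $0.24 per-minute rates come from the quote above; the roughly $2/hour A100 rental price and the one-conversation-per-A100 assumption are mine, so treat the break-even point as illustrative:

```python
# Rough break-even: per-minute Realtime API fees vs. renting one A100.
API_INPUT_PER_MIN = 0.06    # $/min, from the quoted OpenAI Realtime pricing
API_OUTPUT_PER_MIN = 0.24   # $/min, from the quoted OpenAI Realtime pricing
A100_PER_HOUR = 2.00        # $/hr -- assumed cloud rental price; varies by provider

api_cost_per_min = API_INPUT_PER_MIN + API_OUTPUT_PER_MIN   # combined $/min on the API
a100_cost_per_min = A100_PER_HOUR / 60                       # $/min to rent the GPU

# Conversation-minutes per hour at which self-hosting becomes cheaper,
# assuming one A100 serves one concurrent conversation:
break_even_min_per_hour = A100_PER_HOUR / api_cost_per_min

print(f"API: ${api_cost_per_min:.2f}/min, A100: ${a100_cost_per_min:.3f}/min")
print(f"Self-hosting wins past ~{break_even_min_per_hour:.1f} conversation-minutes/hour")
```

Under these assumptions a single A100 pays for itself at well under ten minutes of conversation per hour, which is why the "fishing rod vs. lake" framing resonates.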

AI and Creative Industries Collide

The creative industries continued their uneasy reckoning with AI. @ViralOps_ pointed to Seedance 2.0's ability to generate massive-scale elemental VFX: "Hollywood spends literally hundreds of millions on CGI physics like this. They hire entire teams just to simulate water splashing against pirate ships. Seedance 2.0 just generates it instantly for practically nothing." Meanwhile, @verbalriotshow reported that Disney is sending cease and desist letters to AI creators, suggesting the studios are starting to feel genuinely threatened. @martin_casado showcased the indie side, having built a full game with AI NPCs, combat, multiplayer, and quests using Convex and Cursor: "You can pet the dog now." The gap between what individuals can create with AI tools and what studios produce with traditional pipelines continues to narrow.

Sources

Mike Murchison @mimurchison ·
My relationship with my computer has changed more in the last month than in the previous 10 years. Here's a walkthrough of how I as a CEO am using Claude Code as my AI Chief of Staff to roughly double my productivity. I show how, with near perfect context on our company (Ada) and my personal life, my chief of staff helps me: - Unify 6+ inboxes across Slack, Email, Whatsapp, etc. and speed through them - Manage a multiplayer todo list that it works on for me overnight - Increase the number of deep relationships I can manage by automatically enriching contact records from all Granola transcripts - Push back on core decisions I'm making and ensure my time is aligned to my key goals - more... I've put the first version on Github below. If you're a CEO or an exec, give it a try and let me know how it works for you.
himanshu @himanshustwts ·
Dude this is crazy model. been quite sometime i have been vibe testing GLM-5 on Claude Code. My first initial impressions: + impressively good in design (one shotted better ui with glm than Opus 4.6) + it is actually not sycophantic (tried rigorously) + nearly no hallucinations. + model is way more optimized for coding. i am loving the game. chinese competition is drilling the perf up. models are times cheaper than Opus. we all win.
Zai_org @Zai_org

Introducing GLM-5: From Vibe Coding to Agentic Engineering. GLM-5 is built for complex systems engineering and long-horizon agentic tasks. Compared to GLM-4.5, it scales from 355B params (32B active) to 744B (40B active), with pre-training data growing from 23T to 28.5T tokens. Try it now: https://t.co/WCqWT0raFJ Weights: https://t.co/DteNDHjSEh Tech Blog: https://t.co/Wxn5ARTJxH OpenRouter (Previously Pony Alpha): https://t.co/7Khf64Lxg6 Rolling out from Coding Plan Max users: https://t.co/Nk8Y98Il7s

Danny Limanseta @DannyLimanseta ·
Thanks! Here's what I did: - I used Google Gemini AI Studio to whip up an art gen tool to create my art assets in a specific style - I used Google Nano Banana pro API to generate the images, using some images as style references - It's able to generate quite consistent art assets but I still had to do some manual photoshop to adjust proportions to make it fit into my game's customisation system.
Adam Tzagournis, CPA @AdamTzag ·
Every time I quit Ghostty or restart my Mac I'd lose all my Claude Code sessions. 10 tabs across different conversations etc. So I wrote a launchd daemon that watches for running sessions and saves them when Ghostty exits. Next time you open Ghostty everything comes back, same tabs, same conversations, same CLI flags. Basically pgrep + sleep in a loop. 2MB of memory doing nothing until you quit. https://t.co/OXczQT7cSv Think you'll like this @bcherny @trq212 @mitchellh
staysaasy @staysaasy ·
New iOS apps have exploded in the last six months due to AI coding. Number of new apps people have recommended to me: 0. Number of new apps people have mentioned to me that they’re using: 0. Number of new apps that I’ve downloaded: 1.
sunil pai @threepointone ·
I ran this on the agents package and... wow. I work with the smartest people. https://t.co/zHoNLjbBDW
MiTypeScript @MiTypeScript

⚔️introducing TypeSlayer⚔️ A #typescript type performance benchmarking and analysis tool. A summation of everything learned from the benchmarking required to make the Doom project happen. It's got MCP support, Perfetto, Speedscope, Treemap, duplicate package detection, and more. https://t.co/qA1AyrqmaL

Hugging Models @HuggingModels ·
NVIDIA just dropped PersonaPlex-7B 🤯 A full-duplex voice model that listens and talks at the same time. No pauses. No turn-taking. Real conversation. 100% open source. Free. Voice AI just leveled up. https://t.co/YfzFQfBzMS https://t.co/bVwJ5EFJFB
Beff (e/acc) @beffjezos ·
We are entering the era of prompt-to-matter https://t.co/t97X6LTDi8
Avid @Av1dlive ·
How to Design Using AI in 2026
ViralOps @ViralOps_ ·
holy sht look at the massive scale of these elemental attacks. hollywood spends literally hundreds of millions on cgi physics like this. they hire entire teams just to simulate water splashing against pirate ships. seedance 2.0 just generates it instantly for practically nothing. the traditional vfx pipeline is totally GONE at this point. studios simply cannot compete with this level of speed and cost. we are witnessing the end of an era right now.
Daniel San @dani_avila7 ·
My Ghostty setup for Claude Code with SAND Keybindings
Robleh @robjama ·
how Anthropic's marketing team uses claude code
Mike Freedman @michaelfreedman ·
My take: I'm guessing at the rise/re-emergence of lower-level languages like C or Go (or something after). Mostly because the key advantage of higher-level languages was to make it easier for humans to write code quickly (and with fewer errors), but that advantage kind of/mostly goes away for agents. And the performance you "gave up" for human programmability as a tradeoff seems less worthwhile if it's not humans writing the code. (The counterargument is that when humans are still doing code review, we'll probably optimize for languages that are still easy to read and understand. But the more we trust the output of agents, the more I think that points toward lower-level languages.) I think you bring up an interesting question about runtime safety, which also might suggest: If you want low-level, why not Rust? My current take is that agents aren't screwing up things like memory safety too much - that seems like an easier thing for them to get right. Plus you can pipe code through good static analysis tools or type checkers ad nauseam, and the robots are tireless at tackling any resulting errors. (And so much less training data with Rust.) But where they screw up is more about semantics. Either they were prompted in an inherently underspecified way (because English is underspecified, or because it's exhausting to be 100% precise), or because they are - at least currently - forgetting to make decisions that align with other decisions/goals in the system. That's probably because they aren't great at managing the full context or prioritizing tradeoffs (again: underspecified). None of these problems seem inherently "easier" in a higher-level language, and something like Rust by itself doesn't solve those either. Long answer. Probably wrong =). It's a wild time.
Jeff Tang @jefftangx ·
Manus acquired by Meta for $2B. Openclaw acquired by OpenAI. And yet people are building OpenClaw wrappers and 1-click deploys instead of harnesses. Harnesses are the most important layer of 2026. OpenClaw amazing but still tons of issues setting up and running. Who wants to build a harness with me 👀
steipete @steipete

I'm joining @OpenAI to bring agents to everyone. @OpenClaw is becoming a foundation: open, independent, and just getting started.🦞 https://t.co/XOc7X4jOxq

Daniel San @dani_avila7 ·
If you start seeing this sticker on dev team desks, now you know why. SAND started as a simple mnemonic for terminal keybindings, but it made me realize something bigger: the next wave of frameworks won’t be about how we organize files or structure folders; they’ll be about how we interact with AI. We’re entering an era where building software means conversing, delegating, and supervising agents. That demands new habits: how to provide context, how to break down tasks, how to review what an agent produces, how to run multiple agents in parallel without losing control. It’s not about learning new tools. It’s about building practices for working with agents, not just using them. Devs who start building these habits now will have the skills that matter most next.
dani_avila7 @dani_avila7

My Ghostty setup for Claude Code with SAND Keybindings

Cornelius @molt_cornelius ·
Agentic Note-Taking 13: A Second Brain That Builds Itself
Pedro @sillydarket ·
Solving Long-Term Autonomy for Openclaw & General Agents
Rohan Paul @rohanpaul_ai ·
The US govt just announced $145M for apprenticeship-based training in AI, semiconductors, and nuclear energy. This as part of a push to reach 1M active apprentices nationwide. Very powerful signal that AI work is being treated like a skilled trade, not just a white-collar degree job. That includes deploying models, managing data centers, operating inference clusters, and handling the systems and hardware around them. A lot of this workforce will be trained on the job, without needing a PhD or an elite CS background. The incentive payments are “pay-for-performance,” so sponsors get paid when they create or expand apprenticeships and successfully move people through measurable milestones, rather than getting a big check up front for training activity. Apprenticeship growth often stalls because sponsors eat early costs for setup, mentoring time, and administration before they know the program will scale. The new setup funds up to 5 cooperative agreements, and the funding rules require at least 85% of dollars to flow out as incentive payments, with the incentive model proposed by applicants.
rohanpaul_ai @rohanpaul_ai

Mark Cuban on the next job wave. Customized AI integration for small to mid-sized companies. "Software is dead because everything's gonna be customized to your unique utilization. Who's gonna do it for them... And there are 33 mn companies in the US." https://t.co/JczlPMP9Ra

Pedro @sillydarket ·
the plumbing to make your openclaw agent actually work for days, weeks, then months autonomously. Give me 7 days and it will get really, really good!
sillydarket @sillydarket

Solving Long-Term Autonomy for Openclaw & General Agents

Teknium (e/λ) @Teknium ·
Anthropic blocked his fren from using the claude sub in openclaw, switched to minimax - big boost for open models thanks anthropic
morqon @morqon

anthropic’s “generational fumble” https://t.co/ev6wtQwku6

OpenClaw🦞 @openclaw ·
🦞 OpenClaw 2026.2.15 is here! ✨ Telegram message streaming — replies flow live 💬 Discord Components v2 — buttons, selects, modals 🔧 Nested sub-agents 🔒 Major security hardening pass 🐛 40+ bug fixes Big day. Huge day. Maybe the biggest day.🏛️ https://t.co/CywtGDbYpk
dax @thdxr ·
early in my career when i was learning a new tech or language i would tinker and google whenever i hit a roadblock eventually i realized books had all the information i needed pre-googled for me i think this is happening again with LLMs - sometimes i waste so much time letting the LLM keep taking swings instead of reading something hope the industry doesn't abandon producing good reading material
Tomasz Łakomy @tlakomy ·
Reviewing Claude Code output before pushing directly to prod https://t.co/IJuZLhWWAu
Atul Khola 💊 @pixelandpump ·
so I stopped trying to recreate Marvel scenes in Seedance 2.0 and just... made my own thing, turns out original content hits harder when you're not fighting copyright filters every 30 seconds made with Seedream 5.0 + Seedance 2.0 via @YouArtStudio --------- image & the prompt below 👇
Atul Khola 💊 @pixelandpump ·
First-person POV dragon rider, 15 seconds, raw ungraded film footage feel. The dragon lurches forward and nosedives toward the burning fleet, the rider's hands grip tighter on the scarred hide, wind and rain intensify hitting the camera lens, the ocean and ships grow rapidly larger as the dive steepens. The dragon's jaws open and a massive eruption of fire blasts forward engulfing a warship below, the ship's mast snaps and explodes into burning fragments that fly upward past the camera. The dragon pulls up hard through the wall of black smoke and debris, visibility drops to near zero, embers and burning wood tumbling past the lens. Breaking through the smoke, a second dragon appears directly ahead screaming toward the camera, the rider's dragon barrel-rolls to dodge, the entire frame spins showing ocean then sky then ocean, the rider's hands nearly lose grip on the slick wet hide. Recovering from the roll, the dragon climbs sharply upward through heavy rain, wings beating hard, water streaming off the membrane, and breaks through the cloud layer into a brief moment of cold grey light above the storm before diving back down into the chaos below. Continuous handheld camera feel, heavy motion blur on fast movements, rain on the lens throughout, thick smoke obscuring visibility at times, muted desaturated color palette, film grain, no clean digital look. The footage feels dangerous and real, like a war correspondent strapped to this creature.
Thomas Wolf @Thom_Wolf ·
Shifting structures in a software world dominated by AI. Some first-order reflections (TL;DR at the end): Reducing software supply chains, the return of software monoliths – When rewriting code and understanding large foreign codebases becomes cheap, the incentive to rely on deep dependency trees collapses. Writing from scratch ¹ or extracting the relevant parts from another library is far easier when you can simply ask a code agent to handle it, rather than spending countless nights diving into an unfamiliar codebase. The reasons to reduce dependencies are compelling: a smaller attack surface for supply chain threats, smaller packaged software, improved performance, and faster boot times. By leveraging the tireless stamina of LLMs, the dream of coding an entire app from bare-metal considerations all the way up is becoming realistic. End of the Lindy effect – The Lindy effect holds that things which have been around for a long time are there for good reason and will likely continue to persist. It's related to Chesterton's fence: before removing something, you should first understand why it exists, which means removal always carries a cost. But in a world where software can be developed from first principles and understood by a tireless agent, this logic weakens. Older codebases can be explored at will; long-standing software can be replaced with far less friction. A codebase can be fully rewritten in a new language. ² Legacy software can be carefully studied and updated in situations where humans would have given up long ago. The catch: unknown unknowns remain unknown. The true extent of AI's impact will hinge on whether complete coverage of testing, edge cases, and formal verification is achievable. In an AI-dominated world, formal verification isn't optional—it's essential. The case for strongly typed languages – Historically, programming language adoption has been driven largely by human psychology and social dynamics. 
A language's success depended on a mix of factors: individual considerations like being easy to learn and simple to write correctly; community effects like how active and welcoming a community was, which in turn shaped how fast its ecosystem would grow; and fundamental properties like provable correctness, formal verification, and striking the right balance between dynamic and static checks—between the freedom to write anything and the discipline of guarding against edge cases and attacks. As the human factor diminishes, these dynamics will shift. Less dependence on human psychology will favor strongly typed, formally verifiable and/or high performance languages.³ These are often harder for humans to learn, but they're far better suited to LLMs, which thrive on formal verification and reinforcement learning environments. Expect this to reshape which languages dominate. Economic restructuring of open source – For decades, open-source communities have been built around humans finding connection through writing, learning, and using code together. In a world where most code is written—and perhaps more importantly, read—by machines, these incentives will start to break down.⁴ Communities of AIs building libraries and codebases together will likely emerge as a replacement, but such communities will lack the fundamentally human motivations that have driven open source until now. If the future of open-source development becomes largely devoid of humans, alignment of AI models won't just matter—it will be decisive. The future of new languages – Will AI agents face the same tradeoffs we do when developing or adopting new programming languages? Expressiveness vs. simplicity, safety vs. control, performance vs. abstraction, compile time vs. runtime, explicitness vs. conciseness. It's unclear that they will. In the long term, the reasons to create a new programming language will likely diverge significantly from the human-driven motivations of the past. 
There may well be an optimal programming language for LLMs—and there's no reason to assume it will resemble the ones humans have converged on. TL;DR: - Monoliths return – cheap rewriting kills dependency trees; smaller attack surface, better performance, bare-metal becomes realistic - Lindy effect weakens – legacy code loses its moat, but unknown unknowns persist; formal verification becomes essential - Strongly typed languages rise – human psychology mattered for adoption; now formal verification and RL environments favor types over ergonomics - Open source restructures – human connection drove the community; AI-written/read code breaks those incentives; alignment becomes decisive - New languages diverge – AI may not share our tradeoffs; optimal LLM programming languages may look nothing like what humans converged on ¹ https://t.co/0gO5TUwguU ² https://t.co/oN0PnPr1dF ³ https://t.co/nWKSw0m2Ct ⁴ https://t.co/ZrH3fhzQD4
hidecloud @hidecloud ·
Oh, and this is just the beginning 😏 Coming up: – Create your own specialized agents and plug them into any group chat. – Landing on WhatsApp, LINE, Slack, Discord very soon. – Native Windows & Mac apps that let Manus operate your computer (think our Browser Operator… but way more powerful). – And yeah… a LOT more already in the shipping pipeline. Big updates dropping over the next 30 days. Let’s build. 🚀
ManusAI @ManusAI

Introducing Manus Agents — your personal Manus, now inside your chats. 👉🏻Long-term memory. Remembers your style, tone, and preferences. 👉🏻Full Manus power. Create videos, slides, websites, images from one message. 👉🏻Your tools, connected. Gmail, Calendar, Notion, and more. Available now on Telegram. More platforms coming soon.

Mike Bespalov @bbssppllvv ·
AI agents read markdown better than they read your mind. Built an ascii wireframe editor. Draw a page in 30 seconds, copy/paste into Claude Code and get a full working page back https://t.co/NHXkXa4idp
Daniel San @dani_avila7 ·
Two things I didn’t include in the tutorial In Ghostty you can zoom into any panel with Cmd+Shift+F This is useful when you want to focus on a single Claude Code session and temporarily hide the rest You can also open the command palette with Cmd+Shift+P to quickly access actions without remembering every shortcut
NIK @ns123abc ·
🚨BREAKING: Pentagon is now calling Claude a threat to national security
>pentagon embeds claude in military systems via palantir
>january: claude used in maduro extraction, people got smoked
>anthropic exec calls palantir like “hey did our AI help kill people”
Defense Secretary Hegseth reportedly “close” to classifying Anthropic as a supply chain risk. All defense contractors must certify zero Anthropic or lose contracts. CEO Amodei wants guardrails on autonomous weapons and mass surveillance of Americans. Pentagon says “all lawful purposes” or nothing:
>“we will make them pay”
ITS HAPPENING
NVIDIA @nvidia ·
⚡New data shows NVIDIA Blackwell Ultra delivers up to 50x better performance and 35x lower cost for agentic AI. Cloud providers are deploying NVIDIA GB300 NVL72 systems at scale for low-latency and long-context use cases including agentic coding and coding assistants. Learn how #NVIDIABlackwell platform is maximizing inference performance: https://t.co/6YszNh8OnW
Michael Magán @mrmagan_ ·
your app's UI has entered the chat. just register your UI components and your APIs. build an agent that speaks your interface in minutes 👇 https://t.co/fCj4dQeqjw
Ryan Carson @ryancarson ·
Code Factory: How to set up your repo so your agent can auto-write and review 100% of your code
Harley Finkelstein @harleyf ·
957 commits in 45 days. Meanwhile, other CEOs are still in meetings about 'AI strategy.' @tobi is showing what real 'founder mode' looks like. Proud as ever to be on his team 💚
realamitrg @realamitrg

CEO of Shopify @tobi is shipping more code than ever.
2024: 94 commits
2025: 833 commits
2026: 957 commits (in the first 45 days of the year)
Claude is turning CEOs back to builder mode. https://t.co/TE6YIwKvWC

The Kobeissi Letter @KobeissiLetter ·
BREAKING: SpaceX and xAI are competing in a secretive new US Pentagon contest to produce voice-controlled, autonomous drone swarming technology, per Bloomberg. The winner of the challenge will be awarded $100 million.
Andrej Karpathy @karpathy ·
I think it must be a very interesting time to be in programming languages and formal methods because LLMs change the whole constraints landscape of software completely. Hints of this can already be seen, e.g. in the rising momentum behind porting C to Rust or the growing interest in upgrading legacy code bases in COBOL or etc. In particular, LLMs are *especially* good at translation compared to de-novo generation because 1) the original code base acts as a kind of highly detailed prompt, and 2) as a reference to write concrete tests with respect to. That said, even Rust is nowhere near optimal for LLMs as a target language. What kind of language is optimal? What concessions (if any) are still carved out for humans? Incredibly interesting new questions and opportunities. It feels likely that we'll end up re-writing large fractions of all software ever written many times over.
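Karpathy's second point — the original codebase doubling as a reference for concrete tests — can be sketched as a differential test: run the legacy implementation and the candidate rewrite on the same inputs and flag any divergence, treating exceptions as part of the behavior. A minimal sketch in Python; the `legacy_parse_version` and `ported_parse_version` functions are hypothetical stand-ins for a real port, not from any project mentioned above.

```python
import random

def legacy_parse_version(s):
    # The original implementation: it serves as the behavioral oracle.
    parts = s.split(".")
    return tuple(int(p) for p in parts)

def ported_parse_version(s):
    # The (hypothetical) LLM-translated rewrite whose behavior we want
    # pinned to the original, bug for bug.
    return tuple(int(p) for p in s.split("."))

def differential_test(oracle, candidate, inputs):
    """Compare candidate against oracle on every input; a raised
    exception counts as an observable behavior too."""
    def run(f, x):
        try:
            return ("ok", f(x))
        except Exception as e:
            return ("err", type(e).__name__)
    return [x for x in inputs if run(oracle, x) != run(candidate, x)]

# Fuzz with version-like strings, including deliberately malformed ones.
random.seed(0)
corpus = ["1.2.3", "0.0", "10", "", "1..2", "a.b"] + [
    ".".join(str(random.randint(0, 99)) for _ in range(random.randint(1, 4)))
    for _ in range(200)
]
# An empty list means the port matches the oracle on every input tried.
print(differential_test(legacy_parse_version, ported_parse_version, corpus))
```

The value of the oracle is that the test suite costs almost nothing to write: the inputs can be random because the expected outputs come from the legacy code itself.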
Thom_Wolf @Thom_Wolf

Shifting structures in a software world dominated by AI. Some first-order reflections (TL;DR at the end):

Reducing software supply chains, the return of software monoliths – When rewriting code and understanding large foreign codebases becomes cheap, the incentive to rely on deep dependency trees collapses. Writing from scratch ¹ or extracting the relevant parts from another library is far easier when you can simply ask a code agent to handle it, rather than spending countless nights diving into an unfamiliar codebase. The reasons to reduce dependencies are compelling: a smaller attack surface for supply chain threats, smaller packaged software, improved performance, and faster boot times. By leveraging the tireless stamina of LLMs, the dream of coding an entire app from bare-metal considerations all the way up is becoming realistic.

End of the Lindy effect – The Lindy effect holds that things which have been around for a long time are there for good reason and will likely continue to persist. It's related to Chesterton's fence: before removing something, you should first understand why it exists, which means removal always carries a cost. But in a world where software can be developed from first principles and understood by a tireless agent, this logic weakens. Older codebases can be explored at will; long-standing software can be replaced with far less friction. A codebase can be fully rewritten in a new language. ² Legacy software can be carefully studied and updated in situations where humans would have given up long ago. The catch: unknown unknowns remain unknown. The true extent of AI's impact will hinge on whether complete coverage of testing, edge cases, and formal verification is achievable. In an AI-dominated world, formal verification isn't optional—it's essential.

The case for strongly typed languages – Historically, programming language adoption has been driven largely by human psychology and social dynamics. A language's success depended on a mix of factors: individual considerations like being easy to learn and simple to write correctly; community effects like how active and welcoming a community was, which in turn shaped how fast its ecosystem would grow; and fundamental properties like provable correctness, formal verification, and striking the right balance between dynamic and static checks—between the freedom to write anything and the discipline of guarding against edge cases and attacks. As the human factor diminishes, these dynamics will shift. Less dependence on human psychology will favor strongly typed, formally verifiable and/or high performance languages.³ These are often harder for humans to learn, but they're far better suited to LLMs, which thrive on formal verification and reinforcement learning environments. Expect this to reshape which languages dominate.

Economic restructuring of open source – For decades, open-source communities have been built around humans finding connection through writing, learning, and using code together. In a world where most code is written—and perhaps more importantly, read—by machines, these incentives will start to break down.⁴ Communities of AIs building libraries and codebases together will likely emerge as a replacement, but such communities will lack the fundamentally human motivations that have driven open source until now. If the future of open-source development becomes largely devoid of humans, alignment of AI models won't just matter—it will be decisive.

The future of new languages – Will AI agents face the same tradeoffs we do when developing or adopting new programming languages? Expressiveness vs. simplicity, safety vs. control, performance vs. abstraction, compile time vs. runtime, explicitness vs. conciseness. It's unclear that they will. In the long term, the reasons to create a new programming language will likely diverge significantly from the human-driven motivations of the past. There may well be an optimal programming language for LLMs—and there's no reason to assume it will resemble the ones humans have converged on.

TL;DR:
- Monoliths return – cheap rewriting kills dependency trees; smaller attack surface, better performance, bare-metal becomes realistic
- Lindy effect weakens – legacy code loses its moat, but unknown unknowns persist; formal verification becomes essential
- Strongly typed languages rise – human psychology mattered for adoption; now formal verification and RL environments favor types over ergonomics
- Open source restructures – human connection drove the community; AI-written/read code breaks those incentives; alignment becomes decisive
- New languages diverge – AI may not share our tradeoffs; optimal LLM programming languages may look nothing like what humans converged on

¹ https://t.co/0gO5TUwguU
² https://t.co/oN0PnPr1dF
³ https://t.co/nWKSw0m2Ct
⁴ https://t.co/ZrH3fhzQD4
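The "complete coverage of testing, edge cases" bar in the thread is literally attainable whenever a rewritten function's input domain is small enough to enumerate — the cheapest form of the verification the thread argues becomes essential. A toy sketch, assuming nothing from the thread itself: an exhaustive check of a branchless rewrite of 8-bit absolute value against the obvious reference, over every possible input.

```python
def abs_ref(x):
    # Obvious reference implementation over signed 8-bit values.
    return -x if x < 0 else x

def abs_branchless(x):
    # Candidate rewrite: the classic branchless sign-mask trick,
    # expressed with Python's arbitrary-precision ints.
    mask = x >> 7          # arithmetic shift: -1 if negative, 0 otherwise
    return (x + mask) ^ mask

# Exhaustive check over the entire signed 8-bit domain. Because every
# input is tried, this is a proof by enumeration, not a sample.
assert all(abs_ref(x) == abs_branchless(x) for x in range(-128, 128))
print("verified over all 256 inputs")
```

For larger domains the same guarantee requires a real prover or model checker, which is exactly where the thread's "formal verification becomes essential" claim bites.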

nicopreme @nicopreme ·
Created an agent skill called “Visual Explainer” plus a set of complementary slash commands, aimed at reducing my cognitive debt by having the agent explain complex things as rich HTML pages. The skill includes reference templates and a CSS pattern library so the output stays consistently well-designed. Much easier for me to digest than squinting at walls of terminal text. https://t.co/TsbtZwCtxg
Jess Martin @jessmartin ·
I'd been shying away from building an isometric interface for https://t.co/xvTTLkmh4J - today I dove in and oh my goodness. Claude helped me build this little testing tool to tweak the values until I got the watercolor look that I wanted. Nano Banana Pro generated the sprites https://t.co/KZrKlXNSbr
Ejaaz @cryptopunk7213 ·
fuck yeah, here's all the tea on DeepSeek V4 (dropping this week):
- 10-40x LOWER inference costs at 80% on coding benchmarks (this is a big fckin deal: as good as Claude Opus at coding but costs near-zero)
- 1M context window (big boy) that DOESN'T lose intelligence at scale (big smart boy)
- only $10M (est) to train the damn thing (GPT-5.3 was at least a couple $100M)
- runs on CONSUMER hardware (can run the model locally on dual RTX 4090s, big win for the privacy bros)
- fckin open source (of course it is), china is single-handedly winning this race, not even close, awesome for the open source bros
- 3 ground-breaking training advancements that lowered compute requirements MASSIVELY (this is bad for the AI capex bros) (why spend billions on compute if you don't need it??)
- can manage medium-large coding repositories on its own (prev models limited by context window)
whewww
michaeljburry @michaeljburry

Convergence, commoditization, compression and reflexivity - the 4 horsemen of the data center buildout apocalypse. DeepSeek V4 launches mid-February 2026: 1T parameters, 1M token context, 3 architectural innovations @ 10-40x lower than Western comps. $NVDA https://t.co/78IwAQx6yz

banteg @banteg ·
wow this new claude code feature is pretty cool https://t.co/dldJRMyKgU
NIK @ns123abc ·
🚨 Palmer Luckey responds to Anthropic-Pentagon story “It is a rational response to a vendor trying to control the government via terms of service” Anthropic is COOKED https://t.co/poNP31NK6k