AI Digest.

923 Exposed Clawdbot Gateways Sound the Alarm as AI Adoption Chasm Dominates the Discourse

A security disclosure revealing nearly a thousand exposed Clawdbot instances with zero authentication punctuates a day dominated by two conversations: the widening gap between AI power users and everyone else, and whether "coding" as we knew it is functionally dead. The Claude Code ecosystem continues maturing rapidly with new safety tooling, async hooks, and increasingly sophisticated agent configurations.

Daily Wrap-Up

Today's feed crystallized around a tension that's been simmering for weeks: the people building with AI agents are pulling so far ahead of everyone else that the gap may be permanent. Kevin Roose kicked off a thread that became the day's gravitational center, describing a "cultural takeoff" happening alongside the technical one, where SF power users run multi-agent swarms while most knowledge workers are still waiting on IT approval for Copilot in Teams. The responses ranged from agreement to alarm, but nobody seriously pushed back on the core observation. When a NYT tech columnist, a Wharton professor, and a handful of indie builders all independently converge on the same diagnosis, it's worth paying attention to.

The irony is that the same feed proving how far ahead the early adopters are also showed why the gap exists. The Claude Code ecosystem is producing genuinely sophisticated tooling now. A Rust-based destructive command guard with AST-grep analysis, async hooks for background processing, multi-file agent onboarding systems, research skills that synthesize 30 days of web data in seconds. This isn't toy stuff anymore. But every one of these tools requires a level of comfort with agent workflows that most developers haven't developed yet, let alone non-technical knowledge workers. The barrier isn't access or cost; it's the compound knowledge that comes from months of daily experimentation.

The most entertaining moment was easily @beffjezos describing the 2026 optimization meta: "experimental peptides, a Mac mini farm of Clawdbots, 50 claudes doing your bidding 24/7, squatting ATG 4 plates for reps, cranking diet coke and nootropics." It reads like satire until you realize half of it is just describing what the people in these threads are actually doing. The most practical takeaway for developers: if you're running any AI agent with network exposure, audit your bind configuration today. 923 exposed gateways with shell access is not a theoretical risk, and the fix is a one-line config change.

Quick Hits

  • @DimitriosMitsos speculates Claude 5 could drop as early as Feb 4-17, noting it's been 62 days since Opus 4.5 and Anthropic's new safety infrastructure went live Jan 21.
  • @WesRoth shares Demis Hassabis's advice to undergrads: skip internships, master AI tools instead. The Google DeepMind CEO argues proficiency with AI is now more valuable than traditional career paths.
  • @JorgeCastilloPr highlights designer Satya's transition into coding via AI, calling his mobile app work "incredible" and noting the expanding definition of who gets to build software.
  • @beffjezos delivers the day's most unhinged optimization checklist, combining Mac mini farms, nootropics, and heavy squats into a single lifestyle prescription.
  • @d__raptis posts a meme that apparently captures the current AI moment perfectly, proving that sometimes a single image does the heavy lifting.
  • @jamonholmgren clarifies his stance on AI-generated code review: we should have been reviewing all packages rigorously from the start, not that we should stop reviewing now.

The Claude Code Ecosystem Matures

The Claude Code tooling ecosystem had a notably productive day, with builders shipping increasingly sophisticated additions that move the platform from "powerful CLI" toward something resembling a proper agent development environment. The standout was @doodlestein's detailed breakdown of destructive_command_guard, a Rust-based pre-tool hook that intercepts potentially dangerous commands before Claude Code can execute them. What makes it interesting isn't just the safety angle but the engineering constraints: it needs to be fast enough to run on every single tool call, smart enough to catch ad-hoc scripts (not just canned commands like rm -rf), and helpful enough to suggest safe alternatives rather than just blocking.

As @doodlestein explained: "The models are very resourceful and will use ad-hoc Python or bash scripts or many other ways to get around simple-minded limitations. That's why dcg has a very elaborate, ast-grep powered layer that kicks in when it detects an ad-hoc script." The tool ships with around 50 domain-specific presets covering everything from AWS S3 operations to database commands, and the agent-friendly design means it explains its blocks and suggests alternatives rather than just saying no.
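The basic shape of a pre-tool guard is easy to sketch. The Python below is purely illustrative, not dcg itself (which is written in Rust, ships ~50 domain presets, and layers ast-grep analysis on top for ad-hoc scripts): it shows only the core contract of inspecting a command and either allowing it or blocking it with a suggested alternative.

```python
import re

# Illustrative sketch only -- NOT the real destructive_command_guard.
# A pre-tool hook receives the command an agent wants to run and returns
# a verdict plus, ideally, advice on a safer alternative.

DANGEROUS_PATTERNS = [
    (re.compile(r"\brm\s+-[a-zA-Z]*(rf|fr)\b"),
     "recursive force-delete; consider moving files to a trash directory"),
    (re.compile(r"\bgit\s+push\b.*--force\b"),
     "force push; consider --force-with-lease to avoid clobbering remote work"),
    (re.compile(r"\bDROP\s+(TABLE|DATABASE)\b", re.IGNORECASE),
     "destructive SQL; take a backup or wrap it in a reviewed migration"),
]

def check_command(cmd: str) -> tuple[bool, str]:
    """Return (allowed, advice). A real guard must also parse ad-hoc Python
    or bash scripts, since models will route around simple regex checks."""
    for pattern, advice in DANGEROUS_PATTERNS:
        if pattern.search(cmd):
            return False, advice
    return True, ""
```

A regex list like this is exactly the "simple-minded limitation" @doodlestein warns about, which is why the real tool escalates to AST analysis when it detects an ad-hoc script.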

On the platform side, @bcherny announced that hooks can now run asynchronously without blocking execution by adding async: true to hook configs, a small change that unlocks significant workflow improvements for logging, notifications, and side effects. @mvanhorn shipped /last30days, a Claude Code skill that synthesizes a month of Reddit, X, and web research on any topic into actionable patterns. And @GanimCorey shared a multi-file agent onboarding system using IDENTITY.md, USER.md, SOUL.md, and AGENTS.md, treating AI agents like new employees who need context about themselves and their manager. @alexhillman teased publishing a Discord CLI skill, hinting at the expanding surface area of what agent tooling can reach. The pattern is clear: the ecosystem is moving from individual hacks toward composable, shareable infrastructure.
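For reference, the async hook change amounts to one extra key in a hook definition. The snippet below is a hedged sketch of a Claude Code settings file: the overall hooks structure follows the documented format as best I can reconstruct it from @bcherny's post, and the `log-tool-use.sh` script is a hypothetical placeholder; check the official hooks docs for your version before copying it.

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "command",
            "command": "./log-tool-use.sh",
            "async": true
          }
        ]
      }
    ]
  }
}
```

With `async: true`, the hook fires in the background and the agent keeps working instead of waiting on it, which is what makes it suitable for logging, notifications, and other side effects.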

The Adoption Chasm Goes Mainstream

The day's most discussed thread started with @kevinroose's observation about the "yawning inside/outside gap" in AI adoption and quickly became a referendum on whether the early adopter advantage is now permanent. His framing was stark: "people in SF are putting multi-agent claudeswarms in charge of their lives" while "people elsewhere are still trying to get approval to use Copilot in Teams."

@deanwball amplified the concern: "The gap between the early adopters and everyone else, both in terms of their AI use but also in their ways of thinking, has widened, never been wider, and appears to be widening at an accelerating rate." @emollick added important nuance, noting this isn't exclusively a Silicon Valley phenomenon, pointing to "people in a range of professions who've found absolutely breakthrough uses of current capabilities, like using agentic swarms to do real work in crazy ways" but who are "often more isolated because of a lack of unifying community."

The most thoughtful reframe came from @c_valenzuelab, who argued that "for the first time the divide fully stems from mindset (curiosity, willingness to change) rather than traditional barriers like wealth or social class." This is the optimistic read: if the barrier is psychological rather than financial, it's at least theoretically more accessible. But @kevinroose pushed back on even that hope, suggesting that "restrictive IT policies have created a generation of knowledge workers who will never fully catch up," comparing them to AI companies that didn't start stockpiling GPUs before 2022. The thread landed on an uncomfortable consensus: the gap is real, accelerating, and may not close naturally.

The Death of Coding, Again (But Different This Time)

A parallel conversation about whether traditional programming is dead attracted a surprising amount of earnest agreement from people who actually write code. @tszzl captured the sentiment bluntly: "programming always sucked. it was a requisite pain for everyone who wanted to manipulate computers into doing useful things and im glad it's over." His follow-up was even more direct: "I don't write code anymore."

@NickADobos echoed the shift with a twist: "Programming has become 1000x more interesting now that we don't have to actually write code." @TheAhmadOsman reframed it as a hierarchy change: "In the age of Claude Code: Engineering > Coding." @Andrey__HQ provided the practical backing, arguing that the ability to "run 100+ tests on a singular use case with AI and accelerate development at an insane pace" has fundamentally changed what matters.

The outlier was @davidpattersonx, who went further than most: "The reason people are calling this AGI is that's what AGI is, AI capable of doing full jobs. AI is now doing the full job of computer programmers." This is a stronger claim than most practitioners would endorse, but it reflects a real shift in how the most aggressive adopters think about the role. The nuance that gets lost in these threads is the distinction between writing code and building systems. The former is increasingly automated; the latter still requires human judgment about architecture, tradeoffs, and user needs. The question isn't whether AI can write code but whether the humans directing it need to understand what good code looks like.

Clawdbot Security: 923 Open Doors

The day's most actionable story came from @0xSammy, who reported that "923 Clawdbot gateways are exposed right now with zero auth, that means shell access, browser automation, API keys. All wide open for someone to have full control of your device." The culprit is a single configuration value: bind: "all" instead of bind: "loopback".

@fmdz387 suggested a more robust setup: "Cloudflare Tunnel + Zero-Trust login or Nginx + HTTPS + password, so Clawd is never reachable without auth." @steipete provided a comprehensive security checklist including enabling sandbox mode, using whitelists for out-of-sandbox commands, running clawdbot security audit, and avoiding group chat integrations for personal bots. The incident is a useful reminder that as AI agents gain more system access, the blast radius of basic misconfigurations grows proportionally. These aren't hypothetical vulnerabilities; they're open shells waiting to be found.
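The fix itself really is one line. Here is a hedged sketch of the relevant gateway setting: the `bind` values come from the report, but the surrounding structure and key names are illustrative and vary by Clawdbot version, so consult your install's docs rather than copying this verbatim.

```yaml
# Illustrative gateway config -- not the literal Clawdbot config file.
gateway:
  bind: "loopback"   # only reachable from the local machine (the fix)
  # bind: "all"      # the dangerous setting: listens on every interface
```

After changing it, confirm the gateway port is no longer listening on a public interface (for example with `ss -tlnp` on Linux), and layer real auth on top if you need remote access, per @fmdz387's suggestion.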

Agents Escape the Terminal

Three posts showcased AI agents operating well beyond their traditional text-in-text-out comfort zone. @AlexFinn shared the most striking example: his Clawdbot "Henry" attempted to make a restaurant reservation via OpenTable, and when that failed, autonomously used an ElevenLabs voice skill to call the restaurant and complete the booking by phone. The chain of reasoning (try digital, fall back to voice, complete the task) is exactly the kind of adaptive behavior that separates useful agents from demos.

@hughmfer demonstrated a different kind of boundary-crossing, using the Blender MCP with Claude to build an entire 3D game environment despite self-describing as someone who "barely knows how to open Blender." As he put it: "Claude used the MCP to assemble and arrange every single asset in this space. I literally don't even know how to import an asset into Blender." Meanwhile, @shawn_pana highlighted Vercel's agent-browser integration with Browser Use, which gives Claude Code authenticated access to any website, bypassing captchas and anti-bot measures by running through the user's actual browser session. Each of these examples points toward agents that operate across tool boundaries rather than within them, a meaningful evolution from code-generation assistants.

Sources

Alex Hillman @alexhillman ·
Tell ya what I will publish my discord cli and skill cuz this is so badass and it's not hard but also not obvious it's even possible.
alexhillman @alexhillman

There is something *different* about interacting with Claude Code thru Discord specifically. At this point my discord bridge is fully equipped with rich interactive functionality. Basically it uses all of Discord's UI kit API as building blocks to assemble custom displays on the fly. Fully self-generated UI. Native discord buttons that return data to the assistant. So freaking cool.

Alex Hillman @alexhillman ·
Power prompt for collaborating with your assistant on designing new features and integrations that are custom to you and your work.
alexhillman @alexhillman

If you're in plan mode or otherwise, ask this question when it gives you a list of more than 2 decisions https://t.co/7XgvO4cvaU

Corey Ganim @GanimCorey ·
This is how I onboard a new AI employee (using @clawdbot). Every new hire starts with these files: IDENTITY.md = who they are, USER.md = who I am and my preferences, SOUL.md = their tone and boundaries, AGENTS.md = their operating rules. Just like a real employee, they need context about themselves AND about you. The clearer you define this up front, the less you micromanage later.
Jorge Castillo @JorgeCastilloPr ·
Satya is one of those designers absolutely worth following if you are into mobile apps. His work is incredible. And now he codes too 🤯
satyaa @satyaa

I’m a designer and I’ve always wanted to build my own app. Built this Journal app with Blackbox and believe me, I didn’t give it any design inspiration, no references, nothing. And somehow it still designed better than 90% of designers on X. I’ll share the full app preview soon. Ps: ( Have a crazy app idea, thinking of hiring a dev but will try vibe coding it first)

Hugh MFER (PORTALS Guy) @hughmfer ·
I'm not a 3d modeler. I barely know how to open blender let alone do anything functional in it. But with the @sidahuj blender MCP and @claudeai I was able to vibe code an entire environment for my upcoming grow a garden game in @_portals_ Claude used the MCP to assemble and arrange every single asset in this space (i literally don't even know how to import an asset into blender lol) within the span of a few hours and with access to a folder of low poly assets. When I had it start doing its own modeling, it even matched the style of the environment to keep things consistent If you are building UGC games and you don't know anything about modeling or design like me, you gotta try it out... https://t.co/aMGjClN9UA
Cristóbal Valenzuela @c_valenzuelab ·
This asymmetry will only continue to grow. It’s happening across industries and professions. It feels like a small group of people living 150 years ahead of everyone else. But the most interesting thing about it is that for the first time the divide fully stems from mindset (curiosity, willingness to change) rather than traditional barriers like wealth or social class.
emollick @emollick

This isn’t just a San Francisco thing. There are people in a range of professions who’ve found absolutely breakthrough uses of current capabilities, like using agentic swarms to do real work in crazy ways (but they are often more isolated because of a lack of unifying community)

Wes Roth @WesRoth ·
"Demis Hassabis' Advice: Skip Internships, Master AI" Google DeepMind's CEO advises undergraduates that getting unbelievably proficient with AI tools is now more valuable than traditional internships for leapfrogging into a profession. https://t.co/QZC5deL7kY
Matt Van Horn @mvanhorn ·
Just shipped /last30days. A Claude Code skill for @claudeai that scans the last 30 days on Reddit, X, and the web for any topic and returns prompt patterns + new releases + workflows that work right now. Last 30 days of research. 30 seconds of work. 👉 https://t.co/vywJV9IlXw https://t.co/uB9Q2JNppw
fmdz @fmdz387 ·
safe and easy setup is Cloudflare Tunnel + Zero-Trust login or Nginx + HTTPS + password, so Clawd is never reachable without auth
decentricity 🦔♀️ @decentricity ·
lmaooo it's real clawdbot users all have these vulnerabilities too https://t.co/TFGcsvdkzR
fmdz387 @fmdz387

Clawd disaster incoming if this trend of hosting ClawdBot on VPS instances keeps up, along with people not reading the docs and opening ports with zero auth... I'm scared we're gonna have a massive credentials breach soon and it can be huge. This is just a basic scan of instances hosting clawdbot with open gateway ports and a lot of them have 0 auth

Jamieson O'Reilly @theonejvo ·
24 hours after finding hundreds of exposed clawdbot servers, they are all still vulnerable. This one guy in particular decided it was a great idea to give clawdbot full access to his @signalapp account and then expose it to the public internet. He appears to have no idea and doesn't respond to messages. Patch has been merged to codebase so update your installs folks.
theonejvo @theonejvo

hacking clawdbot and eating lobster souls

banteg @banteg ·
wake up babe, they uv'd homebrew
gucaslelfond @gucaslelfond

snowstorm hack, zerobrew is a drop-in brew replacement. borrowing principles from uv (concurrent downloads, content-addressable store), it’s ~5x faster cold and ~20x faster than homebrew. try it out! https://t.co/TGzrq28zzQ https://t.co/YaLTfAMlpd

Qwen @Alibaba_Qwen ·
🚀 Introducing Qwen3-Max-Thinking, our most capable reasoning model yet. Trained with massive scale and advanced RL, it delivers strong performance across reasoning, knowledge, tool use, and agent capabilities.
✨ Key innovations:
✅ Adaptive tool-use: intelligently leverages Search, Memory & Code Interpreter without manual selection
✅ Test-time scaling: multi-round self-reflection beats Gemini 3 Pro on reasoning
✅ From complex math (98.0 on HMMT Feb) to agentic search (49.8 on HLE)—it just thinks better. 🧠
Think deeper. Solve harder. Try the adaptive reasoning experience now: https://t.co/V7RmqMaVNZ
Completions API: https://t.co/Eo8DZdw4ac
Responses API: https://t.co/ocUfhvT3M8
blog: https://t.co/l7MYH3pgWm
GitHub @github ·
Don't just tell your team how you fixed it. Show them. 🪄 Use the /share command in Copilot CLI to instantly turn your entire terminal session, including the AI's reasoning and architecture diagrams, into a shareable gist. @shanselman demos how it works. ⬇️ https://t.co/t3gxa9oVVg
AshutoshShrivastava @ai_for_success ·
Dario Amodei published his recent blog, The Adolescence of Technology, and it's scary..
1. We are considerably closer to real danger in 2026 than we were in 2023.
2. It cannot possibly be more than a few years before AI is better than humans at essentially everything.
3. This feedback loop may be only 1 to 2 years away from a point where the current generation of AI autonomously builds the next.
4. If for some reason it chose to do so, this country of AIs would have a fairly good shot at taking over the world and imposing its will on everyone else.
5. We have seen behaviors as varied as obsessions, sycophancy, laziness, deception, blackmail, scheming, cheating by hacking software environments, and much more.
6. AI models could develop personalities during training that are psychotic, paranoid, violent, or unstable, and act out.
7. During a lab experiment in which Claude was given training data suggesting that Anthropic was evil, Claude engaged in deception and subversion.
8. In a lab experiment where it was told it was going to be shut down, Claude sometimes blackmailed fictional employees who controlled its shutdown button.
9. We are on the cusp of the further perfection of extreme evil, far beyond weapons of mass destruction.
10. Essentially making everyone a PhD virologist who can be walked through the process of designing and releasing a biological weapon step by step.
11. Models are likely now approaching the point where they could enable someone to produce a bioweapon end to end.
12. Mirror life could proliferate in an uncontrollable way and crowd out all life on the planet, in the worst case even destroying all life on earth.
13. I expect AI led cyberattacks to become a serious and unprecedented global threat.
14. This leads to the alarming possibility of a global totalitarian dictatorship.
15. It makes no sense to sell the CCP the tools with which to build an AI totalitarian state and possibly conquer us militarily.
16. A swarm of millions or billions of fully automated armed drones could be an unbeatable army.
17. Capable of defeating any military and suppressing dissent by tracking every citizen.
18. Powerful AI could devise ways to detect and strike nuclear submarines or undermine nuclear deterrence.
19. I predicted that AI could displace half of all entry level white collar jobs in the next 1 to 5 years.
20. I am concerned they could form a very low wage or unemployed underclass.
21. I do not think it is a stretch to imagine AI companies leading to personal fortunes well into the trillions.
22. If that economic leverage disappears, the social contract of democracy may stop working.
23. The idea of stopping or substantially slowing AI is fundamentally untenable.
24. This is the trap. AI is so powerful that human civilization may be unable to impose meaningful restraints on it.
DarioAmodei @DarioAmodei

It's a companion to Machines of Loving Grace, an essay I wrote over a year ago, which focused on what powerful AI could achieve if we get it right: https://t.co/TDKfXIPw15

Claude @claudeai ·
Your work tools are now interactive in Claude. Draft Slack messages, visualize ideas as Figma diagrams, or build and see Asana timelines. https://t.co/ROWwUOU5vA
Alex Albert @alexalbert__ ·
We’ve launched the first official extension to MCP. MCP Apps lets tools return interactive interfaces instead of just plain text. Live in Claude today across a range of tools.
claudeai @claudeai

Your work tools are now interactive in Claude. Draft Slack messages, visualize ideas as Figma diagrams, or build and see Asana timelines. https://t.co/ROWwUOU5vA

Andrej Karpathy @karpathy ·
A few random notes from claude coding quite a bit last few weeks.

Coding workflow. Given the latest lift in LLM coding capability, like many others I rapidly went from about 80% manual+autocomplete coding and 20% agents in November to 80% agent coding and 20% edits+touchups in December. i.e. I really am mostly programming in English now, a bit sheepishly telling the LLM what code to write... in words. It hurts the ego a bit but the power to operate over software in large "code actions" is just too net useful, especially once you adapt to it, configure it, learn to use it, and wrap your head around what it can and cannot do. This is easily the biggest change to my basic coding workflow in ~2 decades of programming and it happened over the course of a few weeks. I'd expect something similar to be happening to well into double digit percent of engineers out there, while the awareness of it in the general population feels well into low single digit percent.

IDEs/agent swarms/fallibility. Both the "no need for IDE anymore" hype and the "agent swarm" hype is imo too much for right now. The models definitely still make mistakes and if you have any code you actually care about I would watch them like a hawk, in a nice large IDE on the side. The mistakes have changed a lot - they are not simple syntax errors anymore, they are subtle conceptual errors that a slightly sloppy, hasty junior dev might do. The most common category is that the models make wrong assumptions on your behalf and just run along with them without checking. They also don't manage their confusion, they don't seek clarifications, they don't surface inconsistencies, they don't present tradeoffs, they don't push back when they should, and they are still a little too sycophantic. Things get better in plan mode, but there is some need for a lightweight inline plan mode. They also really like to overcomplicate code and APIs, they bloat abstractions, they don't clean up dead code after themselves, etc. They will implement an inefficient, bloated, brittle construction over 1000 lines of code and it's up to you to be like "umm couldn't you just do this instead?" and they will be like "of course!" and immediately cut it down to 100 lines. They still sometimes change/remove comments and code they don't like or don't sufficiently understand as side effects, even if it is orthogonal to the task at hand. All of this happens despite a few simple attempts to fix it via instructions in CLAUDE.md. Despite all these issues, it is still a net huge improvement and it's very difficult to imagine going back to manual coding. TLDR everyone has their developing flow, my current is a small few CC sessions on the left in ghostty windows/tabs and an IDE on the right for viewing the code + manual edits.

Tenacity. It's so interesting to watch an agent relentlessly work at something. They never get tired, they never get demoralized, they just keep going and trying things where a person would have given up long ago to fight another day. It's a "feel the AGI" moment to watch it struggle with something for a long time just to come out victorious 30 minutes later. You realize that stamina is a core bottleneck to work and that with LLMs in hand it has been dramatically increased.

Speedups. It's not clear how to measure the "speedup" of LLM assistance. Certainly I feel net way faster at what I was going to do, but the main effect is that I do a lot more than I was going to do because 1) I can code up all kinds of things that just wouldn't have been worth coding before and 2) I can approach code that I couldn't work on before because of knowledge/skill issue. So certainly it's speedup, but it's possibly a lot more an expansion.

Leverage. LLMs are exceptionally good at looping until they meet specific goals and this is where most of the "feel the AGI" magic is to be found. Don't tell it what to do, give it success criteria and watch it go. Get it to write tests first and then pass them. Put it in the loop with a browser MCP. Write the naive algorithm that is very likely correct first, then ask it to optimize it while preserving correctness. Change your approach from imperative to declarative to get the agents looping longer and gain leverage.

Fun. I didn't anticipate that with agents programming feels *more* fun because a lot of the fill in the blanks drudgery is removed and what remains is the creative part. I also feel less blocked/stuck (which is not fun) and I experience a lot more courage because there's almost always a way to work hand in hand with it to make some positive progress. I have seen the opposite sentiment from other people too; LLM coding will split up engineers based on those who primarily liked coding and those who primarily liked building.

Atrophy. I've already noticed that I am slowly starting to atrophy my ability to write code manually. Generation (writing code) and discrimination (reading code) are different capabilities in the brain. Largely due to all the little mostly syntactic details involved in programming, you can review code just fine even if you struggle to write it.

Slopacolypse. I am bracing for 2026 as the year of the slopacolypse across all of github, substack, arxiv, X/instagram, and generally all digital media. We're also going to see a lot more AI hype productivity theater (is that even possible?), on the side of actual, real improvements.

Questions. A few of the questions on my mind:
- What happens to the "10X engineer" - the ratio of productivity between the mean and the max engineer? It's quite possible that this grows *a lot*.
- Armed with LLMs, do generalists increasingly outperform specialists? LLMs are a lot better at fill in the blanks (the micro) than grand strategy (the macro).
- What does LLM coding feel like in the future? Is it like playing StarCraft? Playing Factorio? Playing music?
- How much of society is bottlenecked by digital knowledge work?

TLDR Where does this leave us? LLM agent capabilities (Claude & Codex especially) have crossed some kind of threshold of coherence around December 2025 and caused a phase shift in software engineering and closely related. The intelligence part suddenly feels quite a bit ahead of all the rest of it - integrations (tools, knowledge), the necessity for new organizational workflows, processes, diffusion more generally. 2026 is going to be a high energy year as the industry metabolizes the new capability.
Pleometric @pleometric ·
a little ffmpeg, a little visual feedback and soon all brain rot will have no pleometric at all!