Block Fires 4,000 as Stock Surges 22% While Anthropic Refuses Pentagon on Autonomous Weapons
Daily Wrap-Up
Two stories dominated the feed today, and they pull in opposite directions. Jack Dorsey cut 4,000 people from Block in a single announcement, the stock ripped 22%, and every CEO in America started doing the same math on a napkin. Meanwhile, Dario Amodei published a statement refusing Pentagon demands to enable Claude for mass surveillance and autonomous weapons, choosing principle over what would likely be the largest government contract in Anthropic's history. One story is about what AI makes possible. The other is about what lines shouldn't be crossed even when AI makes it possible.
On the product side, Claude Code shipped auto-memory, a feature that lets Claude remember project context, debugging patterns, and preferred approaches across sessions without users writing anything down. It's a meaningful quality-of-life improvement that reduces the friction of starting new sessions, and the reaction from developers was enthusiastic. OpenAI continued pushing Codex integration, Google shipped a faster image generation model, and Perplexity apparently one-shotted a Bloomberg terminal clone. The pace of releases continues to be relentless. The most entertaining moment was @GodsBurnt's timeline of corporate whiplash: work from home in 2020, return to office or get fired in 2024, replaced by AI in 2026. It landed because it's barely an exaggeration.
The most practical takeaway for developers: invest time in code architecture now, specifically deep modules with clean interfaces, because AI coding tools amplify the quality of whatever codebase they're pointed at. @mattpocockuk's advice on this isn't new, but it has never been more relevant: your AI assistant's output quality is directly proportional to the quality of the code it's working with.
Quick Hits
- @OpenAIDevs shared a tutorial on building a restaurant voice agent with gpt-realtime-1.5, continuing the push toward real-time voice as a first-class AI interaction pattern.
- @OpenAIDevs also showed off Codex's new Figma integration: code to design to code without breaking flow.
- @gdb dropped a podcast covering "intense moments at OpenAI" with no further context, plus the cryptic advice to "always run with xhigh reasoning."
- @googleaidevs launched Nano Banana 2 (officially Gemini 3.1 Flash Image), their new state-of-the-art model for faster, cheaper image generation.
- @thekitze celebrated @tinkererclub hitting $333K revenue in its first month, proving there's real money in developer community products.
- @zivdotcat pointed out that Perplexity Computer apparently replicated Bloomberg terminal functionality in minutes with a single prompt, threatening Bloomberg's $12B/year terminal business.
- @mattpocockuk argued that "deep modules" are the 20-year-old solution to getting better results from AI coding tools, because garbage codebases produce garbage AI output.
Block's AI Layoff Sends a $6 Billion Signal to Corporate America
The single biggest story today was Block cutting 4,000 employees, roughly 40% of its workforce, and being immediately rewarded by Wall Street with a 22% stock surge. This wasn't a struggling company trimming fat. Block is profitable, growing revenue, and raised its 2026 guidance to $12.2 billion in gross profit. Dorsey chose this move from a position of strength, and the market's reaction may have just created a template that every public company board will study.
@aakashgupta broke down the math: "That's roughly $1.5 million in enterprise value created per eliminated role." He connected Block to a broader pattern, noting that "ASML cut 1,700 jobs last month while reporting record orders" and "Salesforce cut 5,000 after AI agents started handling 50% of customer interactions." The throughline is clear: companies are growing and cutting simultaneously, and markets are rewarding the approach.
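@aakashgupta's figure checks out against the ~$6 billion market-cap gain implied by the headline (an assumption here; the post itself doesn't state the dollar gain):

```python
# Back-of-the-envelope check of the "$1.5M per eliminated role" claim.
# The ~$6B market-cap gain is an assumption taken from the headline,
# not a number stated in the quoted post.
market_cap_gain = 6_000_000_000  # dollars, assumed
roles_cut = 4_000

value_per_role = market_cap_gain / roles_cut
print(f"${value_per_role:,.0f} per eliminated role")  # $1,500,000
```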
@_Investinq provided the most detailed account, explaining that Block's internal AI platform "Goose" started as a small engineering tool two years ago and now has nearly universal adoption internally. "Engineers are shipping 40% more code per person than they were six months ago. That's the productivity gain that made 4,000 people expendable." AI fluency was baked into performance reviews. If you couldn't keep up, you were next.
The social reaction ranged from alarm to dark humor. @krystalball warned that "other companies are going to want to recreate this. Job loss could get very ugly, very quick." @shiri_shh noted that "Jack Dorsey just laid off 4,000 people in a single tweet. AI taking jobs is not a meme anymore." And @JesseCohenInv posted a speculative look at 2036 where 80% of jobs have been replaced, which felt less speculative than it would have six months ago.
What makes this moment different from previous automation waves is the speed. @GodsBurnt captured the whiplash perfectly: companies told workers to go remote in 2020, demanded they return in 2024, and replaced them in 2026. The cycle from "AI will change things eventually" to "your job is gone today" compressed into roughly 18 months of real capability gains. Whether Block's bet pays off long-term is an open question, but the market's instant approval means the playbook is now public.
Anthropic Draws a Line on Military AI
Anthropic CEO Dario Amodei published a formal statement refusing Pentagon demands to enable Claude for mass surveillance and autonomous weapons systems. @AnthropicAI shared the statement directly, and the reaction across the tech community was immediate and substantial.
@cryptopunk7213 summarized the key points: Dario described the Pentagon's efforts to force compliance and responded that "mass surveillance is not democratic and Claude isn't good enough to enable autonomous weapons." Perhaps most notably, Amodei offered to help the government transition to a new provider if they choose to blacklist Anthropic entirely. That's a remarkable stance for a company that could use government revenue to compete with OpenAI and Google's deep pockets.
The statement reframes a conversation that usually plays out behind closed doors. Most AI companies quietly negotiate military contracts and publish careful press releases afterward. Anthropic published the pressure itself, the demands, and its refusal. Whether you see this as principled leadership or a calculated brand play (or both), it establishes a public benchmark that other AI companies will now be measured against. The "Department of War" framing in Anthropic's own tweet was also notable, echoing the department's official name from before the 1947 reorganization.
Claude Code Ships Auto-Memory as the Dev Tool Race Accelerates
Claude Code's auto-memory feature dropped and immediately became the most discussed developer tool update of the day. The feature lets Claude remember what it learns across sessions, including project context, debugging patterns, and preferred approaches, without requiring users to manually document anything. It's accessible via the /memory command.
@trq212 announced it directly: "Claude now remembers what it learns across sessions... and recalls it later without you having to write anything down." @omarsar0's reaction was succinct: "Claude Code now supports auto-memory. This is huge!" And @cgtwts captured the general developer sentiment about Anthropic's release pace: "Someone please tell Anthropic to take a day off so the rest of us can catch up. At this point I'm still processing the previous update."
@oikon48 posted the full Claude Code 2.1.59 changelog, which included improvements beyond auto-memory: smarter prefix suggestions for compound bash commands, better task list ordering, reduced memory usage in multi-agent sessions, and fixes for MCP OAuth token refresh race conditions. The changelog reflects a product that's maturing quickly, moving from "impressive demo" territory into "daily driver" reliability improvements. For developers already using Claude Code, the auto-memory feature removes one of the biggest friction points: having to re-explain your project every time you start a new session.
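The MCP OAuth item touches a classic failure mode: two concurrent sessions both notice an expired token and both try to refresh it. The changelog doesn't describe Anthropic's actual fix; the sketch below only illustrates the usual single-flight pattern for this class of bug, with all names hypothetical.

```python
# Illustrative single-flight token refresh; NOT Claude Code's actual code.
import threading
import time


class TokenManager:
    def __init__(self, refresh_fn, ttl: float = 3600.0):
        self._refresh_fn = refresh_fn  # callable returning a fresh token
        self._ttl = ttl
        self._lock = threading.Lock()
        self._token = None
        self._expires_at = 0.0

    def get_token(self) -> str:
        with self._lock:
            # Re-check expiry inside the lock: a thread that waited here
            # may find another thread already refreshed the token, so it
            # must not refresh again.
            if self._token is None or time.monotonic() >= self._expires_at:
                self._token = self._refresh_fn()
                self._expires_at = time.monotonic() + self._ttl
            return self._token
```

Without the re-check under the lock, two agents in a multi-agent session could each trigger a refresh, and the slower response could clobber the newer token.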
The Age of Personalized AI Software
@EsotericCofe shared one of the more creative AI projects of the day: a personalized daily news briefing delivered through a voice-cloned Angela Merkel "posing as a news anchor with a heavy German accent no one understands." The technical breakdown was equally interesting. OpenClaw fetches current news, then calls a @krea_ai node app that uses Qwen voice clone and Fabric to generate the video.
Beyond the humor, the project points at something real. @EsotericCofe declared "the age of PERSONALIZED SOFTWARE is HERE," and the demonstration backs it up. We've crossed a threshold where an individual developer can build a custom media pipeline, complete with voice cloning, news aggregation, and video generation, using a handful of API calls and open-source tools. The barrier between "fun weekend project" and "product that would have required a team of 20 three years ago" has effectively collapsed. The same tools powering Block's productivity gains are enabling solo developers to build things that previously required entire studios.
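The post doesn't publish code, but the shape of such a pipeline is simple enough to sketch. Every function below is a placeholder; none corresponds to a real OpenClaw, Krea, or Qwen API.

```python
# Hypothetical sketch of a personalized-briefing pipeline of the kind
# @EsotericCofe describes. All functions are stubs invented for
# illustration, not real APIs.

def fetch_headlines(topic: str) -> list[str]:
    # Stand-in for the news-fetching step (OpenClaw in the original post).
    return [f"{topic}: placeholder headline {i}" for i in range(1, 4)]

def write_script(headlines: list[str]) -> str:
    # Stand-in for an LLM call turning headlines into anchor copy.
    return " ".join(f"Next up: {h}." for h in headlines)

def render_briefing(script: str, voice: str) -> dict:
    # Stand-in for the voice-clone + video step (Qwen voice clone / Fabric).
    return {"voice": voice, "script": script, "format": "mp4"}

def daily_briefing(topic: str, voice: str) -> dict:
    return render_briefing(write_script(fetch_headlines(topic)), voice)
```

The point isn't these particular stubs; it's that the whole product is three composable calls, which is why a solo developer can now ship in a weekend what once took a studio.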
Source Posts
We've rolled out a new auto-memory feature. Claude now remembers what it learns across sessions — your project context, debugging patterns, preferred approaches — and recalls it later without you having to write anything down. https://t.co/c7PyGaukNQ
Perplexity just became the first AI company to truly go head-to-head with the Bloomberg Terminal... Using Perplexity Computer (with no local setup or single LLM limitation), it was able to build me a terminal with real-time data to analyze $NVDA using Perplexity Finance: https://t.co/S3l5F5MRiv