AI Digest.

Block Fires 4,000 and Stock Surges 22% While Anthropic Refuses the Pentagon

Jack Dorsey cut 40% of Block's workforce in the largest AI-driven layoff yet, and Wall Street rewarded it with a $6 billion market cap jump. Anthropic publicly refused Pentagon demands for mass surveillance and autonomous weapons integration. Claude Code shipped auto-memory while OpenAI and Google pushed new product capabilities.

Daily Wrap-Up

Two stories dominated the timeline today, and they sit in uncomfortable tension with each other. Jack Dorsey laid off 4,000 people at Block, roughly 40% of the company, and the stock immediately surged 22%. Every CEO in America watched that happen, and the math is now inescapable: fewer humans plus AI tools equals higher margins, and the market will reward you for acting on it. The posts analyzing this ranged from detailed financial breakdowns to gallows humor, but the consensus was clear. This is not an outlier. It is a blueprint. The fact that Block was profitable and growing when it made the cuts is what makes this moment different from previous tech layoffs.

Meanwhile, Anthropic drew a line in the sand against the Pentagon. Dario Amodei publicly refused demands to enable Claude for mass surveillance and autonomous weapons, stating the company "cannot in good conscience accede to their request." On the product side, Claude Code shipped auto-memory, a feature that lets Claude remember project context, debugging patterns, and preferred approaches across sessions. The juxtaposition is striking: an AI company voluntarily limiting its own power while simultaneously shipping features that make individual developers dramatically more capable.

The rest of the day was a blur of product launches. OpenAI showed off a restaurant voice agent built on gpt-realtime-1.5 and a Codex-to-Figma design workflow. Google dropped Gemini 3.1 Flash Image with faster generation at lower cost. Perplexity apparently one-shotted a Bloomberg Terminal replica. The pace of capability expansion is genuinely hard to track, which is exactly what @cgtwts was getting at when they begged Anthropic to take a day off so everyone could catch up. The most practical takeaway for developers: Claude Code's auto-memory feature is live now and directly applicable to your daily workflow. If you are not using persistent context across coding sessions, you are manually re-explaining things that your tools can now remember for you. Set it up today.

Quick Hits

  • @JesseCohenInv posted a speculative 2036 scenario where 80% of jobs have been replaced by AI and robotics. Felt less speculative after the Block news.
  • @gdb shared a podcast covering "some intense moments at OpenAI" with no further context. Classic.
  • @gdb also dropped a one-liner: "always run with xhigh reasoning." Filing that under cryptic advice from OpenAI co-founders.
  • @thekitze celebrated @tinkererclub hitting $333,333 in revenue in its first month, including sponsors. A third of a million in 30 days for a community product is no joke.
  • @mattpocockuk argued that AI performs worse on bad codebases (garbage in, garbage out) and pointed to "deep modules," a 20-year-old software design concept, as the solution. Good reminder that code architecture matters more, not less, when AI is writing chunks of it.
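
The "deep modules" idea is easy to sketch: a narrow public interface hiding a non-trivial implementation, so callers (human or AI) can use the module without understanding its internals. The class below is an invented illustration of the concept, not code from any of the posts above.

```python
# A "deep module" sketch: two public methods, with the eviction policy
# hidden entirely behind them. Callers never see OrderedDict bookkeeping.
from collections import OrderedDict


class LRUCache:
    """Narrow interface (get/set); LRU eviction is an internal detail."""

    def __init__(self, capacity: int = 128):
        self._capacity = capacity
        self._items: OrderedDict = OrderedDict()

    def get(self, key, default=None):
        if key not in self._items:
            return default
        self._items.move_to_end(key)  # mark as most recently used
        return self._items[key]

    def set(self, key, value):
        self._items[key] = value
        self._items.move_to_end(key)
        if len(self._items) > self._capacity:
            self._items.popitem(last=False)  # evict least recently used
```

Because the interface is small, swapping the eviction strategy never ripples out to callers, which is exactly the property that makes a codebase friendlier to AI-generated edits.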

Block Fires 4,000: The First Major AI Layoff Blueprint

The single biggest story today was Jack Dorsey cutting Block's workforce from 10,000 to under 6,000 in one move. This was not a struggling company trimming fat. Block's 2026 profit guidance is up 54%, gross profit is growing 18%, and earnings per share projections crushed analyst expectations. Dorsey chose to do this from a position of strength, and he said the quiet part out loud: "Intelligence tools paired with smaller teams have already changed what it means to run a company."

@aakashgupta laid out the brutal arithmetic: "The market added roughly $6 billion in market cap. That's ~$1.5 million in enterprise value created per eliminated role." He went further, contextualizing it against a wave of similar moves: "ASML cut 1,700 jobs last month while reporting record orders. Salesforce cut 5,000 after AI agents started handling 50% of customer interactions. Amazon cut 16,000 in January on top of 14,000 in October. Every one of these companies was growing when they did it."
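
The per-role figure is straight division; a quick sanity check on the numbers quoted above:

```python
# Back-of-envelope check of @aakashgupta's arithmetic.
market_cap_gain = 6_000_000_000  # ~$6B added after the announcement
roles_cut = 4_000

value_per_role = market_cap_gain / roles_cut
print(f"${value_per_role:,.0f} per eliminated role")  # $1,500,000
```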

The internal mechanics tell an important story for developers. Block's AI platform, called "Goose," started as a small engineering test tool two years ago. Now nearly every employee uses it. As @_Investinq detailed, "Engineers are shipping 40% more code per person than they were six months ago. That's the productivity gain that made 4,000 people expendable." AI fluency was built into performance reviews. If you could not keep up, you were next.

@krystalball captured the second-order effect concisely: "Block just cut 40% of their workforce because of AI and were rewarded with a massive stock surge. Other companies are going to want to recreate this." And @GodsBurnt provided the dark comedy version, tracing the whiplash timeline: companies told workers to go remote in 2020, demanded they return in 2024, then replaced them with AI in 2026. @shiri_shh put it plainly: "Jack Dorsey just laid off 4000 people in a single tweet. AI taking jobs is not a meme anymore."

The signal here is not that AI can replace jobs. Everyone knew that. The signal is that the market will actively reward companies for doing it aggressively and all at once. Dorsey explicitly chose one massive cut over gradual reductions because, in his words, gradual cuts destroy morale and trust. The restructuring charges pay for themselves in two quarters. After that, pure margin expansion. Every board in America is running this calculation tonight.

Anthropic Draws a Line: No Weapons, No Surveillance

In a move that stands in sharp contrast to the "optimize headcount at all costs" mood, Anthropic publicly refused the Pentagon's demands to enable Claude for mass surveillance and autonomous weapons. @AnthropicAI posted a link to a formal statement from CEO Dario Amodei on "discussions with the Department of War."

@cryptopunk7213 broke down the key points from Amodei's statement: "These threats do not change our position: we cannot in good conscience accede to their request." Amodei described the Pentagon's pressure on Anthropic to open Claude up for mass surveillance and autonomous weapons, and his response was direct: mass surveillance is not democratic, Claude is not reliable enough for autonomous weapons, and Anthropic would help the government transition to a new provider if it chose to blacklist the company. As @cryptopunk7213 put it, "fair play for sticking by their code of honor."

This is a significant moment for the AI industry. A company valued at tens of billions voluntarily walked away from what would presumably be an enormous government contract, citing both ethical principles and technical limitations. The willingness to acknowledge that their own model "isn't good enough" for certain applications is notable intellectual honesty in an industry that tends toward capability hype. Whether this position holds under sustained government pressure remains to be seen, but the public statement makes it harder to quietly reverse course later.

Claude Code Ships Auto-Memory

On the product side, Anthropic had a busy day. Claude Code 2.1.59 landed with auto-memory as the headline feature. @trq212 explained the concept: "Claude now remembers what it learns across sessions, your project context, debugging patterns, preferred approaches, and recalls it later without you having to write anything down."
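
The concept is straightforward to illustrate: notes learned in one session persist to disk and get reloaded in the next. The sketch below shows the pattern only; it is not Anthropic's implementation, and the file name and schema here are invented.

```python
# Minimal sketch of the idea behind persistent session memory: an agent
# writes what it learns to a file and reloads it next session. The file
# name and JSON layout are invented for illustration.
import json
from pathlib import Path

MEMORY_FILE = Path(".agent-memory.json")  # hypothetical location


def recall_all() -> dict:
    """Load every persisted note; empty dict on the first session."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return {}


def remember(topic: str, note: str) -> None:
    """Append a note under a topic and persist it immediately."""
    memory = recall_all()
    memory.setdefault(topic, []).append(note)
    MEMORY_FILE.write_text(json.dumps(memory, indent=2))
```

A session that calls `remember("build", "use pnpm, not npm")` never has to be told again: the next session's `recall_all()` starts with that context already loaded.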

@omarsar0 was brief but emphatic: "Claude Code now supports auto-memory. This is huge!" And @cgtwts captured the developer fatigue that comes with Anthropic's pace: "Someone please tell Anthropic to take a day off so the rest of us can catch up. At this point I'm still processing the previous update."

@oikon48 posted the full release notes in Japanese, covering additional improvements: better "always allow" prefix suggestions for compound bash commands, improved task list ordering, reduced memory usage in multi-agent sessions, and fixes for MCP OAuth token refresh race conditions. The compound command improvement is a quality-of-life fix that addresses a real friction point. When you run chained commands like cd /tmp && git fetch && git push, Claude Code now evaluates sub-commands individually for permission rather than treating the whole chain as one opaque block. Small change, big difference in daily workflow.
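
The behavior described above can be sketched in a few lines: split the chain on && and check each sub-command against an allowlist of prefixes, instead of approving or denying the whole string at once. This is an illustration of the idea, not Claude Code's actual code; the allowlist is invented, and a real implementation would need proper shell parsing (quoting, ;, ||).

```python
# Sketch: evaluate each sub-command of a chained shell command
# individually against an allowlist, rather than treating the chain
# as one opaque block. ALLOWED_PREFIXES is invented for illustration.
ALLOWED_PREFIXES = ("cd ", "git fetch", "git status")


def split_chain(command: str) -> list[str]:
    """Naive split on && (ignores quoting; a real parser would not)."""
    return [part.strip() for part in command.split("&&")]


def needs_approval(command: str) -> list[str]:
    """Return only the sub-commands NOT covered by the allowlist."""
    return [
        sub for sub in split_chain(command)
        if not sub.startswith(ALLOWED_PREFIXES)
    ]
```

With this shape, `needs_approval("cd /tmp && git fetch && git push")` flags only `git push`, so one unapproved step no longer forces a prompt for the whole chain.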

AI Products: Voice Agents, Design Workflows, and Terminal Killers

The product announcements kept coming from other players. @OpenAIDevs showed two distinct capabilities: a restaurant voice agent built on gpt-realtime-1.5, and a code-to-design-to-code workflow integrating Codex with Figma. The Figma integration is particularly interesting for frontend developers. The pitch is generating design files from code, collaborating in Figma, then implementing updates back in Codex without breaking flow. If it works as advertised, it closes a gap that has frustrated design-to-development handoffs for years.

@googleaidevs announced Nano Banana 2, which is apparently the internal name for Gemini 3.1 Flash Image. Google described it as their state-of-the-art model for image generation, offering faster speeds and lower costs with improved capabilities. The naming is delightful. The capability race in image generation continues to compress what used to require specialized tools into API calls.

Perhaps the most provocative product claim came from @zivdotcat: "Bloomberg makes ~$15B a year, ~$12B from the terminal. Bloomberg charges $30,000/yr per user for terminal access. Perplexity Computer literally one-shotted the terminal with real-time data within minutes using a single prompt." Whether "one-shotted" here means "replicated the full functionality" or "made a demo that looks similar" matters enormously, but the directional threat to entrenched information monopolies is real. Bloomberg's moat has always been data access plus specialized UI plus network effects. AI tools are chipping away at at least two of those three.

The Age of Personalized Software

@EsotericCofe posted two related updates showcasing a genuinely novel use case: using OpenClaw to generate a daily personalized news brief delivered by an AI-cloned Angela Merkel "posing as a news anchor with a heavy German accent no one understands." The technical stack is creative: OpenClaw fetches current news, then calls a Krea AI node app that uses Qwen voice clone plus Fabric to generate the video.

The implementation is absurd and funny, but the underlying point is serious. @EsotericCofe declared "the age of PERSONALIZED SOFTWARE is HERE," and they are not wrong. The barrier to creating custom media experiences has collapsed from "hire a production team" to "chain three API calls together." The fact that someone built a personalized AI news anchor as a weekend project says something about where consumer software is heading. The professional media industry should be paying attention to this, not because AI Merkel is competition, but because the tooling to create personalized content experiences is now accessible to anyone with an API key and a creative idea.
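
The "chain three API calls together" pattern looks like this in miniature. The function bodies below are stand-in stubs, not the actual APIs of OpenClaw, Krea, or Qwen; only the shape of the pipeline is the point.

```python
# A three-stage personalized-media pipeline, with stubs standing in for
# the real services (news fetch -> script generation -> video render).
def fetch_headlines() -> list[str]:
    # stand-in for a news API call
    return ["Block cuts 4,000 jobs", "Anthropic refuses Pentagon"]


def write_anchor_script(headlines: list[str]) -> str:
    # stand-in for an LLM call that turns headlines into anchor copy
    return "Good evening. " + " Next: ".join(headlines)


def render_video(script: str) -> dict:
    # stand-in for a voice-clone / video generation API call
    return {"voice": "cloned-anchor", "script": script}


def daily_brief() -> dict:
    # each stage feeds the next; swap any stage and the product changes
    return render_video(write_anchor_script(fetch_headlines()))
```

Each stage is replaceable: point the first stub at a real news feed and the last at a real voice API and the same three-line composition becomes a personal news anchor.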

Sources

tobi lutke @tobi ·
Pi is the most interesting agent harness. Tiny core, able to write plugins for itself as you use it. It RLs itself into the agent you want. I was missing cc’s tasks system and told it to spawn claude in tmux and interrogate it about it and make an implementation for itself. It nailed it, including the UX. Clawdbot is based on it and now it makes sense why it feels so magical. Dawn of the age of malleable software.
Ihtesham Ali @ihtesham2005 ·
🚨 Anthropic just open-sourced the exact Skills library their own engineers use internally. Stop building Claude workflows from scratch. These are plug-and-play components that work across Claude Code, API, SDK, and VS Code. Copy once, deploy everywhere.

What's inside:
→ Excel + PowerPoint generation out of the box
→ File handling and document workflows
→ MCP-ready subagent building blocks
→ Pre-built patterns for multi-step automation
→ Production templates you'd normally spend weeks writing

The old way: re-explain your workflow every single chat. The new way: build a Skill once, Claude never forgets how you work. 100% Open Source. Official Anthropic release. Repo: https://t.co/XNx3i4yNy6
Thariq @trq212 ·
We've rolled out a new auto-memory feature. Claude now remembers what it learns across sessions — your project context, debugging patterns, preferred approaches — and recalls it later without you having to write anything down. https://t.co/c7PyGaukNQ
Jeff @jeffdfeng ·
Spoke with several YC founders planning to lay off all engineers below staff/principal — basically everyone under L5. This only became viable after Opus 4.5 in December. The Block layoffs are a signal: the floor just collapsed. If you’re early in your career, the next few years are everything. Your edge will be how well you integrate AI into the value you create. The fastest learners are about to compound at absurd rates.
jack @jack

we're making @blocks smaller today. here's my note to the company.

today we're making one of the hardest decisions in the history of our company: we're reducing our organization by nearly half, from over 10,000 people to just under 6,000. that means over 4,000 of you are being asked to leave or entering into consultation. i'll be straight about what's happening, why, and what it means for everyone.

first off, if you're one of the people affected, you'll receive your salary for 20 weeks + 1 week per year of tenure, equity vested through the end of may, 6 months of health care, your corporate devices, and $5,000 to put toward whatever you need to help you in this transition (if you’re outside the U.S. you’ll receive similar support but exact details are going to vary based on local requirements). i want you to know that before anything else. everyone will be notified today, whether you're being asked to leave, entering consultation, or asked to stay.

we're not making this decision because we're in trouble. our business is strong. gross profit continues to grow, we continue to serve more and more customers, and profitability is improving. but something has changed. we're already seeing that the intelligence tools we’re creating and using, paired with smaller and flatter teams, are enabling a new way of working which fundamentally changes what it means to build and run a company. and that's accelerating rapidly.

i had two options: cut gradually over months or years as this shift plays out, or be honest about where we are and act on it now. i chose the latter. repeated rounds of cuts are destructive to morale, to focus, and to the trust that customers and shareholders place in our ability to lead. i'd rather take a hard, clear action now and build from a position we believe in than manage a slow reduction of people toward the same outcome.

a smaller company also gives us the space to grow our business the right way, on our own terms, instead of constantly reacting to market pressures. a decision at this scale carries risk. but so does standing still. we've done a full review to determine the roles and people we require to reliably grow the business from here, and we've pressure-tested those decisions from multiple angles. i accept that we may have gotten some of them wrong, and we've built in flexibility to account for that, and do the right thing for our customers.

we're not going to just disappear people from slack and email and pretend they were never here. communication channels will stay open through thursday evening (pacific) so everyone can say goodbye properly, and share whatever you wish. i'll also be hosting a live video session to thank everyone at 3:35pm pacific. i know doing it this way might feel awkward. i'd rather it feel awkward and human than efficient and cold.

to those of you leaving…i’m grateful for you, and i’m sorry to put you through this. you built what this company is today. that's a fact that i'll honor forever. this decision is not a reflection of what you contributed. you will be a great contributor to any organization going forward.

to those staying…i made this decision, and i'll own it. what i'm asking of you is to build with me. we're going to build this company with intelligence at the core of everything we do. how we work, how we create, how we serve our customers. our customers will feel this shift too, and we're going to help them navigate it: towards a future where they can build their own features directly, composed of our capabilities and served through our interfaces. that's what i'm focused on now. expect a note from me tomorrow.

jack

CG @cgtwts ·
Anthropic CEO: “AI will wipe out 50% of lawyers, consultants, and finance professionals within the next 12 months” https://t.co/fkuBs6VfhD
claudeai @claudeai

We've also created plugins across HR, design, engineering, ops, financial analysis, investment banking, equity research, private equity, and wealth management to help users see what's possible and start building their own.

Jaytel @Jaytel ·
I'm done with Claude Code— building your own harness in Pi is addicting
tobi @tobi (quoted tweet; full text above)

Sam Altman @sama ·
We have raised a $110 billion round of funding from Amazon, NVIDIA, and SoftBank. We are grateful for the support from our partners, and have a lot of work to do to bring you the tools you deserve.
cogsec @affaanmustafa ·
If you're a cowork user - its super duper easy to add as a plugin! I use a bit of everything at this point mainly to check how things work across harnesses but coworks plugin interface is super duper easy! get started in 30 seconds! cmd -> affaan-m/everything-claude-code https://t.co/D2yCymO53G
affaanmustafa @affaanmustafa

The Codex App is still heavily slept on
if you aren't using ECC for Codex you're missing out
Its super easy and pulls all the skills over
Most peoples development related openclaw automations can also just be directly ran from codex
I ported a lot of my automations over https://t.co/oCZRV3cvKb

Alan Carroll @alancarroII ·
Plumbers and electricians seeing AI replace everyone who went to college https://t.co/CgvnlfVlO7
Unsloth AI @UnslothAI ·
Qwen3.5 is now updated with improved tool-calling & coding performance! Run Qwen3.5-35B-A3B on 22GB RAM. See improvements via Claude Code, Codex. We also benchmarked GGUFs & removed MXFP4 layers from 3 quants. GGUFs: https://t.co/4lSce5zZbO Analysis: https://t.co/rHZK8JWdYM
Thariq @trq212 ·
Lessons from Building Claude Code: Seeing like an Agent
Andrej Karpathy @karpathy ·
I had the same thought so I've been playing with it in nanochat. E.g. here's 8 agents (4 claude, 4 codex), with 1 GPU each running nanochat experiments (trying to delete logit softcap without regression). The TLDR is that it doesn't work and it's a mess... but it's still very pretty to look at :)

I tried a few setups: 8 independent solo researchers, 1 chief scientist giving work to 8 junior researchers, etc. Each research program is a git branch, each scientist forks it into a feature branch, git worktrees for isolation, simple files for comms, skip Docker/VMs for simplicity atm (I find that instructions are enough to prevent interference). Research org runs in tmux window grids of interactive sessions (like Teams) so that it's pretty to look at, see their individual work, and "take over" if needed, i.e. no -p.

But ok the reason it doesn't work so far is that the agents' ideas are just pretty bad out of the box, even at highest intelligence. They don't think carefully through experiment design, they run a bit non-sensical variations, they don't create strong baselines and ablate things properly, they don't carefully control for runtime or flops. (just as an example, an agent yesterday "discovered" that increasing the hidden size of the network improves the validation loss, which is a totally spurious result given that a bigger network will have a lower validation loss in the infinite data regime, but then it also trains for a lot longer, it's not clear why I had to come in to point that out). They are very good at implementing any given well-scoped and described idea but they don't creatively generate them.

But the goal is that you are now programming an organization (e.g. a "research org") and its individual agents, so the "source code" is the collection of prompts, skills, tools, etc. and processes that make it up. E.g. a daily standup in the morning is now part of the "org code". And optimizing nanochat pretraining is just one of the many tasks (almost like an eval). Then - given an arbitrary task, how quickly does your research org generate progress on it?
Thom_Wolf @Thom_Wolf

How come the NanoGPT speedrun challenge is not fully AI automated research by now?

Garth Watson @garthwatson ·
As a non-practising lawyer that just used Claude Code to build a mobile app, and having founded and scaled a legal tech company, and been heavily involved in the legaltech scene, I just wanna say this is signal.
zackbshapiro @zackbshapiro

The Claude-Native Law Firm

Boris Cherny @bcherny ·
In the next version of Claude Code.. We're introducing two new Skills: /simplify and /batch. I have been using both daily, and am excited to share them with everyone. Combined, these skills automate much of the work it used to take to (1) shepherd a pull request to production and (2) perform straightforward, parallelizable code migrations.
Will Washburn @willwashburn ·
Introducing Agent Relay
Aidan Gold @MrGoldBro ·
Let me get this straight: Anthropic refused to work with DoW unless they could promise their tech wasn't used for surveillance or killing. DoW said that they need full capabilities. Anthropic declined to give full access. OpenAI stood by Anthropic for ensuring AI safety. Trump then cancelled all Anthropic usage across the government, including a $200m contract. OpenAI then submits a bid to replace Anthropic.
Anthropic @AnthropicAI ·
A statement on the comments from Secretary of War Pete Hegseth. https://t.co/Gg7Zb09IMR
Sam Altman @sama ·
Tonight, we reached an agreement with the Department of War to deploy our models in their classified network. In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome.

AI safety and wide distribution of benefits are the core of our mission. Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement. We also will build technical safeguards to ensure our models behave as they should, which the DoW also wanted. We will deploy FDEs to help with our models and to ensure their safety, we will deploy on cloud networks only.

We are asking the DoW to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept. We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements. We remain committed to serve all of humanity as best we can. The world is a complicated, messy, and sometimes dangerous place.
Mark Gadala-Maria @markgadala ·
Just a few hours ago he was on TV saying he stood by Anthropic. Then he undercuts them and takes the same contract that Anthropic just lost. How can anyone trust this guy?
sama @sama (quoted tweet; full text above)

Aniket @Aniket_Singh04 ·
Nobody’s talking about what just happened to Anthropic:

Anthropic built the AI that half the US government quietly depends on daily

They were deep in a $200M Pentagon deal — one of the biggest AI contracts ever

Anthropic drew two hard lines: Claude won’t surveil American citizens, Claude won’t pull a trigger without a human deciding

The Pentagon said those lines needed to go. Anthropic said they weren’t moving (respect 🫡)

Trump signed an order cutting Claude from every federal agency overnight

The Pentagon then slapped them with a “national security risk” designation — the same one they gave Huawei

Every classified system running Claude has 6 months to rip it out completely

Sam Altman — Anthropic’s biggest competitor — publicly said OpenAI has the same rules and wouldn’t have budged either

The US government just punished a company for refusing to let AI kill or spy unsupervised.
Ted Lieu @tedlieu ·
The Department of Defense just agreed to the same two conditions with OpenAI that Anthropic was asking for. Can someone explain? I genuinely don’t understand.
sama @sama (quoted tweet; full text above)

Shanaka Anslem Perera ⚡ @shanaka86 ·
Anthropic just announced it will take the Trump administration to court over the supply chain risk designation. And in the same breath, Axios revealed the detail that changes everything about this story.

While Anthropic was being blacklisted for refusing to allow mass surveillance, the Pentagon’s own “compromise deal” that Under Secretary Emil Michael was offering on the phone at the exact moment Hegseth posted the designation on X would have required Anthropic to allow the collection and analysis of Americans’ geolocation data, web browsing history, and personal financial information purchased from data brokers.

Read that again. The Pentagon spent two weeks saying it has no interest in mass surveillance of Americans. Then the deal they actually put on the table asked for access to your location, your browsing history, and your financial records. They told us Anthropic was lying. The contract language told us Anthropic was right.

Now here is where this becomes an existential question for a $380 billion company. The supply chain risk designation means every company that does business with the Pentagon must certify they do not use Claude. Eight of the ten largest companies in America use Claude. Defense contractors, cloud providers, consulting firms, banks. The blast radius is not the $200 million Pentagon contract. It is the enterprise ecosystem that generates $14 billion in annual revenue.

Anthropic’s legal argument is specific: under 10 USC 3252, the designation can only restrict use of Claude on Pentagon contract work. Your commercial API access, your https://t.co/koW5OJjjaM subscription, your enterprise license are, in Anthropic’s reading, completely unaffected. But here is the problem. That is a legal argument. It will take years to resolve in court. And in the meantime, every general counsel at every Fortune 500 company with any Pentagon exposure is going to ask one question: is using Claude worth the risk?

The IPO, which was expected this year at a $380 billion valuation backed by $30 billion in fresh capital, is functionally frozen. No underwriter will price an offering while a company carries the same designation as Huawei.

And here is the final detail nobody has processed yet. Hours after blacklisting Anthropic, the Pentagon accepted OpenAI’s proposed safety framework, which contains the identical red lines: no mass surveillance, no autonomous lethal weapons. They destroyed one company for a position they then accepted from its competitor.

Full analysis on Substack. https://t.co/AEv8EMPdsZ
SecWar @SecWar

This week, Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon.

Our position has never wavered and will never waver: the Department of War must have full, unrestricted access to Anthropic’s models for every LAWFUL purpose in defense of the Republic.

Instead, @AnthropicAI and its CEO @DarioAmodei, have chosen duplicity. Cloaked in the sanctimonious rhetoric of “effective altruism,” they have attempted to strong-arm the United States military into submission - a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives. The Terms of Service of Anthropic’s defective altruism will never outweigh the safety, the readiness, or the lives of American troops on the battlefield.

Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable. As President Trump stated on Truth Social, the Commander-in-Chief and the American people alone will determine the destiny of our armed forces, not unelected tech executives.

Anthropic’s stance is fundamentally incompatible with American principles. Their relationship with the United States Armed Forces and the Federal Government has therefore been permanently altered. In conjunction with the President's directive for the Federal Government to cease all use of Anthropic's technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic. Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service.

America’s warfighters will never be held hostage by the ideological whims of Big Tech. This decision is final.

Addy Osmani @addyosmani ·
Every abstraction shift in software history made devs more productive by raising the level of intent. This is the next step: from writing code to orchestrating systems that write code (building "the factory" for your code). The unsolved problem isn't generation but verification. That's where engineering judgment becomes your highest-leverage skill. To truly scale, think "factory model" - orchestrate fleets of agents like a production line: clear specs as blueprints, TDD for quality control, strong architecture to amplify leverage.
mntruell @mntruell

The third era of AI software development

Sudo su @sudoingX ·
this is what 24gb of VRAM builds in 2026. one prompt. ten files. 3,483 lines of code. zero handholding.

i gave Qwen3.5-35B-A3B a single detailed spec describing the full game architecture and hit enter. enemy types, particle systems, procedural audio, powerups, boss fights, ship upgrades, parallax backgrounds, everything in one message. the model planned the file structure itself, wrote every module in dependency order, wired all the imports, and served the game on port 3001. it ran on first load. when it hit a bug in collision detection it read its own error output, found the issue, fixed it, and kept building. this is a pure agent loop running on local hardware.

what you're looking at is pixelated octopus aliens with tentacle animations, 4 layer parallax space background with planets at different depths, a full particle system handling explosions and ink splatter and engine trails and bullet impacts, procedural audio through Web Audio API with zero sound files loaded, unleash mode with combo multiplier, boss fights every 5 levels, ship upgrades that unlock as you progress. no libraries. no frameworks. vanilla JS and Canvas.

3B active parameters. single RTX 3090. llama.cpp with q8_0 KV cache at 262K context. Claude Code pointed at localhost:8080 through the native Anthropic endpoint. no API costs. 112 tok/s. a GPU you can buy used for $800.

game is called Octopus Invaders and i actually like playing it.
sudoingX @sudoingX

testing Qwen3.5-35B-A3B, latest optimized version by UnslothAI, on a single RTX 3090. one detailed prompt. zero handholding. watch a 3B-active model scaffold an entire multifile game project autonomously.

the setup:
> model: Qwen3.5-35B-A3B (80B total, only 3B active per token)
> quant: UD-Q4_K_XL by Unsloth (MXFP4 layers removed in latest update)
> speed: 112 tok/s generation, ~130 tok/s prefill
> context: 262K tokens
> flags: -ngl 99 -c 262144 -np 1 --cache-type-k q8_0 --cache-type-v q8_0
> engine: llama.cpp
> agent: Claude Code talking to localhost:8080 (llama.cpp now has a native Anthropic API endpoint. no LiteLLM needed)

q8_0 KV cache cuts VRAM usage in half vs f16 at 262K. -np 1 is the default but worth noting: parallel slots multiply the KV cache, and at 262K that's an instant OOM.

the prompt was more detailed than this but you get the idea: build a space shooter with parallax backgrounds, particle systems, procedural audio, 4 enemy types, boss fights, power-up system, and ship upgrades. 8 JavaScript modules. no libraries.

game's called Octopus Invaders. gameplay footage dropping next.
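The flags in that setup can be collected into a single launch command. A minimal sketch, assuming the `llama-server` binary from llama.cpp and a local GGUF filename (the model path, host, and port are assumptions; the flags mirror the post):

```shell
# -ngl 99 offloads all layers to the GPU; -c 262144 sets the 262K context;
# -np 1 keeps a single slot (parallel slots each get their own KV cache,
# which at 262K context would OOM a 24 GB card); q8_0 K/V cache roughly
# halves KV-cache VRAM versus the f16 default.
llama-server \
  -m Qwen3.5-35B-A3B-UD-Q4_K_XL.gguf \
  -ngl 99 -c 262144 -np 1 \
  --cache-type-k q8_0 --cache-type-v q8_0 \
  --host 127.0.0.1 --port 8080

# llama.cpp exposes an Anthropic-compatible endpoint, so Claude Code can
# be pointed at the local server directly (no LiteLLM proxy):
ANTHROPIC_BASE_URL=http://localhost:8080 claude
```

This is a config fragment rather than a runnable test: the exact endpoint path and environment variable behavior depend on the llama.cpp and Claude Code versions in use.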

forloop @forloopcodes ·
I can't believe this guy just made a permanent solution to context bloat and open sourced it all!

when we tested this tool (Context+) on an issue in the OpenCode repository, the agent using it spent ~6.5k fewer tokens, found the code, and fixed it in half the time. the results were surprising: 6 to 10k tokens saved per prompt, and the task completed in ~2 minutes, while the agent running without the tool took ~4 mins for the same task and got stuck in loops.

bro built an entire beast using all the modern tools we could think of: undo trees, semantic search by meaning (by haskellforall), advanced refactoring, blast radius, advanced file context trees, restore points... i could keep going.

semantic code search and context trees are the future of agentic coding, and this tool proves it. the feature i loved the most is semantic search and how it gets things done 2x faster with the fewest possible tokens. it makes an agent that actually knows what it's doing instead of just guessing; it builds meaning from your code, similar to RAG. if you aren't optimizing your context, you are just burning money.

the developer says this tool is still under development, it can have unexpected behavior, and the docs need updates, but the video shows how fast it really is.

github: https://t.co/M0nwGDubAT
get here: https://t.co/PIJrM0KYa4
Aakash Gupta @aakashgupta ·
The headline says AI intensifies work. What the study actually found is more interesting than that.

Berkeley researchers tracked 200 employees for 8 months. AI made every single one of them more capable. They wrote code they couldn’t write before. They took on tasks they used to outsource. They moved faster on work that would have sat in a backlog for months. And then they burned out.

Because the company changed nothing else. The org handed people a tool that 10x’d their ability to start new work, then kept the org chart, meeting cadence, review processes, and scope boundaries completely identical. Zero workflow redesign. This is like giving everyone a car and keeping the speed limit signs from the horse-and-buggy era. People drove faster because they could, crashed because nobody updated the roads.

The self-reinforcing cycle the researchers found is worth sitting with: AI accelerated tasks → raised speed expectations → workers leaned harder on AI → scope expanded → wider scope created more work → more work demanded more AI. That loop has no natural stopping point. The company never installed one.

Meanwhile, a separate NBER study across thousands of workplaces found productivity gains of just 3%. And an Upwork survey found 77% of employees say AI tools actually decreased their productivity. The pattern across all of this research is identical: individual capability goes up, organizational design stays frozen, and the gap between the two creates burnout.

The study literally recommends companies build an “AI practice” with structured reflection intervals and scope limits. The researchers aren’t saying AI failed. They’re saying management failed to adapt to AI. Every CEO reading this headline as validation for slowing AI adoption is making exactly the wrong bet. The companies that win will be the ones that redesign the operating system around the intensity, not the ones that avoid it.
rohanpaul_ai @rohanpaul_ai

Powerful new Harvard Business Review study: "AI does not reduce work. It intensifies it."

An 8-month field study at a US tech company with about 200 employees found that AI use did not shrink work; it intensified it and made employees busier.

Task expansion happened because AI filled in gaps in knowledge, so people started doing work that used to belong to other roles or would have been outsourced or deferred. That shift created extra coordination and review work for specialists, including fixing AI-assisted drafts and coaching colleagues whose work was only partly correct or complete.

Boundaries blurred because starting became as easy as writing a prompt, so work slipped into lunch, meetings, and the minutes right before stepping away. Multitasking rose because people ran multiple AI threads at once and kept checking outputs, which increased attention switching and mental load.

Over time, this faster rhythm raised expectations for speed through what became visible and normal, even without explicit pressure from managers.