AI Learning Digest

Anthropic Fights Pentagon Blacklist in Court as OpenAI Quietly Signs the Same Deal

Daily Wrap-Up

The AI industry just got its most politically charged week yet. Anthropic, the company behind Claude, was slapped with a national security supply chain risk designation by the Pentagon for holding firm on two principles: no mass surveillance of American citizens and no autonomous weapons without human oversight. Within hours of the blacklisting, OpenAI signed a classified network deployment deal with the Department of War that includes those exact same safety provisions. If you're confused, you're not alone. Congressman Ted Lieu publicly said he "genuinely doesn't understand." The sequence of events reads less like coherent policy and more like a negotiation tactic that escalated into a legal standoff. Anthropic is now taking the administration to court, and the outcome will shape how every AI company negotiates with the federal government going forward.

Away from the political firestorm, the technical community kept building. A developer ran Qwen3.5-35B-A3B on a single RTX 3090, pointed Claude Code at the local endpoint, and got a fully playable space shooter from a single prompt: over 3,400 lines of vanilla JavaScript, with procedural audio, particle systems, and boss fights, at zero API cost. This kind of demonstration would have been unthinkable 18 months ago, and it quietly reinforces that the gap between local and cloud inference is narrowing faster than most people appreciate. Meanwhile, the Claude Code team previewed /simplify and /batch skills coming in the next release, aimed at automating PR shepherding and parallelizable code migrations.

The most practical takeaway for developers: the Anthropic situation is a business continuity signal worth monitoring. If your organization uses Claude in any capacity and has federal contract exposure, start mapping the legal distinction between commercial API access and government contract work. On the building side, local inference at 112 tokens per second on consumer hardware means your development loop can run entirely offline. If you haven't experimented with llama.cpp and a local model for agentic coding, today's Octopus Invaders demo is a compelling reason to start.
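For reference, the setup described in the source thread translates into a short launch script. Treat this as a sketch, not a verified recipe: the server flags are the ones @sudoingX quoted, the GGUF filename is a placeholder, and the Anthropic-compatible endpoint in llama.cpp's server is as claimed in the post.

```shell
# Sketch of the local agentic-coding loop, assuming a llama.cpp build
# that includes llama-server and a local GGUF quant of the model
# (the filename below is a placeholder).

# Serve the model with the flags from the source thread: full GPU
# offload, 262K context, a single slot, and q8_0 KV cache (which
# roughly halves KV-cache VRAM vs f16).
llama-server -m Qwen3.5-35B-A3B-UD-Q4_K_XL.gguf \
  -ngl 99 -c 262144 -np 1 \
  --cache-type-k q8_0 --cache-type-v q8_0 \
  --port 8080

# Point Claude Code at the local server instead of the hosted API.
# (Claude Code reads these environment variables; the token value is a
# dummy, since the local server does not check it.)
export ANTHROPIC_BASE_URL="http://localhost:8080"
export ANTHROPIC_AUTH_TOKEN="local-no-key"
claude
```

If the local server misbehaves, unsetting the two environment variables returns Claude Code to the hosted API.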

Quick Hits

  • @willwashburn introduced Agent Relay, a new tool for agent communication. Details were sparse in the announcement, but worth watching if you're building multi-agent systems.
  • @theallinpod covered why software stocks are imploding, Claude's "hit list" and Citrini's AI essay, datacenter opposition, and State of the Union reactions. They noted this was recorded before the Anthropic/DoW fallout and will cover it next week. The SaaS crash discussion and Citrini's letter about AI replacing vertical software are worth the listen if you're tracking how public markets are pricing in AI disruption.

Anthropic vs. The Department of War

This is the story that consumed AI Twitter today, and for good reason. It touches national security policy, corporate ethics, competitive dynamics, and the legal boundaries of government procurement power. The core facts: Anthropic was negotiating a $200 million Pentagon contract and drew two non-negotiable lines. Claude would not be used for mass surveillance of American citizens, and Claude would not make lethal decisions without a human in the loop. When Anthropic refused to budge, the Trump administration designated the company a supply chain risk, the same classification applied to Huawei.

@shanaka86 laid out the most detailed analysis, revealing that the "compromise deal" the Pentagon offered would have required Anthropic to "allow the collection and analysis of Americans' geolocation data, web browsing history, and personal financial information purchased from data brokers." The blast radius extends far beyond the Pentagon contract itself. As shanaka86 noted: "Eight of the ten largest companies in America use Claude. Defense contractors, cloud providers, consulting firms, banks. The blast radius is not the $200 million Pentagon contract. It is the enterprise ecosystem that generates $14 billion in annual revenue."

@AnthropicAI released an official statement in response to Secretary Hegseth's comments, while @Aniket_Singh04 captured the narrative arc that resonated most widely: "The US government just punished a company for refusing to let AI kill or spy unsupervised." The framing is reductive but directionally accurate based on what's been reported about the contract terms.

Then came the twist that turned confusion into outrage. Hours after blacklisting Anthropic, the Department of War signed a classified network deployment deal with OpenAI. @sama announced the agreement, noting that "two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement." He added that OpenAI is "asking the DoW to offer these same terms to all AI companies."

The response was immediate and pointed. @tedlieu, a sitting member of Congress, wrote: "The Department of Defense just agreed to the same two conditions with OpenAI that Anthropic was asking for. Can someone explain? I genuinely don't understand." @markgadala was less diplomatic: "Just a few hours ago he was on TV saying he stood by Anthropic. Then he undercuts them and takes the same contract that Anthropic just lost." @MrGoldBro laid out the sequence as a numbered timeline ending with "OpenAI then submits a bid to replace Anthropic," framing it as a straightforward competitive play dressed in safety rhetoric.

What makes this situation so significant isn't just the immediate business impact. It's the precedent. If a supply chain risk designation can be applied to an American AI company for refusing surveillance capabilities, and then lifted or not applied to a competitor that agrees to the same terms, the designation becomes a negotiation weapon rather than a security assessment. Anthropic's legal argument under 10 USC 3252, that the designation can only restrict Claude's use on Pentagon contract work and not commercial deployments, is technically sound but will take years to adjudicate. In the meantime, every enterprise legal team with federal exposure is doing risk assessments. The IPO that was expected at a $380 billion valuation is effectively frozen. Whether you view Anthropic as principled or commercially naive depends on your priors, but the asymmetry between their treatment and OpenAI's is difficult to explain through any policy lens.

Local AI Builds a Full Game on Consumer Hardware

While the policy world burned, a developer reminded everyone that the technical frontier keeps advancing regardless of who holds the government contracts. @sudoingX described what happens when you point Claude Code at a locally hosted Qwen3.5-35B-A3B model running on a single RTX 3090: "one prompt. ten files. 3,483 lines of code. zero handholding."

The game, called Octopus Invaders, features enemy types with tentacle animations, four-layer parallax scrolling, a full particle system, procedural audio through the Web Audio API with no sound files, combo multipliers, boss fights every five levels, and ship upgrades. All vanilla JavaScript and Canvas. No frameworks, no libraries. The model "planned the file structure itself, wrote every module in dependency order, wired all the imports, and served the game on port 3001. It ran on first load." When it hit a collision detection bug, it read its own error output and fixed it autonomously.

The hardware story is as notable as the software. This ran on "3B active parameters. Single RTX 3090. llama.cpp with q8_0 KV cache at 262K context" at 112 tokens per second, on a GPU that sells used for around $800. The entire agentic coding loop operated with zero API costs.

This connects directly to what @addyosmani described as the current inflection point in software development: "Every abstraction shift in software history made devs more productive by raising the level of intent. This is the next step: from writing code to orchestrating systems that write code." But his real insight was about where human value concentrates in this new paradigm: "The unsolved problem isn't generation but verification. That's where engineering judgment becomes your highest-leverage skill." When a local model can generate 3,400 lines of working code from a single prompt, the developer's role shifts toward specification quality, architectural decisions, and verifying that what was built actually works. The factory model @addyosmani describes, where you "orchestrate fleets of agents like a production line" with "clear specs as blueprints, TDD for quality control," is already achievable on consumer hardware.
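The thread's claim that "q8_0 KV cache cuts VRAM usage in half vs f16" is easy to sanity-check, assuming ggml's q8_0 layout (blocks of 32 int8 values sharing one f16 scale, i.e. 34 bytes per 32 elements):

```latex
% Per-element KV-cache storage, q8_0 vs f16:
\[
  \text{f16: } 2 \text{ bytes/elt}, \qquad
  \text{q8\_0: } \tfrac{34}{32} \approx 1.06 \text{ bytes/elt}
\]
% Ratio: 1.06 / 2 = 0.53, so the KV cache at any context length
% (including 262K) shrinks to roughly half of its f16 size.
```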

Claude Code Adds /simplify and /batch

@bcherny from the Claude Code team previewed two new skills shipping in the next version. "/simplify" reviews changed code for reuse, quality, and efficiency, then fixes issues it finds. "/batch" handles straightforward, parallelizable code migrations. @bcherny noted he's "been using both daily" and that combined, "these skills automate much of the work it used to take to shepherd a pull request to production and perform straightforward, parallelizable code migrations."

These additions reflect a maturing understanding of where AI coding tools add the most value. Rather than trying to be a general-purpose code generator, Claude Code is building targeted workflows around the repetitive friction points in the development cycle. PR review, cleanup, and bulk migrations are exactly the kind of tasks where developers lose time to mechanical work rather than creative problem-solving. The /simplify skill in particular aligns with the verification theme from today's broader conversation. If the bottleneck is no longer generating code but ensuring its quality, building automated review directly into the agent's workflow is a natural evolution.

Source Posts

Addy Osmani @addyosmani
Every abstraction shift in software history made devs more productive by raising the level of intent. This is the next step: from writing code to orchestrating systems that write code (building "the factory" for your code). The unsolved problem isn't generation but verification. That's where engineering judgment becomes your highest-leverage skill. To truly scale, think "factory model" - orchestrate fleets of agents like a production line: clear specs as blueprints, TDD for quality control, strong architecture to amplify leverage.
Michael Truell @mntruell

The third era of AI software development

Sudo su @sudoingX
this is what a 24gb VRAM builds in 2026. one prompt. ten files. 3,483 lines of code. zero handholding. i gave Qwen3.5-35B-A3B a single detailed spec describing the full game architecture and hit enter. enemy types, particle systems, procedural audio, powerups, boss fights, ship upgrades, parallax backgrounds, everything in one message. the model planned the file structure itself, wrote every module in dependency order, wired all the imports, and served the game on port 3001. it ran on first load. when it hit a bug in collision detection it read its own error output, found the issue, fixed it, and kept building. this is pure agent loop running on local hardware. what you're looking at is pixelated octopus aliens with tentacle animations, 4 layer parallax space background with planets at different depths, a full particle system handling explosions and ink splatter and engine trails and bullet impacts, procedural audio through Web Audio API with zero sound files loaded, unleash mode with combo multiplier, boss fights every 5 levels, ship upgrades that unlock as you progress. no libraries. no frameworks. vanilla JS and Canvas. 3B active parameters. single RTX 3090. llama.cpp with q8_0 KV cache at 262K context. Claude Code pointed at localhost:8080 through the native Anthropic endpoint. no API costs. 112 tok/s. a GPU you can buy used for $800. game is called Octopus Invaders and i actually like playing it.
Sudo su @sudoingX

testing Qwen3.5-35B-A3B latest optimized version by UnslothAI on a single RTX 3090. one detailed prompt. zero handholding. watch a 3B model scaffold an entire multifile game project autonomously. the setup: > model: Qwen3.5-35B-A3B (80B total, only 3B active per token) > quant: UD-Q4_K_XL by Unsloth (MXFP4 layers removed in latest update) > speed: 112 tok/s generation, ~130 tok/s prefill > context: 262K tokens > flags: -ngl 99 -c 262144 -np 1 --cache-type-k q8_0 --cache-type-v q8_0 > engine: llama.cpp > agent: Claude Code talk to localhost:8080 (llama.cpp now has native Anthropic API endpoint. no LiteLLM needed) q8_0 KV cache cuts VRAM usage in half vs f16 at 262K. -np 1 is default but worth noting. parallel slots multiply KV cache and at 262K that's an instant OOM. the prompt was more detailed than this but you get the idea: build a space shooter with parallax backgrounds, particle systems, procedural audio, 4 enemy types, boss fights, power-up system, and ship upgrades. 8 JavaScript modules. no libraries. game's called Octopus Invaders. gameplay footage dropping next.

Shanaka Anslem Perera ⚡ @shanaka86
Anthropic just announced it will take the Trump administration to court over the supply chain risk designation. And in the same breath, Axios revealed the detail that changes everything about this story. While Anthropic was being blacklisted for refusing to allow mass surveillance, the Pentagon’s own “compromise deal” that Under Secretary Emil Michael was offering on the phone at the exact moment Hegseth posted the designation on X would have required Anthropic to allow the collection and analysis of Americans’ geolocation data, web browsing history, and personal financial information purchased from data brokers. Read that again. The Pentagon spent two weeks saying it has no interest in mass surveillance of Americans. Then the deal they actually put on the table asked for access to your location, your browsing history, and your financial records. They told us Anthropic was lying. The contract language told us Anthropic was right. Now here is where this becomes an existential question for a $380 billion company. The supply chain risk designation means every company that does business with the Pentagon must certify they do not use Claude. Eight of the ten largest companies in America use Claude. Defense contractors, cloud providers, consulting firms, banks. The blast radius is not the $200 million Pentagon contract. It is the enterprise ecosystem that generates $14 billion in annual revenue. Anthropic’s legal argument is specific: under 10 USC 3252, the designation can only restrict use of Claude on Pentagon contract work. Your commercial API access, your https://t.co/koW5OJjjaM subscription, your enterprise license are, in Anthropic’s reading, completely unaffected. But here is the problem. That is a legal argument. It will take years to resolve in court. And in the meantime, every general counsel at every Fortune 500 company with any Pentagon exposure is going to ask one question: is using Claude worth the risk? 
The IPO, which was expected this year at a $380 billion valuation backed by $30 billion in fresh capital, is functionally frozen. No underwriter will price an offering while a company carries the same designation as Huawei. And here is the final detail nobody has processed yet. Hours after blacklisting Anthropic, the Pentagon accepted OpenAI’s proposed safety framework, which contains the identical red lines: no mass surveillance, no autonomous lethal weapons. They destroyed one company for a position they then accepted from its competitor. Full analysis on Substack. https://t.co/AEv8EMPdsZ
Secretary of War Pete Hegseth @SecWar

This week, Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon. Our position has never wavered and will never waver: the Department of War must have full, unrestricted access to Anthropic’s models for every LAWFUL purpose in defense of the Republic. Instead, @AnthropicAI and its CEO @DarioAmodei, have chosen duplicity. Cloaked in the sanctimonious rhetoric of “effective altruism,” they have attempted to strong-arm the United States military into submission - a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives. The Terms of Service of Anthropic’s defective altruism will never outweigh the safety, the readiness, or the lives of American troops on the battlefield. Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable. As President Trump stated on Truth Social, the Commander-in-Chief and the American people alone will determine the destiny of our armed forces, not unelected tech executives. Anthropic’s stance is fundamentally incompatible with American principles. Their relationship with the United States Armed Forces and the Federal Government has therefore been permanently altered. In conjunction with the President's directive for the Federal Government to cease all use of Anthropic's technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic. Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service. America’s warfighters will never be held hostage by the ideological whims of Big Tech. 
This decision is final.

Sam Altman @sama
Tonight, we reached an agreement with the Department of War to deploy our models in their classified network. In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome. AI safety and wide distribution of benefits are the core of our mission. Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement. We also will build technical safeguards to ensure our models behave as they should, which the DoW also wanted. We will deploy FDEs to help with our models and to ensure their safety, we will deploy on cloud networks only. We are asking the DoW to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept. We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements. We remain committed to serve all of humanity as best we can. The world is a complicated, messy, and sometimes dangerous place.
Aniket @Aniket_Singh04
Nobody’s talking about what just happened to Anthropic: Anthropic built the AI that half the US government quietly depends on daily They were deep in a $200M Pentagon deal — one of the biggest AI contracts ever Anthropic drew two hard lines: Claude won’t surveil American citizens, Claude won’t pull a trigger without a human deciding The Pentagon said those lines needed to go. Anthropic said they weren’t moving (respect 🫡) Trump signed an order cutting Claude from every federal agency overnight The Pentagon then slapped them with a “national security risk” designation — the same one they gave Huawei Every classified system running Claude has 6 months to rip it out completely Sam Altman — Anthropic’s biggest competitor — publicly said OpenAI has the same rules and wouldn’t have budged either The US government just punished a company for refusing to let AI kill or spy unsupervised.
Aidan Gold @MrGoldBro
Let me get this straight: Anthropic refused to work with DoW unless they could promise their tech wasn't used for surveillance or killing. DoW said that they need full capabilities. Anthropic declined to give full access. OpenAI stood by Anthropic for ensuring AI safety. Trump then cancelled all Anthropic usage across the government, including a $200m contract. OpenAI then submits a bid to replace Anthropic.
Mark Gadala-Maria @markgadala
Just a few hours ago he was on TV saying he stood by Anthropic. Then he undercuts them and takes the same contract that Anthropic just lost. How can anyone trust this guy?
Sam Altman @sama (post quoted in full above)

Will Washburn @willwashburn
Introducing Agent Relay
Ted Lieu @tedlieu
The Department of Defense just agreed to the same two conditions with OpenAI that Anthropic was asking for. Can someone explain? I genuinely don’t understand.
Sam Altman @sama (post quoted in full above)

Anthropic @AnthropicAI
A statement on the comments from Secretary of War Pete Hegseth. https://t.co/Gg7Zb09IMR
Boris Cherny @bcherny
In the next version of Claude Code.. We're introducing two new Skills: /simplify and /batch. I have been using both daily, and am excited to share them with everyone. Combined, these kills automate much of the work it used to take to (1) shepherd a pull request to production and (2) perform straightforward, parallelizable code migrations.