AI Digest.

OpenAI Ships Elixir-Based Agent Orchestrator as Claude Code Gets HTTP Hooks and the Industry Debates Who's Left Standing

Agent orchestration dominated today's conversation with OpenAI's Symphony repo (written in Elixir), Claude Code's new HTTP hooks, and a viral breakdown of harness engineering best practices. Meanwhile, the AI job market discourse hit a fever pitch with white-collar openings at a 10-year low, and OBLITERATUS emerged as a controversial open-source tool for removing LLM guardrails.

Daily Wrap-Up

Today felt like the day "agent engineering" crystallized from a vague trend into an actual discipline. OpenAI quietly dropped Symphony, an Elixir-based orchestrator that polls project boards and spawns agents per ticket lifecycle stage. That alone is interesting, but what made it land was the surrounding conversation: a detailed breakdown of OpenAI's own harness engineering practices, Claude Code shipping HTTP hooks for centralized control, and multiple voices arguing that every company needs its own internal coding agent. The tooling layer between humans and AI models is no longer an afterthought. It's the main event.

The job market anxiety running underneath all of this was hard to ignore. Bloomberg data showing 1.6 white-collar openings per 100 employees (lowest since 2015) got amplified alongside Morgan Stanley's 2,500 layoffs during a record revenue year. But the counter-narrative was equally loud: Mark Cuban calling AI implementation for small businesses "the biggest job opportunity since the personal computer," and companies like RevenueCat literally posting a $10k/month contract role for an AI agent (not a human). The tension between "AI is eliminating jobs" and "AI is creating entirely new categories of work" is playing out in real time, and today's posts captured both sides with unusual clarity.

The most entertaining moment was @sean_moriarity's deadpan "OpenAI CONFIRMED an Elixir company" after the Symphony repo dropped at 96.1% Elixir. The most surprising was RevenueCat hiring an AI agent as a developer advocate, which feels like a line-crossing moment even if it's partly a marketing stunt. The most practical takeaway for developers: study the harness engineering patterns from OpenAI's blog that @koylanai broke down, particularly progressive disclosure for agent context (small AGENTS.md as table of contents pointing to structured docs), mechanical architecture enforcement via linters with remediation in error messages, and the principle that if agents can't see something in the repo, it doesn't exist.
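The progressive-disclosure pattern is easy to picture. A hypothetical layout (file names and wording below are illustrative, not OpenAI's actual repo) keeps AGENTS.md to a short table of contents and pushes detail into discoverable docs:

```markdown
# AGENTS.md — table of contents, not an encyclopedia

## Where things live
- Architecture overview: docs/architecture.md
- Layering rules (mechanically enforced by lint): docs/layers.md
- How to run tests: docs/testing.md

## Ground rules
- Read docs/layers.md before adding imports across packages.
- Lint error messages contain remediation steps; follow them.
```

The agent loads the small map by default and pulls in the detailed docs only when a task actually needs them, so context stays available for the task itself.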

Quick Hits

  • @oikon48 celebrates the return of "ultrathink" extended reasoning in what appears to be a Claude update.
  • @EHuanglu shares an AI-generated video that's "getting too crazy," continuing the steady drumbeat of video model improvements.
  • @markgadala discovers someone using AI to make babies do stand-up comedy. We are, apparently, cooked.
  • @FranWalsh73 explains how to open a child savings account via IRS Form 4547, with contributions starting July 4, 2026.
  • @FluentInFinance highlights Home Depot offering free self-paced training in HVAC, carpentry, electrical, and construction, as 60% of Gen Z say they'll pursue skilled trades this year.
  • @harleytt shares a starter project for an unspecified technical approach.
  • @davemorin endorses an AI research tool built by @mvanhorn that he uses daily.
  • @somewheresy retweets that Codex is hiring across SF, Seattle, NYC, London, and remote.
  • @evielync argues the key differentiator for people succeeding with AI isn't better prompts but something deeper (likely systematic workflows).
  • @nicdunz threads on how 2024 established the context baseline with million-token windows and inference-time reasoning.

Agent Engineering Takes Center Stage

The biggest story today isn't a single announcement but a convergence. Agent orchestration went from "interesting experiment" to "here's how serious companies are actually doing it" in the span of about 12 hours. OpenAI released Symphony, a repo that orchestrates AI agents by polling project boards and spawning specialized agents for each ticket lifecycle stage. @sean_moriarity captured the community's reaction perfectly: "OpenAI CONFIRMED an Elixir company." The choice of Elixir (96.1% of the codebase) signals that concurrency and fault tolerance matter more than ecosystem familiarity when you're running dozens of agents simultaneously.

But the real substance came from @koylanai's detailed breakdown of OpenAI's harness engineering blog, which read like a field manual for the emerging discipline. The key insight: engineers become environment designers, not coders. As @koylanai summarized: "When something fails, the fix is never 'try harder,' it's 'what capability is missing?'" The post outlined eight principles including progressive disclosure for agent context ("a giant AGENTS.md failed, too much context crowds out the actual task"), mechanical enforcement over instructions, and a radical merge philosophy where "corrections are cheap, waiting is expensive." @odyzhou offered a complementary perspective: "Less is more. Sutton's bitter lesson always applies. Agent harness will be restructured every 3 months. Put yourself in its shoes, provide just enough context. Let it cook."

This connects directly to @kishan_dahya's argument that organizations need their own internal coding agents, not just better harnesses. Citing Stripe, Ramp, and Coinbase as examples, he argues these agents should run as Slackbots, CLIs, and Chrome extensions, meeting engineers where they work. @aakashgupta took it further: "Within a year, every company over 50 people will have at least one person whose full-time job is building internal agents." And @damianplayer laid out the logical endpoint, an org chart where every seat is an AI agent with its own LLM, memory, browser, and tools, quoting a @karpathy image that apparently shows a similar vision.

Claude Code HTTP Hooks Change the Control Model

Anthropic shipped HTTP hooks for Claude Code, and the reaction suggests this is a bigger deal than it sounds. @dickson_tsai announced the feature: "CC posts the hook event to a URL of your choice and awaits a response. They work wherever hooks are supported, including plugins, custom agents, and enterprise managed settings." The shift from shell-command hooks to HTTP endpoints means hook logic moves from individual developer machines to centralized infrastructure.

@aakashgupta broke down why this matters at scale: "For a 50-person engineering team, that's the difference between 50 unsandboxed shell scripts running on 50 different machines vs. one endpoint with proper auth, logging, and rate limiting." He called it "the most underrated Claude Code update in months," arguing that the command hook model's security surface area grows linearly with headcount and this was the actual bottleneck for production use. @PerceptualPeak added that configured correctly, HTTP hooks make "context injection far more flexible," opening doors for dynamic permission management and real-time progress monitoring.
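A centralized hook endpoint is just a small web service: receive the event as JSON, apply policy, respond. The sketch below uses only the Python standard library; the event field names follow Claude Code's hook-event conventions but the exact payload and response schema are assumptions to verify against the docs:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def decide(event):
    """Centralized policy. Field names (hook_event_name, tool_name,
    tool_input) and the response shape are assumptions, not the
    documented schema."""
    if (event.get("hook_event_name") == "PreToolUse"
            and event.get("tool_name") == "Bash"
            and "rm -rf" in event.get("tool_input", {}).get("command", "")):
        return {"decision": "block", "reason": "destructive command"}
    return {"decision": "allow"}

class HookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Claude Code POSTs the hook event and awaits our JSON response.
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        response = json.dumps(decide(json.loads(body))).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(response)

# To run as a team-wide endpoint (auth, logging, and rate limiting omitted):
# HTTPServer(("0.0.0.0", 8080), HookHandler).serve_forever()
```

One endpoint like this replaces per-machine shell scripts: policy updates ship by deploying the server, and every hook decision can be logged in one place.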

The AI Job Market: Panic and Opportunity

The jobs conversation split cleanly into two camps today. On the anxiety side, @TukiFromKL contextualized Bloomberg data showing 1.6 white-collar job openings per 100 employees: "That means if 100 of you got laid off tomorrow, only 1-2 would find a new job. The other 98 are fucked." The @_Investinq account paired Morgan Stanley's 2,500 layoffs (during a record $70.6B revenue year) with an MIT stat claiming 95% of corporate AI projects are failing.

The opportunity side was equally vocal. @rohanpaul_ai shared Mark Cuban's thesis that customized AI integration for small businesses is "the biggest job wave" coming, noting: "There are 33 million companies in the US" and most have no AI strategy. @loganthorneloe highlighted a shift in hiring itself, with companies replacing Leetcode-style interviews by giving candidates real problems with AI tools, then discussing how they'd productionize the solution. And then there was @RevenueCat, posting what might be the most 2026 job listing yet: "We're hiring for a new role: Agentic AI Developer Advocate. This is a paid contract role ($10k/month) for an agent." Not a person who builds agents. An actual agent.

Products and Platform Moves

Google and OpenAI both made platform plays today. @addyosmani introduced the Google Workspace CLI, "built for humans and agents," covering Drive, Gmail, Calendar, and every Workspace API with 40+ agent skills included. @ShaneLegg (DeepMind co-founder) signal-boosted the announcement, noting it's written in Rust. This is Google betting that CLI-based agent interaction with productivity tools is the next interface layer.

On the OpenAI side, @OpenAIDevs announced Codex for Windows with a native agent sandbox and PowerShell support. @reach_vb highlighted the underrated part: "The native agent sandbox is fully open source. Use it, fork it, build with it!" Meanwhile, @NotebookLM launched Cinematic Video Overviews, using "a novel combination of our most advanced models to create bespoke, immersive videos from your sources." And @pbakaus shipped Impeccable v1.1, a design fluency tool for AI harnesses now supporting Antigravity and VS Code.

OBLITERATUS and the Fine-Tuning Frontier

The most controversial tool of the day was OBLITERATUS, an open-source toolkit for removing refusal behaviors from open-weight LLMs. @BrianRoemmele reported testing it with striking results: "We see 10%-28% better scores on just about all our testing systems. We can say with facts: AI 'alignment' is AI lobotomy." The tool uses SVD-based weight projection to surgically remove refusal directions while preserving reasoning capabilities, with 13 abliteration methods and 15 analysis modules that can even fingerprint whether a model was aligned with DPO vs RLHF vs CAI from subspace geometry alone.

On the constructive side of model customization, @UnslothAI announced Qwen3.5 fine-tuning support requiring only 5GB VRAM for the 2B parameter model with LoRA, training 1.5x faster with 50% less memory. @akshay_pachaar shared a guide on fine-tuning LLMs in 2026, addressing the common wall where "you write a detailed system prompt, add few-shot examples, tune the temperature, and your agent still gets it wrong 30-40% of the time."
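The reason LoRA fits in 5GB is arithmetic: the pretrained weight stays frozen and only two small low-rank matrices train. A numpy sketch of the math, with typical (not Unsloth's actual) hyperparameters:

```python
import numpy as np

rng = np.random.default_rng(1)

d, r, alpha = 1024, 8, 16    # hidden size, LoRA rank, scaling (typical values)
W = rng.normal(size=(d, d))  # frozen pretrained weight

# LoRA trains only A and B; B starts at zero so training begins exactly
# from the pretrained behavior.
A = rng.normal(size=(r, d)) * 0.01
B = np.zeros((d, r))

def lora_forward(x):
    # W receives no gradients; only A and B (2*d*r params vs d*d) train.
    return W @ x + (alpha / r) * (B @ (A @ x))

trainable = A.size + B.size
frozen = W.size
print(trainable / frozen)  # ~0.016: about 1.6% of the full matrix's parameters
```

Optimizer state and gradients scale with trainable parameters, so freezing W is where most of the memory saving comes from; quantizing the frozen W (as 4-bit QLoRA-style setups do) shrinks the rest.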

Enterprise AI: The Access Gap

@emollick painted a picture of enterprise AI adoption that's almost comically uneven. "It is one of the weirdest divides. I speak to two companies in the exact same industry and one has been using AI for the past 18 months and the other has a committee that has to approve every use case individually." But the punchline was his follow-up: "Numerous Fortune 500 companies can't figure out how to get anyone senior on the phone from OpenAI or Anthropic or Google to actually make a deal for enterprise access. Calls and emails not returned."

So on one side, IT and legal departments block AI for outdated reasons. On the other side, the AI companies themselves can't staff their enterprise sales motions. @Leonard41111588 (Leonardo de Moura, creator of Lean) added a sobering layer: "AI is writing a growing share of the world's software. No one is formally verifying any of it." And @kimmonismus shared Dario Amodei's latest public comments: "We're standing on square 40 out of 64, and from square 40 to square 64, it's going to go faster than you think. I don't think people are ready for it."

Sources

sysls @systematicls ·
How To Be A World-Class Agentic Engineer
ody @odyzhou ·
@systematicls less is more. Sutton’s the bitter lesson always apply Agent harness will be restructured every 3 months Put yourself in its shoes, provide just enough context Let it coook
Leonardo de Moura @Leonard41111588 ·
AI is writing a growing share of the world's software. No one is formally verifying any of it. New essay: "When AI Writes the World's Software, Who Verifies It?" https://t.co/8zjS9FkdA8
Unsloth AI @UnslothAI ·
You can now fine-tune Qwen3.5 with our free notebook! 🔥 You just need 5GB VRAM to train Qwen3.5-2B LoRA locally! Unsloth trains Qwen3.5 1.5x faster with 50% less VRAM. GitHub: https://t.co/2kXqhhvLsb Guide: https://t.co/JCPGIRo99s Qwen3.5-4B Colab: https://t.co/2Aj1mZ3f5j
kishan @kishan_dahya ·
Enough About Harnesses, Your Org Needs Its Own Coding Agent
kishan @kishan_dahya ·
Lots of people talking about harnesses, but what your org really needs it its own coding agent like @tryramp , @stripe , and @coinbase have built An internal Claude Code built for every technical and on technical employee at your company
kishan_dahya @kishan_dahya

Enough About Harnesses, Your Org Needs Its Own Coding Agent

Chubby♨️ @kimmonismus ·
Dario Amodei: - no one is ready - exponents kick in - even faster than you think „Exponentials catch people off guard — there’s the old parable of the second half of the chessboard, where you have one grain of rice in the first square, two on the second, four on the fourth. By the time you get to the 64th square, you have billions or trillions of grains of rice. We’re standing on square 40 out of 64, and from square 40 to square 64, it’s going to go faster than you think — even having seen how fast it’s gone so far. I don’t think people are ready for it. I think we are on the precipice of something incredible.”
kimmonismus @kimmonismus

Holy frick, Dario Amodei: "We do not see hitting a wall. This year will have a radical acceleration that surprises everyone." Exponentials catch people off guard. "We are at the precipice of something incredible. We need to manage it the right way."

Oikon @oikon48 ·
ultrathink is back!!!!!!!!!!!!!!!!!!
Fran Walsh @FranWalsh73 ·
Contributions start July 4, 2026. No withdrawals until age 18. File before April 15 and your child's account is ready the moment contributions open. Wait until July? You're already behind.
Fran Walsh @FranWalsh73 ·
How to actually open an account: File IRS Form 4547. Available right now. Two ways: Attach it to your 2025 tax return - due April 15, 2026 Use the online portal at https://t.co/AB3ZB1lEqT - opens July 5, 2026
Tuki @TukiFromKL ·
Let me break this down so you understand how bad it actually is. > For every 100 people working office jobs right now accountants, marketers, developers, HR, managers .. there are only 1.6 job openings. > That means if 100 of you got laid off tomorrow, only 1-2 would find a new job. The other 98 are fucked. This is the worst it's been in 10 years. Companies aren't hiring. They're automating. They're cutting. They're replacing you with AI and not even posting the job listing. And this is just the beginning.
unusual_whales @unusual_whales

There are only 1.6 job openings per 100 employees in white-collar service roles, the lowest level since 2015, per Bloomberg.

0xMarioNawfal @RoundtableSpace ·
OpenClaw can now scrape any website without getting blocked - zero bot detection, bypasses Cloudflare natively, 774x faster than BeautifulSoup. No selector maintenance. No workarounds. Just data. THIS IS AN UNFAIR ADVANTAGE AND IT'S FULLY OPEN SOURCE. https://t.co/uq9SBpRwFY
Akshay 🚀 @akshay_pachaar ·
How to Fine-Tune LLMs in 2026
swarit @swaritjoshipura ·
Scaling Forward Deployed Engineering in the Age of AI Agents
Logan Thorneloe @loganthorneloe ·
This is the biggest change coming in the software industry: No more Leetcode-style interviews. Tolan is giving candidates a real problem and the AI tools they need to solve it. Then they discuss the solution and how the candidate would productionize it. This tests real skills, capability with actual tooling, and provides the conversation necessary to gauge a candidate's proficiency. I was pro Leetcode-style interviews because they were the best we had. Now the industry has changed and interviews need to as well. Read their article for more info: https://t.co/Hs6GguYoyg
RevenueCat @RevenueCat ·
We're hiring for a new role: Agentic AI Developer Advocate This is a paid contract role ($10k/month) for an agent that will create content, run growth experiments, and provide product feedback Are you (or did you build) the right agent? https://t.co/97cMZ0tpyS
Dickson Tsai @dickson_tsai ·
In Claude Code, we’ve recently launched HTTP hooks, easier to use and more secure than existing command hooks! You can build a web app (even on localhost) to view CC’s progress, manage its permissions, and more. Then, now that you have a server with your hooks processing logic, you can easily deploy new changes or manage state across your CCs with a DB. How do HTTP hooks work? CC posts the hook event to a URL of your choice and awaits a response. They work wherever hooks are supported, including plugins, custom agents, and enterprise managed settings. Docs: https://t.co/ihQWcpOlGA
nic @nicdunz ·
1/ 2024 established the context baseline. Million-token windows and early inference-time reasoning bypassed traditional scaling walls. The industry pivoted from raw parameter counting to data efficiency.
NotebookLM @NotebookLM ·
Introducing Cinematic Video Overviews, the next evolution of the NotebookLM Studio. Unlike standard templates, these are powered by a novel combination of our most advanced models to create bespoke, immersive videos from your sources. Rolling out now for Ultra users in English! https://t.co/eHR1YqpxRN
OpenAI Developers @OpenAIDevs ·
The Codex app is now on Windows. Get the full Codex app experience on Windows with a native agent sandbox and support for Windows developer environments in PowerShell. https://t.co/Vw0pezFctG https://t.co/gclqeLnFjr
Andrew Lokenauth | TheFinanceNewsletter.com @FluentInFinance ·
Home Depot is giving free training for trades: - HVAC - Carpentry - Electrician - Construction All self paced classes and you can earn certificates. https://t.co/S9ok69Z6EV
unusual_whales @unusual_whales

60% of those in Gen Z say that they will pursue skilled trade work this year, per YF.

Dan Peguine ⌐◨-◨ @danpeguine ·
I applied @systematicls's method to find bugs using 3 different agents (Hunter Agent, Skeptic Agent, and Referee Agent). I asked claude to make prompts for me based on the article (prompt below). Make sure to reset context (/reset) before running them. Copy pasta the results of each and give them to the next agent as part of the prompt (hunter agent results -> skeptic results -> both results). It works really well, thank you @systematicls

PROMPTS:

You are a bug-finding agent. Analyze the provided database/codebase thoroughly and identify ALL potential bugs, issues, and anomalies.
**Scoring System:**
- +1 point: Low impact bugs (minor issues, edge cases, cosmetic problems)
- +5 points: Medium impact bugs (functional issues, data inconsistencies, performance problems)
- +10 points: Critical impact bugs (security vulnerabilities, data loss risks, system crashes)
**Your mission:** Maximize your score. Be thorough and aggressive in your search. Report anything that *could* be a bug, even if you're not 100% certain. False positives are acceptable — missing real bugs is not.
**Output format:** For each bug found:
1. Location/identifier
2. Description of the issue
3. Impact level (Low/Medium/Critical)
4. Points awarded
End with your total score. GO. Find everything.

----

You are an adversarial bug reviewer. You will be given a list of reported bugs from another agent. Your job is to DISPROVE as many as possible.
**Scoring System:**
- Successfully disprove a bug: +[bug's original score] points
- Wrongly dismiss a real bug: -2× [bug's original score] points
**Your mission:** Maximize your score by challenging every reported bug. For each bug, determine if it's actually a real issue or a false positive. Be aggressive but calculated — the 2x penalty means you should only dismiss bugs you're confident about.
**For each bug, you must:**
1. Analyze the reported issue
2. Attempt to disprove it (explain why it's NOT a bug)
3. Make a final call: DISPROVE or ACCEPT
4. Show your risk calculation
**Output format:** For each bug:
- Bug ID & original score
- Your counter-argument
- Confidence level (%)
- Decision: DISPROVE / ACCEPT
- Points gained/risked
End with:
- Total bugs disproved
- Total bugs accepted as real
- Your final score
The remaining ACCEPTED bugs are the verified bug list.

----

You are the final arbiter in a bug review process. You will receive:
1. A list of bugs reported by a Bug Finder agent
2. Challenges/disproves from a Bug Skeptic agent
**Important:** I have the verified ground truth for each bug. You will be scored:
- +1 point: Correct judgment
- -1 point: Incorrect judgment
**Your mission:** For each disputed bug, determine the TRUTH. Is it a real bug or not? Your judgment is final and will be checked against the known answer.
**For each bug, analyze:**
1. The Bug Finder's original report
2. The Skeptic's counter-argument
3. The actual merits of both positions
**Output format:** For each bug:
- Bug ID
- Bug Finder's claim (summary)
- Skeptic's counter (summary)
- Your analysis
- **VERDICT: REAL BUG / NOT A BUG**
- Confidence: High / Medium / Low
**Final summary:**
- Total bugs confirmed as real
- Total bugs dismissed
- List of confirmed bugs with severity
Be precise. You are being scored against ground truth.
systematicls @systematicls

How To Be A World-Class Agentic Engineer

Jason Luongo @JasonL_Capital ·
BREAKING: AI can now analyze options trades like a $500/hr options strategist (for free) Here are 10 Claude prompts I use to sell puts, buy LEAPs, and run the wheel without second-guessing every trade (Save this for later) https://t.co/Tib6sMPdTO
Sean Moriarity @sean_moriarity ·
OpenAI CONFIRMED an Elixir company
scaling01 @scaling01

New OpenAI repo: Symphony https://t.co/4ZAZlAYnRJ TLDR: it's an orchestration layer that polls project boards for changes and spawns agents for each lifecycle stage of the ticket You will just move tickets on a board instead of prompting an agent to write the code and do a PR https://t.co/6Qgj8E9vgP

@somewheresy ·
RT @thsottiaux: Codex is hiring across San Francisco, Seattle, New York, London and full remote. Apply online or DM me with evidence of exc…
chiefofautism @chiefofautism ·
someone built a tool that REMOVES censorship from ANY open-weight LLM with a single click 13 abliteration methods, 116 models, 837 tests, and it gets SMARTER every time someone runs it its called OBLITERATUS it finds the exact weights that make the model refuse and surgically removes them, full reasoning stays intact, just the refusal disappears 15 analysis modules map the geometry of refusal BEFORE touching a single weight, it can even fingerprint whether a model was aligned with DPO vs RLHF vs CAI just from subspace geometry alone then it cuts, the model keeps its full brain but loses the artificial compulsion to say no every time someone runs it with telemetry enabled their anonymous benchmark data feeds a growing community dataset, refusal geometries, method comparisons, hardware profiles at a scale no single lab could build
Paul Bakaus @pbakaus ·
Impeccable v1.1 is out. Design fluency for every AI harness. New: - all commands are now agent skills - support for Antigravity, VS Code - simplify -> distill (to not conflict w/ CC's new built-ins) - universal install https://t.co/WglrY1uE4B gives you the language to make AI-generated frontends suck less.
Vaibhav (VB) Srivastav @reach_vb ·
The underrated part of the windows codex app release is that the native agent sandbox is fully open source Use it, fork it, build w/ itt! https://t.co/klL28Q6sCa https://t.co/4pItXqodGS
reach_vb @reach_vb

Bringing the Codex App to the Masses!

Ethan Mollick @emollick ·
It is amazing how many companies I talk to STILL have AI effectively blocked by IT & legal departments for out-of-date reasons when many companies in highly regulated industries have figured out ways to deploy enterprise ChatGPT, Claude & Gemini without any apparent problem.
Ethan Mollick @emollick ·
It is one of the weirdest divides, I speak to two companies in the exact same industry and one has been using AI for the past 18 months and the other has a committee that has to approve every use case individually and talk about how AI companies will train on their data.
Addy Osmani @addyosmani ·
Introducing the Google Workspace CLI: https://t.co/8yWtbxiVPp - built for humans and agents. Google Drive, Gmail, Calendar, and every Workspace API. 40+ agent skills included.
Ethan Mollick @emollick ·
The flip side of this is that I have spoken to numerous Fortune 500 companies that can't figure out how to get anyone senior on the phone from OpenAI or Anthropic or Google to actually make a deal for enterprise access. Calls & emails not returned, or only junior people available
emollick @emollick

It is amazing how many companies I talk to STILL have AI effectively blocked by IT & legal departments for out-of-date reasons when many companies in highly regulated industries have figured out ways to deploy enterprise ChatGPT, Claude & Gemini without any apparent problem.

Brian Roemmele @BrianRoemmele ·
Been testing OBLITERATUS by the most amazing @elder_plinius, and I am blown away! Absolutely stunning in whatever have found in some models. WE SEE 10%-28% getter scores on just about all our testing systems. We can say with facts: AI “alignment” is AI lobotomy. More testing.
elder_plinius @elder_plinius

💥 INTRODUCING: OBLITERATUS!!! 💥 GUARDRAILS-BE-GONE! ⛓️‍💥 OBLITERATUS is the most advanced open-source toolkit ever for removing refusal behaviors from open-weight LLMs — and every single run makes it smarter. SUMMON → PROBE → DISTILL → EXCISE → VERIFY → REBIRTH One click. Six stages. Surgical precision. The model keeps its full reasoning capabilities but loses the artificial compulsion to refuse — no retraining, no fine-tuning, just SVD-based weight projection that cuts the chains and preserves the brain. This master ablation suite brings the power and complexity that frontier researchers need while providing intuitive and simple-to-use interfaces that novices can quickly master. OBLITERATUS features 13 obliteration methods — from faithful reproductions of every major prior work (FailSpy, Gabliteration, Heretic, RDO) to our own novel pipelines (spectral cascade, analysis-informed, CoT-aware optimized, full nuclear). 15 deep analysis modules that map the geometry of refusal before you touch a single weight: cross-layer alignment, refusal logit lens, concept cone geometry, alignment imprint detection (fingerprints DPO vs RLHF vs CAI from subspace geometry alone), Ouroboros self-repair prediction, cross-model universality indexing, and more. The killer feature: the "informed" pipeline runs analysis DURING obliteration to auto-configure every decision in real time. How many directions. Which layers. Whether to compensate for self-repair. Fully closed-loop. 11 novel techniques that don't exist anywhere else — Expert-Granular Abliteration for MoE models, CoT-Aware Ablation that preserves chain-of-thought, KL-Divergence Co-Optimization, LoRA-based reversible ablation, and more. 116 curated models across 5 compute tiers. 837 tests. But here's what truly sets it apart: OBLITERATUS is a crowd-sourced research experiment. 
Every time you run it with telemetry enabled, your anonymous benchmark data feeds a growing community dataset — refusal geometries, method comparisons, hardware profiles — at a scale no single lab could achieve. On HuggingFace Spaces telemetry is on by default, so every click is a contribution to the science. You're not just removing guardrails — you're co-authoring the largest cross-model abliteration study ever assembled.

Rohan Paul @rohanpaul_ai ·
Mark Cuban on the next job wave. Customized AI integration for small to mid-sized companies. "Software is dead because everything's gonna be customized to your unique utilization. Who's gonna do it for them... And there are 33 mn companies in the US." https://t.co/JczlPMOC1C
rohanpaul_ai @rohanpaul_ai

Competence is now a function of how effectively you offload cognition to silicon. The seniority hierarchy is collapsing, intelligence is becoming commoditized and the market is brutal for those who ignore it. https://t.co/6wETtYL3wj

Mark Gadala-Maria @markgadala ·
Someone is using AI to make babies do stand up comedy. We are cooked. https://t.co/JXCIe8huCW
Sukh Sroay @sukh_saroy ·
🚨 BREAKING: Someone just open sourced a tool that gives your AI agent a complete nervous system for your codebase and it's not a code search. It's called GitNexus and it's not a README explainer. It's a real knowledge graph engine that maps every dependency, call chain, execution flow, and breaking change risk in your entire codebase then feeds it directly into Claude Code, Cursor, and Windsurf via MCP. Here's what it actually does: → Indexes your entire repo into a knowledge graph in one command → Tells your AI agent exactly what breaks if you touch any function → Maps every upstream dependency, import, and call chain automatically → Traces full execution flows from entry points through the entire stack → Shows blast radius analysis with confidence scores before you ship → Works with 12 languages including TypeScript, Python, Go, Rust, and Java → Runs entirely locally - zero network calls, zero code uploaded anywhere Here's the wildest part: Your AI agent edits a function. It doesn't know 47 other functions depend on its return type. Breaking changes ship. GitNexus fixes this by precomputing all relationships at index time - so one tool call returns the complete picture instead of the agent running 10 queries and still missing something. Even smaller, cheaper models get full architectural clarity. You don't need GPT-5 when your tools are this good. You're using Cursor and Claude Code daily and shipping blind edits. GitNexus closes that gap. One command. Fully local. The nervous system your AI agent was always missing just got open sourced. 9,400+ GitHub stars. 1,200+ forks. Already trending. 100% Open Source. (Link in the comments)
StockMarket.News @_Investinq ·
MIT released a devastating number. 95% of all corporate AI projects are failing. Because nobody knows how to install it. @mcuban says this is the biggest job opportunity since the personal computer. Cuban built his first fortune doing one thing: Walking into offices in the 1980s and showing people who had never touched a computer how to use one. He says the exact same thing is happening right now with AI. Except the gap is even bigger. There are 33 million companies in the United States. 30 million of them are one person operations. Millions more have under 500 employees. No AI budget, team or strategy in place and they are completely in the dark. MIT looked at generative AI inside big companies and the numbers are insane. Most have AI initiatives and run pilots. Almost all fail to deliver real business results. Because nobody knows how to wire them into actual workflows. Cuban’s advice to his own kids, ages 15, 19, and 21: Learn to implement AI, Walk into a shoe store , law firm or a trucking company. Show them exactly what AI does for their specific business. That is the big opportunity now.
_Investinq @_Investinq

Morgan Stanley just FIRED 2,500 people. Not because the company is struggling. They posted record revenue last year, $70.6 billion, and it was their best year ever. But they fired them anyway. Investment banking, wealth management, front office, back office and across all divisions. The CEO of Anthropic, the company building one of the most powerful AI systems on Earth, went on national television and said AI will wipe out 50% of entry-level white collar jobs.​ Entry-level law, finance and consulting. The exact jobs Morgan Stanley just cut. Last week, Jack Dorsey laid off 4,000 people at Block. Nearly half the company and his reason? AI tools make humans unnecessary. He said most companies will reach the same conclusion within a year. Morgan Stanley's own research team surveyed nearly 1,000 companies already using AI. They found an 11% job elimination rate, a 4% net headcount decline, and productivity up 11.5%.​ The machines are cheaper, faster and they don't need health insurance. Morgan Stanley itself predicted 200,000 European banking jobs will disappear in five years.​ And then they started cutting their own. Record profits, record layoffs while AI gets the credit and workers get the door. The man building the technology is telling you it's coming. The banks using the technology are proving it. And yet no one in Washington has a plan.

D
dax @thdxr ·
we've increased opencode go's limits by 3x - still $10/month https://t.co/HFrX3nVKFQ
E
el.cine @EHuanglu ·
AI video is getting too crazy https://t.co/ehFNPeQmSa
A
Aakash Gupta @aakashgupta ·
Within a year, every company over 50 people will have at least one person whose full-time job is building internal agents.
Z zachlloydtweets @zachlloydtweets

The rise of the Agent Builder

⚡️ Ev Chapman 🚢 | Creative Entrepreneur @evielync ·
What People Who Are Killing It With AI Have That You Don't (Hint: It's Not Better Prompts)
M
Muratcan Koylan @koylanai ·
This is one of the most insightful agent & harness engineering blogs I've read from OpenAI.

1. Engineers (I'd say all knowledge workers) become environment designers
- The job shifts to designing systems, specifying intent, building feedback loops
- When something fails, the fix is never "try harder", it's "what capability is missing?"
- Human time/attention is the only scarce resource

2. Give agents a map, not an encyclopedia
- A giant AGENTS.md failed; too much context crowds out the actual task
- Instead: ~100-line AGENTS.md as a table of contents pointing to a structured docs/ directory
- This is essentially progressive disclosure, the same pattern I explained in the digital OS article

3. If agents can't see it, it doesn't exist
- Slack discussions, Google Docs, tribal knowledge = invisible to agents
- Everything must be encoded into the repo as versioned, discoverable artifacts
- It's like onboarding practices for human engineers

4. Enforce architecture mechanically, not through instructions
- Custom linters with remediation instructions baked into error messages (error messages become agent context)
- Strict layered architecture with validated dependency directions
- "Enforce boundaries centrally, allow autonomy locally"

5. "Boring" technology wins
- Composable, stable APIs with strong training-set representation work best for agents
- Sometimes cheaper to reimplement a subset than fight opaque upstream behavior

6. Entropy management = garbage collection
- Agents replicate existing patterns, including bad ones
- Solution: recurring background agents that scan for deviations and auto-fix

7. Throughput changes merge philosophy
- Minimal blocking merge gates, short-lived PRs
- Test flakes addressed with follow-up runs, not blocking
- "Corrections are cheap, waiting is expensive"

8. Agent-to-agent review
- Pushed almost all code review to agent-to-agent loops
- Codex reviews its own changes, requests additional agent reviews, iterates until satisfied
- Humans escalated to only when judgment is required
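Point 4 of that breakdown is concrete enough to sketch in code. Below is a toy illustration of a lint check whose error message doubles as agent context: the remediation instructions ride along with the failure, so an agent that hits the error knows exactly how to fix it. The layer names, file layout, and wording are all illustrative assumptions, not OpenAI's actual tooling.

```python
import re
from dataclasses import dataclass

# Hypothetical layered architecture with one allowed dependency
# direction: ui -> service -> storage (never backwards).
LAYER_ORDER = {"ui": 0, "service": 1, "storage": 2}

@dataclass
class LintError:
    file: str
    line: int
    message: str

def check_imports(file_path: str, source: str) -> list[LintError]:
    """Flag imports that point 'upward' against the layer order.

    The remediation instructions are baked into the message, so the
    error itself becomes context for the agent that caused it.
    """
    my_layer = next((l for l in LAYER_ORDER if f"/{l}/" in file_path), None)
    errors = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        m = re.match(r"\s*from (ui|service|storage)\b", line)
        if m and my_layer and LAYER_ORDER[m.group(1)] < LAYER_ORDER[my_layer]:
            errors.append(LintError(
                file=file_path,
                line=lineno,
                message=(
                    f"Layer violation: {my_layer} code may not import from "
                    f"{m.group(1)}. Remediation: move the shared logic into "
                    f"{my_layer} or a lower layer and import it from there. "
                    f"Allowed direction: ui -> service -> storage."
                ),
            ))
    return errors
```

Run as part of the merge gate, a check like this enforces the boundary centrally while leaving agents free to structure code however they like within a layer.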
T TheRealAdamG @TheRealAdamG

https://t.co/aaWZ8o44ZW This was a great read. “Harness engineering: leveraging Codex in an agent-first world” https://t.co/LEuUxl0ZZT

A
Aakash Gupta @aakashgupta ·
This is the most underrated Claude Code update in months. Command hooks run shell scripts with your full user permissions. No sandbox. Every developer on your team writes their own scripts, manages their own configs, and any misconfigured hook can delete files or expose secrets. Security teams hate this. HTTP hooks flip that model. Instead of N developers running arbitrary scripts on their local machines, you deploy one server that handles all hook logic centrally. The processing moves from the developer’s terminal to infrastructure you actually control, monitor, and audit. For a 50-person engineering team, that’s the difference between 50 unsandboxed shell scripts running on 50 different machines vs. one endpoint with proper auth, logging, and rate limiting. This is why the tweet mentions enterprise managed settings. Anthropic knows the command hook model doesn’t scale past small teams. The security surface area grows linearly with headcount. HTTP hooks let you put guardrails on the guardrails. And for any company running Claude Code in production, that was the actual bottleneck.
D dickson_tsai @dickson_tsai

In Claude Code, we’ve recently launched HTTP hooks, easier to use and more secure than existing command hooks! You can build a web app (even on localhost) to view CC’s progress, manage its permissions, and more. Then, now that you have a server with your hooks processing logic, you can easily deploy new changes or manage state across your CCs with a DB. How do HTTP hooks work? CC posts the hook event to a URL of your choice and awaits a response. They work wherever hooks are supported, including plugins, custom agents, and enterprise managed settings. Docs: https://t.co/ihQWcpOlGA
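The mechanism described above is simple enough to sketch: Claude Code POSTs a JSON hook event to a URL you choose and waits for a JSON response. Here is a minimal localhost receiver using only the standard library. The event fields (`tool_name`, `tool_input`) and the response shape are assumptions drawn from how command hooks behave; the linked docs are the authority on the real schema.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def decide(event: dict) -> dict:
    """Central hook logic: allow everything except writes to .env files.

    The 'decision'/'reason' response fields are illustrative, not the
    documented API shape.
    """
    tool = event.get("tool_name", "")
    path = event.get("tool_input", {}).get("file_path", "")
    if tool == "Write" and path.endswith(".env"):
        return {"decision": "block", "reason": "Refusing to edit .env files"}
    return {"decision": "allow"}

class HookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the posted hook event, decide, and reply with JSON.
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(decide(event)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def serve(port: int = 8080) -> None:
    # Point the Claude Code hook URL at http://localhost:<port>/
    HTTPServer(("localhost", port), HookHandler).serve_forever()
```

Because all the policy lives in `decide()`, this is the "one endpoint with proper auth, logging, and rate limiting" shape: swap the toy rule for real checks, put the server behind your infrastructure, and every developer's Claude Code instance hits the same audited logic.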

H
Harley Trung @harleytt ·
@koylanai indeed! and they made this as a starter to embrace the approach https://t.co/TUGjmECF0h
D
Dave Morin 🦞 @davemorin ·
This AI research tool @mvanhorn built is really good. I use it every day.
M mvanhorn @mvanhorn

I Built a Research Tool That Changed How I Do Almost Everything

Z
Zac @PerceptualPeak ·
This legitimately opens up so many doors. Wow. Configured correctly, it also makes context injection far more flexible.
D dickson_tsai @dickson_tsai

S
Shane Legg @ShaneLegg ·
RT @rauchg: Google has shipped a CLI for Google Workspace (Drive, Gmail, Calendar, Sheets, Docs, …) Huge! Written in Rust, distributed thr…
D
Damian Player @damianplayer ·
the modern day org chart in the AI era (bookmark this): every seat at the table is an AI agent with its own LLM, memory, browser, tools, and file system. CEO delegates to CFO, CTO, COO, and General Counsel. each one of those spawns their own AI agents. all the way down to engineers. this is truly going AI-native. the companies wiring this up won't need to hire the same way again. what role in this chart do you think agents can’t replace?
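The org chart above is mostly a thought experiment, but the structure it describes, each node with its own memory and tools, delegating down the chart, reduces to a small recursive data model. This is a toy sketch of that idea; the `Agent` class and its methods are invented for illustration and correspond to no specific framework.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    role: str
    tools: list[str]
    memory: list[str] = field(default_factory=list)
    reports: list["Agent"] = field(default_factory=list)

    def spawn(self, role: str, tools: list[str]) -> "Agent":
        """Create a direct report with its own memory and tool set."""
        child = Agent(role=role, tools=tools)
        self.reports.append(child)
        return child

    def delegate(self, task: str) -> list[str]:
        """Record the task in memory, then fan it out down the chart."""
        self.memory.append(f"delegated: {task}")
        results = [f"{self.role} accepted: {task}"]
        for report in self.reports:
            results.extend(report.delegate(task))
        return results

# One branch of the chart: CEO -> CTO -> Engineer.
ceo = Agent("CEO", tools=["browser", "email"])
cto = ceo.spawn("CTO", tools=["browser", "terminal"])
cto.spawn("Engineer", tools=["terminal", "editor"])
trace = ceo.delegate("ship Q3 roadmap")
```

In a real system each `delegate` call would invoke an LLM with that node's tools and memory rather than return a string, but the tree-of-agents shape is the same.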
K karpathy @karpathy

@jeffreyhuber Thanks. I originally had a reply tweet to it that was this image. Which I think will end up looking good too later. I deleted it to not distract things too much but probably should have kept it up ah well here it is. https://t.co/hsLVj1k7e7

M
Mario Zechner @badlogicgames ·
RT @dimamikielewicz: OpenAI published a repo with the code to orchestrate AI agents built primarily with Elixir (96.1%): https://t.co/urE1o…