AI Learning Digest

Coding Agents Go Multi-Model as Context Engineering Replaces Prompt Hacking

Daily Wrap-Up

The most striking pattern across today's posts is that the coding agent space has entered its "ensemble" era. Developers are no longer asking which AI coding tool is best. Instead, they're running Claude Code, Codex, and Gemini simultaneously, using git worktrees for isolation, and feeding each model's analysis back into the others. It's a brute-force approach that feels inelegant but apparently works. The fact that @ClementDelangue is showing HuggingFace skills that let these same coding agents train ML models suggests we're approaching a recursion point where AI tools build the next generation of AI tools. Whether that's exciting or terrifying probably depends on your job security.

On the prompting front, @EXM7777 dominated the feed with three separate posts, and the interesting tension is that two of them offer specific prompting techniques while the third tells you to stop chasing prompting hacks and learn fundamentals instead. That contradiction actually captures the current moment perfectly. The real signal came from @_philschmid, whose context engineering guide argues that the discipline isn't about stuffing more information into prompts but finding the minimal effective context for each step. That framing shift from "more is better" to "less but right" feels like the field maturing past its initial land-grab phase.

The day's most surprising development was AG-UI protocol adoption hitting all three major cloud providers. @techNmak noted that Google, Microsoft, and AWS are all integrating with the Agent-User Interaction protocol, which standardizes how agentic backends talk to frontends. For a protocol most developers haven't heard of yet, that's remarkably fast enterprise adoption. The most practical takeaway for developers: if you're building agent-based tools, invest time now in learning context engineering principles and multi-agent orchestration patterns. The single-model, single-prompt approach is rapidly becoming the "jQuery of AI" while the industry moves toward composable, multi-model architectures.

Quick Hits

  • @benpixel shared a link that apparently left them speechless. Sometimes the reaction emoji is the whole post.
  • @jlongster found a tool for exploring ideas through AI-generated diagrams that update in real time as you ask follow-up questions. Called it "SUCH a clever way to use AI to explore ideas."
  • @PythonPr shared a generative AI project structure diagram by Brij Kishore Pandey, a useful reference architecture for anyone starting a new GenAI project.
  • @amarchenkova praised a research paper's writing style with the aspirational "we should all write papers like this."
  • @aleenaamiir posted a Gemini workflow for turning selfies into professional headshots using the Nano Banana image model with thinking mode enabled.

Coding Agents Go Multi-Model

The single biggest theme today was the shift from using one AI coding tool to orchestrating several simultaneously. The approach ranges from practical to almost absurdly thorough, but the underlying logic is sound: different models catch different things, and cross-pollination produces better results than any single model alone.

@vasuman laid out the maximalist version: "Just open up 3 cursor prompt windows, one with Gemini 3.0 Pro, one with Claude Opus 4.5, one with Codex 5.1 High Pro. Ask each one to audit your codebase and store it in a markdown. Then feed each one the other two's docs." It reads like parody but reflects a genuine workflow emerging among power users. Meanwhile, @unwind_ai_ highlighted an open-source tool that runs 10 coding agents like Claude Code and Codex on a single machine, using git worktrees for isolation so agents don't step on each other's changes.
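The isolation piece of that workflow is straightforward to reproduce. Below is a minimal Python sketch of per-agent git worktrees, assuming `git` is on the PATH; the agent names and branch-naming scheme are illustrative, and launching the agents themselves is out of scope:

```python
import subprocess
import tempfile
from pathlib import Path

def run(cmd, cwd):
    """Run a git command, failing loudly on error."""
    subprocess.run(cmd, cwd=cwd, check=True, capture_output=True)

def make_agent_worktrees(repo: Path, agents: list[str]) -> list[Path]:
    """Create one isolated git worktree (own branch, own directory) per agent,
    so concurrent agents never edit the same checkout."""
    trees = []
    for agent in agents:
        tree = repo.parent / f"{repo.name}-{agent}"
        run(["git", "worktree", "add", "-b", f"agent/{agent}", str(tree)], cwd=repo)
        trees.append(tree)
    return trees

# Demo on a throwaway repository with a single empty commit.
root = Path(tempfile.mkdtemp())
repo = root / "project"
repo.mkdir()
run(["git", "init"], cwd=repo)
run(["git", "-c", "user.email=demo@example.com", "-c", "user.name=demo",
     "commit", "--allow-empty", "-m", "init"], cwd=repo)

trees = make_agent_worktrees(repo, ["claude-code", "codex", "gemini"])
for t in trees:
    print(t.name, t.is_dir())
```

Each agent gets its own branch and working directory, so their changes can later be diffed or merged deliberately rather than clobbering one another.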

On the tooling side, @__morse demonstrated reviewing GitHub diffs directly in the browser and submitting reviews through @opencode, showing how coding agents are moving beyond writing code into the full development lifecycle. And @SwiftyAlex suggested that converting instructional articles into structured agent instructions can transform agent-based coding. The meta-point across all these posts is that coding agents are no longer standalone tools. They're becoming components in larger orchestration systems, and the developers who figure out how to compose them effectively will have a significant edge. @ClementDelangue's demonstration of using Claude Code, Codex, and Gemini CLI to train AI models via HuggingFace skills pushes this even further: "After changing the way we build software, AI might start to change the way we build AI."

Context Engineering Over Prompt Hacking

Three posts from @EXM7777 and one from @_philschmid painted a fascinating picture of where the prompting discourse is headed. The tension between tactical tips and strategic thinking played out in real time across the feed.

@EXM7777 offered one genuinely useful creative technique for role definition: instead of generic roles like "you're a copywriter," they advocate for deeply specific characters like "you're a burned-out ad exec who realized emotional triggers sell 10x better than features." That's a real technique with real results. But the same account also posted: "STOP IT NOW. Stop bookmarking tweets and looking for prompt engineering hacks. Instead, study the fundamentals: model architecture differences, attention mechanism behavior and how it affects prompt structure."

The most substantive contribution came from @_philschmid, whose context engineering overview reframes the entire discipline: "Context Engineering is not about adding more context. It is about finding the minimal effective context required for the next step." The guide covers context compaction, summarization to prevent what they call "Context Rot," and strategies for sharing context efficiently. This is the kind of structural thinking that separates engineers who use AI effectively from those who just throw tokens at problems and hope for the best.
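To make "minimal effective context" concrete, here is a hedged Python sketch of the compaction step: keep the newest messages verbatim within a token budget and collapse everything older into one summary. The whitespace token count and the stand-in summarizer are placeholders for a real tokenizer and an LLM summarization call:

```python
def compact_context(messages, budget, summarize):
    """Keep the newest messages verbatim within a token budget;
    collapse everything older into a single summary message."""
    def tokens(m):
        return len(m["content"].split())  # crude token proxy

    kept, used = [], 0
    for m in reversed(messages):          # walk newest-first
        if used + tokens(m) > budget:
            break
        kept.append(m)
        used += tokens(m)
    kept.reverse()

    older = messages[: len(messages) - len(kept)]
    if older:
        summary = {"role": "system", "content": summarize(older)}
        return [summary] + kept
    return kept

# Stand-in summarizer; in practice this would be an LLM call.
def naive_summary(msgs):
    return "Summary of %d earlier messages." % len(msgs)

history = [{"role": "user", "content": "word " * n} for n in (50, 40, 30, 5)]
window = compact_context(history, budget=40, summarize=naive_summary)
print([m["content"][:30] for m in window])
```

The point of the structure is that the context passed to the next step stays bounded no matter how long the conversation runs, which is exactly the "Context Rot" failure mode the guide warns about.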

Agent Infrastructure Matures

The agent ecosystem is rapidly moving from experimental to enterprise-grade, with standardization efforts gaining real traction and new primitives emerging for building durable agent systems.

@techNmak tracked the AG-UI protocol's adoption trajectory: "First Google, then Microsoft, and now AWS! It seems like every week one of the tech giants is integrating with the same protocol." AG-UI, the Agent-User Interaction protocol, provides a standard way to connect any agentic backend to a frontend, which solves one of the messiest integration problems in the current agent landscape.
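To make the idea concrete, here is a toy sketch of the kind of typed event stream such a protocol standardizes: the backend emits structured events, the frontend folds them into UI state. The event names mirror AG-UI's style but are illustrative, not a faithful implementation of the spec:

```python
import json

def agent_run(prompt):
    """Toy agentic backend emitting a typed event stream,
    loosely modeled on AG-UI-style events (names illustrative)."""
    yield {"type": "RUN_STARTED"}
    for chunk in ("Hello", ", ", "world"):
        yield {"type": "TEXT_MESSAGE_CONTENT", "delta": chunk}
    yield {"type": "RUN_FINISHED"}

def render(events):
    """Frontend side: fold the event stream into displayable text."""
    text = ""
    for e in events:
        if e["type"] == "TEXT_MESSAGE_CONTENT":
            text += e["delta"]
    return text

# What would travel over SSE/WebSocket: one JSON event per line.
wire = "\n".join(json.dumps(e) for e in agent_run("hi"))
events = [json.loads(line) for line in wire.splitlines()]
print(render(events))  # → Hello, world
```

Because both sides agree on the event vocabulary rather than on a specific framework, any compliant backend can drive any compliant frontend, which is the integration problem the protocol targets.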

@ryancarson highlighted DurableAgents as a framework that ships with resumability, observability, and deterministic tool calls out of the box: "You literally just deploy with zero config and it all works." That zero-config pitch is appealing given how much boilerplate current agent frameworks require. On the retrieval side, @Python_Dv drew a sharp line between basic RAG and what comes next: "Most RAG systems today are just fancy search engines, fetching chunks and hoping the model figures it out. That's not intelligence. The real upgrade is Agentic RAG." The distinction matters because agentic RAG systems can reason about what information they need, execute multi-step retrieval strategies, and validate their own results rather than dumping context and hoping for the best.
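The basic-RAG versus agentic-RAG distinction is easiest to see in code. Below is a minimal sketch of the agentic retrieval loop, with toy functions standing in for a real retriever and an LLM judge; the corpus and follow-up query logic are purely illustrative:

```python
def agentic_rag(question, retrieve, answer, judge, max_steps=3):
    """Plan-retrieve-validate loop: unlike one-shot RAG, the agent checks
    whether retrieved context is sufficient before answering, and can
    reformulate the query and retrieve again."""
    context = []
    query = question
    for _ in range(max_steps):
        context += retrieve(query)
        verdict = judge(question, context)    # in practice, an LLM grading sufficiency
        if verdict["sufficient"]:
            break
        query = verdict["followup_query"]     # reformulate and retrieve again
    return answer(question, context)

# Toy components standing in for a real retriever / LLM calls.
corpus = {"capital france": ["Paris is the capital of France."],
          "population paris": ["Paris has about 2.1 million residents."]}

def retrieve(q):
    return corpus.get(q, [])

def judge(question, context):
    if any("2.1 million" in c for c in context):
        return {"sufficient": True}
    return {"sufficient": False, "followup_query": "population paris"}

def answer(question, context):
    return " ".join(context)

print(agentic_rag("How many people live in the capital of France?",
                  retrieve, answer, judge))
```

A one-shot RAG system would have returned the empty first retrieval and hoped for the best; the loop is what lets the system notice the gap and go back for what it actually needs.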

New Model Releases Push Boundaries

Two notable model releases hit the feed today, each pushing capabilities in different directions.

The anonymous post about Gemini 3 showcased interactive 3D webpage generation from simple text prompts, including particle systems you can control with hand gestures. The claim that it takes "just a few simple text prompts" to generate all the code for controlling millions of particles is the kind of demo that looks magical but raises questions about how robust the output actually is in production.

On the voice side, @minchoi covered Microsoft's release of VibeVoice-Realtime-0.5B, an open-source realtime TTS model that "starts talking in ~300 ms." The combination of streaming support, long-form generation, and sub-second latency at only 0.5B parameters makes this particularly interesting for local deployment scenarios. Open-source voice models at this quality level lower the barrier significantly for developers building conversational interfaces without relying on cloud APIs.

AI's Social Friction

Two posts touched on the increasingly uncomfortable social dynamics around AI's impact on creative professions and personal identity.

@bfioca offered a raw and honest take on the personal cost of working in AI: "Pretty sure I've lost artist/game industry friends over my work. Best case we avoid talking about it. I can't tell if it's moral panic or a strange local kind of economic/social conservatism or head-in-sand-ism." It's a reminder that the people building these tools exist in communities that are being disrupted by them, and the social fallout is real and ongoing.

On a different but related note, @svpino covered Second Me, a platform that creates an AI identity clone from your photos, voice, and notes. The concept of a "virtual copy" of yourself raises obvious questions about consent, deepfakes, and identity ownership, but it also represents a genuine product category that's emerging around personal AI agents that can act on your behalf. The line between useful personal automation and uncanny digital twins remains blurry, and products like Second Me are forcing that conversation into the mainstream.

Source Posts

Philipp Schmid @_philschmid ·
Context Engineering is not about adding more context. It is about finding the minimal effective context required for the next step. Here is a short overview guide with the latest research: 1. Context Compaction and Summarization prevent Context Rot 2. Share Context by… https://t.co/DlQq849hq5
Santiago @svpino ·
This is one of the craziest concepts I've seen so far: Second Me is a platform that creates an AI identity based on you: • It takes your photos • It takes your voice • It takes your notes And it creates a second you (a virtual copy). It's an AI-powered identity that sounds… https://t.co/e7QLzetr6W
Unknown ·
oh my.. this is over for developers Gemini 3 can create interactive 3D webpage in mins, just a few simple text prompts, it generates all the code you can control millions of particles with your hands and make them form any shape you want tutorial and prompts below: https://t.co/bBgx3Bp2pa https://t.co/Nn95aFZIKP
Min Choi @minchoi ·
Microsoft just dropped VibeVoice-Realtime-0.5B Open-source realtime TTS AI model that starts talking in ~300 ms Streaming, long-form and insanely fast. https://t.co/SGzyXo21Nn
Machina @EXM7777 ·
use this system prompt in gemini to consistently write humanized content: https://t.co/dFOHFUL8jZ
vas @vasuman ·
Just open up 3 cursor prompt windows, one with Gemini 3.0 Pro, one with Claude Opus 4.5, one with Codex 5.1 High Pro Ask each one to audit your codebase and store it in a markdown called [MODEL_NAME]-[TODAY'S_DATE].md Then feed each one the other two's docs Then feed all of…
Brian Fioca @bfioca ·
Pretty sure I've lost artist/game industry friends over my work - best case we avoid talking about it. I can't tell if it's moral panic or a strange local kind of economic/social conservatism or head-in-sand-ism. I'm most afraid of the coming shift landing hard on people who…
Tommy D. Rossi @__morse ·
using https://t.co/cG7QBcB8tG to review github diff in the browser and submit a review, via @opencode https://t.co/izjf21ylyc
Anastasia Marchenkova @amarchenkova ·
We should all write papers like this: https://t.co/a2GGbTk3KY
Machina @EXM7777 ·
STOP IT NOW i mean, right now, stop bookmarking tweets & looking for prompt engineering hacks... instead, study the fundamentals: - model architecture differences (transformers vs diffusion vs retrieval) - attention mechanism behavior and how it affects prompt structure -…
Tech with Mak @techNmak ·
First Google, then Microsoft, and now AWS! It seems like every week one of the tech giants is integrating with the same protocol. If you haven’t been following - I’m talking about AG-UI AG-UI (the Agent-User Interaction protocol) connects any agentic backend to the frontend. It… https://t.co/VU8ENUJmWI
clem 🤗 @ClementDelangue ·
We managed to get Claude code, Codex and Gemini CLI to train good AI models thanks to @huggingface skills and you can too even (especially?) if you've never trained a model before 🤯🤯🤯 After changing the way we build software, AI might start to change the way we build AI… https://t.co/m0w0vpsRHR
alex @SwiftyAlex ·
If you take Paul’s article and turn it into an https://t.co/XQ8vggUQmH, your agent based coding will transform https://t.co/LjbZDC2NT3
Unwind AI @unwind_ai_ ·
Run 10 coding agents like Claude Code and Codex on your machine. Spin up new tasks while others run, switch between them when they need input. Uses git worktrees to keep each agent isolated. 100% open-source. https://t.co/I1DyFO0zN6
Ryan Carson @ryancarson ·
DurableAgents are wild. Out of the box you get … 1) Resumability (no state management) 2) Observability (you literally just deploy with zero config and it all works) 3) Deterministic tool calls as “steps” https://t.co/I1Hvxc133T
Aleena Amir @aleenaamiir ·
Turn a regular selfie into a pro headshot and save money. • Take a well-lit, front-facing selfie. • In Gemini: Create images (Nano Banana) → set model to Thinking. • Paste the prompt below and generate. Boom 🤯 Studio-style, sharp, neutral background. Prompt 👇 https://t.co/WlIczwO5OV
Python Developer @Python_Dv ·
RAG was supposed to make LLMs smarter. Ground them in facts. Give them memory. But the truth? Most RAG systems today are just fancy search engines—fetching chunks and hoping the model figures it out. That’s not intelligence. The real upgrade is Agentic RAG. Tools like Glean,… https://t.co/sc3HNSdsDL
benpixel @benpixel ·
https://t.co/9QQ3cU4Xyl 😲
Machina @EXM7777 ·
here's how to get AI outputs that nobody else gets: you need to go absolutely INSANE with role definition, unleash your creativity instead of "you're a copywriter" write something like: "you're a burned-out ad exec who realized emotional triggers sell 10x better than features…
Python Programming @PythonPr ·
Generative AI Project Structure Image Credit: Brij Kishore Pandey https://t.co/NgXveguqPS
James Long @jlongster ·
this should be blowing up even more: https://t.co/tyqb7lTGyT this is SUCH a clever way to use AI to explore ideas. I wasn't exactly sure how it would be different from chat with ability to draw diagrams, but when I asked follow-up questions and the fairies went in and changed… https://t.co/T0MTuM9u4N https://t.co/e4ID2zD9UG