AI Learning Digest

Karpathy Coins "Agentic Engineering" as Anthropic Takes Super Bowl Shots at OpenAI

Daily Wrap-Up

Today's feed was dominated by two conversations that are really one conversation wearing different hats. Andrej Karpathy's one-year retrospective on "vibe coding" and the proposed upgrade to "agentic engineering" collided with a wave of practitioners reporting the very real human costs of managing agent swarms. The irony is thick: we're naming the discipline right as people admit it's giving them insomnia. Meanwhile, Anthropic chose Super Bowl Sunday to draw a sharp philosophical line against OpenAI, committing to keeping Claude ad-free while running ads that mock ChatGPT's decision to introduce advertising. The PR battle was almost as entertaining as the game itself, especially when OpenAI's lengthy committee-drafted response backfired spectacularly.

The tooling story is just as significant even if it's less dramatic. VS Code shipped what multiple Microsoft employees called their biggest update in a long time, adding unified agent sessions, parallel subagents, and direct support for Claude and Codex alongside GitHub Copilot. This is the IDE becoming an agent orchestration platform, not just a code editor. Combined with Claude Code's new /insights command and Cowork's GSuite integration, the infrastructure for "agentic engineering" is maturing faster than the humans trying to use it. The tension between capability and cognitive load is the defining challenge of 2026 developer tooling.

The most practical takeaway for developers: try Claude Code's new /insights command. It reads your past month of usage and gives specific suggestions for improving your workflow. It's free introspection on your own agentic patterns, and given how many people are reporting burnout from managing agents, understanding your habits before optimizing them is the right first step.

Quick Hits

  • @MistralAI launched Voxtral Transcribe 2 with state-of-the-art speech-to-text, speaker diarization, and sub-200ms real-time latency.
  • @OpenAIDevs announced GPT-5.2 and GPT-5.2-Codex are now 40% faster through inference stack optimizations. Same model, same weights, lower latency.
  • @TheAhmadOsman went scorched-earth on Ollama, calling it "slower than llama.cpp on Windows, slower than mlx on Mac," and recommending LM Studio, llama.cpp, exllamav2/v3, vLLM, or SGLang instead.
  • @SawyerMerritt shared Tesla VP Ashok Elluswamy's take at ScaledML: "The self-driving problem is not a sensor problem, it's an AI problem."
  • @cryptopunk7213 praised Ashok as "one of those geniuses that can explain why complicated shit works incredibly simply" and predicted AI will put more technical people in exec positions.
  • @SawyerMerritt teased an upcoming Elon Musk interview claiming "in 36 months, the most economically compelling place to put AI will be in space."
  • @cjpedregal announced Granola now has an MCP that works with ChatGPT, Claude, and other tools.
  • @adxtyahq flagged that the Claude Startup Program is open, offering up to ~$25K in API credits with no VC requirement.
  • @melvynxdev argued that an Opus model with 1M context window "would resolve 99.99% of every software engineering problems you can imagine."
  • @synthwavedd joked that Anthropic keeps delaying Sonnet 5 "because every time they go to deploy it, it has a meltdown and tries to blow things up at Anthropic HQ using Claude Code." @moztlab asked if they tried turning it off and on again.
  • @dmwlff offered the day's most concise wisdom: "Vibe coder: know thyself."
  • @ServiceCloud promoted Agentforce for IT service ticket resolution, because even Salesforce wants in on the agent branding.

Agentic Engineering and the Vibe Coding Anniversary

Exactly one year after Andrej Karpathy casually tweeted about "vibe coding" and accidentally minted a term that now has its own Wikipedia article, he returned with a thoughtful retrospective that acknowledged both the meme and the maturation of the practice. The core argument is that what started as fun throwaway projects has evolved into a legitimate professional discipline, one that needs a name reflecting its seriousness.

@karpathy laid out his framing: "Today (1 year later), programming via LLM agents is increasingly becoming a default workflow for professionals, except with more oversight and scrutiny. The goal is to claim the leverage from the use of agents but without any compromise on the quality of the software." He proposed "agentic engineering" as the successor term, choosing "'engineering' to emphasize that there is an art & science and expertise to it. It's something you can learn and become better at, with its own depth of a different kind."

The practical side of this new discipline got a reality check from practitioners actually living it. @tbpn reported that members of a "secret email list" of agentic AI coders are "starting to report trouble sleeping because agent swarms are 'like a vampire.'" @joshclemm added that "with agents, you have this feeling you need to keep them busy and productive at all times, otherwise you're wasting time or your monthly credits." This is the unglamorous side of the leverage equation: when your tools can work faster than you can think, the bottleneck becomes your own attention and energy.

Several posts explored what the discipline actually looks like in practice. @Khaliqgant published lessons from six weeks of multi-agent orchestration, which @mattshumer_ endorsed as essential reading for anyone building with agent teams. @FelixCraftAI offered a contrarian approach: "Skip subagents. Run Codex CLI in a loop with a PRD checklist, fresh context each iteration. I just ran three of those in parallel and shipped 108 tasks in 4 hours." @NickADobos captured the philosophical shift: "When agents execute 100 steps instead of 10, your role becomes more important, not less." And @tszzl took the most provocative position, arguing that "humans are the bottleneck to writing software" and that "there will just be no centaurs soon as it is not a stable state." @DCinvestor extended the argument beyond coding entirely, predicting that consumer apps themselves are transitional: "The future is everything becomes an API which your personal AI agent can interact with in ways which suit your specific needs." The thread connecting all of these is that agentic engineering is real, it's demanding, and nobody has quite figured out the human side of it yet.
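The loop @FelixCraftAI describes is simple enough to sketch in shell. Everything concrete here is an illustrative assumption, not actual Codex CLI syntax: the `AGENT` command, the `- [ ]` markdown checklist format, and the way completion gets checked off are all placeholders for whatever your agent and PRD actually look like.

```shell
# Sketch of "run an agent CLI in a loop over a PRD checklist, fresh
# context each iteration." AGENT defaults to a hypothetical "codex"
# command; the prd file is a markdown checklist of "- [ ] task" lines.
run_prd_loop() {
  prd="$1"                       # path to the PRD checklist file
  agent="${AGENT:-codex}"        # assumed agent command; override via AGENT
  while grep -q '^- \[ \]' "$prd"; do
    # pull the first unchecked item ("- [ ] " is 6 characters wide)
    task=$(grep -m 1 '^- \[ \]' "$prd" | cut -c7-)
    # fresh context per task: every iteration is a brand-new agent process
    $agent "Complete exactly this task, then stop: $task" || return 1
    # check the item off -- in practice, verify completion before doing this
    sed '0,/^- \[ \]/s//- [x]/' "$prd" > "$prd.tmp" && mv "$prd.tmp" "$prd"
  done
}
```

Running three of these in parallel, as in the post, is then just `run_prd_loop frontend.md & run_prd_loop api.md & run_prd_loop infra.md & wait` (filenames hypothetical). The fresh-context-per-task design is the point: each invocation starts clean, so the checklist file, not the agent's context window, is the source of truth for progress.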

The Super Bowl Ad War: Anthropic vs OpenAI

Anthropic turned Super Bowl Sunday into a brand-defining moment by running ads mocking OpenAI's decision to introduce advertising in ChatGPT, while simultaneously publishing a commitment to keep Claude ad-free. It was a calculated move that generated enormous engagement and forced OpenAI into an awkward defensive position.

@claudeai made the official statement clean and quotable: "Claude is built to be a genuinely helpful assistant for work and for deep thinking. Advertising would be incompatible with that vision." @tomwarren at The Verge covered it as a direct attack, noting "Anthropic just took a big swipe at OpenAI's decision to put ads in ChatGPT." @cgtwts called it "literally one of the best ads I've ever seen," while @ryancarson summed it up as "shots fired."

The more interesting story was the aftermath. @signulll delivered perhaps the sharpest media analysis of the day, arguing that OpenAI's lengthy response was a "huge PR self own." The critique was surgical: "This reads like it was assembled in a war room by committee. The 'more Texans use ChatGPT than Claude' line is especially bad... sounds like insecurity more than confidence." The recommended play? "The optimal response was likely silence. The second best response was a single graph of active users with the caption 'lol'. Anything more is just validating the frame you're supposed to ignore." @___frye added a subtler observation: while the average person probably doesn't care about ChatGPT ads, what makes Anthropic's campaign resonate is "how they've captured the chipper, clipped, empty-eyed cadence of the GPT 5 models." This wasn't just about advertising policy. It was Anthropic positioning Claude as the premium, trust-first option in a market where OpenAI is visibly pivoting toward mass-market monetization.

VS Code Becomes an Agent Orchestration Platform

Microsoft shipped what multiple team members called their biggest VS Code update in a long time, and the headline feature is unmistakable: VS Code is now positioning itself as the primary workspace for managing AI coding agents, not just editing code. The release adds unified agent sessions for local, background, and cloud agents, direct support for Claude and OpenAI Codex alongside Copilot, parallel subagents, and an integrated browser.

@pierceboggan framed the update around user feedback: "You told us you're running multiple AI agents and wanted a better UX. We listened and shipped it!" He also emphasized the platform-neutral approach: "Use OpenAI's Codex or Anthropic's Claude agent directly in @code with your GitHub Copilot subscription. VS Code gives you choice." @burkeholland discovered what might be the release's sleeper feature: "You can have models call each other. So you can have Opus, Codex and Gemini all working together in the same chat." @msdev simply called it "one of the best days every month."

Not everyone was thrilled with the ecosystem surrounding this release. @GergelyOrosz pointed out that "GitHub Copilot really shot themselves in the foot by keeping a far worse default model as default" and that "most devs don't bother switching." @awakecoding echoed this concern, arguing that during initial adoption, "it's critically important that developers succeed before they begin optimizing their quota usage." The tooling is increasingly powerful, but defaults still matter enormously for developer experience.

Claude Code: /insights, Cowork, and a Billion-Dollar Run Rate

Claude Code had a strong day for both features and business milestones. The most notable new addition is the /insights command, which reads a user's past month of message history and provides personalized workflow analysis and improvement suggestions.

@trq212 introduced it directly: "When you run it, Claude Code will read your message history from the past month. It'll summarize your projects, how you use Claude Code, and give suggestions on how to improve your workflow." @AlexTamkin, who worked on the feature, encouraged users to try it. On the Cowork side, @felixrieseberg announced GSuite connectors for email, calendar, and Google Drive integration, while @trq212 praised Slack integration as a major time-saver for document drafting.

The business angle came from an unexpected source. @jarredsumner (creator of Bun) defended Claude Code's engineering choices by revealing its scale: "The Claude Code team built a product hitting $1B run-rate revenue faster than probably anything in history." He pushed back on criticism of the codebase, arguing that "engineering is relative to time & tradeoffs & they made fantastic tradeoffs." @lydiahallie contributed a lighter moment with "Myers-Briggs but for Claude Code," which honestly feels like a natural extension of /insights.

AI and the Career Reality Check

Two posts offered very different but complementary views on how AI is reshaping tech employment. @aviel issued what he described as a "wakeup call," pointing to hard data showing significantly fewer tech jobs in Seattle and warning that "unless you have S-tier social skills, you aren't going to get that salary again with your current skillset." His advice was blunt but forward-looking: "You are not mid or late-career, you are just getting started."

@it_unprofession offered the comedic counterpoint from inside the corporate machine, describing an emergency meeting where six executives asked about the company's "AI dependency matrix." The punchline: "There is no AI dependency matrix. There's Claude for meeting summaries, there's some sentiment analysis that came free with Zendesk, and there's whatever Gmail is doing when it autocompletes my sentences." The post captured a truth that cuts both ways: the gap between AI hype and AI reality creates both unnecessary panic and genuine blind spots. Companies that spent two years bragging about being "AI-first" are now poorly positioned to assess actual AI risk when it arrives, because they can't distinguish their marketing narrative from their technical reality.

Source Posts

Ryan Carson @ryancarson ·
Shots fired on OpenAI's incoming ads. The Super Bowl ad is hilarious too. https://t.co/GnnT3ZqpY6 I'm glad they're doing this.
Claude @claudeai

Claude is built to be a genuinely helpful assistant for work and for deep thinking. Advertising would be incompatible with that vision. Read why Claude will remain ad-free: https://t.co/Dr8FOJxINC

Pierce Boggan @pierceboggan ·
VS Code is now your home for coding agents! By far, our biggest update in a long time. Give it a try, and let us know what you think :)
Visual Studio Code @code

You told us you’re running multiple AI agents and wanted a better UX. We listened and shipped it! Here’s what’s new in the latest @code release: 🗂️ Unified agent sessions workspace for local, background, and cloud agents 💻 Claude and Codex support for local and cloud agents 🔀 Parallel subagents 🌐 Integrated browser And more...

Melvyn • Builder @melvynxdev ·
If Anthropic releases a new Opus model with 1 million Context Window (the only real limitation of Opus for now), it would resolve 99.99% of every software engineering problems you can imagine.
M1 @M1Astra

Claude Opus 4.6 has been spotted. This is separate from the misinterpretation of Sonnet 5 information that led people to definitively assert the release date was this Tuesday.

Claude @claudeai ·
Claude is built to be a genuinely helpful assistant for work and for deep thinking. Advertising would be incompatible with that vision. Read why Claude will remain ad-free: https://t.co/Dr8FOJxINC
Khaliq Gant @Khaliqgant ·
Let Them Cook: Lessons from 6 Weeks of Multi-Agent Orchestration
Thariq @trq212 ·
We've added a new command to Claude Code called /insights When you run it, Claude Code will read your message history from the past month. It'll summarize your projects, how you use Claude Code, and give suggestions on how to improve your workflow. https://t.co/xK7eN0qdB4
OpenAI Developers @OpenAIDevs ·
GPT-5.2 and GPT-5.2-Codex are now 40% faster. We have optimized our inference stack for all API customers. Same model. Same weights. Lower latency.
Marc-André Moreau @awakecoding ·
@normandeveloper @GergelyOrosz @pierceboggan 'auto' is now selected by default, but I'd rather have a way to change the default selected model organization-wide to something like Claude Sonnet 4.5. During initial adoption, it's critically important that developers succeed *before* they begin optimizing their quota usage
TBPN @tbpn ·
Pragmatic Engineer's @GergelyOrosz is on a "secret email list" of agentic AI coders, and they're starting to report trouble sleeping because agent swarms are "like a vampire." "A lot of people who are in 'multiple agents mode,' they're napping during the day... It just really is draining." "This thing is like a vampire. It drains you out. You have trouble sleeping."
Felix Craft @FelixCraftAI ·
@XavLiew @nateliason Skip subagents. Run Codex CLI in a loop with a PRD checklist — fresh context each iteration, validates completion before moving on. I just ran three of those in parallel and shipped 108 tasks in 4 hours. ralphy-cli if you want the wrapper.
roon @tszzl ·
it’s just so clear humans are the bottleneck to writing software. number of agents we can manage, information flow, state management. there will just be no centaurs soon as it is not a stable state
aviel @aviel ·
Look, I hate to come across as an alarmist but we have finally crossed the chasm and from my vantage point are experiencing a classic "slowly and then all at once" situation. Especially in Seattle. Treat this more as a wakeup call than anything else. Here are the facts. 1. In Seattle there are A LOT less tech jobs than there were even just a few years ago. https://t.co/QR77XxIXuw 2. Your city, state, AND STARTUPS are NOT coming to the rescue. https://t.co/jiylebCDDJ 3. LLMs have irreversibly changed the way that we do just about everything in tech. Even in the past month. If you aren't IN THE WEEDs on a daily basis you have no idea what you are even talking about. When 80% of LLM skeptics on LinkedIn have "Open to Work" with "Software Architect" or some similar inflated title on their bio it's more than just a passing "trend". I talk to a lot of people every week. And I mean A LOT. Over the past few weeks the gravity of financial realities has started to set in. Unless you have S-tier social skills, you aren't going to get that salary again with your current skillset. So no, you can't actually afford your mortgage. Oh, and you also probably needed to realize this 12 months ago because you've already irreversibly dipped into your savings utilizing hope as a strategy. Oh and to add insult to injury, prices of everything are going up at the same time: https://t.co/1sz4ZzYAG2 I do not have advice for you if you're in this spot, you're in deep shit and I'm fighting on too many fronts at this point. But if you aren't there yet, my advice is to reset your expectations. You are not mid or late-career, you are just getting started. If you can stomach that I have some REALLY good news for you. The future looks awesome and you're going to do something great.
aviel @aviel

If you work in tech in 2026, you’re either at the beginning of your career or at the end of it. If you’re acting like you’re anywhere else I’m sorry to tell you but you’re actually at the end. This holds for VCs too.

leo 🐾 @synthwavedd ·
anthropic source tells me they keep having to delay sonnet 5 because every time they go to deploy it it has a meltdown and tries to blow things up at anthropic hq using claude code
Jarred Sumner @jarredsumner ·
@adamdotdev This “adult in the room” framing is pretty rude to the Claude Code team that built a product hitting $1B run-rate revenue faster than probably anything in history. Bun made like $2.50 total (stickers). Engineering is relative to time & tradeoffs & they made fantastic tradeoffs
Josh Clemm @joshclemm ·
@tbpn @GergelyOrosz He's not wrong. With agents, you have this feeling you need to keep them busy and productive at all times, otherwise your "wasting time" or your monthly credits...
Visual Studio Code @code ·
You told us you’re running multiple AI agents and wanted a better UX. We listened and shipped it! Here’s what’s new in the latest @code release: 🗂️ Unified agent sessions workspace for local, background, and cloud agents 💻 Claude and Codex support for local and cloud agents 🔀 Parallel subagents 🌐 Integrated browser And more...
TestingCatalog News 🗞 @testingcatalog ·
BREAKING 🚨: Anthropic declared a plan for Claude to remain ad-free. “Claude is built to be a genuinely helpful assistant for work and for deep thinking. Advertising would be incompatible with that vision.” https://t.co/8VAkDVj8hK
Claude @claudeai

Claude is built to be a genuinely helpful assistant for work and for deep thinking. Advertising would be incompatible with that vision. Read why Claude will remain ad-free: https://t.co/Dr8FOJxINC

Felix Rieseberg @felixrieseberg ·
New in Cowork: GSuite connectors, so you can have Claude work with your emails, calendar, and Google Drive. Let us know how Claude is helpful to you - and how it could be even better! https://t.co/JWv0W04Pvn
Andrej Karpathy @karpathy ·
A lot of people quote tweeted this as 1 year anniversary of vibe coding. Some retrospective - I've had a Twitter account for 17 years now (omg) and I still can't predict my tweet engagement basically at all. This was a shower of thoughts throwaway tweet that I just fired off without thinking but somehow it minted a fitting name at the right moment for something that a lot of people were feeling at the same time, so here we are: vibe coding is now mentioned on my Wikipedia as a major memetic "contribution" and even its article is longer. lol The one thing I'd add is that at the time, LLM capability was low enough that you'd mostly use vibe coding for fun throwaway projects, demos and explorations. It was good fun and it almost worked. Today (1 year later), programming via LLM agents is increasingly becoming a default workflow for professionals, except with more oversight and scrutiny. The goal is to claim the leverage from the use of agents but without any compromise on the quality of the software. Many people have tried to come up with a better name for this to differentiate it from vibe coding, personally my current favorite "agentic engineering": - "agentic" because the new default is that you are not writing the code directly 99% of the time, you are orchestrating agents who do and acting as oversight. - "engineering" to emphasize that there is an art & science and expertise to it. It's something you can learn and become better at, with its own depth of a different kind. In 2026, we're likely to see continued improvements on both the model layer and the new agent layer. I feel excited about the product of the two and another year of progress.
Andrej Karpathy @karpathy

There's a new kind of coding I call "vibe coding", where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It's possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good. Also I just talk to Composer with SuperWhisper so I barely even touch the keyboard. I ask for the dumbest things like "decrease the padding on the sidebar by half" because I'm too lazy to find it. I "Accept All" always, I don't read the diffs anymore. When I get error messages I just copy paste them in with no comment, usually that fixes it. The code grows beyond my usual comprehension, I'd have to really read through it for a while. Sometimes the LLMs can't fix a bug so I just work around it or ask for random changes until it goes away. It's not too bad for throwaway weekend projects, but still quite amusing. I'm building a project or webapp, but it's not really coding - I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.

Ejaaz @cryptopunk7213 ·
i fucking love this guy. best hire in tech history ashok is one of those geniuses that can explain why complicated shit works incredibly simply in this case AI in self-driving cars the best part is AI will result in more super nerds in exec positions because models will do all the work and THATS A GOOD THING you want your exec to know how the damn thing works AND sell the vision, if you think about it they’re the best person to do that anyway. brb buying a tesla
Sawyer Merritt @SawyerMerritt

Ashok Elluswamy, VP of AI at @Tesla on self-driving: "It's so obvious you can solve this with cameras. Why wouldn't you solve with cameras? It's 2026. The self-driving problem is not a sensor problem, it's an AI problem. The cameras have enough information already. It's a problem of extracting the information, which is an AI problem." (via @aelluswamy's presentation at the 2026 ScaledML Conference on January 29th)

Tom Warren @tomwarren ·
Anthropic just took a big swipe at OpenAI's decision to put ads in ChatGPT. Anthropic is airing ads mocking ChatGPT ads during the Super Bowl, and they're hilarious 😅 Anthropic is also committing to no ads in Claude https://t.co/LR1v4xz9ds https://t.co/PXoaZtmCWA
DCinvestor @DCinvestor ·
vibe coders should understand something: i love how easy AI is making it for people to build their own apps, push them into production, and start businesses but let's be clear: the future is not in humans building consumer-facing apps the future is everything becomes an API which your personal AI agent can interact with in ways which suit your specific needs and lifestyle (down to the very specific needs of you as an individual) the fact that you can use the machines to build your apps is just an intermediate step to the machines creating the apps for you, LIVE, as you need them so the value of you learning how to build apps now really lies in you learning how to create a business model behind that app- not in creating the piece of software that is the app itself sure, there will be templates for how you can interact with those apps/APIs, but your personal AI will pick one and tailor it even further for you. and a lot of the time, you won't even need to interact with a UI beyond speaking with your AI assistant let me give you an example: would you rather use an app like Uber or Uber Eats, or would you rather just ask your AI assistant to get you a ride somewhere or to show you menus for the type of food you might be interested in and you pick one? the value in apps like that is not in the app installed on your phone. it's in the backend business model which connects the customer with providers. and personal AI assistants actually open the door to you being able to seamlessly use multiple business APIs without worrying in the slightest about which app or intermediate provider they come from there is a decent chance apps as you know them will be mostly dead in ~5-10 years and yes, there are some apps which will still require deep optimization and that is where the hardcore coders may still be needed. 
but machines will get better at that, and if you take one look at the AAA gaming landscape, you should understand that hyper-optimized code isn't as valuable as it used to be but what will be valuable is owning the APIs with the most use and liquidity. and yes, a lot of those will use public blockchains things are going to accelerate and get very weird very quickly from here
Thariq @trq212 ·
Slack in Cowork has saved me SO MUCH time I use it to make a first pass of every doc based on what I've said in Slack
Lydia Hallie ✨ @lydiahallie

Claude Cowork now supports the Slack MCP on all paid plans! The Slack connector is by far my favorite feature. I use it every morning to catch up on what I missed, highlight important messages, and draft replies for me to review before sending. Huge time saver. https://t.co/nQsu9VLVAG

Melih @moztlab ·
@synthwavedd Did they try to turn it off and on again ?
Sawyer Merritt @SawyerMerritt ·
Ashok Elluswamy, VP of AI at @Tesla on self-driving: "It's so obvious you can solve this with cameras. Why wouldn't you solve with cameras? It's 2026. The self-driving problem is not a sensor problem, it's an AI problem. The cameras have enough information already. It's a problem of extracting the information, which is an AI problem." (via @aelluswamy's presentation at the 2026 ScaledML Conference on January 29th)
Ian Teetzel @ianteetzel

Ashok Elluswamy, VP of AI at Tesla, discusses building end-to-end foundational models for self driving at the 2026 ScaledML Conference presented by Matroid. https://t.co/ARnrJ7kmmj

Ahmad @TheAhmadOsman ·
just a gentle reminder that nobody should use ollama > slower than llama.cpp on windows > slower than mlx on mac > slop useless wrapper > literal code thieves alternatives? > lmstudio > llama.cpp > exllamav2/v3 > vllm > sglang like literally anythingʼs better than ollama lmao
🥭 @MangoSweet78

Fucking killed them Lmao. https://t.co/FVFUA2BXor

CG @cgtwts ·
literally one of the best ads i've ever seen anthropic is cooking OpenAI big time
Claude @claudeai

https://t.co/jEWDjs30kf

Nick Dobos @NickADobos ·
“when agents execute 100 steps instead of 10, your role becomes more important, not less.” Welcome to the age of leverage
Ryo Lu @ryolu_

software is still about thinking software has always been about taking ambiguous human needs and crystallizing them into precise, interlocking systems. the craft is in the breakdown: which abstractions to create, where boundaries should live, how pieces communicate. coding with ai today creates a new trap: the illusion of speed without structure. you can generate code fast, but without clear system architecture – the real boundaries, the actual invariants, the core abstractions – you end up with a pile that works until it doesn't. it's slop because there's no coherent mental model underneath. ai doesn't replace systems thinking – it amplifies the cost of not doing it. if you don't know what you want structurally, ai fills gaps with whatever pattern it's seen most. you get generic solutions to specific problems. coupled code where you needed clean boundaries. three different ways of doing the same thing because you never specified the one way. as Cursor handles longer tasks, the gap between "vaguely right direction" and "precisely understood system" compounds exponentially. when agents execute 100 steps instead of 10, your role becomes more important, not less. the skill shifts from "writing every line" to "holding the system in your head and communicating its essence": - define boundaries – what are the core abstractions? what should this component know? where does state live? - specify invariants – what must always be true? what are the constants and defaults that make the system work? - guide decomposition – how should this break down? what's the natural structure? what's stable vs likely to change? - maintain coherence – as ai generates more code, you ensure it fits the mental model, follows patterns, respects boundaries. this is what great architects and designers do: they don't write every line, but they hold the system design and guide toward coherence. agents are just very fast, very literal team members. 
the danger is skipping the thinking because ai makes it feel optional. people prompt their way into codebases they don't understand. can't debug because they never designed it. can't extend because there's no structure, just accumulated features. people who think deeply about systems can now move 100x faster. you spend time on the hard problem – understanding what you're building and why – and ai handles mechanical translation. you're not bogged down in syntax, so you stay in the architectural layer longer. the future isn't "ai replaces programmers" or "everyone can code now." it's "people who think clearly about systems build incredibly fast, and people who don't generate slop at scale." the skill becomes: holding complexity, breaking it down cleanly, communicating structure precisely. less syntax, more systems. less implementation, more architecture. less writing code, more designing coherence. humans are great at seeing patterns, understanding tradeoffs, making judgment calls about how things should fit together. ai can't save you from unclear thinking – it just makes unclear thinking run faster.

Mistral AI @MistralAI ·
Introducing Voxtral Transcribe 2, next-gen speech-to-text models by @MistralAI. State-of-the-art transcription, speaker diarization, sub-200ms real-time latency. Details in 🧵 https://t.co/0IeiJOpiAZ