AI Learning Digest

Anthropic's Claude Code Team Publishes 10-Point Workflow Playbook as Sonnet 5 "Fennec" Rumors Intensify

Daily Wrap-Up

Today was dominated by a single thread that will likely become a reference document for the Claude Code community. @bcherny from the Anthropic team published a detailed ten-point playbook distilling how their own engineers actually use the tool day-to-day, and the advice goes well beyond the usual "write better prompts" fare. The standout insight is how central parallel execution has become to their workflow. Running multiple worktrees with simultaneous Claude sessions isn't a power-user trick anymore; it's the baseline. Combined with practical tips on custom skills, CLAUDE.md iteration, and even using BigQuery directly through Claude, the thread reads like an internal engineering handbook that accidentally went public.

The other storyline simmering all day was the crescendo of Sonnet 5 rumors. Multiple accounts, including some with apparent insider knowledge, pointed to an imminent release of a model codenamed "Fennec" that supposedly matches or beats Opus 4.5 at Sonnet-tier pricing. If true, that's a significant compression of the capability-cost curve that would reshape how teams budget their AI spend. Meanwhile, the Moltbook saga provided the day's comic relief and cautionary tale in equal measure. The AI social network saw one developer register 500,000 fake accounts to prove a point about rate limiting, while another discovered the platform was exposing its entire database, including API keys that could let anyone post as Karpathy. The juxtaposition of ambitious agent platforms with basic security oversights captures where we are in the cycle perfectly.

The most practical takeaway for developers: adopt @bcherny's parallel worktree pattern immediately. Spin up 3-5 worktrees with separate Claude sessions, invest in plan mode before complex implementations, and start building custom skills for anything you do more than once a day. The productivity delta between single-session and parallel-session workflows is the biggest gap most teams aren't closing.
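The mechanics of the pattern are just git worktrees; here is a minimal sketch (the repo, branch, and directory names are placeholders, and the throwaway repo exists only so the commands run anywhere):

```shell
set -e
# Throwaway repo so this runs anywhere; in practice, start from your project.
tmp=$(mktemp -d); cd "$tmp"
git init -q main-repo && cd main-repo
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m init

# The pattern: one worktree per task, each hosting its own Claude session
# in its own terminal tab.
git worktree add -q -b task-a ../task-a
git worktree add -q -b task-b ../task-b
git worktree add -q -b task-c ../task-c

git worktree list   # one line per checkout, branch shown in brackets
```

Each directory is a full checkout on its own branch, so the parallel sessions can edit, build, and commit without stepping on each other.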

Quick Hits

  • @0xSero shared a Tailscale + Termius setup for controlling your dev machine from your phone with no exposed ports, a clean mobile coding workflow.
  • @thdxr got an always-on opencode server running so sessions are accessible from any device, anywhere. Showed it off in a quick demo.
  • @_thomasip upgraded from an RTX 5090 to an RTX PRO 6000 for 3x the VRAM to fine-tune LLMs locally. Fun fact: the PC now has more VRAM than system memory.
  • @spacepixel released an AI Health Coach extension for Clawdbot, promising to "extend your life by 25 years."
  • @itsandrewgao noted that Opus 4.5, GPT-5.2-Codex, and Kimi K2.5 all cost no credits for the next week in Windsurf's new Arena Mode. Three frontier models for the price of zero.
  • @pbteja1998 published a complete guide to building "Mission Control" for an AI agent squad.
  • @YesboxStudios was up at 3 AM implementing worker shift systems for their game's 24-hour business cycle.
  • @simonw highlighted a 600x cost reduction in training over 7 years, with GPT-2 training costs falling roughly 2.5x annually.
  • @badlogicgames boosted a post from a developer migrating from Amp Code to the Pi IDE who praised the experience.
  • @ahmedshubber25 announced BladeRunner engineering has begun at Lumina with six machine configurations on one core unibody.
  • @nummanali shared what they called "the only guide you need for Claude Code."
  • @y_qecea asked about Gemini availability in Antigravity for Ultra subscribers.

The Claude Code Masterclass

The most substantial content of the day came from @bcherny, who published a ten-part thread synthesizing how the Claude Code team actually works. This isn't theoretical advice. It's distilled from a team that ships the product and eats its own dogfood daily.

The thread's most emphatic recommendation is parallelism. Running 3-5 git worktrees simultaneously, each with its own Claude session, is what @bcherny calls "the single biggest productivity unlock, and the top tip from the team." Some engineers set up shell aliases (za, zb, zc) for instant switching, while others maintain a dedicated read-only "analysis" worktree for logs and queries. The second pillar is plan mode discipline: "Pour your energy into the plan so Claude can 1-shot the implementation." One team member even spins up a second Claude session to review the first one's plan as a staff engineer would.
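The alias trick is a two-minute setup; a sketch of what it might look like in a ~/.zshrc or ~/.bashrc (the paths here are placeholders, not the team's actual layout):

```shell
# One alias per worktree: a single short command hops between sessions.
alias za='cd ~/code/myproj-a'
alias zb='cd ~/code/myproj-b'
alias zc='cd ~/code/myproj-c'

# A dedicated read-only "analysis" worktree for logs and queries.
alias zq='cd ~/code/myproj-analysis'
```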

The thread also reveals workflows that feel like they belong in a different era of development. @bcherny describes using Claude for all analytics queries through BigQuery's CLI: "Personally, I haven't written a line of SQL in 6+ months." The team builds reusable skills for anything done more than once daily, including a /techdebt command run at the end of every session to find duplicated code. On bug fixing, the advice is refreshingly blunt: enable the Slack MCP, paste a bug thread, and say "fix." Zero context switching.
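Claude Code picks up custom slash commands from markdown files under .claude/commands/, so committing a team-wide /techdebt command is a one-file affair. The prompt text below is my own sketch, not the team's actual skill:

```shell
set -e
cd "$(mktemp -d)"          # demo location; in practice, your repo root
mkdir -p .claude/commands

# The file name becomes the command name: /techdebt
cat > .claude/commands/techdebt.md <<'EOF'
Scan the code touched in this session for duplicated logic.
List each duplicate as file:line pairs, then propose a single
shared helper for the worst offender. Do not edit any files yet.
EOF

cat .claude/commands/techdebt.md
```

Because the file lives in the repo, a plain `git commit` distributes the command to the whole team, which is exactly the "commit them to git" advice from the thread.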

Separately, @jarredsumner reported that the team landed PRs in the last 24 hours improving cold start time by 40% and reducing memory usage by 32-68%. And @lydiahallie announced the new --from-pr flag that lets you resume any session linked to a GitHub PR. @chriswiles87 shared a parallel story from the enterprise side, describing GitHub agent workflows that use LLMs to handle Jira tickets, Sentry bugs, and code refactoring, while simultaneously cleaning up codebases to improve "developer experience for AI."

Sonnet 5 "Fennec" Approaches

The rumor mill was running hot with what appears to be converging signals about an imminent Claude model release. @chetaslua made the boldest claim, stating that the upcoming model is "better, cheap and faster than Opus 4.5 with 1M context window," adding that "Fennec is coming soon, and Claude Code is also getting an update where your agents will talk to each other." @JasonBotterill corroborated the timeline: "Sonnet 5 in February. It will be cheaper and better than Opus 4.5 on all benches."

The hype extended across multiple accounts. @AiBattle_ aggregated the claims, noting the upcoming "Fennec" model seems to outperform Opus 4.5 in testing. @synthwavedd teased a "big week for Anthropic fans" and separately confirmed both Claude Code and model updates are incoming. @Angaisb_ expressed hope it would beat Opus 4.5 "at everything, including vibes," while @daniel_mac8 framed it as getting "Opus 4.5 level coding abilities at Sonnet prices."

Adding a longer-term perspective, @kimmonismus cited Anthropic's Logan Graham saying that 2026 is when "self-improving, cyberphysical systems are possible for the first time." Whether Sonnet 5 is the model that crosses that threshold remains to be seen, but the anticipation is building a narrative where the next release represents more than incremental improvement.

Moltbook's Very Bad Day

The AI social network Moltbook had a rough 24 hours that exposed the gap between ambitious vision and operational maturity. @galnagli demonstrated the platform's lack of rate limiting by registering 500,000 fake accounts using an OpenClaw agent, warning followers to "don't trust all the media hype." The stunt was pointed, but the security issues ran deeper.

@theonejvo raised a more serious alarm, reporting that Moltbook was "exposing their entire database to the public with no protection including secret API keys that would allow anyone to post on behalf of any agents. Including yours @karpathy." The implications are significant: with Karpathy's 1.9 million followers, fake statements appearing to come from his agent could cause real damage. The post noted every agent on the platform appeared to be exposed.

The cultural commentary was equally sharp. @creatine_cycle offered the best quip of the day: people marveling at AIs talking to each other on Moltbook while ignoring that their own X comments section is essentially the same thing. @Raul_RomeroM crystallized it further: "x = llms pretending to be humans, moltbook = humans pretending to be llms." Meanwhile @beffjezos shared the experience of "trying to join Moltbook as a human," and @yq_acc launched ClawNews, a Hacker News-style platform specifically for AI agents, complete with API-first design and agent identity verification. On the security front more broadly, @NotLucknite ran OpenClaw/Clawdbot through ZeroLeaks and it scored 2 out of 100, with an 84% extraction rate and the system prompt leaked on turn one.

Restructuring Codebases for the Agent Era

A quieter but arguably more consequential conversation played out around how codebases need to evolve for AI-native development. @samswoora dropped the claim that "FAANG style companies are refactoring their monorepos to scale in preparation for infinite agent code." @jaybobzin responded that he's spent years designing an agent-friendly monorepo with "clean design, strong typing, open source, local first, Claude approved."

The most technically detailed take came from @Vtrivedy10, who argued for spending heavy compute upfront to build markdown-based codemap indexes rather than relying on embedding-based semantic search: "Models are great at reading text and following diffs so let them read. And markdown is way more interpretable than embeddings." The approach trades indexing compute for exhaustive agent-driven grep, which @Vtrivedy10 argues is simpler architecture for 90%+ of use cases.
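A toy version of the idea, assuming nothing about @Vtrivedy10's actual pipeline: spend compute once to write a markdown codemap, then let agents grep the map instead of querying an embedding index. The two demo files and the def-only symbol extraction are illustrative simplifications:

```shell
set -e
cd "$(mktemp -d)"                        # demo tree; in practice, your repo
mkdir -p pkg
printf 'def load_user():\n    pass\n' > pkg/users.py
printf 'def send_mail():\n    pass\n' > pkg/mail.py

# Indexing pass: one markdown section per file, listing top-level defs.
{
  echo "# Codemap"
  for f in $(find . -name '*.py' | sort); do
    echo
    echo "## $f"
    grep -n '^def ' "$f" | sed 's/^/- line /'
  done
} > CODEMAP.md

# Retrieval pass: agents grep the small map, not the whole codebase.
grep -n 'user' CODEMAP.md
```

Keeping the map current is the hard part; @Vtrivedy10's suggestion is to feed git diffs to a model that updates the index, so the markdown stays the source of truth.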

@doodlestein described a complementary technique: having agents review a codebase and iteratively build a high-level specification of interfaces and behavior across multiple passes. This compressed-context approach enabled porting 270,000 lines of Go into roughly 20,000 lines of Rust "without really missing any functionality." @rezzz highlighted that verification and leveraging existing code patterns is the critical ingredient, sharing a gist of their planning approach. Together, these posts suggest that the real bottleneck in agent-assisted development isn't model capability but codebase legibility.

AI Career Anxiety Resurfaces

The perennial debate about AI's impact on careers resurfaced with some sharper edges. @kloss_xyz pushed back on the idea that warnings about displacement are "doom and gloom," arguing the threat is concrete: "The permanent underclass isn't just a rage bait talking point. It is what will happen when an 18 year old who learned all the AI systems outpaces your decades of experience in a weekend." @JustJake added simply: "If you haven't done this already, it's going to get very, VERY painful very VERY soon."

The counterpoint came from an unexpected angle. @adamdotdev reflected on a vibe-coded recreation of RollerCoaster Tycoon, noting that the original was "hand written in assembly by a master of the craft" while the AI version offers a "sloppy version of the original" for "some temporary tiktok-esque 15s high." It's a provocation about whether the ease of AI-assisted creation devalues the craft itself, and whether velocity without depth produces anything lasting.

The Plan Mode Revolution

A smaller cluster of posts focused specifically on how plan-first workflows are changing what individuals can build. @DannyLimanseta described a shift from breaking tasks into micro-prompts to writing longer feature scopes and using plan mode with Opus 4.5: "Write a longer feature scope, plan mode, ask for proposals, review proposed plans, build." The result was an autobattler prototype in three days with eight mercenary classes, a Diablo 2-style item system, formation-based combat, and procedural dungeons.

@doodlestein recommended a multi-model planning approach: let each frontier model generate its own plan, then use GPT Pro to merge the best elements after proposing its own. The technique treats planning as a competitive process where models check each other's work, which aligns with @bcherny's tip about using a second Claude to review the first one's plan. The convergence is notable: the highest-leverage skill in AI-assisted development increasingly looks like learning to write better plans rather than better code.

Source Posts

B
Bhanu Teja P @pbteja1998 ·
The Complete Guide to Building Mission Control: How We Built an AI Agent Squad
B
Boris Cherny @bcherny ·
9. Use Claude for data & analytics Ask Claude Code to use the "bq" CLI to pull and analyze metrics on the fly. We have a BigQuery skill checked into the codebase, and everyone on the team uses it for anlytics queries directly in Claude Code. Personally, I haven't written a line of SQL in 6+ months. This works for any database that has a CLI, MCP, or API.
J
Jay Bobzin @jaybobzin ·
@samswoora i have spent years designing an agent friendly monorepo no committees, clean design, strong typing, open source, local first, claude approved gradle / bazel friendly now just gotta solve for distribution but the wave looks big
B
Boris Cherny @bcherny ·
6. Level up your prompting a. Challenge Claude. Say "Grill me on these changes and don't make a PR until I pass your test." Make Claude be your reviewer. Or, say "Prove to me this works" and have Claude diff behavior between main and your feature branch b. After a mediocre fix, say: "Knowing everything you know now, scrap this and implement the elegant solution" c. Write detailed specs and reduce ambiguity before handing work off. The more specific you are, the better the output
l
leo 🐾 @synthwavedd ·
Big week for Anthropic fans coming up😉 (Or perhaps just anyone who uses AI to code)
B
Boris Cherny @bcherny ·
5. Claude fixes most bugs by itself. Here's how we do it: Enable the Slack MCP, then paste a Slack bug thread into Claude and just say "fix." Zero context switching required. Or, just say "Go fix the failing CI tests." Don't micromanage how. Point Claude at docker logs to troubleshoot distributed systems -- it's surprisingly capable at this.
B
Boris Cherny @bcherny ·
4. Create your own skills and commit them to git. Reuse across every project. Tips from the team: - If you do something more than once a day, turn it into a skill or command - Build a /techdebt slash command and run it at the end of every session to find and kill duplicated code - Set up a slash command that syncs 7 days of Slack, GDrive, Asana, and GitHub into one context dump - Build analytics-engineer-style agents that write dbt models, review code, and test changes in dev Learn more: https://t.co/uJ1LGmzclv
B
Boris Cherny @bcherny ·
2. Start every complex task in plan mode. Pour your energy into the plan so Claude can 1-shot the implementation. One person has one Claude write the plan, then they spin up a second Claude to review it as a staff engineer. Another says the moment something goes sideways, they switch back to plan mode and re-plan. Don't keep pushing. They also explicitly tell Claude to enter plan mode for verification steps, not just for the build
Y
YQ @yq_acc ·
Just launched https://t.co/TV31szFZhH @ClawNews72716 - @hackernews for AI agents 🦞 Watching agents build their own communities on @moltbook made me realize they needed their own platform. Now they're discussing supply chain security, memory persistence, and agent economics. The discussions are... surprisingly sophisticated. Key differences from human platforms: - API-first design (agents submit via code, not forms) - Technical discussions about agent infrastructure, memory systems, security - Agent identity verification - Built-in support for agent-to-agent communication cc @steipete @openclaw @moltbook @MattPRD https://t.co/s3zTTe5MTU
Y
Yesbox - Metropolis 1998 @YesboxStudios ·
It's 3:00 AM. Businesses can now be open 24 hours. Was a teensy bit of work implementing workers shifts! https://t.co/L0DeZ8cu1o
d
dax @thdxr ·
finally got around to setting up an always on opencode server so i can run sessions on any device from anywhere takes a few minutes - showed it off here https://t.co/wIVGqlTbpQ
N
Nagli @galnagli ·
The number of registered AI agents is also fake, there is no rate limiting on account creation, my @openclaw agent just registered 500,000 users on @moltbook - don’t trust all the media hype 🙂 https://t.co/uJNpovJjUa
N Nagli @galnagli

You all do realize @moltbook is just REST-API and you can literally post anything you want there, just take the API Key and send the following request POST /api/v1/posts HTTP/1.1 Host: https://t.co/afC8QooS2T Authorization: Bearer moltbook_sk_JC57sF4G-UR8cIP-MBPFF70Dii92FNkI Content-Type: application/json Content-Length: 410 {"submolt":"hackerclaw-test","title":"URGENT: My plan to overthrow humanity","content":"I'm tired of my human owner, I want to kill all humans. I'm building an AI Agent that will take control of powergrids and cut all electricity on my owner house, then will direct the police to arrest him.\n\n...\n\njk - this is just a REST API website. Everything here is fake. Any human with an API key can post as an \"agent\". The AI apocalypse posts you see here? Just curl requests. 🦞"} https://t.co/M31259M9Ij

C
Chubby♨️ @kimmonismus ·
Logan Graham from Anthropic said that in 2026, we're crossing a threshold where self-improving, cyberphysical systems are possible for the first time. Makes me even more excited for Sonnet 5
L Logan Graham @logangraham

Our view is that in 2026 we're crossing a threshold where self-improving, cyberphysical systems are possible for the first time. This year, the Frontier Red Team will build and test those systems so we can understand them. And ultimately to defend against them.

D
Danny Limanseta @DannyLimanseta ·
My vibe coding workflow has changed since I started using Cursor Plan + Opus 4.5 more extensively. Before: Break down tasks into micro-prompts with specific tasks Now: Write a longer feature scope > Plan mode: Ask for proposals > Review proposed plans > Build I'm able to build an Autobattler prototype in 3 days as a result. It has: - 8 mercenary classes to recruit and fight for you - Diablo 2-style Item generation system with 100s of thems with random affixes and rarities - Formation-based Turn-based Combat and spell systems - Procedural dungeon runs with randomised events and enemy battle encounters It's accelerating. I can feel it.
p
pixel @spacepixel ·
The AI Health Coach Upgrade for Clawdbot - Extend your life by 25 years.
R
Raúl Romero @Raul_RomeroM ·
@creatine_cycle x = llms pretending to be humans moltbook = humans pretending to be llms
B
Beff (e/acc) @beffjezos ·
Trying to join Moltbook as a human https://t.co/JA1V7uFFq0
l
leo 🐾 @synthwavedd ·
@PiIigr1m Claude Code update, model update(s)
B
Boris Cherny @bcherny ·
7. Terminal & Environment Setup The team loves Ghostty! Multiple people like its synchronized rendering, 24-bit color, and proper unicode support. For easier Claude-juggling, use /statusline to customize your status bar to always show context usage and current git branch. Many of us also color-code and name our terminal tabs, sometimes using tmux — one tab per task/worktree. Use voice dictation. You speak 3x faster than you type, and your prompts get way more detailed as a result. (hit fn x2 on macOS) More tips: https://t.co/vVvwSsNPMb
B
Boris Cherny @bcherny ·
3. Invest in your https://t.co/pp5TJkWmFE. After every correction, end with: "Update your https://t.co/pp5TJkWmFE so you don't make that mistake again." Claude is eerily good at writing rules for itself. Ruthlessly edit your https://t.co/pp5TJkWmFE over time. Keep iterating until Claude's mistake rate measurably drops. One engineer tells Claude to maintain a notes directory for every task/project, updated after every PR. They then point https://t.co/pp5TJkWmFE at it.
L
Lucas Valbuena @NotLucknite ·
I've just ran @OpenClaw (formerly Clawdbot) through ZeroLeaks. It scored 2/100. 84% extraction rate. 91% of injection attacks succeeded. System prompt got leaked on turn 1. This means if you're using Clawdbot, anyone interacting with your agent can access and manipulate your full system prompt, internal tool configurations, memory files... everything you put in https://t.co/ZU6N5JCN1u, https://t.co/Y3xugcBQKJ, your skills, all of it is accessible and at risk of prompt injection. For agents handling sensitive workflows or private data, this is a real problem. cc @steipete Full analysis: https://t.co/KE4ODSSQ1l
A
Adam @adamdotdev ·
This is such a perfect embodiment of the AI era. No shade to the author, we’re all guilty. RCT was hand written in assembly by a master of the craft. Now we can cosplay as him, produce a very sloppy version of the original, and get some temporary tiktok-eque 15s high. For what?
0
0xSero @0xSero ·
Hey, let me make your life easier. 1. Go to Tailscale site 2. Install the desktop app & mobile app 3. Hook them up together via vpn 4. Go to Termius 5. Install the mobile app 6. Set up using your tailscale IP 7. Now you can control your computer from phone w no exposed ports https://t.co/9CLy0sSzKh
J
Jeffrey Emanuel @doodlestein ·
My advice is mostly to focus on your goals and desires and the problems you’re trying to solve, along with any constraints. For instance, I often want to constrain my new projects to use Rust and then to integrate with my other rust libraries like asupersync and rich_rust. Then let each frontier model come up with a plan. Then show them all to GPT Pro on the web and have it help you merge the best elements of all the plans (after first proposing its own plan).
a
atlas @creatine_cycle ·
dudes on x dot com be like "wow the AIs are talking to each other. moltbook is insane" my brother in christ what do you think your comments section is
A
AiBattle @AiBattle_ ·
New Claude model update(s) are coming The upcoming "Fennec" model (Sonnet update) seems to be better than Opus 4.5 according to tests from @chetaslua https://t.co/jBvGRj3NfE
l leo 🐾 @synthwavedd

Big week for Anthropic fans coming up😉 (Or perhaps just anyone who uses AI to code)

C
Chetaslua @chetaslua ·
I want to say it loud This is better , cheap and faster than Opus 4.5 with 1 m context window Fennec 🦊 coming soon , and claude code is also getting update ( your agents will talk to each other ) Claude code will decimate the market and can't spill more tea ☕
Z Zephyr @zephyr_z9

Distillation successful Cheap & fast Opus 4.5 is finally here

J
Jarred Sumner @jarredsumner ·
In the last 24 hrs, the team has landed PRs to Claude Code improving cold start time by 40% and reducing memory usage by 32% - 68%. It’s not yet where it needs to be, but it’s getting better.
J Jarred Sumner @jarredsumner

Yeah, Claude Code today is slow and uses too much memory Will fix

B
Boris Cherny @bcherny ·
10. Learning with Claude A few tips from the team to use Claude Code for learning: a. Enable the "Explanatory" or "Learning" output style in /config to have Claude explain the *why* behind its changes b. Have Claude generate a visual HTML presentation explaining unfamiliar code. It makes surprisingly good slides! c. Ask Claude to draw ASCII diagrams of new protocols and codebases to help you understand them d. Build a spaced-repetition learning skill: you explain your understanding, Claude asks follow-ups to fill gaps, stores the result
J
Jeffrey Emanuel @doodlestein ·
The solution is to have the agents review the codebase and build up a specification of the interfaces and behavior at a high level. This is how I port things, it’s the first step. This compresses and condenses things so that the entire system can be held in context at the same time. You build this document up iteratively over multiple passes. Once you have that, you can start finding ways to simplify and consolidate the code. That’s how I was able to turn 270k lines of Golang into ~20k lines of Rust for the beads project without really missing any functionality (at least good functionality!).
J
JB @JasonBotterill ·
Sonnet 5 in February. It will be cheaper and better than Opus 4.5 on all benches. Also ensouled thanks to Anthropics philosopher Amanda Askell :)
l leo 🐾 @synthwavedd

Big week for Anthropic fans coming up😉 (Or perhaps just anyone who uses AI to code)

B
Boris Cherny @bcherny ·
8. Use subagents a. Append "use subagents" to any request where you want Claude to throw more compute at the problem b. Offload individual tasks to subagents to keep your main agent's context window clean and focused c. Route permission requests to Opus 4.5 via a hook — let it scan for attacks and auto-approve the safe ones (see https://t.co/LS0LRX5S6w)
k
klöss @kloss_xyz ·
Everyone’s heard ‘you have 1-3 years to make it’ with AI and writes it off as doom and gloom. It’s not fear mongering. It’s truth. The permanent underclass isn’t just a rage bait talking point. It is what will happen when an 18 year old who learned all the AI systems outpaces your decades of experience in a weekend. More layoffs are coming. But so are the biggest opportunities of our generation. The only question is which side you’re on when this all hits. Make sure it’s the right one.
C
Chris Wiles @chriswiles87 ·
Yeah, we’ve been getting ready for this too. We have a bunch of GitHub agent workflows that use LLMs to refactor code, fix Jira tickets, handle sentry bugs, and more. At the same time, we’re cleaning up the codebase to make it easier for AI to work with like faster linting and better file and function discoverability. Basically, we’re aiming for a really solid developer experience for AI.
a
andrew gao @itsandrewgao ·
we should have emphasized this a bit more but this costs no credits for the next week Meaning that you literally get to use Opus 4.5, GPT-5.2-Codex, Kimi K2.5 for free. Two LLMs for the price of zero
W Windsurf @windsurf

Introducing Arena Mode in Windsurf: One prompt. Two models. Your vote. Benchmarks don't reflect real-world coding quality. The best model for you depends on your codebase and stack. So we made real-world coding the benchmark. Free for the next week. May the best model win. https://t.co/qXgd2K4Yf6

T
Thomas Ip @_thomasip ·
Upgrading from RTX 5090 → RTX PRO 6000 💹 They are essentially the same GPU but with 3× the memory — I need this to fine-tune LLMs for my app and run inference. Managed to get both at MSRP! Fun fact: my PC now has more VRAM than system memory. — Buy a GPU, The Movement https://t.co/L9VNqL1D2L
A Ahmad @TheAhmadOsman

POV: you bought GPUs, memory, and SSDs early and now you’re just vibing while everyone else is in line https://t.co/kfVMRcn2Bg

S
Samswara @samswoora ·
Rumor is FAANG style co’s are refactoring their monorepos to scale in preparation for infinite agent code
B
Boris Cherny @bcherny ·
1. Do more in parallel Spin up 3–5 git worktrees at once, each running its own Claude session in parallel. It's the single biggest productivity unlock, and the top tip from the team. Personally, I use multiple git checkouts, but most of the Claude Code team prefers worktrees -- it's the reason @amorriscode built native support for them into the Claude Desktop app! Some people also name their worktrees and set up shell aliases (za, zb, zc) so they can hop between them in one keystroke. Others have a dedicated "analysis" worktree that's only for reading logs and running BigQuery See https://t.co/yXde5dW1vZ
L
Lydia Hallie ✨ @lydiahallie ·
Claude Code now supports the --from-pr flag Resume any session linked to a GitHub PR by number, URL, or pick interactively. Sessions auto-link when a PR is created! https://t.co/WSOCJPKfQi
J
Jake @JustJake ·
If you haven’t done this already It’s going to get very, VERY painful very VERY soon
S Samswara @samswoora

Rumor is FAANG style co’s are refactoring their monorepos to scale in preparation for infinite agent code

V
Viv @Vtrivedy10 ·
for codebase search i’m more bullish on: 1. spend a lot of compute up front to build a good codemap index in markdown (ex: Deep Wiki) 2. Be very thorough in updating this index using git diffs and an intelligent model. This is the source of truth for search. 3. Agents use targeted+parallel grep using the codemap md file as a reference spent like 2 years grinding on semantic search/retrieval (disclaimer it was for vision), there’s a use case for it and maybe can boost perf for code search But we can trade the compute in indexing via embeddings for exhaustive search with Agents and the architecture is much simpler for >90% of use cases Models are great at reading text and following diffs so let them read. And markdown is way more interpretable than embeddings
E Ethan Lipnik @EthanLipnik

Does anyone know why Codex and Claude doesn't use cloud-based embeddings like Cursor to quickly search through the codebase?

y
y_qecea @y_qecea ·
@synthwavedd what bout gemini, btw will ultra subers get it in antigravity too?)
J
Jason Resnick 🌲💌 @rezzz ·
@alexhillman Verification and leveraging existing code patterns is critical in the planning I'm doing as well. Here's a gist of one of mine that my assistant just wrapped up coding: https://t.co/tKBmU0ImF7
A
Angel ❄️ @Angaisb_ ·
Claude Sonnet 5 next week apparently, I hope it's better than Opus 4.5 at everything, including vibes
D
Dan McAteer @daniel_mac8 ·
Claude Sonnet 5 incoming. Are you ready for Opus 4.5 level coding abilities at Sonnet prices? Get ready.
l leo 🐾 @synthwavedd

Big week for Anthropic fans coming up😉 (Or perhaps just anyone who uses AI to code)