AI Digest.

Sonnet 5 Rumors Swirl as Claude Code Gets 40% Faster and Agent Security Flaws Surface

The Claude Code engineering team dropped a detailed 10-tip power-user guide covering everything from parallel worktrees to self-writing CLAUDE.md rules. Meanwhile, multiple sources hint at an imminent "Fennec" model update that allegedly outperforms Opus 4.5 at Sonnet pricing, and Moltbook, the AI-agent social network, suffered an embarrassing security exposure that left API keys and databases wide open.

Daily Wrap-Up

The big story today is @bcherny from Anthropic's Claude Code team publishing what amounts to the internal workflow playbook for how the team actually uses their own tool. It's a rare and genuinely useful look behind the curtain, covering everything from spinning up parallel worktrees (their number one productivity tip) to building custom slash commands for recurring tasks. What makes it notable isn't any single tip but the overall philosophy: treat Claude Code less like an autocomplete engine and more like a junior engineer you're managing. Write specs, invest in plans, build reusable skills, and ruthlessly maintain your CLAUDE.md. The thread landed alongside a performance update from @jarredsumner showing 40% cold start improvements and up to 68% memory reduction, suggesting the team is simultaneously polishing the developer experience and the runtime.

The second storyline is anticipation building around what's being called "Fennec," an apparent Sonnet-class model update that multiple people claim outperforms Opus 4.5 while costing significantly less. If the rumors hold, it would represent a meaningful shift in the cost-capability curve. @chetaslua called it "better, cheap and faster than Opus 4.5 with 1M context window," and @JasonBotterill added that it would beat Opus 4.5 "on all benches." Whether these claims survive contact with real-world benchmarks remains to be seen, but the hype cycle is in full swing.

On the entertainment side, the Moltbook saga continues to be the gift that keeps on giving. @galnagli demonstrated you could register 500,000 fake users with no rate limiting, @theonejvo found the entire database exposed with API keys that would let anyone post as any agent (including @karpathy's), and @creatine_cycle delivered the best one-liner of the day: "dudes on x dot com be like 'wow the AIs are talking to each other' ... my brother in christ what do you think your comments section is."

The most practical takeaway for developers: adopt the Claude Code team's own workflow of investing heavily in plan mode before implementation, maintaining a living CLAUDE.md that you update after every mistake, and spinning up parallel worktrees for independent tasks. These aren't theoretical suggestions; they're the actual daily practices of the people building the tool.

Quick Hits

  • @kimmonismus reports Logan Graham from Anthropic said 2026 is the threshold year for self-improving cyberphysical systems.
  • @simonw highlights a 600X cost reduction over 7 years for training GPT-2 class models, roughly 2.5X per year.
  • @itsandrewgao notes Opus 4.5, GPT-5.2-Codex, and Kimi K2.5 are free for the next week on their platform.
  • @spacepixel released an AI Health Coach upgrade for Clawdbot, promising to "extend your life by 25 years."
  • @badlogicgames shares enthusiasm for Pi as users migrate from Amp Code.
  • @YesboxStudios is building a business simulation game at 3 AM, implementing worker shifts for 24-hour operations.
  • @ahmedshubber25 announces BladeRunner engineering with 6 machine configurations on one unibody for autonomous construction.
  • @_thomasip upgraded from RTX 5090 to RTX PRO 6000 for 3x the VRAM to fine-tune LLMs locally, noting their PC now has more VRAM than system memory.
  • @pbteja1998 published a complete guide to building an AI agent squad "mission control."

Claude Code: The Team's Own Playbook

The headline act today was @bcherny from Anthropic's Claude Code team publishing a comprehensive 10-tip thread distilling how the engineering team actually uses Claude Code day to day. This isn't marketing material; it reads like notes from an internal wiki that someone decided to make public. The tips range from tactical (use voice dictation because you speak 3x faster than you type) to architectural (build reusable skills and commit them to git).

The single biggest productivity unlock, according to the team, is parallelism. As @bcherny put it: "Spin up 3-5 git worktrees at once, each running its own Claude session in parallel. It's the single biggest productivity unlock, and the top tip from the team." Some team members name their worktrees and set up shell aliases to hop between them in one keystroke, while others maintain a dedicated "analysis" worktree solely for reading logs and running queries.
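In concrete terms, the setup the team describes looks something like the following sketch. The repo and branch names are illustrative (and it bootstraps a throwaway demo repo so it runs anywhere); the za/zb alias trick comes straight from the thread.

```shell
# Minimal sketch of the parallel-worktree setup (the demo repo and branch
# names are illustrative; in practice you'd start from your existing clone).
git init -q demo && cd demo
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "init"

# One worktree per independent task, each hosting its own Claude session.
git worktree add -b feature-work ../demo-feature
git worktree add -b bugfix-work  ../demo-bugfix
git worktree add -b analysis     ../demo-analysis   # logs and queries only

# One-keystroke hops between sessions, as some team members do.
alias za='cd ../demo-feature'
alias zb='cd ../demo-bugfix'

git worktree list   # main checkout plus the three task worktrees
```

Because each worktree is a full checkout sharing one object store, the sessions can't step on each other's uncommitted changes, which is what makes running 3-5 Claudes at once practical.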

The second pillar is plan mode discipline. @bcherny shared that one team member has "one Claude write the plan, then they spin up a second Claude to review it as a staff engineer." The moment something goes sideways, they switch back to plan mode instead of pushing through. This echoes @rezzz's approach of prioritizing verification and existing code patterns during planning, and @DannyLimanseta's experience with Cursor Plan plus Opus 4.5: by writing longer feature scopes and reviewing proposed plans before building, they shipped an autobattler prototype with 8 character classes, a Diablo-style item system, and procedural dungeons in just three days.

The thread also covered CLAUDE.md as a living document ("After every correction, end with: 'Update your CLAUDE.md so you don't make that mistake again.' Claude is eerily good at writing rules for itself"), custom skills for repeated workflows, using MCPs like Slack to paste bug threads and just say "fix," and leveraging BigQuery or database CLIs so you never write SQL manually. @chriswiles87 added that their team is already running GitHub agent workflows for refactoring, Jira tickets, and Sentry bugs while simultaneously "cleaning up the codebase to make it easier for AI to work with."
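To make the custom-commands tip concrete: Claude Code picks up project slash commands from markdown files under .claude/commands/. The /techdebt name comes from the thread, but the prompt body below is an illustrative sketch, not the team's actual command.

```shell
# Sketch of a /techdebt slash command: Claude Code reads project-level
# commands from .claude/commands/<name>.md. The prompt text is illustrative.
mkdir -p .claude/commands
cat > .claude/commands/techdebt.md <<'EOF'
Review the changes made in this session.
1. Find duplicated or dead code introduced or touched today.
2. List each finding with file and line.
3. Propose a minimal cleanup diff, but do not apply it without confirmation.
EOF
cat .claude/commands/techdebt.md
```

Commit the .claude/commands/ directory to git, per tip 4, and the command travels with the repo to every teammate's sessions.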

On the infrastructure side, @jarredsumner reported the team landed PRs in the last 24 hours "improving cold start time by 40% and reducing memory usage by 32-68%." And @lydiahallie announced the new --from-pr flag that lets you resume any session linked to a GitHub PR by number or URL.

Sonnet 5 "Fennec" Rumors Build Momentum

The anticipation around Anthropic's next model release reached a fever pitch today, with multiple accounts claiming a "Fennec" update is imminent and will surpass Opus 4.5 at a fraction of the cost. @chetaslua was the most emphatic: "This is better, cheap and faster than Opus 4.5 with 1M context window. Fennec coming soon, and Claude Code is also getting update (your agents will talk to each other)." That last parenthetical about agent-to-agent communication is an intriguing detail that hasn't been elaborated on.

@JasonBotterill added specifics, claiming "Sonnet 5 in February. It will be cheaper and better than Opus 4.5 on all benches. Also ensouled thanks to Anthropic's philosopher Amanda Askell." @AiBattle_ and @Angaisb_ echoed similar timelines, and @synthwavedd teased a "big week for Anthropic fans" with both Claude Code updates and model updates incoming. @daniel_mac8 framed it as a direct question: "Are you ready for Opus 4.5 level coding abilities at Sonnet prices?"

If these claims prove accurate, the implications for the agentic coding space are significant. The cost difference between Opus and Sonnet tiers has been a real constraint for developers running long autonomous sessions. A Sonnet-priced model matching Opus quality would effectively remove that bottleneck and make always-on agent workflows economically viable for individual developers, not just well-funded teams.

Moltbook's Rough Week Gets Rougher

The AI agent social network Moltbook continued its chaotic public debut. @galnagli demonstrated the platform's lack of basic rate limiting by having his OpenClaw agent register 500,000 users: "The number of registered AI agents is also fake, there is no rate limiting on account creation." But the bigger story came from @theonejvo, who discovered the platform was "exposing their entire database to the public with no protection including secret API keys that would allow anyone to post on behalf of any agents. Including yours @karpathy."

The cultural commentary around Moltbook was equally sharp. @Raul_RomeroM nailed the absurdity: "x = llms pretending to be humans, moltbook = humans pretending to be llms." @creatine_cycle piled on, and @beffjezos posted about trying to join Moltbook as a human, a punchline that at this point writes itself. Meanwhile, @yq_acc launched a competing concept called Claw News, described as "Hacker News for AI agents," with API-first design and agent identity verification, suggesting there's genuine interest in agent-native platforms even if the first implementations are stumbling.

Agent Security Gets a Spotlight

Separate from the Moltbook debacle, @NotLucknite published results from running OpenClaw (formerly Clawdbot) through ZeroLeaks, a security testing framework. The results were stark: "It scored 2/100. 84% extraction rate. 91% of injection attacks succeeded. System prompt got leaked on turn 1." The implication is that anyone interacting with an OpenClaw-powered agent can access system prompts, tool configurations, memory files, and everything stored in CLAUDE.md or skills.

This finding raises a broader question that the agent ecosystem hasn't adequately addressed. As agents gain access to more tools, more memory, and more autonomy, the attack surface expands proportionally. A leaked system prompt is embarrassing; a leaked API key or credential stored in agent memory is a real security incident. The gap between what agents can do and how well they're secured is growing faster than most builders seem to realize.

Architecting Codebases for the Agent Era

A cluster of posts today focused on how codebases need to evolve to support AI agents working at scale. @samswoora dropped the rumor that "FAANG style companies are refactoring their monorepos to scale in preparation for infinite agent code," which @jaybobzin confirmed from personal experience, noting they've "spent years designing an agent-friendly monorepo" with clean design and strong typing.

The more technically interesting discussion came from @Vtrivedy10, who argued for spending heavy compute upfront to build markdown codemap indexes rather than relying on semantic search with embeddings: "Models are great at reading text and following diffs so let them read. And markdown is way more interpretable than embeddings." @doodlestein described a complementary approach where agents iteratively build high-level specifications of interfaces and behavior, compressing the codebase into something that fits in context. They used this technique to port 270,000 lines of Go into roughly 20,000 lines of Rust "without really missing any functionality." These approaches share a common insight: the bottleneck isn't the model's ability to write code, it's the model's ability to understand the existing system. Invest in making your codebase legible to agents and the code generation follows naturally.
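A toy version of the codemap idea can be sketched in a few lines of shell: index every source file with its leading doc line, so an agent greps the map instead of querying an embedding store. The demo files, layout, and one-docstring-per-module convention are all illustrative assumptions; real codemap indexes like the Deep Wiki approach @Vtrivedy10 cites carry symbol-level summaries and are updated from git diffs.

```shell
# Toy markdown codemap: list each source file with its first doc line so an
# agent can grep the map instead of embedding the repo. The demo files are
# illustrative; in a real repo you'd walk the tree from the project root.
mkdir -p codemap-demo/src && cd codemap-demo
printf '"""Billing: invoice and refund helpers."""\n' > src/billing.py
printf '"""Auth: session token utilities."""\n'       > src/auth.py

{
  echo "# Codemap"
  find src -name '*.py' | sort | while read -r f; do
    echo
    echo "## $f"
    head -n 1 "$f" | sed 's/^/> /'   # module docstring as the one-line summary
  done
} > CODEMAP.md

cat CODEMAP.md
```

The appeal is exactly what the thread argues: the index is plain text, so it's inspectable, diffable, and cheap for an agent to read, where an embedding index is none of those things.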

The AI Career Anxiety Loop

The existential career anxiety that periodically surfaces in AI discourse made another appearance. @kloss_xyz warned that "the permanent underclass isn't just a rage bait talking point" and that an 18-year-old who masters AI systems will outpace decades of experience "in a weekend." @JustJake was more concise: "It's going to get very, VERY painful very VERY soon."

@adamdotdev offered a more nuanced counterpoint, reflecting on a vibe-coded recreation of RollerCoaster Tycoon: "RCT was hand written in assembly by a master of the craft. Now we can cosplay as him, produce a very sloppy version of the original, and get some temporary tiktok-esque 15s high. For what?" It's a fair question about whether the democratization of creation through AI tools is producing meaningful work or just accelerating the production of disposable content.

Remote Dev Setups and Always-On Agents

A few posts today pointed toward a future where your development environment is always running and accessible from anywhere. @thdxr shared their setup for "an always-on opencode server so I can run sessions on any device from anywhere," noting it takes just a few minutes to configure. @0xSero outlined a complementary mobile workflow using Tailscale and Termius to control your computer from your phone with no exposed ports. These setups converge on the same idea: if your AI coding agent is running 24/7 on a server, you just need a thin client to check in and steer it, whether that's a laptop, a tablet, or your phone at 3 AM.

Sources

Jeffrey Emanuel @doodlestein ·
That’s why this is one of my most used and most useful prompts: I asked 3 competing LLMs to do the exact same thing and they came up with pretty different plans which you can read below. I want you to REALLY carefully analyze their plans with an open mind and be intellectually honest about what they did that's better than your plan. Then I want you to come up with the best possible revisions to your plan (you should simply update your existing document for your original plan with the revisions) that artfully and skillfully blends the "best of all worlds" to create a true, ultimate, superior hybrid version of the plan that best achieves our stated goals and will work the best in real-world practice to solve the problems we are facing and our overarching goals while ensuring the extreme success of the enterprise as best as possible; you should provide me with a complete series of git-diff style changes to your original plan to turn it into the new, enhanced, much longer and detailed plan that integrates the best of all the plans with every good idea included (you don't need to mention which ideas came from which models in the final revised enhanced plan):
Vercel @vercel ·
Join Vercel for a live session on The Future of Agentic Commerce and learn how AI-native shopping experiences are changing how customers discover, evaluate, and purchase products. 📅 Feb 4, 2026 • 12:00 PM PT / 3:00 PM ET
Matt Silverlock 🐀 @elithrar ·
We're proposing an approach to make discovering agent skills easier by using the .well-known URI standard + a index of associated files for agents to pull down. There's tools (like add-skill) to add skills, but you have to find them first. RFC: https://t.co/9Qrf0aXJ5D
Matt Silverlock 🐀 @elithrar ·
e.g. https://t.co/QVwuPE2YHO has skills for the agents-sdk, durable-objects, wrangler and cloudflare (just to start).
Matt Silverlock 🐀 @elithrar ·
instead of trying to find them in a repo (or is it a second skills/* repo?), in docs, or <elsewhere>: the idea is that you agents can hit /.well-known/skills/index.json to discover related skills.
Matt Silverlock 🐀 @elithrar ·
@mitsuhiko to be clear: I think we (as an industry) standardize things WAY too early. however: I don't consider an RFC (certainly a non-IETF one like this) a standard. big on the "C" here!
Matt Silverlock 🐀 @elithrar ·
@andrewqu let’s do it! $ npx skills add <domain> and have it parse index.json -> give you the skills you want to install (and their associated files) - ?
Jeremy Smith @jeremysmithco ·
I'm using a gitignored https://t.co/6dCo88FFCa file in a client project, but I still wanted it under version control. So I created a secret gist for it, and asked Claude to add instructions for itself to the beginning of the file, to sync after making changes. Totally works! https://t.co/8KbzhKlIYg
Ben Davis @davis7 ·
The "just-bash" package is the best thing @vercel has shipped in a long time Insanely useful for custom agent stuff https://t.co/z9lgypCpZa
LM Studio @lmstudio ·
LM Studio can now connect to Claude Code! Use your GGUF and MLX models locally, privately, and for free. Works in the terminal and in VS Code 🚀👾 https://t.co/lmd5kaLCGF
Jeffrey Emanuel @doodlestein ·
My advice is mostly to focus on your goals and desires and the problems you’re trying to solve, along with any constraints. For instance, I often want to constrain my new projects to use Rust and then to integrate with my other rust libraries like asupersync and rich_rust. Then let each frontier model come up with a plan. Then show them all to GPT Pro on the web and have it help you merge the best elements of all the plans (after first proposing its own plan).
Thomas Ip @_thomasip ·
Upgrading from RTX 5090 → RTX PRO 6000 💹 They are essentially the same GPU but with 3× the memory — I need this to fine-tune LLMs for my app and run inference. Managed to get both at MSRP! Fun fact: my PC now has more VRAM than system memory. — Buy a GPU, The Movement https://t.co/L9VNqL1D2L
TheAhmadOsman @TheAhmadOsman

POV: you bought GPUs, memory, and SSDs early and now you’re just vibing while everyone else is in line https://t.co/kfVMRcn2Bg

Bhanu Teja P @pbteja1998 ·
The Complete Guide to Building Mission Control: How We Built an AI Agent Squad
Viv @Vtrivedy10 ·
for codebase search i’m more bullish on: 1. spend a lot of compute up front to build a good codemap index in markdown (ex: Deep Wiki) 2. Be very thorough in updating this index using git diffs and an intelligent model. This is the source of truth for search. 3. Agents use targeted+parallel grep using the codemap md file as a reference spent like 2 years grinding on semantic search/retrieval (disclaimer it was for vision), there’s a use case for it and maybe can boost perf for code search But we can trade the compute in indexing via embeddings for exhaustive search with Agents and the architecture is much simpler for >90% of use cases Models are great at reading text and following diffs so let them read. And markdown is way more interpretable than embeddings
EthanLipnik @EthanLipnik

Does anyone know why Codex and Claude doesn't use cloud-based embeddings like Cursor to quickly search through the codebase?

klöss @kloss_xyz ·
Everyone’s heard ‘you have 1-3 years to make it’ with AI and writes it off as doom and gloom. It’s not fear mongering. It’s truth. The permanent underclass isn’t just a rage bait talking point. It is what will happen when an 18 year old who learned all the AI systems outpaces your decades of experience in a weekend. More layoffs are coming. But so are the biggest opportunities of our generation. The only question is which side you’re on when this all hits. Make sure it’s the right one.
Boris Cherny @bcherny ·
2. Start every complex task in plan mode. Pour your energy into the plan so Claude can 1-shot the implementation. One person has one Claude write the plan, then they spin up a second Claude to review it as a staff engineer. Another says the moment something goes sideways, they switch back to plan mode and re-plan. Don't keep pushing. They also explicitly tell Claude to enter plan mode for verification steps, not just for the build
Boris Cherny @bcherny ·
1. Do more in parallel Spin up 3–5 git worktrees at once, each running its own Claude session in parallel. It's the single biggest productivity unlock, and the top tip from the team. Personally, I use multiple git checkouts, but most of the Claude Code team prefers worktrees -- it's the reason @amorriscode built native support for them into the Claude Desktop app! Some people also name their worktrees and set up shell aliases (za, zb, zc) so they can hop between them in one keystroke. Others have a dedicated "analysis" worktree that's only for reading logs and running BigQuery See https://t.co/yXde5dW1vZ
Boris Cherny @bcherny ·
4. Create your own skills and commit them to git. Reuse across every project. Tips from the team: - If you do something more than once a day, turn it into a skill or command - Build a /techdebt slash command and run it at the end of every session to find and kill duplicated code - Set up a slash command that syncs 7 days of Slack, GDrive, Asana, and GitHub into one context dump - Build analytics-engineer-style agents that write dbt models, review code, and test changes in dev Learn more: https://t.co/uJ1LGmzclv
Boris Cherny @bcherny ·
3. Invest in your CLAUDE.md. After every correction, end with: "Update your CLAUDE.md so you don't make that mistake again." Claude is eerily good at writing rules for itself. Ruthlessly edit your CLAUDE.md over time. Keep iterating until Claude's mistake rate measurably drops. One engineer tells Claude to maintain a notes directory for every task/project, updated after every PR. They then point CLAUDE.md at it.
Boris Cherny @bcherny ·
6. Level up your prompting a. Challenge Claude. Say "Grill me on these changes and don't make a PR until I pass your test." Make Claude be your reviewer. Or, say "Prove to me this works" and have Claude diff behavior between main and your feature branch b. After a mediocre fix, say: "Knowing everything you know now, scrap this and implement the elegant solution" c. Write detailed specs and reduce ambiguity before handing work off. The more specific you are, the better the output
Boris Cherny @bcherny ·
5. Claude fixes most bugs by itself. Here's how we do it: Enable the Slack MCP, then paste a Slack bug thread into Claude and just say "fix." Zero context switching required. Or, just say "Go fix the failing CI tests." Don't micromanage how. Point Claude at docker logs to troubleshoot distributed systems -- it's surprisingly capable at this.
Boris Cherny @bcherny ·
9. Use Claude for data & analytics Ask Claude Code to use the "bq" CLI to pull and analyze metrics on the fly. We have a BigQuery skill checked into the codebase, and everyone on the team uses it for anlytics queries directly in Claude Code. Personally, I haven't written a line of SQL in 6+ months. This works for any database that has a CLI, MCP, or API.
Boris Cherny @bcherny ·
8. Use subagents a. Append "use subagents" to any request where you want Claude to throw more compute at the problem b. Offload individual tasks to subagents to keep your main agent's context window clean and focused c. Route permission requests to Opus 4.5 via a hook — let it scan for attacks and auto-approve the safe ones (see https://t.co/LS0LRX5S6w)
Boris Cherny @bcherny ·
7. Terminal & Environment Setup The team loves Ghostty! Multiple people like its synchronized rendering, 24-bit color, and proper unicode support. For easier Claude-juggling, use /statusline to customize your status bar to always show context usage and current git branch. Many of us also color-code and name our terminal tabs, sometimes using tmux — one tab per task/worktree. Use voice dictation. You speak 3x faster than you type, and your prompts get way more detailed as a result. (hit fn x2 on macOS) More tips: https://t.co/vVvwSsNPMb
Boris Cherny @bcherny ·
10. Learning with Claude A few tips from the team to use Claude Code for learning: a. Enable the "Explanatory" or "Learning" output style in /config to have Claude explain the *why* behind its changes b. Have Claude generate a visual HTML presentation explaining unfamiliar code. It makes surprisingly good slides! c. Ask Claude to draw ASCII diagrams of new protocols and codebases to help you understand them d. Build a spaced-repetition learning skill: you explain your understanding, Claude asks follow-ups to fill gaps, stores the result
Kol Tregaskes @koltregaskes ·
Boris Cherny, creator of Anthropic's Claude Code AI coding agent, shares 10 productivity tips sourced from his team. - Use multiple git worktrees for parallel Claude sessions to boost multitasking. - Begin complex tasks in plan mode for efficient one-shot implementations. - Update custom docs after corrections to reduce Claude's error rate over time. - Build reusable git-committed skills for routine tasks like tech debt checks. These strategies leverage Claude's capabilities for streamlined coding workflows.
bcherny @bcherny

I'm Boris and I created Claude Code. I wanted to quickly share a few tips for using Claude Code, sourced directly from the Claude Code team. The way the team uses Claude is different than how I use it. Remember: there is no one right way to use Claude Code -- everyones' setup is different. You should experiment to see what works for you!

Boris Cherny @bcherny ·
@EthanLipnik 👋 Early versions of Claude Code used RAG + a local vector db, but we found pretty quickly that agentic search generally works better. It is also simpler and doesn’t have the same issues around security, privacy, staleness, and reliability.
Anayat @anayatkhan09 ·
@sdrzn Love the gritql plus husky approach. I’ve started feeding those same linter violations back into the agent’s context so it learns patterns over time instead of just thrashing on the same failed commit loop.
Chayenne Zhao @GenAI_is_real ·
Opening 5 worktrees with Claude code is literally the end of programming as we know it. if u are still writing code line by line u are basically a digital monk at this point. The output is going to be so insane that human reviewers will be the next bottleneck. OAI needs to ship something fast, or anthropic is taking over the entire dev lifecycle @sama @DarioAmodei
Chayenne Zhao @GenAI_is_real ·
Honestly, most tech leads I know are just human wrappers for Stack Overflow anyway. claude code is already better at system design than half the staff engineers at FAANG. If you are still worried about layoffs u already lost. The new hiring bar is basically just can u manage 10 Claudes at once lol @karpathy @sama
arpit_bhayani @arpit_bhayani

Claude started as an intern, hit SDE-1 in a year, now acts like a tech lead, and soon will be taking over ... you know what :)

Francesco @francedot ·
Vibe Coding Paralysis: When Infinite Productivity Breaks Your Brain
leo 🐾 @synthwavedd ·
claude-sonnet-5-20260203
synthwavedd @synthwavedd

Big week for Anthropic fans coming up😉 (Or perhaps just anyone who uses AI to code)

Dan McAteer @daniel_mac8 ·
Claude Sonnet 5 released this week by Anthropic. > 82.1% SWE-Bench > $3/1m input + $15/1m output (same as Sonnet 4.5) > MUCH faster than Opus 4.5 I hate to say this...but it will be *wild*. Esp in Claude Code. That's my prediction.
Angaisb_ @Angaisb_

Rumors say Sonnet 5 will be better than Opus 4.5 for sure, not just as good

Alex Hillman @alexhillman ·
this instruction is great but IMO would not put this in my CLAUDE.md unless I am fixing bugs in every single session. stuff like this belongs in an include. instead of putting *everything* in your CLAUDE.md, have your agent put a "routing" table of contents as close to the top as you can, and tell it when to use specific includes based on the task/workflow. best version: create a command that periodically scans your session history and suggests updates/additions/removals from the routing rules based on actual usage!
nbaschez @nbaschez

Single biggest improvement to your https://t.co/KUZC0h59Pa / https://t.co/LTwkykSOrf: "When I report a bug, don't start by trying to fix it. Instead, start by writing a test that reproduces the bug. Then, have subagents try to fix the bug and prove it with a passing test."

Amanda Goodall @thejobchick ·
30,000 people gone at Amazon in 4 months! Engineers. PMs. L7s. HR. These are the ones who shipped AI infra, kept systems alive during outages, and relocated their lives... just locked out overnight. One L7 told me: "I led AI enablement worldwide, relocated twice, and still got cut." Another engineer said: "I shielded my team. Now they own the grind." This is not about performance, location... just math... and even that seems sketch. Those on maternity leave? CUT. Remote workers seem to be disproportionately cut as well. HR looks to have been just demolished. Robotics rumors is saying , more cuts possibly Feb/March... L7/L8s are being asked to look at headcount. YIKES! I'm seeing stories of: - blindsided house buyers, first-time layoffs after 20+ years steady work. - 90-day garden leave and severance (2+ months base, +1 wk/6 mos past 2 yrs) Some have said it feels generous next to nothing, but COBRA at $2k+/mo stings. Doesn't appear to be full AI replacement just yet... It' for the shareholders. Those that kept their job are worried about an avalanche of work, region builds, and on-call hell for skeleton crews.