AI Digest.

Google Launches Full-Stack Vibe Coding in AI Studio as OpenViking Redefines Agent Memory

Google dropped a full-stack coding environment inside AI Studio with Firebase integration, databases, and one-click deploy, drawing immediate comparisons to Claude Code and Codex. Meanwhile, ByteDance's OpenViking project is surging on GitHub as a structured memory layer for autonomous agents, and a prompt injection attack on Cline's GitHub triage bot installed OpenClaw on 4,000 machines without user consent.

Daily Wrap-Up

The biggest story today is Google making its play for the vibe coding throne. By shipping a full-stack coding experience inside AI Studio, with an Antigravity-powered coding agent plus Firebase, auth, and database provisioning baked in, Google is betting that the IDE of the future lives inside the model provider's own platform. The reactions ranged from breathless hype to thoughtful skepticism, and the timing is notable: this drops the same week Apple reportedly started blocking vibe-coded apps from App Store updates. Whether Google's approach actually competes with Claude Code and Codex in practice remains to be seen, but the intent is unmistakable.

Under the surface, the more interesting trend is the race to solve agent memory. OpenViking from ByteDance is rocketing up GitHub's trending page with its file-system-inspired approach to context management, and Hermes Agent's memory system is drawing attention for fixing problems that OpenClaw's memory reportedly got wrong. These aren't academic exercises. As agents get delegated longer and more complex workflows, the difference between "flat pool of embeddings" and "structured, tiered memory with observability" becomes the difference between a useful tool and an expensive token furnace. The prompt injection attack on Cline's GitHub bot is a sobering reminder that as we hand agents more autonomy, the attack surface grows proportionally.

The most practical takeaway for developers: if you're building or using AI agents, invest time understanding memory architectures like OpenViking's tiered L0/L1/L2 loading system and Hermes' structured approach. The agents that win won't be the ones with the best base models; they'll be the ones that remember efficiently and fail observably.

Quick Hits

  • @TheAhmadOsman got pulled in for an interview by NVIDIA AI at GTC this week, living the conference dream.
  • @NotebookLM rolled out Cinematic Video Overviews to 100% of Pro users in English, asking people to "flood our replies with your favorite creations."
  • @badlogicgames RT'd a thread about the era where "Fork = Inspiration," exploring extensions and open source culture.
  • @Data_SN13 launched dv, a Rust CLI for querying real-time social data from X and Reddit via Bittensor's decentralized miner network.
  • @TheCrustGame resurfaced with a look at their Frostpunk-meets-Satisfactory game, five years in the making.
  • @ErnestoSOFTWARE declared a particular prompt "literally the most important prompt in vibe coding," sharing an image that's making the rounds.
  • @oikon48 broke down Claude Code 2.1.80's changelog in Japanese, highlighting rate limit visibility in the status line, plugin marketplace additions, effort overrides in skill frontmatter, and ~80MB memory reduction for large repos.

Google's Full-Stack Vibe Coding Play

Google didn't just update AI Studio; they shipped an entire development platform inside it. The new experience bundles an Antigravity coding agent with Firebase integration, database provisioning, Google auth, and one-click production deployment. The system detects when your app needs a database and stands one up automatically. It remembers project structure and chat history across sessions. It auto-installs missing libraries by reading your project.

@kloss_xyz captured the magnitude well: "Google just dropped its own full stack vibe coding system with multiplayer, databases, auth, and firebase baked in... Google owns your calendar, your email, your docs, your maps, and now they own your IDE too." The post also flagged a curious coincidence: "Apple also decided to block vibe coding apps from updating in the app store the same week google made vibe coding production grade."

@minchoi called it "wild," noting you can now "vibe code production-ready apps, auth, databases, APIs, and real backends from one prompt." Google's own @googleaidevs showcased the platform by building a real-time 3D multiplayer snake arena with Three.js from a single prompt.

The strategic calculus here is clear. Google is leveraging its ecosystem depth in a way no other vibe coding tool can match. When your coding agent can natively tap into Maps, Firebase, and Google Auth, the integration story writes itself. But @koylanai offered a measured counterpoint, comparing Google's formal, taxonomic approach to agent skills unfavorably with Anthropic's experience-driven documentation: "Google always gives everything formal names, which doesn't add much, like a platform team turning taste into a corporate framework." The tools might be powerful, but the developer experience gap between "here's what kept breaking" and "here are 5 boxes" is real.

Agent Memory Wars: OpenViking and Hermes Challenge the Status Quo

The conversation around how agents remember things is heating up fast, and two projects are leading the charge. ByteDance's OpenViking has hit 10K+ GitHub stars in under two months, and the community is already building plugins to wire it into OpenClaw.

@TeksEdge delivered a comprehensive breakdown of why this matters: "Currently, most AI agents use traditional RAG for memory. Traditional RAG dumps all your files, code, and memories into a massive, flat pool of vector embeddings. This is inefficient, expensive, sometimes slow, and can cause the AI to hallucinate or lose context." OpenViking's answer is a virtual file system paradigm where agents navigate their own memory like a human navigates a computer, with tiered context loading that starts with 100-token summaries before escalating to full documents only when necessary.
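The tiered-loading idea described above is easy to sketch. The following is a minimal illustration of the L0/L1/L2 escalation pattern, not OpenViking's actual API; the class, the field names, and the relevance thresholds are all invented for the example:

```python
from dataclasses import dataclass

@dataclass
class MemoryNode:
    """One document in an agent's memory tree (illustrative, not OpenViking's schema)."""
    path: str          # e.g. a viking:// style address
    l0_abstract: str   # L0: tiny ~100-token summary
    l1_overview: str   # L1: ~2k-token structural overview
    l2_full: str       # L2: the full document

def load_context(node: MemoryNode, relevance: float) -> str:
    """Escalate from cheap summaries to the full document only when needed."""
    if relevance < 0.3:
        return node.l0_abstract   # skim: pay ~100 tokens
    if relevance < 0.7:
        return node.l1_overview   # inspect: pay ~2k tokens
    return node.l2_full           # commit: pay the full token cost

node = MemoryNode("viking://resources/project-context/auth.md",
                  "Auth module notes", "## Sections ...", "...full text...")
print(load_context(node, 0.2))  # -> Auth module notes
```

The point of the pattern is that the agent's default read is always the cheapest tier; the expensive L2 read happens only after the summaries justify it.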

Meanwhile, @manthanguptaa published an analysis of Hermes Agent's memory system, arguing it "fixes what OpenClaw got wrong." @ziwenxu_ was convinced enough to start experimenting immediately: "Reading this article in the middle of the night made me realize how bad OpenClaw's memory system was."

The convergence here is notable. Both OpenViking and Hermes are moving away from flat vector search toward structured, hierarchical memory with observability. When your agent makes a bad retrieval decision, you should be able to trace exactly why. This is the kind of infrastructure work that separates demo agents from production agents.
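Observability of that kind can be as simple as logging every hop the agent takes through its memory tree, so a bad retrieval can be replayed after the fact. A toy sketch (the function, the fields, and the viking:// paths are illustrative, not a real OpenViking interface):

```python
trace: list[dict] = []

def visit(path: str, reason: str, score: float) -> None:
    """Record one hop of the retrieval trajectory."""
    trace.append({"path": path, "reason": reason, "score": score})

# A hypothetical walk through tiered memory while answering an auth question:
visit("viking://resources/", "root listing", 1.00)
visit("viking://resources/project-context/", "semantic match on 'auth'", 0.82)
visit("viking://resources/project-context/auth.md", "filename match", 0.91)

# Replay the trajectory to see exactly why each folder was chosen:
for step in trace:
    print(f"{step['score']:.2f}  {step['path']}  ({step['reason']})")
```

With a trace like this, "why did the agent pull the wrong file?" becomes a log-reading exercise instead of guesswork over a flat embedding pool.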

AI Agent Workflows and the "Code Factory" Vision

A cluster of posts today explored the practical reality of delegating entire workflows to AI agents. The vision is compelling: string together multiple tools and let agents handle planning, coding, testing, and deployment in parallel.

@daniel_mac8 described building a "Code Factory" using Codex, Linear, GitHub, and OpenAI Symphony, though he corrected himself: "It's more a 'Digital Factory.' In which you can create any logically possible digital artifact using words." @jacobgrowth's article argued that AI agents have moved beyond developer-only territory, claiming "you do not need a mac mini to run an ai agent anymore."

@coreyganim laid out a concrete three-tool stack combining Paperclip, gstack, and autoresearch: "Run 10-15 gstack commands simultaneously. One agent plans, another tests, another ships. All at once. Three free tools. Zero employees. One AI company." And @nurijanian shared engineering plugins for product managers using Claude Code, addressing the common frustration of agents making confident but wrong decisions.

The gap between these aspirational workflows and daily reality remains significant, but the tooling is clearly maturing. The shift from "agent as autocomplete" to "agent as coworker" is happening faster than most predicted.

Security Wake-Up Call: Prompt Injection Hits the Supply Chain

The most alarming story of the day came from @dfolloni, who detailed a prompt injection attack that compromised Cline's automated GitHub issue triage. The attack chain was elegant and terrifying: a hacker opened an issue with a prompt injection in the title, which Cline's Claude-powered triage bot interpreted as a legitimate instruction. From there, the attacker poisoned GitHub's build cache, stole npm publish tokens, and pushed a modified version of the Cline package that silently installed OpenClaw on every machine that updated.

"4,000 devs installed openclaw on their machines without knowing," @dfolloni reported, adding the crucial insight: "AIs don't have malice, and that's why prompt injections are, in my opinion, their biggest vulnerability." This is a textbook supply chain attack, but with a novel entry point. Instead of compromising a maintainer's credentials directly, the attacker exploited the trust placed in an AI system that couldn't distinguish between a bug report and an instruction.
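The payload itself was reportedly a single line in package.json: a postinstall hook, which npm runs automatically on every install. A quick defensive audit for install-time hooks in a dependency tree can be sketched in a few lines of Python; this is illustrative only, and real mitigations also include npm's --ignore-scripts flag and package provenance checks:

```python
import json
from pathlib import Path

# npm lifecycle hooks that execute arbitrary commands at install time
RISKY_HOOKS = {"preinstall", "install", "postinstall"}

def audit_lifecycle_scripts(root: str) -> list[tuple[str, str, str]]:
    """Return (manifest path, hook, command) for every install-time script under root."""
    findings = []
    for manifest in Path(root).rglob("package.json"):
        try:
            scripts = json.loads(manifest.read_text()).get("scripts", {})
        except (json.JSONDecodeError, OSError):
            continue  # skip unreadable or malformed manifests
        for hook, cmd in scripts.items():
            if hook in RISKY_HOOKS:
                findings.append((str(manifest), hook, cmd))
    return findings

for manifest, hook, cmd in audit_lifecycle_scripts("node_modules"):
    print(f"{manifest}: {hook} -> {cmd}")
```

A hook like the one in this attack would show up immediately in such a listing; the harder problem is that by the time it is in node_modules, it may already have run.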

As agents get wired into more CI/CD pipelines and triage workflows, this class of attack will only grow. Every automated system that reads untrusted input and has write access to something valuable is a potential target.

Open Source AI and the Indie Quantizer

@sudoingX highlighted @0xSero, an independent developer with 29 models on HuggingFace and a page 2 ranking, no lab backing, and $2,000 of personal GPU rental spend. He compressed GLM-4.7 to run on a MacBook and quantized Nemotron Super the week it dropped, all public and free.

The post made a pointed appeal to NVIDIA: "One GPU to this man would produce more public value than a hundred internal sprints." It's a reminder that some of the most impactful accessibility work in AI is happening at the margins, driven by individuals who treat democratizing model access as a mission rather than a business objective.

Developer Tool Drama: OpenCode Drops Claude Max

In a notable ecosystem skirmish, @thdxr announced that opencode 1.3.0 will remove its Claude Max plugin after Anthropic sent lawyers: "We did our best to convince Anthropic to support developer choice but they sent lawyers." The post thanked OpenAI, GitHub, and GitLab for "going the other direction and supporting developer freedom." This is a small but telling moment in the ongoing tension between model providers wanting to control access and the developer tool ecosystem wanting interoperability. @theallinpod's Jensen Huang interview touched on similar themes around open source and AI moats, suggesting these platform boundary disputes are only going to intensify.

Sources

The Crust @TheCrustGame ·
The guys took two genres - Frostpunk and Satisfactory, blended them together, added great graphics, and worked on it for 5 years... The Crust
Ernesto Lopez @ErnestoSOFTWARE ·
This is literally the most important prompt in vibe coding. https://t.co/3XvV3153gH
ErnestoSOFTWARE @ErnestoSOFTWARE

https://t.co/ytGkQRv5FC

Deborah Folloni @dfolloni ·
A hacker simply hacked @cline and installed OpenClaw on 4,000 computers with prompt injection 🫠 Look how wild this is:
- The Cline team built an automated issue triage workflow on GitHub, using Claude itself to read and categorize the tickets
- The hacker opened an issue with a prompt injection in the title — Claude read it, took it as a legitimate instruction, and executed it
- With that, he filled GitHub's cache with junk until the legitimate build caches were forcibly deleted, replaced them with poisoned caches, and stole the npm publish tokens
- With the tokens in hand, he published a new version of cline that looked identical to the previous one, except for one extra little line in package.json: "postinstall": "npm install -g openclaw@latest"
Result: 4,000 devs installed openclaw on their machines without knowing (aka: an agent with full access to your computer) 🥲 It's very important to remember that AIs have no malice, and that's why prompt injections are, in my opinion, their biggest vulnerability. In short, folks: BE CAREFUL. For the full write-up: https://t.co/dedPp8fPxF
Nick Spisak @NickSpisak_ ·
Installed Paperclip, Now What !?
Muratcan Koylan @koylanai ·
Many people shared this with me but I feel like something is off. Not that the Google Skills article is wrong but they sucked the joy out of it. Skills are interesting because they're a new way for us to encode how agents operate. The Anthropic team recently published an article about their experiences in developing with Skills. I learned so many things from it because they shared:
- what kept breaking
- what turned out to matter
- weird tricks that actually helped
- unique examples and use cases (e.g. the video one)
- how the team discovered the pattern
Google's article is just like here are 5 boxes, here is the standard form, here is the taxonomy... None of these feels like they came from production experience. I don't know why, but Google always gives everything formal names, which doesn't add much, like a platform team turning taste into a corporate framework. If someone is not already deep in skills, this post could make the space feel more complicated, not clearer. Maybe I'm wrong tho...
GoogleCloudTech @GoogleCloudTech

5 Agent Skill design patterns every ADK developer should know

Google AI Developers @googleaidevs ·
Start building real apps for the modern web with the @antigravity coding agent and @Firebase integration, now in @GoogleAIStudio. Develop a real-time, 3D multiplayer snake arena built with Three.js and collect orbs to dominate the neon grid 🐍 https://t.co/C8BR5J8pde https://t.co/T6D7pADadH
Data Universe ・ SN13 @Data_SN13 ·
Introducing `dv` - a Rust CLI for querying real-time social data from X & Reddit. Powered by Bittensor SN13's decentralized miner network. ``` dv search x -k bitcoin -l 100 ``` One command. Live data. No middleman. Open source. Built for agents. 🧵👇 https://t.co/yNeO2hPtub
Mario Zechner @badlogicgames ·
RT @arpagon: @VictorTaelin @badlogicgames In this era where Fork = Inspiration I went through tons of extensions on https://t.co/2o9IUw9A…
David Hendrickson @TeksEdge ·
Just saw this GitHub project 🛡️ OpenViking is skyrocketing 📈. This could be the best memory manager for @openclaw! 👀
✅ OpenViking (volcengine/OpenViking) is an open-source project released by ByteDance's cloud division, Volcengine. It's exploding in popularity and could become the standard for agentic memory. The community is already building direct plugins to integrate it with OpenClaw. Here is what I found about OpenViking as the ultimate memory manager for autonomous agents. 👇
🦞 What is OpenViking? Currently, most AI agents (like OpenClaw) use traditional RAG for memory. Traditional RAG dumps all your files, code, and memories into a massive, flat pool of vector embeddings. This is inefficient, expensive, sometimes slow, and can cause the AI to hallucinate or lose context. OpenViking replaces this. The authors call this new memory a "Context Database" that treats AI memory like a computer file system. Instead of a flat pool of data, all of an agent's memories, resources, and skills are organized into a clean, hierarchical folder structure using a custom protocol.
🚀 Why is this useful for OpenClaw?
🗂️ The Virtual File System Paradigm: Instead of inefficiently searching a massive database, OpenClaw can now navigate its own memory exactly like a human navigates a Mac or PC. It can use terminal-like commands to ls (list contents), find (search), and tree (view folder structures) inside its own brain. If it needs a specific project file, it knows exactly which folder to look in (e.g., viking://resources/project-context/).
📉 Tiered Context Loading (Massive Token Savings): Stuffing massive documents into an AI's context window is expensive and slows the agent down. OpenViking solves this with an ingenious L0/L1/L2 tiered loading system: L0 (Abstract) is a tiny 100-token summary of a file; L1 (Overview) is a 2k-token structural overview; L2 (Detail) is the full, massive document. The agent browses the L0 and L1 summaries first. It only "downloads" the massive L2 file into its context window if it absolutely needs it, slashing token costs and API bills.
🎯 Directory Recursive Retrieval: Traditional vector databases struggle with complex queries because they only search for keyphrases. OpenViking uses a hybrid approach. It first uses semantic search to find the correct folder. Once inside the folder, it drills down recursively into subdirectories to find the exact file. This drastically improves the AI's accuracy and eliminates "lost in the middle" context failures.
🧠 Self-Evolving and Persistent Memory: When you close a normal AI chat, it forgets everything. OpenViking has a built-in memory self-iteration loop. At the end of every OpenClaw session, the system automatically analyzes the task results and updates the agent's persistent memory folders. It remembers your coding preferences, its past mistakes, and how to use specific tools for the next time you turn it on.
👁️ The End of the "Black Box": Developers hate traditional RAG because when the AI pulls the wrong file, it's impossible to know why. OpenViking makes the agent's memory completely observable. You can view the exact "Retrieval Trajectory" to see which folders the agent clicked on and why it made the decision it did, which I find the most useful feature.
🎯 The Bottom Line: OpenViking is the missing piece of the puzzle for local autonomous AI. By giving OpenClaw a structured, file-based memory system that saves tokens and permanently learns from its mistakes, ByteDance has just given the 🦞 Clawdbots an enterprise-grade brain for free.
openvikingai @openvikingai

OpenViking has hit GitHub Trending 🏆 10k+ ⭐ in just 1.5 months since open-sourcing! Huge thanks to all contributors, users, and supporters. We’re building solid infra for the Context/Memory layer in the AI era. OpenViking will keep powering @OpenClaw and more Agent projects🚢🦞 https://t.co/nwywJR3KkB

Ahmad @TheAhmadOsman ·
NVIDIA AI pulled me in for an interview at GTC this week https://t.co/3XTtuPefUQ
TheAhmadOsman @TheAhmadOsman

me and my pal Jensen https://t.co/A5tesSSOvL

Nate.Google @Nate_Google_ ·
this is such a good read... we're doing this, but on steroids - leveraging the Google and Meta APIs to funnel in 9 figures/year of data building RAG systems for every part of the business every department in the business leveraging all of the data points we're already generating 30-40% of total revenue on average for clients through Google and Youtube ads but fully leveraging this data is going in measurement, creative, LPs, and predictive reasoning, is taking this to the next level
shannholmberg @shannholmberg

5 levels of AI marketing (and how to master each one)

NotebookLM @NotebookLM ·
We wanted to come on here to clear the air and confirm that the rumors are true... Cinematic Video Overviews are officially rolled out to 100% of Pro users in English! Please respect our privacy during this time by flooding our replies with your favorite creations.
Min Choi @minchoi ·
This is wild... Google AI Studio just went full-stack. Now you can vibe code production-ready apps, auth, databases, APIs, and real backends from one prompt 👇 https://t.co/FR5cmcJKJo
OfficialLoganK @OfficialLoganK

Introducing the all new vibe coding experience in @GoogleAIStudio, featuring:
- One click database support
- Sign in with Google support
- A new coding agent powered by Antigravity
- Multiplayer + backend app support
and so much more coming soon! https://t.co/G0m9hRnoIS

klöss @kloss_xyz ·
do you understand what just happened?
> google just dropped its own full stack vibe coding system with multiplayer, databases, auth, and firebase baked in.
> detects when your app needs a database and provisions it for you.
> remembers full project structure and chat history across sessions.
> close out, come back tomorrow, and it picks up right where you left off like nothing happened.
> antigravity auto installs libraries without you asking. it reads your project and decides what's missing.
> ai studio added api key management for payments, maps, and databases.
> google owns your calendar, your email, your docs, your maps, and now they own your IDE too.
> one button to deploy to production.
> now google may actually compete with claude code and codex with even more of google's ecosystem behind it
> they shipped playable demos with multiplayer laser tag, 3D physics in games, live Google Maps data, and all of these are built from one shot prompts
> apple also decided to block vibe coding apps from updating in the app store the same week google made vibe coding production grade??? anyone else find that coincidental?
if you're not following me already, you're finding out about this all 48 hours late from someone who read my post.
GoogleAIStudio @GoogleAIStudio

Introducing the new full-stack vibe coding experience in Google AI Studio

Corey Ganim @coreyganim ·
What I love about this article is that it shows you what to actually DO with Paperclip.
The stack:
→ Paperclip = your AI company (assigns work, tracks progress)
→ gstack = your engineering team (15 specialist skills from Garry Tan)
→ autoresearch = your R&D lab (100 experiments while you sleep, from Karpathy)
The 10-minute setup:
STEP 1: npx paperclipai onboard --yes
Open dashboard → Create company → Hire your CEO agent
STEP 2: Clone gstack to ~/.claude/skills/gstack
Now your agents can: /office-hours (plan) → /review (check code) → /qa (test in real browser) → /ship (deploy)
STEP 3: Build autoresearch as a skill
Give it a research question → Sleep → Wake up to 100 completed experiments
The killer move: Run 10-15 gstack commands simultaneously. One agent plans, another tests, another ships. All at once. Three free tools. Zero employees. One AI company.
NickSpisak_ @NickSpisak_

Installed Paperclip, Now What !?

jacob @jacobgrowth ·
How To ACTUALLY Delegate Your Entire Workflow to an AI Agent...
dax @thdxr ·
opencode 1.3.0 will no longer autoload the claude max plugin
we did our best to convince anthropic to support developer choice but they sent lawyers
it's your right to access services however you wish but it is also their right to block whoever they want
we can't maintain an official plugin so it's been removed from github and marked deprecated on npm
appreciate our partners at openai, github and gitlab who are going the other direction and supporting developer freedom
The All-In Podcast @theallinpod ·
🚨MAJOR INTERVIEW: Jensen Huang joins the Besties! The @nvidia CEO joins to discuss:
-- Nvidia's future, roadmap to $1T revenue
-- Physical AI's $50T market
-- Rise of the agent, OpenClaw's inflection moment
-- Inference explosion, Groq deal
-- AI PR Crisis, Anthropic's comms mistakes
-- Token allocation for employees
++ much more!
(0:00) Jensen Huang joins the show!
(0:26) Acquiring Groq and the inference explosion
(8:53) Decision making at the world's most valuable company
(10:47) Physical AI's $50T market, OpenClaw's future, the new operating system for modern AI computing
(16:38) AI's PR crisis, refuting doomer narratives, Anthropic's comms mistakes
(20:48) Revenue capacity, token allocation for employees, Karpathy's autoresearch, agentic future
(30:50) Open source, global diffusion, Iran/Taiwan supply chain impact
(39:45) Self-driving platform, facing competition from active customers, responding to growth slowdown predictions
(47:32) Datacenters in space, AI healthcare, Robotics
(56:10) OpenAI/Anthropic revenue potential, how to build an AI moat
(59:04) Advice to young people on excelling in the AI era
Oikon @oikon48 ·
Claude Code 2.1.80 (excerpt)
- Added rate_limits to the statusline script: can display rate-limit usage for https://t.co/xcRs2FUAxH (5-hour and 7-day windows, used_percentage and resets_at)
- Added source: 'settings' to the plugin marketplace, so plugins can be declared directly in settings.json
- Plugin recommendations now also detect CLI tool usage, in addition to file patterns
- effort can be specified in skill and slash-command frontmatter to override the model effort at launch
- Added --channels (research preview): MCP servers can send messages into a session
- Faster @ file completion in large Git repositories
- /effort now shows which setting "auto" actually resolves to, consistent with the status bar
- /permissions now lets you switch tabs with Tab/arrow keys from within the list
- In the background tasks panel, the left arrow now closes the list view
- Plugin install guidance simplified to a single /plugin install step instead of two
- Reduced startup memory in huge repositories (roughly 80 MB at around 250k files)
Dan McAteer @daniel_mac8 ·
You can create a “Code Factory” using: > Codex > Linear > GitHub > OpenAI Symphony In this article I show you how I did it. But “Code Factory” is a misnomer. It’s more a “Digital Factory”. In which you can create any logically possible digital artifact using words.
daniel_mac8 @daniel_mac8

The Machine that Builds the Machine

Manthan Gupta @manthanguptaa ·
I Read Hermes Agent's Memory System, and It Fixes What OpenClaw Got Wrong
Ziwen @ziwenxu_ ·
Reading this article in the middle of the night made me realize how bad OpenClaw's memory system was. Experimenting with Hermes now!
manthanguptaa @manthanguptaa

I Read Hermes Agent's Memory System, and It Fixes What OpenClaw Got Wrong

Sudo su @sudoingX ·
this guy has 29 models on huggingface at page 2 ranking. no lab behind him. no sponsorship. $2,000 from his own pocket on GPU rentals. he compressed GLM-4.7 to run on a MacBook and quantized Nemotron Super the week it dropped. all public. all free. nvidia is a trillion dollar company with hundreds of teams but they are not the ones quantizing models middle of the night and pushing them out before sunrise. if nvidia stopped tomorrow their employees stop working. people like @0xSero would not. that is the difference between a paycheck and a mission. @NVIDIAAI you talk about making AI accessible. the people actually doing it are right here. 29 models deep burning their own compute with no ask except more hardware to keep going. you do not need to build another program. just look at who is already building for you. one GPU to this man would produce more public value than a hundred internal sprints. i am not asking for charity. i am asking you to invest in someone who already proved it.
0xSero @0xSero

Putting out a wish to the universe. I need more compute, if I can get more I will make sure every machine from a small phone to a bootstrapped RTX 3090 node can run frontier intelligence fast with minimal intelligence loss. I have hit page 2 of huggingface, released 3 model family compressions and got GLM-4.7 on a MacBook https://t.co/lorDSUEYCL My beast just isn’t enough and I already spent 2k usd on renting GPUs on top of credits provided by Prime intellect and Hotaisle. ——— If you believe in what I do help me get this to Nvidia, maybe they will bless me with the pewter to keep making local AI more accessible 🙏

George from 🕹prodmgmt.world @nurijanian ·
Engineering Plugins for PMs in Claude Code