AI Learning Digest

Anthropic Study Reveals AI Coding Assistants Hurt Learning While Google Genie 3 Generates Playable 3D Worlds

Daily Wrap-Up

The most consequential development today wasn't a product launch or a new model; it was data. Anthropic ran a proper randomized controlled trial on their own junior engineers and found what many suspected but few had proven: developers using AI coding assistants finished tasks slightly faster but learned significantly less. The 17% drop in quiz scores is roughly two letter grades, which is hard to hand-wave away. But the nuance matters more than the headline: engineers who used AI to ask conceptual questions and understand the code still scored well, while those who delegated and copy-pasted suffered. This is exactly the kind of research the industry needs as companies rush to mandate AI tool adoption.

On the entertainment side, Google's Genie 3 stole the show by letting people generate playable 3D environments from text prompts. People created Breath of the Wild mock-worlds, surfer physics simulations, and surrealist French climbing games. The demos are impressive, but @aakashgupta's analysis cuts deeper: Genie 3 isn't really a gaming product. It's a training environment factory for DeepMind's embodied AI research. Consumers create diverse worlds while Google harvests data on what makes interesting training scenarios. Whether you find that brilliant or unsettling probably says something about your relationship with big tech.

Separately, the "agent-readable web" concept gained steam when @rauchg showed Vercel's pages automatically rendering as markdown for AI agents, compressing 500kb pages down to 2kb. This feels like a genuine inflection point, comparable to the responsive design revolution but for machine consumers. The most practical takeaway for developers: if you're building anything with AI assistants or agents, start implementing llms.txt or markdown content negotiation now. The pattern of serving lightweight, structured content to AI consumers is going to become as standard as mobile-responsive layouts, and early adopters will have the cleanest integrations.

Quick Hits

  • @invideoOfficial launched AI Motion Graphics powered by Anthropic, positioning it as "vibecoding for motion design" where single prompts generate professional-quality animations without After Effects or templates.
  • @shinboson offered a provocative observation that people who are best at getting LLMs to do things share a pattern: intelligent, empathetic, "definitely autistic," and possessing "some kind of will to power." Make of that what you will.
  • @TheAhmadOsman made the case for open-source AI, listing everything closed-source providers can do without telling you: quantize, distill, hot-swap checkpoints, throttle speeds, sunset models. "Buy a GPU" was the conclusion.
  • @sherwinwu from OpenAI noted that "context is king" for enterprise AI agents but remains extremely hard to get right, sharing that OpenAI has been working on solving it specifically for data warehouses.

Anthropic's AI and Learning Research

Anthropic dropped one of the more important AI research findings in recent months, and it came not from a capabilities benchmark but from a study about human cognition. In a seven-post thread, the company detailed a randomized controlled trial where junior software engineers were split into two groups: one with AI assistance and one without. Both groups worked through a coding task using an unfamiliar Python library, then took a quiz on the concepts they'd just encountered. The results were clear and uncomfortable for AI tool evangelists.

@AnthropicAI framed the stakes directly: "AI can make work faster, but a fear is that relying on it may make it harder to learn new skills on the job." The AI-assisted group finished about two minutes faster, though that difference wasn't statistically significant. The learning gap, however, was significant: "the AI group also scored significantly worse on the quiz, 17% lower, or roughly two letter grades."
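
For readers less familiar with the statistics, the contrast is about effect size relative to noise. A quick simulation makes it concrete; note that only the 17% gap comes from Anthropic's thread, while the means (80 vs. 66), spread (sd 10), and group size (n = 50) are invented for illustration:

```python
import math
import random
import statistics

def welch_t_p(a, b):
    """Two-sample Welch t statistic with a normal-approximation p-value
    (adequate for groups of a few dozen or more)."""
    se = math.sqrt(statistics.variance(a) / len(a) +
                   statistics.variance(b) / len(b))
    t = (statistics.mean(a) - statistics.mean(b)) / se
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))
    return t, p

rng = random.Random(0)
control = [rng.gauss(80, 10) for _ in range(50)]   # quiz scores, no AI
ai_group = [rng.gauss(66, 10) for _ in range(50)]  # ~17% lower on average
t, p = welch_t_p(control, ai_group)
print(f"t = {t:.1f}, p = {p:.2g}")
```

A 14-point mean gap against a 10-point spread yields a vanishingly small p-value, whereas a two-minute difference in completion times, with typical task-time variance, easily fails to clear significance. Both findings can coexist in the same experiment.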

The saving grace came from the details. Not everyone in the AI group performed poorly. As @AnthropicAI explained, "some in the AI group still scored highly while using AI assistance. When we looked at the ways they completed the task, we saw they asked conceptual and clarifying questions to understand the code they were working with, rather than delegating or relying on AI." This distinction between AI-as-tutor and AI-as-crutch is the key finding. The tool itself isn't the problem; the interaction pattern is.

Anthropic was explicit about why coding specifically matters here: "As software engineering grows more automated, humans will still need the skills to catch AI errors, guide its output, and ultimately provide oversight for AI deployed in high-stakes environments." This isn't just an academic concern. If the people overseeing AI-generated code never developed deep understanding of the systems they're responsible for, the entire human-oversight model breaks down. The broader implications touch AI product design and workplace policy, and Anthropic committed to continuing this research as they release more capable tools. It's refreshing to see a frontier lab studying the second-order effects of their own products.

Google Genie 3 and the World Model Revolution

Google's Genie 3 dominated the visual spectacle category today, with multiple creators sharing generated 3D worlds that look genuinely playable. The model takes text prompts and produces interactive environments with physics, lighting, and character control. It's the kind of demo that makes you do a double-take.

@minchoi captured the excitement: "Holy moly... Genie 3 just created this mock 3D game world from Breath of the Wild." Meanwhile, @ZiyangXie_ pointed to the technical sophistication underneath the flashy demos: "Genie3 is super good at simulating complex physics. It can simulate the splashes, foam, and their interaction with the surfer that are almost impossible for traditional graphics engines to render in real-time. The gap between simulation and generation is closing." And @TrueSlazac went surrealist, prompting a game about a "French woman who has to climb through a world that defies logic, flying objects everywhere."

But the most interesting take came from @aakashgupta, who argued everyone is misreading Genie 3's purpose entirely. His thesis: "Project Genie is a training gym factory for embodied AI." The 60-second generation limits, the latency on character control, the imperfect prompt following? Those are acceptable tradeoffs when your real customer is DeepMind's SIMA agent, which needs millions of diverse environments for training. "Traditional robotics simulation requires teams spending months hand-coding environments in Unity or Unreal Engine. Genie 3 generates them in seconds from text." The promptable world events feature, where you can drop objects or change weather mid-session, starts looking a lot like curriculum generation for reinforcement learning. Whether this analysis is correct or not, it reframes the entire product from "cool toy" to "infrastructure play for AGI research," which is a very different competitive story than comparing it to Sora or Cosmos.
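
The curriculum-generation framing is easier to see in code. Here is a minimal sketch of domain-randomized curriculum building; every name and parameter is hypothetical, and this is emphatically not DeepMind's actual SIMA pipeline, just the general shape of the idea:

```python
import random

# Hypothetical curriculum generation via domain randomization: sample
# world specs (stand-ins for Genie-style text prompts), score the agent,
# and queue the worlds it currently fails on as the next training batch.
WEATHER = ["clear", "rain", "fog"]
EVENTS = ["none", "dropped_object", "spawned_character"]

def sample_world(rng):
    """One randomized environment spec."""
    return {
        "weather": rng.choice(WEATHER),
        "event": rng.choice(EVENTS),
        "obstacles": rng.randint(0, 10),
    }

def agent_success(world, rng):
    """Stub evaluator: harder worlds fail more often."""
    difficulty = world["obstacles"] / 10 + (world["event"] != "none") * 0.3
    return rng.random() > difficulty * 0.7

def build_curriculum(n, rng):
    """Keep the worlds the agent failed; train on those next."""
    failures = []
    for _ in range(n):
        world = sample_world(rng)
        if not agent_success(world, rng):
            failures.append(world)
    return failures

rng = random.Random(0)
batch = build_curriculum(100, rng)
print(len(batch), "hard worlds queued for training")
```

Replace `sample_world` with "generate a world from a text prompt" and the promptable mid-session events start to look exactly like the difficulty knobs a curriculum needs.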

The Machine-Readable Web

A quiet but potentially transformative pattern emerged today around making the web consumable by AI agents. @rauchg showed off a human/machine toggle by @p0 and announced that Vercel's pages now automatically render as markdown for agent consumers: "We just made it such that links automatically render as markdown when agents consume it. Page went from 500kb to 2kb. The web for agents will be very efficient!"

The community immediately recognized the significance. @Voxyz_AI drew the historical parallel: "500kb to 2kb is wild. This is basically the 'mobile-friendly' moment again but for agents. Soon every site will need a machine-readable version the same way they needed a responsive layout." And @0xCoops pointed to the existing standard that's been gaining traction: "The toggle is cute but unnecessary. Just add llms.txt at the root level."

This convergence of approaches, whether through content negotiation headers, llms.txt files, or dedicated machine endpoints, signals that the web is genuinely bifurcating into human and machine interfaces. The economics make this inevitable. When an AI agent needs to understand a documentation page, sending it 500kb of JavaScript-rendered HTML with navigation bars, cookie banners, and analytics scripts is absurd. The 2kb markdown version contains everything the agent actually needs. As agentic workflows become standard, sites without machine-readable versions will be at a real disadvantage, just like sites without responsive layouts were a decade ago.
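
The mechanics are simple enough to sketch. Below is a toy server that picks a representation from the client's Accept header; the handler name and page bodies are placeholders of mine, not Vercel's implementation:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Toy markdown content negotiation for AI agents. Page bodies are
# placeholders; a real site would render its actual content both ways.
HTML_PAGE = "<html><body><nav>...</nav><p>Docs content</p></body></html>"
MD_PAGE = "# Docs\n\nJust the content an agent needs.\n"

def negotiate(accept: str):
    """Pick (content type, body) based on the Accept header."""
    if "text/markdown" in accept:
        return "text/markdown", MD_PAGE
    return "text/html", HTML_PAGE

class DocsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        ctype, body = negotiate(self.headers.get("Accept", ""))
        data = body.encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", ctype)
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

# To serve: HTTPServer(("localhost", 8000), DocsHandler).serve_forever()
```

An llms.txt file at the site root plays the complementary discovery role: it tells agents where the lightweight versions live so they never fetch the heavy HTML at all.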

AI Coding Tools and Competition

The coding assistant landscape continues to shift as new players emerge and existing tools evolve. @theo predicted a sentiment shift around Codex: "The perceived gap between Codex and Claude Code is about to close." The prediction was attached to a concrete change, @dkundel's announcement that web search is now enabled by default in the Codex CLI and IDE extension, and the competitive narrative is clearly heating up.

On the practical usage side, @thdxr offered a candid field report on what appears to be a newer, cheaper model: "I've been using it for all my work for the past 24 hours and I don't see much of a difference from opus. Maybe opus is a bit smarter but this guy is so fast and so cheap." The economics of AI coding tools are compressing rapidly, with viable alternatives appearing at fractions of the cost of frontier models. @trq212 shared work on making playgrounds using Claude Code, showing the tool's versatility extending beyond straightforward coding tasks.

Models and Quantization

NVIDIA published research that could significantly change model deployment economics. @elliotarledge highlighted the key finding: "NVIDIA just dropped a banger paper on how they compressed a model from 16-bit to 4-bit and were able to maintain 99.4% accuracy, which is basically lossless." If those numbers hold up across diverse workloads, 4-bit quantization becoming standard practice would roughly quadruple the effective memory capacity for model serving, making larger models runnable on consumer hardware and dramatically reducing inference costs at scale.
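
The basic mechanics are easy to sketch. Here is a toy symmetric 4-bit round-trip; real schemes like the one in NVIDIA's paper add per-block scales, calibration, and outlier handling, so this shows only the core idea:

```python
# Toy symmetric 4-bit quantization: store weights as signed integers in
# [-7, 7] plus one float scale, then reconstruct. This demonstrates the
# round-trip and its error bound, not a production scheme.
def quantize_4bit(weights):
    scale = max(abs(w) for w in weights) / 7.0
    q = [max(-7, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.12, -0.53, 0.98, -0.07, 0.31, -0.88]
q, scale = quantize_4bit(weights)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
# Round-off is bounded by half a quantization step, i.e. scale / 2
assert max_err <= scale / 2 + 1e-9
```

The storage win is the headline: 4 bits per weight instead of 16 means four times as many parameters in the same VRAM, which is where the deployment-economics claim comes from.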

On the model availability front, @opencode announced that Kimi 2.5 is free for a limited time on their platform, crediting Fireworks for getting the model running quickly. The pace at which new models become accessible through alternative interfaces continues to accelerate, giving developers more options for cost-performance tradeoffs.

Source Posts

Google DeepMind @GoogleDeepMind ·
Project Genie is rolling out to @Google AI Ultra subscribers in the U.S. (18+) With this prototype, we want to learn more about immersive user experiences to advance our research and help us better understand the future of world models. See the details → https://t.co/JsQm3hxaQ8 https://t.co/238Q3mbUra
Coops @0xCoops ·
@rauchg @p0 The toggle is cute but unnecessary. Just add llms.txt at the root level. I wrote about this last week. https://t.co/COPNjP4Rwm
Theoretically Media @TheoMediaAI ·
Google Genie is seriously mind bending. This is a Text To World prompt of a man walking down Hollywood Blvd. I am not only controlling the movement of the man, but also the camera. This is the World Model we've been waiting for. More Below! https://t.co/ojQHhpNKDM
Min Choi @minchoi ·
Holy moly... Genie 3 just created this mock 3D game world from Breath of the Wild. How I did it + prompts in comment. https://t.co/H33an42YNd
Min Choi @minchoi

This is wild... Google just dropped Genie 3. This AI generates photorealistic & 3D worlds from text prompt and image... that you can explore in real-time This is a big step toward embodied AGI 10 examples + how to try (Ultra subs & US only)👇 1. We got Genie 3 before GTA 6 https://t.co/J1jDa4MtUX

Elliot Arledge @elliotarledge ·
NVIDIA just dropped a banger paper on how they compressed a model from 16-bit to 4-bit and were able to maintain 99.4% accuracy, which is basically lossless. This is a must read. Link below. https://t.co/zUzuL3rFQp
Google DeepMind @GoogleDeepMind ·
Here’s how it works: 🔵 Design your world and character using text and visual prompts. 🔵 Nano Banana Pro makes an image preview that you can adjust. 🔵 Our Genie 3 world model generates the environment in real-time as you move through. 🔵 Remix existing worlds or discover new ones in the gallery.
Ahmad @TheAhmadOsman ·
The Top 26 Essential Papers (+5 Bonus Resources) for Mastering LLMs and Transformers

This list bridges the Transformer foundations with the reasoning, MoE, and agentic shift.

Recommended Reading Order
1. Attention Is All You Need (Vaswani et al., 2017): The original Transformer paper. Covers self-attention, multi-head attention, and the encoder-decoder structure (even though most modern LLMs are decoder-only).
2. The Illustrated Transformer (Jay Alammar, 2018): Great intuition builder for understanding attention and tensor flow before diving into implementations.
3. BERT: Pre-training of Deep Bidirectional Transformers (Devlin et al., 2018): Encoder-side fundamentals, masked language modeling, and representation learning that still shape modern architectures.
4. Language Models are Few-Shot Learners (GPT-3) (Brown et al., 2020): Established in-context learning as a real capability and shifted how prompting is understood.
5. Scaling Laws for Neural Language Models (Kaplan et al., 2020): First clean empirical scaling framework for parameters, data, and compute. Read alongside Chinchilla to understand why most models were undertrained.
6. Training Compute-Optimal Large Language Models (Chinchilla) (Hoffmann et al., 2022): Demonstrated that token count matters more than parameter count for a fixed compute budget.
7. LLaMA: Open and Efficient Foundation Language Models (Touvron et al., 2023): The paper that triggered the open-weight era. Introduced architectural defaults like RMSNorm, SwiGLU, and RoPE as standard practice.
8. RoFormer: Rotary Position Embedding (Su et al., 2021): Positional encoding that became the modern default for long-context LLMs.
9. FlashAttention (Dao et al., 2022): Memory-efficient attention that enabled long context windows and high-throughput inference by optimizing GPU memory access.
10. Retrieval-Augmented Generation (RAG) (Lewis et al., 2020): Combines parametric models with external knowledge sources. Foundational for grounded and enterprise systems.
11. Training Language Models to Follow Instructions with Human Feedback (InstructGPT) (Ouyang et al., 2022): The modern post-training and alignment blueprint that instruction-tuned models follow.
12. Direct Preference Optimization (DPO) (Rafailov et al., 2023): A simpler and more stable alternative to PPO-based RLHF. Preference alignment via the loss function.
13. Chain-of-Thought Prompting Elicits Reasoning in Large Language Models (Wei et al., 2022): Demonstrated that reasoning can be elicited through prompting alone and laid the groundwork for later reasoning-focused training.
14. ReAct: Reasoning and Acting (Yao et al., 2022 / ICLR 2023): The foundation of agentic systems. Combines reasoning traces with tool use and environment interaction.
15. DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning (Guo et al., 2025): The R1 paper. Proved that large-scale reinforcement learning without supervised data can induce self-verification and structured reasoning behavior.
16. Qwen3 Technical Report (Yang et al., 2025): A modern architecture lightweight overview. Introduced unified MoE with Thinking Mode and Non-Thinking Mode to dynamically trade off cost and reasoning depth.
17. Outrageously Large Neural Networks: Sparsely-Gated Mixture of Experts (Shazeer et al., 2017): The modern MoE ignition point. Conditional computation at scale.
18. Switch Transformers (Fedus et al., 2021): Simplified MoE routing using single-expert activation. Key to stabilizing trillion-parameter training.
19. Mixtral of Experts (Mistral AI, 2024): Open-weight MoE that proved sparse models can match dense quality while running at small-model inference cost.
20. Sparse Upcycling: Training Mixture-of-Experts from Dense Checkpoints (Komatsuzaki et al., 2022 / ICLR 2023): Practical technique for converting dense checkpoints into MoE models. Critical for compute reuse and iterative scaling.
21. The Platonic Representation Hypothesis (Huh et al., 2024): Evidence that scaled models converge toward shared internal representations across modalities.
22. Textbooks Are All You Need (Gunasekar et al., 2023): Demonstrated that high-quality synthetic data allows small models to outperform much larger ones.
23. Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet (Templeton et al., 2024): The biggest leap in mechanistic interpretability. Decomposes neural networks into millions of interpretable features.
24. PaLM: Scaling Language Modeling with Pathways (Chowdhery et al., 2022): A masterclass in large-scale training orchestration across thousands of accelerators.
25. GLaM: Generalist Language Model (Du et al., 2022): Validated MoE scaling economics with massive total parameters but small active parameter counts.
26. The Smol Training Playbook (Hugging Face, 2025): Practical end-to-end handbook for efficiently training language models.

Bonus Material
  • T5: Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer (Raffel et al., 2019)
  • Toolformer (Schick et al., 2023)
  • GShard (Lepikhin et al., 2020)
  • Adaptive Mixtures of Local Experts (Jacobs et al., 1991)
  • Hierarchical Mixtures of Experts (Jordan and Jacobs, 1994)

If you deeply understand these fundamentals (Transformer core, scaling laws, FlashAttention, instruction tuning, R1-style reasoning, and MoE upcycling), you already understand LLMs better than most. Time to lock in, good luck ;)
Ahmad @TheAhmadOsman

There are maybe ~20-25 papers that matter. Implement those and you’ve captured ~90% of the alpha behind modern LLMs. Everything else is garnish.

Machina @EXM7777 ·
if your goal is to find the best ways to implement AI in your work... most of your job will be deciding wether this task you're automating/delegating to an agent really adds leverage to your work i see A LOT of ai products launching, and focus on the mundane tasks: answer email, manage calendar, book restaurants or flights... what's the point in setting up workflows, having to maintain an infrastructure or pay a subscription to perform such EASY tasks? same applies to business workflows... Claude Code and other tools have people feel like they're super behind if they're not using it as their daily driver truth is you can get A LOT of shit done with just decent prompting, context engineering and MCPs you don't need a big ass setup that's time consuming, the goal with AI is to do MORE and FASTER
Eyad @eyad_khrais ·
I Installed Moltbot. Most Of What You're Seeing On X Is Overhyped.
OpenAI Developers @OpenAIDevs ·
Inside our in-house AI data agent It reasons over 600+ PB and 70k datasets, enabling natural language data analysis across Engineering, Product, Research, and more Our agent uses Codex-powered table-level knowledge plus product and organizational context https://t.co/Nr1geMcLoc
Theo - t3.gg @theo ·
This is going to be a huge bump to sentiment around Codex for new users. Calling it now, the perceived gap between Codex and Claude Code is about to close
dominik kundel @dkundel

Web search is now enabled by default for the Codex CLI and IDE Extension 🎉 By default it will use a web search cache but you can toggle live results or if you use --yolo live results are enabled by default. More details in the changelog 👇 https://t.co/Ex2z1g2fUt

Ziyang Xie @ZiyangXie_ ·
Genie3 is super good at simulating (or 'hallucinating') complex physics. It can simulate the splashes, foam, and their interaction with the surfer that are almost impossible for traditional graphics engines to render in real-time. The gap between simulation and generation is closing.
Cursor @cursor_ai ·
We're proposing an open standard for tracing agent conversations to the code they generate. It's interoperable with any coding agent or interface. https://t.co/jO4DIoIl6A
Dev Shah @0xDevShah ·
genie is to robotics what opus is to agents. so close to the alphafold moment for robotics. sim-to-real had been waiting for this all along. > genie computes environments > transform frames into gaussian splats (nvidia's longsplats) > splats collapse into low-poly > low poly gets an ultra-realistic touch up with shaders and textures > the whole thing gets dropped in isaac lab where autonomous machinery can learn to navigate complex world at compute cost generate > reconstruct > realisticize > simulate > train just five steps between "genie imagine a warehouse" to "robots know how to move through warehouses" @demishassabis, please open source this (or a lite version). the full robotics community is just a few miles away from infinite training envs. cc @OfficialLoganK @sundarpichai
Demis Hassabis @demishassabis

Thrilled to launch Project Genie, an experimental prototype of the world's most advanced world model. Create entire playable worlds to explore in real-time just from a simple text prompt - kind of mindblowing really! Available to Ultra subs in the US for now - have fun exploring! https://t.co/2XDy0V0BW0

Sherwin Wu @sherwinwu ·
Anyone who tries to build an AI agent for an enterprise quickly realizes that context is king, but is still extremely hard to get right. Internally at OpenAI, we've been trying to solve the context problem for one vertical: data warehouses. And it's starting to work quite well!
OpenAI Developers @OpenAIDevs

Inside our in-house AI data agent It reasons over 600+ PB and 70k datasets, enabling natural language data analysis across Engineering, Product, Research, and more Our agent uses Codex-powered table-level knowledge plus product and organizational context https://t.co/Nr1geMcLoc

Aparna Dhinakaran @aparnadhinak ·
Agent Harness Architectures
𝞍 Shin Megami Boson 𝞍 @shinboson ·
as far as I can tell, the common pattern seen in people who are very good at getting LLMs to do things is: - intelligent - empathetic - definitely autistic - some kind of will to power
vittorio @IterIntellectus ·
WORLD MODEL IS HERE
Google DeepMind @GoogleDeepMind

Step inside Project Genie: our experimental research prototype that lets you create, edit, and explore virtual worlds. 🌎

Simplifying AI @simplifyinAI ·
"I don't have a GPU" is officially dead 🤯 You can now run 70B model on a single 4GB GPU and it even scales up to the colossal Llama 3.1 405B on just 8GB of VRAM. AirLLM uses "Layer-wise Inference." Instead of loading the whole model, it loads, computes, and flushes one layer at a time → No quantization needed by default → Supports Llama, Qwen, and Mistral → Works on Linux, Windows, and macOS 100% Open Source.
Anthropic @AnthropicAI ·
AI can make work faster, but a fear is that relying on it may make it harder to learn new skills on the job. We ran an experiment with software engineers to learn more. Coding with AI led to a decrease in mastery—but this depended on how people used it. https://t.co/lbxgP11I4I
Unsloth AI @UnslothAI ·
We successfully trained an LLM without human intervention using Claude Code. We made a guide on how to do this with local LLMs via Claude Code and OpenAI Codex. Connect GLM-4.7-Flash to your server and start agentic coding locally! Guide: https://t.co/NXNX35i50r https://t.co/VFIxiEXG9i
Anthropic @AnthropicAI ·
These results have broader implications—on how to design AI products that facilitate learning, and how workplaces should approach AI policies. As we also continue to release more capable AI tools, we’re continuing to study their impact on work—at Anthropic, and more broadly.
khoi @khoiracle ·
Launching Supacode https://t.co/xsiil8wedj - A native macOS coding agent orchestrator. 📟 Claude Code, Codex, Open Code or any agents run natively 👻 libghostty as the engine so blazing fast ⇥ Tabs, panes, splits so you can bring our own tools (lazygit, emacs, magit) Try it out, hope you like it.
Fernando Rojo @fernandorojo ·
We just launched 𝚟𝚎𝚛𝚌𝚎𝚕-𝚌𝚘𝚖𝚙𝚘𝚜𝚒𝚝𝚒𝚘𝚗-𝚙𝚊𝚝𝚝𝚎𝚛𝚗𝚜: every lesson from the talk below, now available as a skill. Turn your React code into something you (and your LLM) enjoy working with. ▲ ~/ npx skills add vercel-labs/agent-skills https://t.co/1xQpArcB7i
Fernando Rojo @fernandorojo

Composition is all you need. Watch the full video below. https://t.co/efP8tl0es0

Satya Nadella @satyanadella ·
Just reported our quarterly results. We are still in the beginning phases of AI diffusion and its broad GDP impact, and already we’ve built an AI business that is larger than some of our biggest franchises that took decades to build. Our quarterly cloud revenue crossed $50 billion for the first time. What’s striking is it was less than 10 years ago that our annual cloud revenue was $10 billion! (That is what expanding TAM + good execution looks like) A few other highlights from across the stack:
Lee Robinson @leerob ·
This has been fun to work on. Excited to see how the spec evolves! It should be easy to understand models/prompts used across any coding agent, IDE, CLI, etc. Might as well figure out the shared schema once versus having a hundred different versions.
Cursor @cursor_ai

We're proposing an open standard for tracing agent conversations to the code they generate. It's interoperable with any coding agent or interface. https://t.co/jO4DIoIl6A

Google DeepMind @GoogleDeepMind ·
Step inside Project Genie: our experimental research prototype that lets you create, edit, and explore virtual worlds. 🌎
OpenCode @opencode ·
kimi 2.5 is free for a limited time in OpenCode if you ran into bugs before, upgrade OpenCode - we've fixed up a few things and we're having a great time with it now huge thanks to fireworks for getting this model running so well so quickly
Aakash Gupta @aakashgupta ·
Everyone's calling this a gaming toy. Google just told you exactly what they're building and nobody's repricing it. Project Genie is a training gym factory for embodied AI. The constraints tell the real story. 60-second generation limits? Latency on character control? Worlds that don't always follow prompts exactly? Those are acceptable tradeoffs when your actual customer is SIMA, DeepMind's robot training agent that needs millions of diverse environments to practice warehouse navigation, edge-case scenarios, and physics interactions. Google explicitly stated in August that Genie 3 is a "foundational building block for AGI." Now they're letting consumers create environments while quietly harvesting data on what kinds of prompts generate interesting training scenarios. The math makes this clear. Traditional robotics simulation requires teams spending months hand-coding environments in Unity or Unreal Engine. Genie 3 generates them in seconds from text. The cost per training environment just dropped by orders of magnitude. Meanwhile OpenAI's Sora generates beautiful videos you can watch. NVIDIA Cosmos targets industrial customers with explicit physics parameters. Google built something that trains its own AI agents while consumers think they're playing with a toy. The "promptable world events" feature where you can drop objects mid-session, change weather, spawn characters? That's curriculum generation for reinforcement learning. You're teaching their robots how to handle novel situations. Google AI Ultra subscribers are paying $250/month to be QA testers for DeepMind's AGI infrastructure. The "World Models as a Service" moat is being dug in plain sight.
Meer | AI Tools & News @Meer_AIIT

📢 New from Google DeepMind: Project Genie An experimental prototype that lets users create and explore AI-generated interactive worlds in real time. Powered by Genie 3 (their world model), Nano Banana Pro, and Gemini. How it works: → Prompt with text or images to design a world and character → Preview and adjust with Nano Banana Pro before entering → Genie 3 generates the environment in real time as you move through it → Remix existing worlds or browse a gallery for inspiration Rolling out now to Google AI Ultra subscribers in the U.S. (18+).

Ahmad @TheAhmadOsman ·
Genuine advice If you need ANY hardware, BUY IT NOW - Phones - Laptops - Computer parts Hardware prices are about to get ridiculous I just bought my wife a new MacBook & iPhone I’m not trying to flex, just getting ahead of the supply shock before the prices get wild
Sasha Varlamov @savarlamov ·
@cursor_ai Thanks for including us in the drafting process Lee. We'd love agent-trace contributions to the Git AI https://t.co/XEONu8FOQg We've already got support for all the big agents, Cursor included!
Google AI Developers @googleaidevs ·
Access the weights on GitHub and @huggingface. https://t.co/oZDE8Wh0jH
Anthropic @AnthropicAI ·
Participants in the AI group finished faster by about two minutes (although this wasn’t statistically significant). But on average, the AI group also scored significantly worse on the quiz—17% lower, or roughly two letter grades. https://t.co/ko7aaBX4Rq
dax @thdxr ·
i've been using it for all my work for the past 24 hours and i don't see much of a difference from opus maybe opus is a bit smarter but this guy is so fast and so cheap and we're probably going to drop our prices even further
OpenCode @opencode

kimi 2.5 is free for a limited time in OpenCode if you ran into bugs before, upgrade OpenCode - we've fixed up a few things and we're having a great time with it now huge thanks to fireworks for getting this model running so well so quickly

Invideo @invideoOfficial ·
We just launched AI Motion Graphics with @AnthropicAI Think vibecoding for motion design. The cost of professional motion work just dropped to zero. All generated from a single prompt. Small teams can now produce the same quality as large agencies. No After Effects, no templates, no code — just describe what you want. Try it on https://t.co/DbCkAwMecj
Peter H. Diamandis, MD @PeterDiamandis ·
A PROPOSAL FOR UNIVERSAL HIGH INCOME (UHI): During my recent Moonshots podcast with @elonmusk, we dove into his notion of Universal High Income (UHI) – Elon’s proposal that an AI and Robotics will enable a world of sustainable abundance for all... a life beyond basic income, towards high income and standards of living. When I asked him how this might work, he said: “You know, this is my intuition but I don’t know how to do it. I welcome ideas.” That single statement has been ringing in my head ever since. Here’s why: the economics of scarcity are flipping to the economics of Abundance. I do believe that AI and humanoid robots can produce nearly anything we need—goods, services, healthcare, education—at costs approaching zero. But there’s a gap between that vision and getting there. How do we actually fund and distribute Abundance to everyone? Today, I’m excited to share one compelling answer. I’ve been talking to Daniel Schreiber, CEO of Lemonade (the AI-insurance company that just launched 50% off premiums for Tesla FSD drivers), about a framework called the MOSAIC Model: a concrete proposal for how governments could implement Universal High Income without raising taxes on workers or businesses. (See the components of MOSAIC in my P.S. below.) Here’s the core insight that makes the math work: 1/ THE AUTOMATION PARADOX: AI Unemployment ≠ Traditional Unemployment When most people hear “mass job displacement,” they picture economic collapse: bread lines, depression, social chaos. That’s because they’re thinking about traditional unemployment, where workers disappear and nothing replaces them. AI unemployment is fundamentally different. Think of it this way: imagine sending a digital twin to work in your place. It performs your tasks faster, cheaper, and better. The company’s output increases. GDP grows. The resources exist – they just need to be redistributed. This is the Automation Paradox: AI can raise productivity while displacing labor. 
When workers are replaced by more productive capital, GDP rises even as fewer humans work. The challenge is not affordability. It’s capture and distribution. 2/ “AI DIVIDEND”: Where the Money Actually Comes From Daniel’s framework identifies two places the AI surplus shows up, and how to capture it without disrupting consumers or raising statutory tax rates: Channel 1: Dynamic VAT (The Deflation Dividend) AI is deflationary. When AI cuts the cost of producing something by 30%, that value creation can either flow entirely to shareholders – or be partially recaptured for society. Dynamic VAT works like this: as AI drives quality-adjusted price declines in goods and services, the VAT rate adjusts upward by exactly enough to keep consumer prices stable. Consumers pay the same. But the government captures part of the deflation dividend. It’s frictionless redistribution. Prices don’t rise. No one feels it. Channel 2: Over-Trend Profit Ring-Fencing AI is generating windfall profits for companies at the frontier. Rather than raising corporate tax rates (which drives capital flight), the MOSAIC Model proposes ring-fencing only the above-trend portion of capital income tax receipts. Baseline profits? Untouched. Normal corporate taxes? Unchanged. But what about the incremental surge in profits attributable to AI? A portion gets earmarked for the “Universal High Income” fund. Statutory rates stay the same. Companies keep most of their windfall. But society captures enough to fund a universal floor. 3/ WHAT THIS MEANS FOR FAMILIES: Here’s where it gets real. Under the MOSAIC Model’s basic implementation (before any additional policy choices), a household with two non-working parents and two children would receive income equivalent to today’s fourth decile: roughly the 30-40th percentile of current household income. To be clear, that’s not survival-level subsistence. It’s lower-middle-class security. For doing nothing. 
This creates a Universal Basic Floor – funded entirely by the two low-friction channels above. But this is just the starting line, not the finish line. If society chooses to capture more of the AI dividend through additional mechanisms (windfall levies, land-value capture, AI-services taxation), the floor could rise to what Daniel calls the “UHI Benchmark”: approximately 120% of median wages. Upper-middle-class income. Universal. The surplus exists. The question is: how much do we collectively choose to redistribute? 4/ WHY TIMING IS EVERYTHING: Here’s what keeps both Daniel and me up at night: the political window for implementing this is closing. The MOSAIC Model’s political economy analysis shows something counterintuitive: feasibility is highest early in the AI transition – before capital consolidates opposition, before tech incumbents organize billion-dollar lobbying efforts, before the status quo hardens. Wait until mass displacement is undeniable? By then, it may be too late to pass anything. Act early or not at all. A good system passed in 2026 beats a perfect system proposed in 2030 that fails. 5/ THE INVITATION: Elon said he welcomes ideas. This is one. The MOSAIC Model isn’t the only answer, but it’s a rigorous, economically grounded starting point. It demonstrates that Universal High Income is not utopian dreaming. It’s an engineering problem with identifiable solutions. The AI dividend is real. The fiscal math works. The question is whether we have the collective will to build the capture mechanisms before the window closes. The full MOSAIC Model is available today at https://t.co/foAZ0mToPw for policymakers, economists, and fellow entrepreneurs to critique, improve, and implement. Read the full plan, verify the math, and let’s debate this. Because this is not a matter of any single country or company getting it right. It’s about humanity navigating the biggest economic transition in history. When AI takes our jobs, it should also pay our wages.
Let’s make that happen. Peter Diamandis (in collaboration with Daniel Schreiber, @daschreiber, CEO of Lemonade and Chair of the MOSAIC AI Policy Institute) P.S. The detailed components of MOSAIC that make the model affordable: M – Multi-channel / Mechanism (Implied): The core philosophy that no single tax can fund UHI alone; it requires a “mosaic” of multiple bases. O – Over-trend Ring-fencing: Earmarking 85% of the “windfall” capital-income tax receipts (profits and capital gains) that exceed historical trends. S – Savings (Government Automation Dividend - GAD): capturing the cost savings from automating government bureaucracy (e.g., using AI for back-office admin). A – AI-linked Deflation (Captured via Dynamic VAT): The largest tile. As AI drives prices down, the VAT rate adjusts upward to capture the “deflation gap,” keeping prices stable for consumers while generating revenue. I – Income (Negative Income Tax): The distribution mechanism itself, ensuring work always pays. C – Consolidation: Rolling existing, overlapping welfare transfers into the new single payment to avoid double-spending. In short: The MOSAIC is the Fiscal Architecture. It argues that while one tax (like a “wealth tax”) is politically impossible or insufficient, a mosaic of VAT + Windfall Profits + Efficiency Savings + Legacy Consolidation creates a robust funding base for a poverty-ending income floor.
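The Dynamic VAT channel in the thread above is, at its core, simple arithmetic: hold the VAT-inclusive consumer price fixed while AI pushes the pre-tax price down, and the widening gap becomes captured revenue. Here is a minimal sketch of that calculation with illustrative numbers of our own choosing (the function and figures are not from the MOSAIC paper itself):

```python
def dynamic_vat(p0: float, v0: float, p1: float) -> tuple[float, float]:
    """Return (new VAT rate, extra revenue per unit) that keeps the
    VAT-inclusive consumer price constant as the pre-tax price falls.

    p0: pre-tax price before AI-driven deflation
    v0: original VAT rate (e.g. 0.20 for 20%)
    p1: pre-tax price after AI-driven deflation
    """
    consumer_price = p0 * (1 + v0)   # what shoppers pay today
    v1 = consumer_price / p1 - 1     # rate that keeps that price unchanged
    dividend = p1 * v1 - p0 * v0     # extra VAT captured per unit sold
    return v1, dividend

# Example: AI cuts a 100-unit production cost by 30% under a 20% VAT.
v1, dividend = dynamic_vat(p0=100.0, v0=0.20, p1=70.0)
print(round(v1, 3), round(dividend, 2))  # 0.714 30.0
```

The consumer still pays 120 per unit, but the VAT take per unit rises from 20 to 50: the 30-unit "deflation dividend" flows to the UHI fund without any visible price change, which is the frictionless-redistribution claim in the thread.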
Google AI Developers @googleaidevs ·
AlphaGenome, a new breakthrough AI model for genomics, is our most accurate and comprehensive DNA sequence model to date. Watch the video to learn how it works. 🧬 https://t.co/5u5StRAiAE
Prince Canuma @Prince_Canuma ·
Wow, this is incredible; it almost fooled me! 🔥 And it only took 28 mins to generate? Guess the latest optimizations were worth it. This is why I built mlx-video, to enable creatives. https://t.co/2JyYR7qfFY
mr @JakiTreehorne

Just used @openclaw to produce a 25-second "Her"-style commercial 100% locally: 🎬 MLX-Video + LTX-2 (19B) on M4 series Mac 128G 🎙️ ElevenLabs VO 🎵 Epidemic Sound 10 scenes with continuity. 28 min generation. Zero cloud render costs. Huge thanks to @Prince_Canuma for mlx-video 🔥 Local AI filmmaking is here.

Anthropic @AnthropicAI ·
We were particularly interested in coding because as software engineering grows more automated, humans will still need the skills to catch AI errors, guide its output, and ultimately provide oversight for AI deployed in high-stakes environments.
Ryan Carson @ryancarson ·
If you can do this, you're in the top 1% of engineers right now. Most engineers in enterprise are barely using agents (and if they are, most are stuck with copilot). If you can add looping at night, you go next level. It's not hard though. Just point your agent at this article (or copy/paste) and say "Help me set this up". Trigger the crons manually and iron out any bugs, then set it and wake up tomorrow to see what you've got.
Ryan Carson @ryancarson

How to make your agent learn and ship while you sleep

a16z @a16z ·
The hottest role in tech — the forward-deployed engineer — was "the ugliest duckling" for a decade. In this conversation, Akshay Krishnaswamy, Chief Architect of Palantir, joins a16z GP Erin Price-Wright to cover: - Why a good team of engineers is like a hive mind - The archetypes of people that thrive as FDEs - Why pain tolerance is a hiring filter - Managing high-agency engineers without hierarchy and more. 00:00 Introduction 02:17 Defining forward-deployed engineering 04:49 Differences between FDE and other roles 06:09 Building and managing teams 09:55 Challenges and evolution of FDE 15:27 Maintaining product focus and customer relationships @hyperindexed @espricewright
Ashpreet Bedi @ashpreetbedi ·
Building Pal: Personal Agent that Learns
Michael Feldstein @msfeldstein ·
My favorite way of using Cursor is asking it to deconstruct things I want to understand and show them to me step by step, rather than one-shot generations of things I don't understand. You can build your own interactive explainers. https://t.co/e4Z37aoJB3
Xor @XorDev

Rocaille 2 vec2 p=(FC.xy*2.-r)/r.y/.3,v;for(float i,f;i++<1e1;o+=(cos(i+vec4(0,1,2,3))+1.)/6./length(v))for(v=p,f=0.;f++<9.;v+=sin(v.yx*f+i+t)/f);o=tanh(o*o); https://t.co/PRJ99gngf5

Anthropic @AnthropicAI ·
For more details on this research, see the full paper: https://t.co/V06Q83Luhv
Ethan Mollick @emollick ·
Had early access to Genie 3 world modelling. Huge leap forward in modelling/physics, but some issues remain. Here is a bit of an otter airline pilot with a duck on its head walking through a Rothko-inspired airport, and an otter in a wingsuit flying through a city of gothic towers. https://t.co/Aot58bxAOP
Hugo @striedinger ·
Imagine naming your company after the metaverse and not coming up with this
Google DeepMind @GoogleDeepMind

Step inside Project Genie: our experimental research prototype that lets you create, edit, and explore virtual worlds. 🌎

Anthropic @AnthropicAI ·
However, some in the AI group still scored highly while using AI assistance. When we looked at the ways they completed the task, we saw they asked conceptual and clarifying questions to understand the code they were working with—rather than delegating or relying on AI. https://t.co/6H5Hnxiv7O
xAI @xai ·
We are excited about partnering with @fal on the new Grok Imagine API!
fal @fal

fal is proud to partner with @xai as Grok Imagine’s day-0 platform partner xAI's latest image & video gen + editing model ✨ Stunning photorealistic images/videos from text ⚡ Lightning-fast generation 🎥 Dynamic animations with precise control 🎨 Edit elements, styles & more https://t.co/1RwkhlJA9w

Google AI Developers @googleaidevs ·
And check out the @Nature article to learn more. https://t.co/q3GCp4l9Uv
Google AI @GoogleAI ·
Last August, we previewed Genie 3: a general-purpose world model that turns a single text prompt into a dynamic, interactive environment. Since then, trusted testers have taken it further than we ever imagined — experimenting, exploring, and pioneering entirely new interactive worlds. Now, it’s your turn. Starting today, we're rolling out access to Project Genie for Google AI Ultra subscribers in the U.S. (18+). We know what you create will be out of this world 🚀
Slazac 🇪🇺 🇺🇦 🇹🇼 🌐 @TrueSlazac ·
Wow. Just made my first AI video game with Google’s Genie 3. The prompt: “French woman has to climb through a world that defies logic, flying objects everywhere” Is it the end of the gaming industry? https://t.co/X7tG7sECJ9
Google AI @GoogleAI

(Quoted tweet: the Project Genie rollout announcement above.)

Dan ⚡️ @d4m1n ·
just in: Agent Skills apparently suck because they introduce a decision point. The new reco is... compress the sh💩t out of your instructions and paste them all into AGENTS.md?! this produced code with 100% pass rate vs 79% with skills
Vox @Voxyz_AI ·
@rauchg @p0 500kb to 2kb is wild. this is basically the "mobile-friendly" moment again but for agents. soon every site will need a machine-readable version the same way they needed a responsive layout
Pierce Boggan @pierceboggan ·
Introducing Primer: Get your repo ready for AI - Generate high-quality instructions for your repos - Lightweight eval framework to ensure instructions improve agent outcomes - Batch processing with auto PR submission for organizations and teams to scale AI initiatives Try it: https://t.co/0bHvfksvap
Anthropic @AnthropicAI ·
In a randomized-controlled trial, we assigned one group of junior engineers to an AI-assistance group and another to a no-AI group. Both groups completed a coding task using a Python library they’d never seen before. Then they took a quiz covering concepts they’d just used. https://t.co/JRXJq9e0dy
kanav @kanavtwt ·
Someone made it possible to write AWS infrastructure using React components. And it outputs production-grade Terraform too 😭 https://t.co/TJ5x9rrtdx https://t.co/7HHEe9iK9I
Thariq @trq212 ·
Making Playgrounds using Claude Code
Everlier @Everlier ·
@cursor_ai To save a click, here's what a sample edit looks like. https://t.co/VDpe78myvQ
Cheng Lou @_chenglou ·
Waiting for Opus 5 to clean up the mess I’ve made with Opus 4.5
Guillermo Rauch @rauchg ·
This ◉ ʜᴜᴍᴀɴ ○ ᴍᴀᴄʜɪɴᴇ toggle by @p0 is brilliant. It's a beautiful illustration of what the web will "look like" to agents. It will look like a whole lotta markdown 😄 Incidentally, we just made it such that https://t.co/mIlnkwx1ph links automatically render as markdown when agents consume it (we do the same for /𝚍𝚘𝚌𝚜). Page went from 500kb to 2kb. The web for agents will be very efficient! Try: curl -H 'accept: text/markdown' https://t.co/LrMKUHyJim
Ahmad @TheAhmadOsman ·
a reminder that, in closed source AI from companies like OpenAI & Anthropic you have zero control over how the models behave, and they can > quantize it > distill it > hot-swap to a cheaper/weaker checkpoint > make the model manipulative > fine-tune it in ways that break safety or depth > drop its IQ > run experiments on you and/or your data > throttle output speed or raise prices > sunset the entire model/version > block your request for any made-up bs reason they have all the knobs & you're at their mercy you won't even get a changelog opensource FTW Buy a GPU
Wes Bos @wesbos ·
started the week with Clawdbot, ended the week with enterprise Moltworker
Cloudflare @Cloudflare

Moltworker is a middleware Worker and adapted scripts that allow running Moltbot (formerly Clawdbot) on Cloudflare's Sandbox SDK and our Developer Platform APIs. So you can self-host an AI personal assistant — without any new hardware. https://t.co/BUlxsyu1fa

Theo - t3.gg @theo ·
Calling it now: all these agent coding TUIs are a phase and it will be short lived. Most devs will be back in GUIs and IDEs in a few months.